content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Usage
dimension: field_name {
group_item_label: "desired label"
}
}
Definition
In cases where you’ve used the
group_label parameter to combine fields into custom groups in a view’s field picker, you can also use the
group_item_label parameter for each of the grouped fields to customize how the individual fields are shown in the field picker.
For example, you can add the
group_label to a set of fields like this:
These fields will now be combined into an expandable Shipping Info section in the field picker:
But since these fields are shown under the expandable Shipping Info section, they don’t need to have “Shipping” in their individual labels as well. So you can add the
group_item_label parameter to each of the grouped fields:
Now the fields display like this in the field picker:
The
group_item_label displays only in the field picker. In the Data section of an Explore and in any visualizations, Looker displays the field’s name or label as normal. | https://docs.looker.com/reference/field-params/group_item_label | 2022-05-16T22:28:14 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.looker.com |
.
You can restrict access to this API and to the URL that it returns to a list of IP
addresses that you specify. To restrict access, attach an IAM policy that denies
access to this API unless the call comes from an IP address in the specified list
to
every AWS Identity and Access Management user, group, or role used to access the notebook
instance. Use the
NotIpAddress condition operator and the
aws:SourceIP
condition context key to specify the list of IP addresses that you want to have access
to the notebook instance. For more information, see Limit Access to a Notebook Instance by
IP Address.
Note: | https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreatePresignedNotebookInstanceUrl.html | 2019-03-18T16:09:12 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.aws.amazon.com |
To use Unified Manager, you must first configure the initial setup options, including the NTP server, the maintenance user email address, and the SMTP server host name and options. Enabling periodic AutoSupport is also highly recommended.
The OnCommand Unified Manager Initial Setup dialog box appears only when you first access the web UI. If you want to change any options later, you can use the Administration options, which are accessible by clicking
from the toolbar.
After adding clusters, you can configure additional options, such as events, alerts, and thresholds. See the OnCommand Unified Manager Workflow Guide for Managing Cluster Health for more information. | http://docs.netapp.com/ocum-73/topic/com.netapp.doc.onc-um-isg/GUID-7BAA5E13-1CBC-4D53-AD3E-196A96DE9E97.html | 2019-03-18T16:05:03 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.netapp.com |
No matches found
Try choosing different filters or resetting your filter selections.
Chat: Expanded Transcript Access and Skills-Based Routing
Every support agent can now view Live Agent transcripts. Let multiple support agents resolve a customer’s issue together with Snap-ins chat conferences. Use Omni-Channel Skills-Based Routing to route cases to the most skilled agent for the job.
- View Transcripts Without a Live Agent License
To provide more consistent support, every user with a Service Cloud license now has a 360-degree view of a customer’s support history and can see past chat transcripts. Previously, a support agent needed a Live Agent license to view chat transcripts. So when a customer chatted with an agent and then followed up on another channel, like email, the new agent couldn’t always see the full history. A Live Agent license is still required to deliver service using Live Agent in the Lightning Service Console.
- Take Advantage of Chat Conferencing for Snap-Ins
Use Snap-ins for your Live Agent chat conferences in Salesforce Classic. With chat conferencing, support agents can work together to solve your customers’ trickiest problems.
- Route Chats to Qualified Agents with Omni-Channel Skills-Based Routing (Generally Available)
Route chats to agents based on the skills needed to resolve them. Use skills, agent availability, and skill levels to assign Live Agent chats to an agent who is right for the job. Previously, this feature was in Beta.
- Support Your Agents with Enhancements to Omni-Channel Supervisor for Lightning Experience
Let agents raise flags during a chat, signaling to supervisors that they need help. Supervisors can also discretely provide suggestions during a Live Message session between an agent and a customer. Only agents can see the supervisor messages, not the customer.
- Agent Typing Indicator and Critical Alert Wait Time Supported in Lightning Experience
As in Salesforce Classic, the chat tab turns red when the critical wait alert time has passed and supervisors can view what agents and customers are typing in Omni-Channel Supervisor. | http://releasenotes.docs.salesforce.com/en-us/spring19/release-notes/rn_chat.htm | 2019-03-18T15:44:14 | CC-MAIN-2019-13 | 1552912201455.20 | [] | releasenotes.docs.salesforce.com |
Accelerate..
| http://releasenotes.docs.salesforce.com/en-us/spring19/release-notes/rn_sales_gmail_integration_hvs.htm | 2019-03-18T16:11:35 | CC-MAIN-2019-13 | 1552912201455.20 | [array(['release_notes/images/218_outlook_integration_hvs.png',
'Work Queue component in the Work Queue tab.'], dtype=object)] | releasenotes.docs.salesforce.com |
Common Web Project Conversion Issues and Solutions
Michael Bundschuh
Program Manager, Microsoft
Robert McGovern
Infusion Development
November 2005 (Updated for the Visual Studio 2005 RTM release)
Applies to:
ASP.NET 1.x
ASP.NET 2.0
Visual Studio .NET 2003
Visual Studio 2005
Summary: Reviews both Visual Studio 2005 Web project and ASP.NET 2.0 framework changes that affect Web project conversion, and then discusses common issues and solutions for converting using Visual Studio 2005. (31 printed pages)
Contents
Part 1: Web Project Changes
What Is ASP.NET and Visual Studio?
Web Project Changes
New Features
Converting a Web Project
The Conversion Wizard
The Conversion Report
Part 2: Common Conversion Issues
Issue 1: Code-behind class file (CB-CB) references
Issue 2: Stand-alone class file (SA–CB) references
Issue 3: Circular references
Issue 4: Resource Manager
Issue 5: Excluded files are no longer excluded
Issue 6: Orphaned resx files
Issue 7: Extra types in a code-behind file
Issue 8: Accessing an auto-generated control variable
Issue 9: Unable to switch to design view
Issue 10: Unable to parse filename
Part 3: Other Conversion Issues
Issue 11: Backup folder in the Web project folder
Issue 12: Multiple projects in the same location
Issue 13: Multiple projects referring to the same files
Issue 14: Invalid access modifier
Issue 15: Converting excluded files
Issue 16: Duplicate variable declarations
Issue 17: Sub-Web applications
Issue 18: OnInit() not removed
Issue 19: XHTML validation errors
Issue 20: Multiple configurations
Issue 21: Auto-generated members hide inherited members
Issue 22: Ambiguous references and naming conflicts
Issue 23: Event handlers called multiple times
Issue 24: Code-behind files moved to the App_Code folder
Issue 25: Projects referencing a Web project
Issue 26: Sharing the same code-behind file
Issue 27: Partially converted solutions
Issue 28: No command-line migration for C#
Issue 29: Batch = True or False
Summary
Appendix A: Actual Error Messages
Part 1: Web Project Changes
As you convert your Visual Studio 2002/2003 Web projects to Visual Studio 2005, you may encounter issues with the changes made in the Web project system. In this article, we will look at both the conversion process and some of the common issues you may encounter during a conversion.
What Is ASP.NET and Visual Studio?
ASP.NET is a technology for creating dynamic Web applications. ASP.NET pages (Web Forms) are compiled and allow you to build powerful forms-based Web pages. When building these pages, you can use ASP.NET user controls to create common UI elements and program them for common tasks.
Visual Studio is an integrated development environment (IDE) that developers can use to create programs in one of many languages, including C# and Visual Basic, for the .NET Framework.
You will find that converting your Web application to use the new Visual Studio 2005 and ASP.NET 2.0 features both simplifies your development and gives you more options for compiling and deploying your code.
Web Project Changes
These changes affect how you develop, configure, and deploy your Web applications. As either a developer or a Web site administrator, you will need to be aware of these changes in order to properly build, deploy, and maintain your Web applications.
- No project file. Visual Studio 2005 no longer uses a project file to explicitly list the files in your Web project. Instead, it considers all files and folders to be part of the Web project. Project information that was stored in the project file is now stored in the solution or Web.config file.
- Special directories. An ASP.NET 1.x application has one required folder (\bin) for containing assemblies. An ASP.NET 2.0 application has a larger defined folder structure. The new directories start with the prefix "App_" and are designed for storing resources, assemblies, source code, and other components. The new folder structure helps eliminate the need for a project file and also enables some new options for deployment.
- Code-behind model. In ASP.NET 1.x, the code-behind model allows separation of content (e.g. foo.aspx) from code (e.g. foo.aspx.vb). The content page inherits from the code-behind page and the code-behind page contains both user and designer generated code.
ASP.NET 2.0 enhances the code-behind model with partial classes, which allows a class to span multiple files. In the new code-behind model, the content page inherits from a compiled class consisting of its corresponding code-behind page plus an auto-generated partial class that defines the field declarations for the controls used in the content page. This change allows the auto-generated code to be separated from the user's code, and makes the code-behind page much smaller and cleaner. The partial class structure also reduces the risk of inadvertently breaking the page by editing designer generated code.
- Compilation model (one assembly to many assemblies). In Visual Studio .NET 2003 all the code-behind class files, and support code are precompiled into a single assembly with a fixed name. In Visual Studio 2005, multiple assemblies are created on the fly (by default) with uniquely generated filenames. For example, the default behavior is to compile all Web forms and user controls in a folder into its own assembly. The common source code in the App_Code folder will automatically be compiled into its own separate assembly. This new compilation model causes some changes in the structure of a Web application, but greatly enhances the options for deployment and how the Web application is served on the Web server.
- Deployment options (precompile, full compile, updateable sites, etc). In prior versions of Visual Studio, Web applications are precompiled and deployed as one large assembly. Content pages (e.g. *.aspx or *.ascx) are not compiled and editable on the server. Thanks to the new page compilation model and folder structure in Visual Studio 2005, you can deploy a Web application in many different configurations. At one extreme, you can precompile all the content pages, their code-behind class files and its hidden designer page and deploy a Web application that consists of fully compiled assemblies. In this mode, your application cannot easily be changed on the server. At the other extreme, you can deploy an application with no precompiled code at all. In this configuration, you can change the content pages, code-behind class files, or any other code in the application on the server directly. The pages will be dynamically compiled when a user asks for the page from the server.
Each of these operational changes may require you to modify your application architecture and deployment process either before or after converting your Web application.
New Features
Converting your Web application will make your application more powerful, flexible, and maintainable. Although this article focuses on the mechanics of converting your application, you can learn more about the new Visual Studio 2005 and ASP.NET 2.0 features through the following links:
- Feature Overview
This white paper will give you a good overview of the new features available with ASP.NET 2.0. If you are looking at leveraging ASP.NET 2.0 content on a site built with ASP.NET 1.x, you should read through this paper to see what is available for you.
- Personalization
ASP.NET 2.0's personalization features, called Web Parts, let you design your Web site to be reactive to individual users. You can, for instance, let each user pick a site layout or color scheme and have that information preserved between visits. Web Parts allows you to provide personalization features with a minimal amount of new code.
- Data Access
Not only has ADO.NET been updated for the .NET Framework 2.0, but ASP.NET 2.0 also now includes a new set of data source controls and features for data access.
- Master Pages
In traditional ASP.NET, most developers struggle with designing a Web application framework that combines code reuse with flexibility. Master pages give the best of both worlds by introducing true inheritance. You can set up a master page that contains your header, footer, and navigation bar, and then create child pages that fill in the content while automatically inheriting the look, behaviors, and features of the master page.
- ASP.NET Development Server
A stand-alone, development-only Web server is now bundled with Visual Studio 2005. This server, code-named "Cassini," allows users to develop and test their Web applications without having to have IIS installed on their development system. This server can be used only for development. When you deploy your application to production, you will need to use an IIS server configured with ASP.NET 2.0.
Converting a Web Project
Converting a Web Project involves more than just changing your framework version! There are three parts to a conversion:
- Before Conversion – reviewing and possibly altering your Web project architecture before running the conversion wizard.
- Conversion – running the Visual Studio 2005 conversion wizard to convert your Web project.
- Post Conversion – resolving any issues the conversion wizard did not catch or was unable to resolve.
For parts 1 and 2, you should read and apply the steps outlined in Step-By-Step Guide to Converting Web Projects to Visual Studio .NET 2005.
For part 3, you should apply the solutions outlined in this white paper.
The Conversion Wizard
Visual Studio 2005 has a built-in conversion wizard to help you convert your Web applications. This wizard automates many of the basic steps necessary to convert your application to use ASP.NET 2.0 features.
Running the Wizard
The conversion wizard is automatically invoked whenever you open a Visual Studio .NET 2003 Web project in Visual Studio 2005. The wizard detects the presence of a Web project file (e.g. *.vbproj or *.csproj) in the application folder and automatically starts the conversion process.
Figure 1. The Conversion Wizard
The first choice you have to make is to create a backup of your application before conversion. It is highly recommended you make a backup!
Figure 2. Backup your application
If you choose to create a backup, Visual Studio 2005 will automatically create a copy of your Web project in the folder of your choice.
Note Make sure you put the backup folder outside of your Web application's folder tree. See the Backup folder in the Web application folder issue for full details.
Next, you will see a summary screen of the conversion process and have one last opportunity to back out of the conversion. Please note two things about this summary screen:
Note The first paragraph is incorrect since it states a Web project will be checked out from source code control before changes are made. This is true for Visual Basic and C# client projects, but not Web projects. Instead, the Web project conversion wizard will ignore source code control and change files it must modify to read-write.
The Web project will be converted in-place, which is why making a backup is both recommended and important.
Figure 3. Summary screen
The conversion may take a few minutes, depending on the size of your application. When it completes you will see a message indicating your Web project was converted. You may also see a message about some warnings or errors. Warnings and errors occur when the conversion wizard has made changes that may modify the behavior of your application, or when it can't completely update your Web project.
Figure 4. Conversion completed
After the conversion is complete, you are ready to look at the conversion report to see if you have to perform any additional manual steps to complete your conversion.
The Conversion Report
The conversion wizard will record the changes it makes to your Web project in both an XML and a text file. When the conversion wizard finishes it displays the XML version of the report. This report will show you any issues encountered by the wizard and code areas where you may need to take additional steps to complete the conversion.
The report is divided into sections for the converted solution and one or more projects. The solution report will almost always be error free. However, the project sections may have multiple issues listed for each file in your project. You should review this section and resolve any issues reported by the conversion wizard.
Figure 5. Conversion report (click for larger image)
If you close the conversion report, you can always find a text version at the top level of your converted project.
Note In the final release of Visual Studio 2005, the name of the text report is ConversionReport.txt. Future releases will rename this file to ConversionReport.webinfo. The text version will be found in the Web application's root folder.
Notification Types
Each item in the report falls into one of three categories:
- Comments—informs you of actions taken by the wizard. You will see many comments about files that were deleted or moved, and code that was deleted or commented out. Comments are listed in the text version, but omitted in the XML version, of the conversion report.
- Warnings—A warning is generated whenever the wizard has to take an action that may cause a behavioral change or possible compilation error in your application. Warnings are items you want to review, but may not need to act on.
- Errors—An error item is generated if the wizard encounters something that it cannot automatically convert. These items require your effort to complete the conversion. Generally, an error is something that will generate a compilation error if you try to run the application.
Part 2: Common Conversion Issues
Although Visual Studio 2005 is designed to work with code developed using Visual Studio .NET 2003, you may encounter one or more common conversion issues. In this section, we will look at several of the most common issues.
Note The conversion wizard has been upgraded in the final release of Visual Studio 2005, and may automatically detect and fix some of the following issues. However, if the wizard misses a particular issue, the solutions described below can be applied manually to finish converting your Web project.
Issue 1: Code-behind class file (CB-CB) references
Note CB is an acronym for the code-behind file for either a Web form (*.aspx) or a user control (*.ascx).
The new compilation model uses multiple assemblies that are normally dynamically compiled on the server. This model improves both Web site performance and updatability.
However, if your code-behind files reference another code-behind file, then you will have a broken reference because the referenced code-behind file will no longer be in the same assembly.
Here are some common ways it may be triggered:
- Using a Web form or user control as a base class to another Web form or user control.
- Using
LoadControl()and casting the result to another user control, for example
UserControl1 c1 = (UserControl1)LoadControl("~/UserControl1.ascx");
- Creating an instance of the Web form class, for example
WebForm1 w1 = new WebForm1();
where WebForm1 is a class defined in a code-behind file for a Web form.
How to fix
To resolve the problem, you will need to change your application so the reference can be found. Since this is a CB-CB reference, the easiest way to resolve is to add a reference directive to the Web form or user control that is making the reference. This will tell the compiler what assembly to link to.
Let's assume you have the following situation:
Change your code to use a reference directive:
By using the reference directive, you are explicitly telling the compiler where to look for the Web form or control you want to use. Note that in the final release of Visual Studio 2005 the conversion wizard will do this automatically for you.
Issue 2: Stand-alone class file (SA–CB) references
Note SA is an acronym for a stand-alone class file.
You may encounter another type of broken reference if you have a stand-alone class file that references code in a code-behind class file. This is similar to a CB-CB broken reference, except you have a stand-alone class file in the App_Code folder trying to reference a separate page assembly. A common way it can be triggered is by accessing a class variable in the CB class.
How to fix
Fixing an SA-CB broken reference is more involved. Since the problem occurs in an SA file, you cannot use a reference directive to find the reference. Also, after conversion, the SA file is moved to the App_Code folder so the class file will only have access to the App_Code assembly where it is compiled.
The solution is to create an abstract base class in the App_Code folder to reference at compile time, which will load the actual class from the page assembly at run time.
Let's assume you have the following situation:
Change your code to use an abstract base class:
The abstract base class will allow both standalone class files and CB files to find a class during compilation (named Control1 in this example) since it now exists in the App_Code folder. However, the standalone class file will use late-binding to load the original class (renamed to migrated_control in this example) during runtime. Note: in the final release of Visual Studio 2005, the conversion wizard will create this code for you automatically.
Issue 3: Circular references
A circular reference happens when a code-behind file references another code-behind
file, which in turn references the original file. It can happen with two or more code-behind files, e.g.:
It can also happen between assemblies, where one assembly references another
assembly, which in turn references the original assembly, e.g.:
How to fix
The solution for first type of circular reference is to create an abstract base class in the App_Code folder with one of the references, then remove the Reference directive from the associated Web form or user control file. This will break the circular reference.
The second type of circular reference is a by-product of the ASP.NET compiler "batching" assemblies together for performance reasons. By default it will take the Web forms and user controls in a folder and compile them into an assembly.
There are several ways to solve this issue, but we recommend moving the referenced pages (e.g. Pages2.aspx.vb and Pages3.aspx.vb) to their own folder. By
default, the ASP.NET compiler will create a separate assembly containing these pages and the circular reference will be removed between Assembly A and B.
Issue 4: Resource Manager
In Visual Studio .NET 2003, a resource manager is used to manage resources in your Web application. A typical example would look like the following:
This type of code is problematic since it depends on you knowing the name of the assembly to load, which is no longer a fixed name in Visual Studio 2005.
How to fix
Since Visual Studio 2005 uses non-deterministic assembly naming, you will need to change your code to use the new resource model. The easiest way is to move Resource1.resx to a special folder named App_GlobalResources. Visual Studio 2005 will automatically make any resources found in this folder available via strong naming (and discoverable using IntelliSense) to your Web application. Here is the converted example:
This is just one way to use the new resource model in Visual Studio 2005—you should review the resource model documentation to discover the new capabilities in this model.
Issue 5: Excluded files are no longer excluded
In Visual Studio .NET 2003, you had to explicitly decide whether or not to include files in your Web project. If a file was not explicitly listed as included, then the file was excluded from the project. You could also stop a code file from being built by setting its build action to "none". This information is stored in the project file.
When converting, several issues have to be considered, e.g.:
- A file that is excluded from one project may be included in another project
- Should you try to convert an excluded file, since it may just be a code snippet the user wants to remember?
In the final release of Visual Studio 2005, the conversion wizard will leave excluded files as-is and note them in the conversion report. As a result, your Web project will contain extra, unconverted files that are now part of your project. Depending on the file's extension, the compiler may try to compile the file, which may cause conflicts in your application.
How to fix
After conversion, you can delete any formerly excluded files you do not want from your project. You can also use the "Exclude from Project" feature (found on the Solution Explorer menu) to exclude them by renaming them with the safe extension ".exclude" to effectively remove them from your Web application.
Note files excluded with the ".exclude" extension are still part of the Web project but they will not be compiled, nor will these files be served by IIS/ASP.NET if they happen to be on the server.
Note future releases of the conversion wizard will be more proactive and exclude files when it is considered safe to do so.
Issue 6: Orphaned resx files
Visual Studio .NET 2003 generated a resx (resource) file for Web forms and user controls. In general, users did not use them since they were auto-generated and could potentially overwrite user-added code.
After conversion, these resx files are not deleted since the migration wizard is not sure if the user has added resources that need to be saved.
How to fix
Review the generated resx files and save user data to its own resource file. It is fine to combine this data into one resource file.
Move the resource file to the special folders App_GlobalResources or App_LocalResources as appropriate so it is available to your Web application.
After saving the user data in this way, delete the generated resx files from your Web project.
Issue 7: Extra types in a code-behind file
In Visual Studio .NET 2003, it was possible to share types (e.g. structs, enums, interfaces, modules, etc.) across different pages by storing the types in a code-behind file for one of the Web forms or user controls.
This model breaks in Visual Studio 2005 since a Web form and user controls are compiled into their own assemblies—the extra types will no longer be discoverable.
How to fix
After the conversion wizard has converted your application, just move any non-private extra type to its own standalone code file in the App_Code folder. You will also need to change the access modifier for the type to public since it has to work across assemblies. The shared type will automatically be compiled and discoverable throughout your Web application.
Issue 8: Accessing an auto-generated control variable
In Visual Studio .NET 2003, the code-behind class file contained both user and auto-generated designer code. The latter could contain both control variable declarations and page functions. Although it is not recommended, some developers changed the access permission of the control variables to Public so they could be modified from outside their class. An example looks like this:
In Visual Studio 2005, this change will not work since the user and auto-generated designer code is separated using a partial class. A partial class allows a class to span more than one file, and is used to maintain a clean separation between user and auto-generated code.
How to fix
In general, it is recommended you change your code to not depend on the changed access level of auto-generated code. However, there is a workaround that is useful if you have hundreds of places in your code calling such auto-generated control variables, and you need to quickly get your code running. You can rename the original control variable to something else, then create a public property to access your renamed control variable. For example:
Issue 9: Unable to switch to design view
The new Visual Web Designer built into Visual Studio 2005 is more strict about proper HTML than is Visual Studio .NET 2003. If your aspx page contains mismatched tags or poorly formed HTML, then the designer will not let you switch to design view inside Visual Studio 2005. Instead, you will be restricted to code view until you fix the problems. This issue occurs because of the new source code preservation and validation functions built into Visual Studio 2005.
How to fix
All you can do to avoid this issue is make sure that the tags in your aspx pages are well formed. If you encounter a problem switching from code view to design view post-conversion, then the problem is almost certainly a bad tag.
Issue 10: Unable to parse filename
You may see a slightly ambiguous error message informing you that a file could not be parsed. This means either the 'Codebehind' or 'Src' attribute could not be found above the HTML tag.
If your Web form or user control does not contain either of these attributes, then the wizard cannot find the matching code-behind file and cannot convert the page.
How to fix
If this happens to a pure HTML page then this error can be ignored—you will often encounter this error if your application has pure HTML pages that use the aspx extension.
To avoid this issue, make sure you name your html files appropriately and use the 'Codebehind' and 'Src' attribute in your Web forms and user controls above the HTML tag.
Note in the next release of the Web project migration wizard, this error will be made more descriptive by using the following messages.
- "Warning: Unable to parse file %1, since no code-behind attribute or page/control directive found."
- "Warning: The code file %1 is not a valid code file."
Part 3: Other Conversion Issues
Here are other, less common conversion issues you may run across.
Issue 11: Backup folder in the Web project folder
As mentioned before, Visual Studio 2005 normally considers all files in a folder or sub-folder to be part of the Web project. By default, the conversion wizard will create a backup of your Web project in a safe location, but it is possible for the user to overwrite the default. If you put the backup folder in the Web application's folder tree, you will get this somewhat cryptic build error:
Error 1 - It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level
How to fix
Make sure your backup location is not in the Web application's root folder or sub-folder!
Issue 12: Multiple projects in the same location
If you have a Web project sharing a folder location with another project in the same solution, Visual Studio 2005 will consider all files found in that folder part of a single Web application.
If they are all Web projects and they are truly separate, i.e. self-contained, then converting those into a single Web application may not cause a problem. However, you may want to keep these projects separate if:
- It is a mix of Web and client projects
- Different code languages are used
- Aesthetic reasons
How to fix
If they are a mix of Web and client projects, then simply move the client project files into their own folder.
If different code languages are used, it is recommended you move the Web projects into separate folders for conversion. Later, if you want to merge these Web projects into a single, multi-language Web application, review the Visual Studio 2005 documentation on structuring the App_Code folder and Web.config for multiple languages and merge your Web projects manually into a single Web application.
Issue 13: Multiple projects referring to the same files
If more than one project refers to the same set of files, then it is possible for those common files to be migrated twice.
How to fix
Make sure that a given file is only referenced by one project file.
Usually, multiple projects referencing the same set of files means there is a duplicate or old version of the project file in the folder. In this case you can remove or rename the extra project files to resolve the issue.
If there is a true need to have multiple Web projects refer to the same set of files, then it is best to move the common files into their own separate client project, and then have the multiple Web projects refer to that separate client project. Please refer to Step-By-Step Guide to Converting Web Projects to Visual Studio .NET 2005.
Issue 14: Invalid access modifier
After converting your Web project, you get an Invalid Access Modifier error.
Since Visual Studio 2005 now uses multiple assemblies, the access level to member variables and functions need to be modified if you want to access them from another assembly.
How to fix
Review your code and change the access level to allow access. Normally changing the modifier to public will solve the problem.
Note You should always consider the security of the member variable or function before doing this, i.e. we do not recommend doing this blindly.
Issue 15: Converting excluded files
Excluded files will not be converted by the conversion wizard—so how can you get the conversion wizard to convert these files?
How to fix
Prior to conversion, temporarily include the file you wish to convert in your Web project and make sure its build action is not set to "none".
After the conversion, the file will be converted. You can then use Exclude from Project feature (found on the Solution Explorer menu) to exclude the file again.
Issue 16: Duplicate variable declarations
After conversion, you may find control objects declared via client scripts are also declared in the partial class. An example could look like the following:
How to fix
By design, the conversion wizard does not parse client script and will not catch this reference. To resolve, simply remove the declaration from the partial class and your code will compile.
Issue 17: Sub-Web applications
As mentioned before, Visual Studio 2005 normally considers all files in a folder or sub-folder to be part of the Web project. However, it is possible to designate a sub-folder as its own Web application by using the IIS manager (inetmgr) to set the sub-folder as an IIS virtual folder. When Visual Studio 2005 discovers this virtual folder, it will not consider it part of the Web application. This works if you open your Web project via HTTP.
The problem occurs if you open the Web project via File Location. In this case, Visual Studio 2005 will not know it is an IIS configured Web application and will not discover the IIS meta-information stating a sub-folder is a virtual folder. Visual Studio 2005 will now try to open and compile the Web application along with the sub-Web application.
The conversion wizard will give you the following warning if you convert a file-based Web project.
Warning: This Web project was converted as a file-based Web application. If your site contained any IIS meta-information, e.g. sub-folders marked as virtual directories, it is recommended that you close this Web site and re-open it using the Open Web Site command and selecting the Local IIS tab.
How to fix
Be sure to mark sub-Web applications as virtual directories using the IIS manager (inetmgr), and to open the Web project via HTTP rather than File Location so the IIS meta-information will be found.
Issue 18: OnInit() not removed
In Visual Studio .NET 2003, the designer added auto-generated member functions OnInit() and InitializeComponent() to the code-behind class file. These functions are not used by Visual Studio 2005 but will be called if they exist. The functions are not removed by the conversion wizard.
This behavior is by design since the wizard does not know if user code exists within these functions, so the code is left as-is in the code-behind file.
How to fix
After reviewing the functions to make sure there is no user code that should be saved, remove these functions from the code-behind file.
Issue 19: XHTML validation errors
After converting your Web project, or when opening a converted Web project, you see XHTML validation errors in your error window after building your Web project.
This is a feature of Visual Studio 2005, and is designed to help you write more XHTML compliant code. (Click the graphic below for a larger image.)
How to fix
To fix, you should change your code to match the level of XHTML standard you are targeting. For example, a strict level would be "XHTML 1.1".
If you want to turn off the showing of these errors, you can set the validation level to a less strict level. By default, when a Web project is converted, the validation level is set to "Internet Explorer 6.0" which is close to the validation level used by Visual Studio .Net 2003.
You can set the validation level by clicking Tools, then selecting Options, and then selecting the appropriate level on the Text Editor/HTML/Validation dialog.
Issue 20: Multiple configurations
Visual Studio .Net 2003 allowed Web projects to have multiple configurations, e.g. Release and Debug, to configure your build and deployment environment.
Visual Studio 2005 only supports one configuration per Web project. During conversion, the Debug version is the default unless a custom configuration is found. In this case, a dialog will prompt you to choose the configuration to use in the converted Web project.
How to fix
As a workaround, use two or more Web configuration files corresponding to the configuration you want to support. Copy the version you want to use to Web.config during your operation.
For example, you could have a Web.config.release file for your release configuration and a Web.config.debug for your debug configuration. Copy the appropriate one to Web.config when debugging or deploying your Web project.
Note Web Deployment Projects, a technology preview feature available as a VSIP add-in for Visual Studio 2005, will provide true configuration support for Web projects. See the "Visual Studio 2005 Web Deployment Projects" white paper on for more detail.
Issue 21: Auto-generated members hide inherited members
If you use base classes to define control variables (Label1, for example) or event handlers for use in derived classes, then you might find that these members are hidden by auto-generated members for the page's partial class in Visual Studio 2005.
Visual Studio 2005 generates the page class and does not normally see the inherited members. So it would generate control variables or event handlers that would hide the inherited members.
How to fix
ASP.NET 2.0 provides a new attribute called
CodeFileBaseClass to handle this situation. The
@Page and
@Control directives now support this new attribute, which specifies the grandparent class for a page. The compiler uses this attribute during class generation to get a reference to the page's base type and generate code that does not hide inherited members in the base type
Issue 22: Ambiguous references and naming conflicts
The .NET Framework 2.0 adds a host of new namespaces and classes. Several of these are likely to cause clashes with ASP.NET 1.x applications. For example, the new personalization features introduce classes with the names of Profile, Membership, and MembershipUser. The Profile name, in particular, is fairly commonly used by developers who want to keep track of user information. Therefore if you have a Profile class in your application, and you try to use any of the new personalization features, you may encounter compiler warnings about ambiguous class references.
How to fix
Planning ahead for naming conflicts can be rather difficult. You will want to take a quick look through the new ASP.NET classes. If you see any names that might conflict with what you have in your application, you might consider using explicit naming. For example, use System.Web.Security.Membership instead of importing/using System.Web.Security and then using the Membership class.
Issue 23: Event handlers called multiple times
Because of the way that the conversion wizard merges code-behind files and aspx pages, you may encounter scenarios where an event gets called twice (page load, for example). This scenario occurs if you had event wireup code in your code-behind file that wasn't in the InitializeComponent method. The conversion wizard is only smart enough to remove duplicate wireups if they are in the InitalizeComponent method.
You may have a difficult time noticing this error because in most cases a second event firing will be harmless. However, if you do discover that an automatically called event is occurring multiple times, you should examine your converted code-behind file to see if the handler has been wired to the event twice. If so, you will have to manually remove the second wireup.
How to fix
You can avoid this issue entirely by scanning your existing code and making sure that all event wireups are contained in the InitialzeComponent function for the code-behind file or by setting AutoEventWireUp=False in your page.
Issue 24: Code-behind files moved to the App_Code folder
After the conversion wizard runs, you might find some of your code-behind files (*.aspx.cs or *.ascx.vb, for example) moved to the App_Code folder.
This usually indicates your content page has a malformed Codebehind directive and is not set correctly to the code-behind file. In other words, the conversion wizard wasn't able to determine that the code-behind file was actually tied to a specific aspx page.
The conversion wizard will automatically move all standalone class files (for example *.cs or *.vb) to the App_Code folder. If you have a malformed Codebehind directive, then the code-behind file will be considered a standalone class file and moved to the App_Code folder.
Note Web service files (e.g. *.asmx and *.asmx.cs) are different from normal content pages and their code-behind page. The code-behind for a Web service file is meant to go in the
App_Codefolder, so if you find one there it is not an error.
It is also possible that a Web form or user control was migrated twice—see the Multiple Web Projects Referencing the Same Files issue for ways to correct this issue.
How to fix
Prior to conversion, you can avoid this issue by making sure your Codebehind directive is correctly set in all your content pages.
After conversion, move the code-behind file to the same folder as the associated content page and correct the content page's Codefile (renamed in ASP.NET 2.0) directive to point to that code-behind file.
Issue 25: Projects referencing a Web project
In Visual Studio .Net 2003, it was valid to have a project reference a Web project's assembly. This will not work in Visual Studio 2005 since Web projects create multiple assemblies and the file name of the assembly can change from build to build.
How to fix
Prior to conversion, you will need to move common, shared code to a separate class library, and reference that library instead.
If you are using shared user controls, then you will have to create base classes that exist in the class library to be referenced by the other projects, and have the Web project's user controls derive from them.
For more information on this design pattern, refer to Step-By-Step Guide to Converting Web Projects to Visual Studio .NET 2005.
Issue 26: Sharing the same code-behind file
It is possible to have several pages share the same code-behind class file. This may be by user design or a result of copying a Web form or user control and not updating the @Page directive properly.
This can confuse the ASP.NET 2.0 compiler, which expects each Web form or user control to contain a unique code-behind file. You may get build errors or invalid run-time behavior when running the converted code.
How to fix
Prior to conversion, change your code to use a unique code-behind file for each Web form and user control.
If you wish to share common elements between several Web forms or user controls, then move those common elements to a base class, and have the Web forms or user controls derive from that base class.
Issue 27: Partially converted solutions
In both Visual Studio .NET 2003 and Visual Studio 2005, it is possible to have a solution that contains both Web projects and client projects, for example a C# or Visual Basic class library or Windows application
If you are using an express product, such as Visual Web Developer or Visual Basic Express Edition, you will be able only to convert projects in the solution that relate to the express product. For example, if you are using Visual Web Developer and open a solution with a Web project and a Visual Basic class library project, only the Web project will be converted, leaving you with a partially converted solution.
How to fix
You should use either the Standard, Professional, or Team System editions of Visual Studio 2005 to convert solutions containing multiple, mixed project types.
If that is not possible (you have only an Express edition), then you should create a new solution containing only the supported project type.
Issue 28: No command-line migration for C#
Command-line migration for Visual Basic Web projects is possible by using the command "
devenv.exe <PorSfile> /upgrade", where
<PorSfile> is a Visual Basic Web project file or a solution containing such a file.
Unfortunately, a C# bug was found late in the product cycle where an invalid C# code model is given to the Web project conversion wizard. As a result, errors are introduced to the Web application during a command line C# Web project conversion.
Since command-line conversion is considered a secondary feature as compared to converting via the user interface, and a C# fix for this bug threatened the release date for Visual Studio 2005, the bug was not fixed in the final release.
How to fix
There is no fix but the workaround is to use the user interface to open and convert the Web project or the solution containing the Web project.
Issue 29: Batch = True or False
As mentioned in Web Project Changes, the ASP.NET compiler's default behavior is to compile all Web forms and user controls in a folder into its own assembly. The Batch attribute can be set in Web.config to change the compiler's behavior.
For example, the following code segment will direct the compiler to compile a given Web form or user control into its own assembly (creating more assemblies than normal).
There are several issues you should be aware of when using this attribute.
- Performance—when Batch=false, the ASP.NET compiler will create an assembly for every Web form and user control in your Web application. It also causes the compiler to do a full compile, not an incremental compile, in Visual Studio 2005 when you build using F5. The net result is your Web application may run slower when deployed, and your build times will increase significantly in Visual Studio 2005.
- Assembly References—the Batch attribute may hide potential broken assembly references (when Batch=True), or even introduce a Circular Reference (when Batch=False).
How to fix
After you run the conversion wizard on a Web project, you should temporarily set Batch=False while you finish the manual steps to convert your Web project. This will make sure you catch any assembly reference issues.
After your Web project is fully converted, you should set Batch=True for normal development and deployment.
Note When you deploy your Web application, you should do one final check with Batch=False to make sure no assembly reference issues were introduced during development of your Web application. After you have done this check, be sure to turn Batch=True again.
Summary
Converting an application from Visual Studio .NET 2003 to Visual Studio 2005 is generally a smooth process. However, you have to make sure that your development and deployment environments are properly configured. You also have to evaluate your conversion report to resolve any potential issues not handled by the conversion wizard. You may also wish to review your application ahead of time and plan ahead to avoid known issues with the conversion.
Future Releases
Given the importance of Web project migration, we are always looking for feedback to improve this experience. We plan to update the migration wizard as needed even after the official release of Visual Studio 2005 in November 2005. If you have questions or feedback, then please post them to the "Visual Studio .NET 2003 to Visual Studio 2005 Migration" forum located at.
Appendix A: Actual Error Messages | https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/aa479312(v=msdn.10) | 2019-03-18T15:33:15 | CC-MAIN-2019-13 | 1552912201455.20 | [array(['images/aa479312.upgradingaspnet01%28en-us%2cmsdn.10%29.gif',
'Aa479312.upgradingaspnet01(en-us,MSDN.10).gif Aa479312.upgradingaspnet01(en-us,MSDN.10).gif'],
dtype=object)
array(['images/aa479312.upgradingaspnet02%28en-us%2cmsdn.10%29.gif',
'Aa479312.upgradingaspnet02(en-us,MSDN.10).gif Aa479312.upgradingaspnet02(en-us,MSDN.10).gif'],
dtype=object)
array(['images/aa479312.upgradingaspnet03%28en-us%2cmsdn.10%29.gif',
'Aa479312.upgradingaspnet03(en-us,MSDN.10).gif Aa479312.upgradingaspnet03(en-us,MSDN.10).gif'],
dtype=object)
array(['images/aa479312.upgradingaspnet04%28en-us%2cmsdn.10%29.gif',
'Aa479312.upgradingaspnet04(en-us,MSDN.10).gif Aa479312.upgradingaspnet04(en-us,MSDN.10).gif'],
dtype=object)
array(['images/aa479312.circular-ref1%28en-us%2cmsdn.10%29.gif',
'Aa479312.circular-ref1(en-us,MSDN.10).gif Aa479312.circular-ref1(en-us,MSDN.10).gif'],
dtype=object)
array(['images/aa479312.circular-ref2%28en-us%2cmsdn.10%29.gif',
'Aa479312.circular-ref2(en-us,MSDN.10).gif Aa479312.circular-ref2(en-us,MSDN.10).gif'],
dtype=object)
array(['images/aa479312.circular-ref3%28en-us%2cmsdn.10%29.gif',
'Aa479312.circular-ref3(en-us,MSDN.10).gif Aa479312.circular-ref3(en-us,MSDN.10).gif'],
dtype=object)
array(['images/aa479312.issue-orphaned-resx%28en-us%2cmsdn.10%29.gif',
'Aa479312.issue-orphaned-resx(en-us,MSDN.10).gif Aa479312.issue-orphaned-resx(en-us,MSDN.10).gif'],
dtype=object)
array(['images/aa479312.oninit%28en-us%2cmsdn.10%29.gif',
'Aa479312.oninit(en-us,MSDN.10).gif Aa479312.oninit(en-us,MSDN.10).gif'],
dtype=object)
array(['images/aa479312.xhtmlval%28en-us%2cmsdn.10%29.gif',
'Aa479312.xhtmlval(en-us,MSDN.10).gif Aa479312.xhtmlval(en-us,MSDN.10).gif'],
dtype=object) ] | docs.microsoft.com |
Welcome to the VMware Horizon® HTML Access™ documentation page. The documents on this page are designed to help you install, configure, and use HTML Access.
To find the release notes, user guide, and installation and setup guide for your HTML Access HTML Access.
Finding Archived Documentation
To read the documentation for earlier HTML Access versions, go to the Documentation Archive page at VMware Horizon View Clients Documentation Archive. | https://docs.vmware.com/en/VMware-Horizon-HTML-Access/index.html | 2019-03-18T16:11:19 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.vmware.com |
The State of ODF Interoperability Version 1.0
Committee Specification 01
10 December 2010
Specification URIs:
This Version:
(Authoritative)
Previous Version:
(Authoritative)
Latest Version:
(Authoritative)
Technical Committee:
OASIS Open Document Format Interoperability and Conformance (OIC) TC
Chair(s):
Bart Hanssens, Fedict
Editor(s):
Robert Weir, IBM
Related Work:
This specification is related to:
OASIS Open Document Format
Abstract:
This report discusses interoperability with respect to the OASIS OpenDocument Format (ODF) and notes specific areas where implementors might focus in order to improve interoperability among ODF-supporting applications.
Status:
This document was last revised or approved by the ODF Interoperability and Conformance”, “Open Document Format”, “OIC” Normative References 5
1.2 Non-Normative References 5
2 Conformance and Interoperability 6
2.1 ODF Conformance 6
2.2 ODF Interoperability 7
2.3 Conformance and Interoperability 7
3 An Interoperability Model 9
3.1 Sources of Interoperability Problems 9
3.2 Round-trip Interoperability 10
4 Problems and Solutions 11
4.1 Priority Areas for Improvement 11
4.2 Approaches to Improving ODF Interoperability 11
5 Conclusion 14
6 Conformance 15
Appendix A.Acknowledgments 16
Appendix B.Revision History 17 contains no normative references.
This report contains no non-normative references. However, footnotes containing hyperlinks to relevant online resources are provided.
Conformance is the relationship between a product1 and a standard. A standard defines provisions that constrain the allowable attributes and behaviors of a conforming product. Some provisions define mandatory requirements, meaning requirements that all conforming products must satisfy, while other provisions define optional requirements, meaning that where applicable they must be satisfied. Conformance exists when the product meets all of the mandatory requirements defined by the standard, as well as those applicable optional requirements. For example, validity with respect to the ODF schema is a mandatory requirement that all conformant ODF documents must meet. However, support of the “fo:text-align” attribute is not required. Nevertheless, there are optional requirements that constrain how this feature must be implemented by products that do support that feature, namely that its value is restricted to “start”, “end”, “left”, “right”, “center” or “justify”.
A standard may define requirements for one or more conformance targets in one or more classes, in which case a product may be said to conform to a particular conformance target and class. For example, ODF 1.1 defines requirements for an ODF Document target as well as an ODF Consumer target. ODF 1.2, in its current draft, defines requirements for an additional conformance class for the ODF Document target, namely the Extended ODF Document class.
There have been several attempts to assess conformance of ODF Documents. The ODF Fellowship hosts an online validator2 that allows the user to upload a document which is then validated according to the ODF schema. Oracle hosts a similar tool at the OpenOffice.org web site3. The ODFToolkit Union has a ODF Conformance project4 to further develop this code. Office-a-tron5 is another validator. The ODF TC has posted instructions for how to validate ODF document instances using a variety of tools6.
These tools primarily look at document validity, an XML concept, which is a necessary but insufficient condition for document conformance. These tools are limited to a static examination of document contents and are not capable of testing conformance of ODF applications. However, we believe that XML validation, combined with additional static analysis, is a promising approach to automate conformance testing of ODF documents.
Based on the results from existing online validators, which only test a subset of conformance requirements, the overall level of document conformance appears to be good. However, there are some recurring issues, including:
Style names beginning with a number, making them an invalid NCName. Implementations with this issue could trivially transfer their style names into valid ones by pre-pending an underscore character ('_') to the style name.
Documents missing a mimetype file
MathML documents with non-standard doctype declarations.
Since automated validation testing of documents is a cheap and effective way to find errors in ODF documents, its widespread use is especially encouraged..
Since the capabilities of ODF applications extend beyond the common desktop editors, and include other product categories such as web-based editors, mobile device editors, document convertors, content repositories, search engines, and other document-aware applications, interoperability will mean different things to users of these different applications. However, to one degree or another, interoperability consists of meeting user expectations regarding one or more of the following qualities when transferring documents:.
In any given user task, one or more of these qualities may be of overriding concern.
The relationship between conformance and interoperability is subtle and often confused. On one hand, it is possible to have good interoperability, even with non-conformant documents. Take HTML, for example. A study of 2.5 million web pages found that only 0.7% of them conformed to the HTML standard1. The other 99.3% of HTML pages were not conformant. So conformance is clearly not a prerequisite to interoperability. On the other hand, web browsers require significant additional complexity to handle the errors in non-conformant documents. This complexity comes at a tangible cost, in development resources required to write a robust (tolerant of non-conformance) web browser, and well as less tangible liabilities, such as greater code complexity which typically results in slower performance and decreased reliability. So although conformance is not required for interoperability, we observe that interoperability is most efficiently achieved in the presence of conforming applications and documents. However, this is an ideal alignment that is rarely achieved, since standards have defects, applications have bugs and users make mistakes. So, in practice, achieving a satisfactory degree of interoperability almost always requires additional efforts beyond mere conformance.
A model for describing document interoperability is given in Figure 1.
Any document exchange scenario will involve these steps:
Authoring: The human author of the document express his intentions by instructing a software application, typically via a combination of keyboard and GUI commands. Note that computer-generated documents also have an authoring step, in which case the human intentions are expressed by the author of the document generation software, via his code instructions.
Encoding: The software application, when directed to save the document, executes program logic to encode into ODF format a document that corresponds to the instructions given to it by the user.
Storage: The ODF document then represents a transferable data store that can be given to another user.
Decoding: The receiving user then uses their software application to decode the ODF document, and render it in fashion suitable for User B to interact with it.
Interpretation: User B then perceives the document in their software application and interprets the original author's intentions.
Interoperability defects can be introduced at any step in this process. For example:
An inexperienced user might instruct the authoring software application incorrectly, so that his instructions do not match his intentions. For example, a user may try to center text by padding a line with extra spaces rather than using the text align feature of their word processor. Or he might try to express a table header by merely applying the bold attribute to the text rather than defining it as a table header.
The software application that writes the document may have defects that cause it to incorrectly encode the user's instructions.
Due to ambiguities in the ODF specification, the document may be subject to different interpretations by Application A and Application B.
The software application that reads the document may have defects that cause it to incorrectly decode and render the document.
The user reading the document may incorrectly perceives its contents.
Round-trip interoperability refers to scenarios where a document author collaborates with one or more other users, such that the document will be edited, not merely viewed, by those other users, and where the revised document is eventually returned to the original author.
From the perspective of the interoperability model, this introduces nothing new. A round-trip scenario is simply a iteration of the above steps, with the same opportunities for errors being introduced, e.g., A→B→A is the same as A→B, followed by B→A. However, since errors introduced at any step in the process tend to accumulate, a complex round-trip scenario will tend to suffer more in the presence of any interoperability defects.
Also, since the original author is the person who most knows the author's intentions, he will also be the most sensitive to the slightest alterations in content or formatting introduced into his document. So minor differences that might not have been noticed when read by a second user will be more obvious to the original author. This sensitivity to even the smallest differences will tend to cause the perception of round-trip interoperability to be lower.
Based on initial testing in the OASIS ODF Interoperability and Conformance TC, as well as scenario-based testing at the ODF Plugfest, several feature areas have been identifies as needing improvement in one or more ODF editors:
Some ODF features are not widely-implemented, such as document encryption, change tracking and form controls.
There are implementation bugs and inconsistencies that reduce interoperability of embedded charts, specifically:
When writing out embedded charts, some implementations write out a link to the embedded chart asxlink:href=“Object 1/”. Other implementations write it out as: xlink:href=”./Object 1”.
Some optional attributes, such as table:cell-range-address and chart:values-cell-range-address are not written by all implementations, although some implementations will not correctly render a chart without these values being set. More details are on the ODF Plugfest wiki.1
Implementations vary in their ability to to parse spreadsheet formulas written by other implementations. Adoption of OpenFormula (part of ODF 1.2) as the interchange format for spreadsheet formula expressions is encouraged. When an implementation does not support a formula syntax, typically the values are shown when office:value is present, but formulas themselves are removed. A simple example is a SUM function that calculates the sum of 2 cells. When opening a spreadsheet, a user may not notice any difference because the values are still there. But when changing the values of these 2 cells, the value of the cell originally containing the SUM function will not be updated accordingly because the formula has been removed.
Although simple lists appear to work well across the tested implementations, documents using advanced list structures (nested lists and list continuations) fared less well. When a document containing change tracking information is loaded in an implementation that does not support change tracking, the <text:tracked-changes> element and its content can be removed. Also, the content of deleted paragraphs, stored in <text:p> elements inside <text:tracked-changes>, can become visible at the top of the first page of the document (because <text:tracked-changes> is stored before all other <text:p>'s), which can be very confusing for the user
Improving interoperability generally follows three steps:
Define the expected behavior
Identify defects in implementations
Fix the defects
The primary definition of expected behavior for the rendering of ODF documents is the published ODF standard. However, there are other sources of expected behavior, and meeting these expectations, where they do not conflict with the ODF standard, are, from the user's perspective, very important as well. For example:
Approved Errata1 to the ODF standard
Other publications of the OASIS ODF TC, such as the Accessibility Guidelines2
Draft versions of ODF, where they clarify the expected behavior
Interoperability Guidelines, as may be published by the ODF Interoperability and Conformance TC.
Common sense and convention, which often provides a shared set of expectations between the user and the application vendor. For example, the ODF standard does not define the exact colorimetric value of the color “red”, though undoubtedly users would be surprised to see their word processor render it as yellow.
There are several tools, processes and techniques that have been suggested for identifying interoperability defects, including:
Automated static testing of ODF documents, which could range in complexity from simple XML validation to more involved testing of conformance and portability.
“Atomic” conformance tests, which test the interoperability of individual features of the ODF standard at the lowest level of granularity. The ODF Fellowship's OpenDocument Sample Documents3 is an early example of this approach. The approach used by Shah and Kesan4 falls into this category as well.
Scenario-based testing, which combine multiple ODF features into documents which reflect typical real-world uses.
“Acid” tests aim to give the end-user a quick view how well their application supports the standard. This approach was popularized in the Web Standards Project5 to improve browser interoperability. Sam Johnston has created a prototype ACID test for spreadsheet formulas6.
Plugfests, face-to-face and virtual This approach has proven useful in interactive testing among vendors in the ODF Plugfests7.
OfficeShots.org8 allows an end-user to upload a document and compare how it will render in different applications. This can allow the user to see potential interoperability problems before they distribute their document.
User-submitted bug reports, sent to their application vendor, can help the vendor prioritize areas which are in need of improvement.
Public comments, submitted to the OASIS ODF TC9, which report ambiguities in the specification which can impact interoperability. Such comments can be resolved in errata or revisions of the standard.
As enumerated above, there are a variety of approaches to identifying interoperability defects. At this point it is not clear which of these techniques will, over the long term, be the most effective. Some require more substantial up-front investment in automation development, but this investment also permits a degree of test automation. Some approaches require substantial efforts in analysis of the ODF standard and construction of individual test cases. But once these test cases are designed, they can be executed many times at low cost. There are also techniques that require far less advance preparation and are suitable for formal and informal testing at face-to-face plugfests.
These options are familiar in the field of software quality assurance (SQA), and from that field we are taught to consider several factors:
What is the “defect yield” of each testing approach? The defect yield is the number of defects found per unit of testing time, and is a measure of the efficiency of any given approach.
What is the test coverage that can be achieved by any given approach? Test coverage would indicate what fraction of the features of ODF are tested by that approach.
Once an initial investment is made, how expensive will it be to update test materials as new ODF versions and new application versions are released?
How well does the testing approach prioritize the testing effort, so that the defects that most impact interoperability for the most number of users are found quickly?
The goal should be to identify an approach that finds the most number of defects with the least effort, with an emphasis on those defects that have the greatest real-world impact. It is well-known from SQA research that that the optimal approach often involves a blend of different complementary techniques.
Of course, interoperability will not improve merely by testing. The results of testing must feed forward into the vendors' development plans, so these defects are fixed and the fixes make it into the hands of end-users. Nothing improves until vendors change their code. Merely talking about the problem doesn't make it go away.
This State of ODF Interoperability report is a snapshot in time based on current ODF implementations, current interoperability activities and the current thoughts of the OASIS ODF Interoperability and Conformance TC. We intend to periodically reissue this report as conditions evolve, to note progress and remaining challenges.
This report is informative and contains no normative clauses or references.
The following individuals have participated in the creation of this specification and are gratefully acknowledged
Participants:
Jeremy Allison, Google Inc.
Mathias Bauer, Oracle Corporation
Michael Brauer, Oracle Corporation
Alan Clark, Novell
Xiaohong Dong, Beijing Redflag Chinese 2000 Software Co., Ltd.
Pierre Ducroquet
Bernd Eilers, Oracle Corporation
Andreas J. Guelzow
Dennis E. Hamilton
Bart Hanssens, Fedict
Donald Harbison, IBM
Shuran Hua, Beijing Redflag Chinese 2000 Software Co., Ltd.
Mingfei Jia, IBM
Peter Junge, Beijing Redflag Chinese 2000 Software Co., Ltd.
Doug Mahugh, Microsoft Corporation
Eric Patterson, Microsoft Corporation
Tom Rabon, Red Hat
Daniel Rentz, Oracle Corporation
Andrew Rist , Oracle Corporation
Svante Schubert, Oracle Corporation
Charles Schulz, Ars Aperta
Jerry Smith, US Department of Defense (DoD)
Patrick Strader, Microsoft Corporation
Louis Suarez-Potts, Oracle Corporation
Alex Wang, Beijing Sursen International Information Technology Co., Ltd.
Robert Weir, IBM
Thorsten Zachmann
CD 05 Rev 1 (2010-09-08) Added revision history appendix, non-normative references
CD 05 (2010-07-12) Updated based on public review comments
CD 04 (2010-02-10) Corrected error in description of chart URI
CD 03 (2009-11-27) Fixed numbered lists.
CD 02 (2010-11-02) Explain OIC TC in intro, more detail in “Priority Areas for Improvement”, spelling corrections
CD 01 (2009-10-20) Initial version
1'Product' here is used in its neutral sense, as “something produced”. It is not intended to connote a commercial use.
2 | http://docs.oasis-open.org/oic/StateOfInterop/v1.0/StateOfInterop.html | 2019-03-18T16:01:54 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.oasis-open.org |
: In Setup, search for Session Settings. Under Clickjack Protection, select Enable clickjack protection for customer Visualforce pages either with headers disabled or with standard headers. Both these options allow framing on whitelisted external domains and provide clickjack protection.
Then under Whitelisted Domains for Visualforce Inline Frames, add the trusted external domains where you allow framing. | http://releasenotes.docs.salesforce.com/en-us/spring19/release-notes/rn_vf_external_iframe.htm | 2019-03-18T16:02:16 | CC-MAIN-2019-13 | 1552912201455.20 | [] | releasenotes.docs.salesforce.com |
.
Hortonworks Docs » Data Platform 2.6.2 » Data Access
Data Access
- 1. What's New in Data Access for HDP 2.6
- 2. Data Warehousing with Apache Hive
- Content Roadmap
- Features Overview
- Temporary Tables
- Optimized Row Columnar (ORC) Format
- SQL Optimization
- Transactions in Hive
- SQL Compliance
- Streaming Data Ingestion
- Query Vectorization
- Beeline versus Hive CLI
- Hive JDBC and ODBC Drivers
- Moving Data into Apache Hive
- Configuring HiveServer2
- Securing Apache Hive
- Troubleshooting
- 3. Enabling Efficient Execution with Apache Pig and Apache Tez
- 4. Managing Metadata Services with Apache HCatalog
- | https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_data-access/content/sql-compliance.html | 2018-01-16T09:42:08 | CC-MAIN-2018-05 | 1516084886397.2 | [array(['../common/images/loading.gif', 'loading table of contents...'],
dtype=object) ] | docs.hortonworks.com |
IT Service Management Deliver IT services and support to business users. Asset ManagementAsset management integrates the physical, technological, contractual, and financial aspects of information technology assets.Change ManagementThe ServiceNow® Change Management application provides a systematic approach to control the life cycle of all changes, facilitating beneficial changes to be made with minimum disruption to IT services.Configuration ManagementWith the ServiceNow® Configuration Management application, build logical representations of assets, services, and the relationships between them that comprise the infrastructure of your organization. Details about these components are stored in the configuration management database (CMDB) which you can use to monitor the infrastructure, helping ensure integrity, stability, and continuous service operation.Contract ManagementManage and track contracts with the Contract Management application in the ServiceNow platform.Expense LineExpense lines enable you to track costs and represent a point-in-time expense incurred. Expense lines can be created manually or generated by the scheduled processing of recurring costs.Incident Alert ManagementIncident Alert Management enables organizations to create and manage communications related to major business issues or incidents.Incident ManagementThe goal of Incident Management is to restore normal service operation as quickly as possible following an incident, while minimizing impact to business operations and ensuring quality is maintained.Password Reset and Password Change applicationsThe Password Reset application allows end users to use a self-service process to reset their own passwords on the local ServiceNow instance. Alternatively, your organization can implement a process that requires service-desk personnel to reset passwords for end users.Problem ManagementProblem Management helps to identify the cause of an error in the IT infrastructure that is usually reported as occurrences of related incidents.ProcurementProcurement managers can use the. Service CatalogThe Service Catalog application helps you create service catalogs that provide your customers with self-service opportunities. You can customize portals where your customers can request catalog items, such as service and product offerings. You can also define catalog items and standardize request fulfillment to ensure the accuracy and availability of the items provided within the catalogs.Service DeskThe ServiceNow platform includes a default homepage and a Service Desk application to provide a basic set of service desk functions.Service Level ManagementService Level Management (SLM) enables you to monitor and manage the quality of the services offered by your organization.Service Portfolio ManagementService Portfolio Management addresses three core business needs.Related TopicsKnowledge Management | https://docs.servicenow.com/bundle/geneva-it-service-management/page/product/it_service_management/reference/r_ITServiceManagement.html | 2018-01-16T09:46:38 | CC-MAIN-2018-05 | 1516084886397.2 | [] | docs.servicenow.com |
Software Asset Management plugin ServiceNow® Software Asset Management plugin is a feature provided with the Asset Management application. A strong processTo get started with Software Asset Management plugin, you need to identify and discover software owned, create software models, create license records, and configure software counters.Request Software Asset Management pluginThe Software Asset Management (com.snc.software_asset_management) plugin. | https://docs.servicenow.com/bundle/jakarta-it-service-management/page/product/asset-management/concept/c_SoftwareAssetManagement.html | 2018-01-16T09:46:43 | CC-MAIN-2018-05 | 1516084886397.2 | [] | docs.servicenow.com |
Creating and Managing Queries
You create queries on the cache server by obtaining a
QueryService method and manage them through the resulting
Query object. The
Region interface has several shortcut query methods.
The
newQuery method for the
Query interface binds a query string. By invoking the
execute method, the query is submitted to the cache server and returns
SelectResults, which is either a
ResultSet or a
StructSet.
The
QueryService method is the entry point to the query package. It is retrieved from the Cache instance through
Cache::getQueryService. If you are using the Pool API you must obtain the
QueryService from the pools and not from the cache.
Query
A
Query is obtained from a
QueryService method, which is obtained from the cache. The
Query interface provides methods for managing the compilation and execution of queries, and for retrieving an existing query string.
You must obtain a
Query object for each new query. The following example demonstrates the method used to obtain a new instance of
Query:
QueryPtr newQuery(const char * querystr);
Region Shortcut Query Methods
The
Region interface has several shortcut query methods. All take a
query predicate which is used in the
WHERE clause of a standard query. See WHERE Clause for more information. Each of the following examples also set the query response timeout to 10 seconds to allow sufficient time for the operation to succeed.
The
querymethod retrieves a collection of values satisfying the query predicate. This call retrieves active portfolios, which in the sample data are the portfolios with keys
111,
222, and
333:
SelectResultsPtr results = regionPtr->query("status 'active' ");
The
selectValuemethod retrieves one value object. In this call, you request the portfolio with
ID ABC-1:
SerializablePtr port = region->selectValue("ID='ABC-1'");
The
existsValuemethod returns a boolean indicating if any entry exists that satisfies the predicate. This call returns false because there is no entry with the indicated type:
bool entryExists = region->existsValue("'type' = 'QQQ' ");
For more information about these shortcut query methods, see the Region class description in the native client API documentation. | http://gemfire-native-90.docs.pivotal.io/native/remote-querying/95-remotequeryapi/2-create-manage-queries.html | 2018-01-16T09:44:53 | CC-MAIN-2018-05 | 1516084886397.2 | [] | gemfire-native-90.docs.pivotal.io |
Used by workers to tell the service that the ActivityTask identified by the taskToken completed successfully with a result (if provided). The result appears in the ActivityTaskCompleted event in the workflow history.
IMPORTANT:.CompletedResponse RespondActivityTaskCompleted( RespondActivityTaskCompletedRequest respondActivityTaskCompletedRequest )
- respondActivityTaskCompletedRequest (RespondActivityTaskCompletedRequest)
- Container for the necessary parameters to execute the RespondActivityTaskCompleted service method on AmazonSimpleWorkflow.
| https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/M_Amazon_SimpleWorkflow_AmazonSimpleWorkflowClient_RespondActivityTaskCompleted.htm | 2018-01-16T10:09:57 | CC-MAIN-2018-05 | 1516084886397.2 | [array(['../icons/collapse_all.gif', None], dtype=object)
array(['../icons/collapse_all.gif', None], dtype=object)
array(['../icons/collapse_all.gif', None], dtype=object)] | docs.aws.amazon.com |
View backup details
Supported inSync mobile app versions:3.5 or later
You can view backup details of the inSync mobile app, such as the status of the backup, the completion time of the backup, and the start time for the next scheduled backup.
Procedure
To view the backup details
- On the inSync mobile app sidebar, tap Settings.
- Tap Backup. | https://docs.druva.com/inSync_mobile_app/020_inSync_mobile_app_for_iOS/040_Monitor/050_View_backup_details | 2018-01-16T09:29:53 | CC-MAIN-2018-05 | 1516084886397.2 | [] | docs.druva.com |
Scopes
Local scopes
These scopes are not necessarily available between components in the same request -- in other words elements or variables in these scopes may be "out of scope" if set in one module and referred to in another.
Request scopes
These scopes persist through a single request, i.e. any code, in any module, can refer to these scopes during the life of the request:
Global scopes
These scopes persist between requests, i.e. a value can be set during one request then retrieved in a subsequent one:
What scope should I use in...?
Custom tags
Classic custom tags
The variables scope of a classic CFM-based custom tag is local to the custom tag only. It is safe to use that scope for any variables that you want to be available only within the tag itself.
CFC-based custom tags
The normal rules for CFCs apply.
Should I use scopes explicitly?
Different developers have opinions about whether it is best to explicitly write every scope, or to let Railo look for the variable in each scope, i.e.
myVariable
vs
variables.myVariable
Please see Using scopes explicitly in code for details.
Further information required
- CGI scope reference
- thisTag | http://docs.lucee.org/guides/developing-with-lucee-server/scope.html | 2018-01-16T09:46:14 | CC-MAIN-2018-05 | 1516084886397.2 | [] | docs.lucee.org |
.
Ignore Issues
You can have SonarQube ignore issues in files or in sections of file by file content, by file path pattern, or both. See the Patterns section for more details on the syntax.
<TODO> Include UI path to these settings
Ignore Issues on Files
Set the File Exclusion Patterns property to ignore all issues on files that contain a block of code matching a given regular expression.
Example: Ignore all issues on files containing
@javax.annotation.Generated:
Ignore Issues in Blocks<<
Ignore Issues on Multiple Criteria
Set the Multi-criteria Exclusion Patterns property to ignore all issues on specific components, coding rules and line ranges.
Examples:
- I want to ignore all kinds of issue on all files =>
**/*;*;*
- I want to ignore all issues on COBOL program bank/ZTR00021.cbl =>
bank/ZTR00021.cbl;*;*
- I want to ignore all issues on classes located in the Java package
com.foo=>
com/foo/*;*;*
- I want to ignore all issues against coding rule cpp.Union on files in directory object and its sub-directories =>
object/**/*;cpp.Union;*
- I want to ignore all issues on all files on several line ranges =>
*;*;[1-5,15-20]
<TODO> Update the screenshot
Include Issues on Multiple Criteria
This criteria allows you to define on which components you want to restrict the check for issues or for issues violating a specific set of coding rules.
Examples:
- I just want to check for issues on COBOL programs located in directories bank/creditcard and
bank/bankcard=> two criteria to define:
bank/creditcard/**/*;*;* and
bank/bankcard/**/*
- I just want to check the rule Magic Number on Beans object =>
**/*Bean.java;checkstyle:
com.puppycrawl.tools.checkstyle.checks.coding.MagicNumber path ".java" extension. For the above example, the fully qualified name is "org/sonar/api/utils/KeyValueFormat.java". For other languages, the path is displayed in the fully qualified format automatically.
Absolute Path
To define an absolute path, start the pattern with "file:" | http://docs.codehaus.org/pages/viewpage.action?pageId=232358534 | 2015-04-18T07:14:19 | CC-MAIN-2015-18 | 1429246633972.52 | [array(['/download/attachments/232358185/file-suffixes.png?version=2&modificationDate=1382976593212&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/232358185/file-exclusion-patterns.png?version=3&modificationDate=1379945749182&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/232358185/block-exclusion-patterns.png?version=2&modificationDate=1379949157553&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/232358185/logical_path.png?version=1&modificationDate=1380100603839&api=v2&effects=drop-shadow',
None], dtype=object) ] | docs.codehaus.org |
JController::createModel
From Joomla! Documentation
Revision as of 18:42, 18::createModel
Description
Method to load and return a model object.
Description:JController::createModel [Edit Descripton]
protected function createModel ( $name $prefix= '' $config=array )
- Returns mixed Model object on success; otherwise null failure.
- Defined on line 504 of libraries/joomla/application/component/controller.php
- Since
Replaces _createModel.
See also
JController::createModel source code on BitBucket
Class JController
Subpackage Application
- Other versions of JController::createModel
SeeAlso:JController::createModel [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=JController::createModel/11.1&oldid=75411 | 2015-04-18T08:22:03 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.joomla.org |
Upgrade Guide
Local Navigation
Previous version detected but no database available
This error message appears when you cannot start the setup application or the setup application stops responding.
Possible solution
Verify that the registry keys that identify the BlackBerry® Configuration Database exist in the Windows® registry.
- On the computer that you want to install or upgrade the BlackBerry® Enterprise Server Express on, on the Start menu, click Run.
- Type regedit.
- Click OK.
- In the left pane, navigate to HKEY_LOCAL_MACHINE\Software\Research In Motion\BlackBerry Enterprise Server\Database.
- If necessary, create case-sensitive strings that you name DatabaseName and DatabaseServerMachineName.
- Specify the name of the BlackBerry Configuration Database as the value for DatabaseName.
- Specify the FQDN name of the database server as the value for DatabaseServerMachineName.
- Restart the setup application.
Previous topic: Failed to write License Key to the Database
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/27981/Previous_version_detected_but_no_DB_available_404972_11.jsp | 2015-04-18T07:23:26 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.blackberry.com |
This plugin will automatically assign new issues raised in the current analysis to the SCM author responsible
for the violation.
If the author is not registered in SonarQube the issue.
The plugin can be installed via the Update Center.
Usage
Configure the plugin as described below and run the SonarQube analysis as normal.
NOTE: If you wish to avoid mass issue assignments on first-time analysis or when rule changes are introduced, disable the plugin during the initial analysis and re-enable it for subsequent analyses.
For Git users, the SCM author is an email address. The plugin can map this email address to a Sonar user,
provided the email address is the same for the SCM and SonarQube accounts..
The plugin is disabled by default. It can be enabled or disabled in either the global or project settings.
Configure this value to be a valid SonarQube login of a user to whom all issues will be assigned regardless of the SCM author. Useful to avoid issues being assigned and notifications being sent out to unsuspecting SonarQube users in testing scenarios.
Assign issue to the last committer of the file, rather than the author as determined by the SCM metrics.
Assign blameless issues to the last committer of the file. Blameless issues are issues that don't have an associated line number and therefore cannot be resolved to a particular commit. For example: squid:S00104 'Files should not have too many lines'
Extract the SonarQube username from the SCM username associated with an issue using a given regular expression.
Only assign issues introduced after this date. Use the format dd/MM/yyyy.
Only assign new issues raised in the current analysis. Set to false to assign all qualified unassigned issues.
Only assign issues with a severity equal to or greater than a configurable value.
Notifications can now be sent when an issue is assigned. In the top-right corner of the GUI, go to <username> -> My profile -> Overall notifications. Tick 'New issues assigned to me (batch)' to receive a single notification of all issues assigned to you during the latest analysis.
Notification content is also configurable. See the options on the plugin settings page. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=238911717 | 2015-04-18T07:14:55 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.codehaus.org |
Difference between revisions of "Hands-on adding a new article: Joomla! 1.5"
From Joomla! Documentation
Revision as of 13:19,
Publishing and editing permissions:
There are some restrictions on what different groups of people adding content are allowed to do. There are three group .
To get an Article published depends on the way the site is managed. Publishing is often done by the site Adminstrator. If you are an Author you are likely to need to ask for it to be published before you can do more work on it but Editors can continue to edit before it is published.
More about Permissions in Joomla! 1.5 here:- Permissions in a Joomla! Site
Before saving a new article for the first time, you need to enter some information in the Publishing section of the editor, including whether it is published and where it is to be located..
-. icon
If you are an Editor or a Publisher - you should now be able to find the article in the place you expect!
Continue to alter it or add to it using the Editor - an orange pencil
icon for a published Article or a blue pencil
for one that has not been published.
Next time you save it, you will not have to fill out the Publisher part of the editor again.
Where Next?
Depends on your role.
More editing -
Setting up a site -
Further information
--Lorna Scammell December 2010 | https://docs.joomla.org/index.php?title=Hands-on_adding_a_new_article:_Joomla!_1.5&diff=36489&oldid=36487 | 2015-04-18T07:59:22 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.joomla.org |
changes.mady.by.user S. Ali Tokmen
Saved on Nov 13, 2010
...
These Javadocs contain a less cluttered list of classes and packages compared to the Javadoc for Cargo Core APIs, but inner-links will not work. Typically, you won't be able of seeing the linkes links between a Container Implementation and the Container API it is based on. To access that kind of information, please rather prefer the Javadoc for Cargo Core APIs.
Powered by a free Atlassian Confluence Open Source Project License granted to Codehaus. Evaluate Confluence today. | http://docs.codehaus.org/pages/diffpages.action?pageId=228165358&originalId=186187840 | 2015-04-18T07:25:26 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.codehaus.org |
The Chinalaw List Home Page
(formerly the Chinese Law Net)
Information about the Chinalaw list
Subscribing, posting, and unsubscribing
Technical questions
Technical home page for subscribing, unsubscribing, and archives (click here if you cannot access this link)
Chinese Law Prof Blog
What's new
Latest jobs and internships
Latest grants, fellowships, and research opportunities
Last updated: April 6, 2015
.
. | http://docs.law.gwu.edu/facweb/dclarke/chinalaw/index.htm | 2015-04-18T07:11:34 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.law.gwu.edu |
Cross-correlate two 2-dimensional arrays.
Description:
Cross correlate in1 and in2 with output size determined by mode and boundary conditions determined by boundary and fillvalue.
Inputs:
in1 – a 2-dimensional array. in2 – a 2-dimensional array. mode – a flag indicating the size of the output
- ‘valid’ (0): The output consists only of those elements that
- do not rely on the zero-padding.
- ‘same’ (1): The output is the same size as the input centered
- with respect to the ‘full’ output.
- ‘full’ (2): The output is the full discrete linear convolution
- of the inputs. (Default)
- boundary – a flag indicating how to handle boundaries
- ‘fill’ : pad input arrays with fillvalue. (Default) ‘wrap’ : circular boundary conditions. ‘symm’ : symmetrical boundary conditions.
fillvalue – value to fill pad input arrays with (Default = 0)
Outputs: (out,)
- out – a 2-dimensional array containing a subset of the discrete linear
- cross-correlation of in1 with in2. | http://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.signal.correlate2d.html | 2015-04-18T07:21:02 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.scipy.org |
After you have set up the profiler, you are ready to profile your application.
To start a profiling session for a running application, click the Start icon in the toolbar.
The profiler will start logging application activity. The session selector on the left side of the profiler connection window allows you to switch between viewing events for separate sessions, and the entire profiler log.
The main grid view displays the actual event log corresponding to the selected session on the right.
The main grid view is represented by the Data Grid control, which is highly customizable. To simplify the profiling process, you can reorder columns, create summaries, use custom filters, etc. (If you are not familiar with the capabilities of the XtraGrid control, refer to the End-User Capabilities help topic from the XtraGrid documentation.) You can also customize the grid view via profiler toolbar commands. The Expand View/Collapse View commands expand/collapse all parameter lists in the grid view. These commands are available only when Message Log is selected in the session selector. The Short Text command enables a short text mode. In this mode, long SQL queries are truncated when displayed in the grid view. You can still see a complete SQL query in the accompanying details view. This view is displayed under the main grid view, and displays event parameters of the currently selected event.
When the profiler collects enough information on the activity of the running application, you will be able to identify the following issues.
Attempts to access a session from different threads
Base XPO classes, such as Session, XPCollection and XPObject are not thread safe. Thus, you should not use instances of these classes in threads that are different from those in which they have been created. The profiler captures thread information and displays thread identifiers in the Thread Id column of the main grid view. When a session is accessed from different threads, the profiler highlights this event in purple.
The "Cross-thread session access is detected" warnings are normal in stateful ASP.NET applications. The web server internally uses several threads to process requests to a single Session object. For details, see ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0.
Attempts to execute requests via inappropriate data layers
A typical example is an attempt to create a persistent object via the default constructor. In this instance, the created object will use the database connection specified via the static members of the XpoDefault class. By default, an Access database will be created for this object, which oftentimes is not what you would expect (see XPO Best Practices). When the profiler detects default constructor usage, a corresponding session event is highlighted in purple. You can also look at the ConnectionProvider parameter value of an SQL query to ensure that it uses the correct connection provider. The Practical usage section of the Profile it! blog post demonstrates the discovery of such an issue.
Incorrect usage of the query cache (DataCacheRoot / DataCacheNode)
XPO has a customizable data-level caching system (see Cached Data Store). This system caches queries along with their results when they are executed on the database server. The profiler allows you to test your application to see how your cache configuration affects an application, what queries are retrieved from the cache, and what queries are executed by the SQL server. With this information, you can fine-tune the cache system or find associated issues.
Inefficient implementations of methods manipulating database data
Your application can have data methods that are implemented in ways that are less than optimal. Generally, these can be methods that include complex data manipulations. This can be, for instance, a calculated property get accessor. Such methods might make hundreds of SQL queries when only a couple of them are required. There are no general solutions for diagnosing these issues, and the recommended approach is to profile the application execution to look for excessive SQL queries. The Profiling an XAF application with XPO Profiler webinar demonstrates a problematic method, its discovery and its correction.
When you have gathered enough information and wish to finish profiling, stop the profiling session by clicking the Stop icon in the toolbar.
To save the collected profiler logs and analyze them later, use the Save Log, Open Log and Export Log commands.
The profiler automatically saves grid layout customizations for each connection individually. To manage the configurations, invoke the Options menu that contains layout management commands.
Default Layout
Use this menu to manage the default layout for new connections. You can save the current layout as the default, load the default layout, or reset it.
Custom Layout
Use this menu to name the current layout and save it for future use, or load an existing custom layout.
Manage Layouts...
Use this menu to see all layouts saved by the profiler, as well as remove unnecessary layouts. | https://docs.devexpress.com/XpoProfiler/10660/profile-your-application | 2019-01-16T10:41:16 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.devexpress.com |
BindToStream Method
The BindToStream method of the IUrlAccessor interface binds the item being processed to a data stream and returns a pointer to that stream to the Filter Daemon.
Parameters
ppStream [out] Address of a pointer to an IStream that contains the contents of the item represented by the URL.
Return Value
For a list of error messages returned by Microsoft Office SharePoint Portal Server 2003 protocol handlers, see Protocol Handler Error Messages.
Remarks
All protocol handlers must implement the BindToFilter, BindToStream, or GetFileName method for the Filter Daemon to retrieve any useful information from that item. Protocol handlers may implement either the BindToFilter or BindToStream method, or both methods. For example, protocol handlers may use the BindToFilter method for metadata associated with items in the content source, and use the BindToStream method to retrieve the content of the items.
You only need to implement simple sequential access to a Stream object when using the BindToFilter method. Reading from a stream may generate a temporary file. If this impacts the performance of your protocol handler implementation, consider using the BindToFilter and GetFileName methods to access the item content.
Requirements
Platforms: Microsoft Windows Server 2003 | https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint2003/dd585594%28v%3Doffice.11%29 | 2019-01-16T10:04:05 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.microsoft.com |
. These instructions apply to any type of scriptworker that already has an instance running, hence configurations already exists.
1. initial setup¶
To begin with, you need to figure out the network basics, IP and DNS entries. With these at hand, you’ll be ready to spin out a new instance. To ease some of these operations, you need to use the
build-cloud-tools repository.
Firstly, you need to clone it and have a python environment read, to run some of its scripts:
$ git clone [email protected]:mozilla-releng/build-cloud-tools.git $ cd build-cloud-tools $ mkvirtualenv cloudtools $ python setup.py develop
The scripts that you’re about to run are using
boto library under the hood, for which you need to define the credentials to access the AWS console. These can be set in different ways, from environment variables to configurations files. More on this here.
It’s indicated that you create a dedicated set of AWS credentials in your AWS RelEng console, under your IAM user. For example, if you were to choose config files, you should have something like this:
$ cat ~/.aws/credentials [default] aws_access_key_id= ... aws_secret_access_key= ...
Next, make sure you’re connected to the VPN.
You can now run your script to find out available IPs. For example, if one wanted to ramp-up another
beetmoverworker instance within the
us-west-2 AWS region, the script to run would be:
(cloudtools) $ python cloudtools/scripts/free_ips.py -c configs/beetmoverworker -r us-west-2 -n1
With the IP returned above, file a bug like bug 1503550 to ask for DNS records to be assigned for your IP.
Once the bug is fixed, you should be able to see the DNS entries correspond to your IP address by using the following:
dig -x <ip_address>
2. creating an AWS instance¶
Furthermore, you need to spin up an AWS instance and for that there are two options.
2a - using automation script¶
In order to use the
build-cloud-tools scripts, you’ll first need access to some of the secrets.
Grab the
deploypass from private repo and dump it in a JSON file.
$ cat secrets.json { "deploy_password": "..." }
Grab the
aws-releng SSH key from private repo.
Make sure to split the DNS as anything
mozilla.(com|net|org) should be forwarding over the VPN. In order to do so, you should have something like this in Viscosity’s connection details:
This change is needed because your local machine is likely using other DNS servers by default, hence the sanity checks that automation does will bail for
PTR records not found. By splitting the DNS trafic via VPN, you ensure that
both
A and
PTR records can be found for your IP/hostname touple so that sanity check passes successfully.
Now it’s time to spin a new instance in AWS. For example, continuing the aforemnetioned example, if one wanted to ramp-up a
beetmoverworker instance within the
us-west-2 AWS region, the script to run would be:
$ host=<your hostname> $ python cloudtools/scripts/aws_create_instance.py -c configs/beetmoverworker -r us-west-2 -s aws-releng -k secrets.json --ssh-key aws-releng -i instance_data/us-west-2.instance_data_prod.json $host
This spins a new AWS instance, but puppetization may fail, hence you’ll have to run in manually. See instructions below.
2b - using AWS console¶
Go to the EC2 console, go to the appropriate region (usw2, use1).
- Instances -> Launch Instance -> My AMIs ->
centos-65-x86_64-hvm-base-2015-08-28-15-51-> Select
- t2-micro -> configure instance details
- change the subnet to the type of scriptworker’s configs from
build-cloud-tools; add the public IP; specify the DNS IP at the bottom -> Add storage
- General purpose SSD -> Tag Instance
- Tag with its name, e.g.
beetmoverworker-> Configure security group
- Select an existing group; e.g. choose the
beetmover-workergroup; review and launch
- make sure to choose a keypair you have access to, e.g. aws-releng or generate your own keypair. Puppet will overwrite this.
Alternatively, you can create a template based on an existing instance and then launch another instance based on that template, after you amend the
ami-id,
subnet,
security-groups and IP/DNS entries.
3. Puppetize the instance¶
Once the machine is up and running (can check its state in the AWS console), ssh into the instance as root, using the ssh keypair you specified above.
$ ssh -i aws-releng root@<fqdn>
To begin with, Install puppet:
sed -i -e 's/puppetagain.*\/data/releng-puppet2.srv.releng.mdc1.mozilla.com/g' /etc/yum-local.cfg.mdc1.mozilla.com sh puppetize.sh # else if we want to puppetize against an environment PUPPET_SERVER=releng-puppet2.srv.releng.mdc1.mozilla.com PUPPET_EXTRA_OPTIONS="--environment=USER" sh puppetize.sh # run puppet puppet agent --test # first round takes > 6000 seconds puppet agent --test # second run is faster, applies kernel update, requires reboot afterwards reboot
The instance should be now accessible via LDAP login via ssh keys.
Wipe of the secrets from your local machine
rm secrets.json rm aws-releng
monitoring¶
The new instance(s) should be added to the nagios configuration in IT’s puppet repo so that we’re notified of any problems. There’s more information about what is monitored in the new instance doc.
With the VPN running, clone the puppet repo
(see point 10 for the location).
Then modify
modules/nagios4/manifests/prod/releng/mdc1.pp by copying over the config of one of
the existing instances and updating the hostname. For example a new balrogworker would look like:
'balrogworker-42.srv.releng.use1.mozilla.com' => { contact_groups => 'build', hostgroups => [ 'balrog-scriptworkers' ] },
The checks are added by membership to that hostgroup.
Create a patch, attach it to Bugzilla, and ask someone from RelOps for review, before landing. After a 15-30 minute delay the checks should be live and you can look for them on the mdc1 nagios server. | https://scriptworker.readthedocs.io/en/latest/new_instance.html | 2019-01-16T10:54:58 | CC-MAIN-2019-04 | 1547583657151.48 | [array(['_static/dns.png?raw=true', 'DNS split in VPN'], dtype=object)] | scriptworker.readthedocs.io |
Introduction
Read this section to learn about the Parser endpoint and its features
One of the most important elements of being able to publish content is being able to see the end result before the rest of the world. The Parser API endpoints will make this possible. At the moment, it is still rather basic, but future updates will give the function a great deal of power.
Sections
Parsing a Text Object
10Centuries makes publishing various post types a snap. Anything that is published must go through a single text formatter. This feature will let people see what the end result will look like before publishing. This function requires a valid
authorization token in the HTTP header.
From your application, send a request to the following endpoint:
Required Data:
content: {post content}
Optional Data:
title: {Title for Blog Post, Draft, Page, or Podcast}
This information can be send URL-encoded in the POST body or as a JSON package.
Example:
curl -X POST -H "Authorization: {Authorization Token}" \ -H "Content-Type: application/x-www-form-urlencoded" -d "content=This is, perhaps, one of the most boring posts in the history of the universe." \ ""
If the post was successful, the API will respond with a JSON package:
{ "meta": { "code": 200 }, "data": { "title": "", "content": { "text": "This is, perhaps, one of the most boring posts in the history of the universe.", "html": "<p>This is, perhaps, one of the most boring posts in the history of the universe.</p>" } } }
If posting failed or there was some other problem, the API will respond with an error:
{ "meta": { "code": 404, "text": "No Text Received" }, "data": false }
If the
meta.text key is present in this result, you must show a modal dialog or equivalent containing this message. | https://docs.10centuries.org/parser | 2019-01-16T10:40:32 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.10centuries.org |
KYC Document Upload
KYC Document Upload API is used to remotely upload KYC Document from merchant to interface.Synchronous flow is used to upload file with supported extension ( pdf,xlsx,jpg,png).
KYC Documents :
In KYC Document Profile the request is send over HTTPS to the
/applicationServices/api/ApplicationManager/uploadKycDocuments
resource using POST method. The request must contain all the document field list on which we have to upload KYC documents.
In our API Specifications you can find a full list of parameters that can be sent in the request. Also the number of parameters varies depending on the acquiring bank selected. This can be seen in the sample request given below.
Sample Request
Sample Response
Hashing Rule
supports MD5 Cryptographic Hash for the authenticity of payment request send to the server.
KYC document upload API requires below details for authentication and authorization to be passed along with request.
- memberId <Merchant ID as shared by >
- random Id <Any random generated ID>
- partner's secureKey <Partner key shared by >
How to generate Checksum ?
Checksum has to be calculated with following combination and need to be send along with the authentication parameters in each server-to-server request:
<memberId>|<secureKey>|<random>
Sample Code
| https://docs.paymentz.com/merchant-boarding-api/kyc-upload.php | 2019-01-16T10:48:43 | CC-MAIN-2019-04 | 1547583657151.48 | [array(['../assets/img/ajax-loader.gif', None], dtype=object)
array(['../assets/img/ajax-loader.gif', None], dtype=object)
array(['../assets/img/ajax-loader.gif', None], dtype=object)] | docs.paymentz.com |
Use a predefined policy
You have just learnt to use the technique editor, which gives access to a broad range of building blocks for policies. To allow easy configuration of common system settings, Rudder also comes with a pre-defined set of techniques, directly usable after installation.
Go to the Configuration Management → Directives page. What is a directive? It is a simply a technique instance: a technique plus some parameters, making it an applicable piece of configuration.
We will now configure the SSH service on our nodes using the dedicated pre-built technique.
Let’s use the filter in the directive tree to find it easily:
Now click on "SSH server (OpenSSH)", the right part of the page will display the technique details, and the list of available versions of the technique. We will use the latest one, by clicking on "Create with latest version".
This will make a new form appear, containing the configuration of the directive itself. As we said before, a technique is a parametrizable policy. A directive is a technique instance, i.e. a technique plus a set of parameters.
Defining a directive consist of choosing a technique and providing its parameters.
First, let’s give it a description. When using Rudder, keep in mind that filling all these documentation filed, though it may seem tedious, is very important for future reference, or collaboration with other. Is is for example useful to include reference to external information (ticketing system, etc.), to allow keeping a track of all configuration changes.
We now need to decide ho we want to configure our SSH server. Let’s disable password authentication (it’s always a good idea anyway):
And go to the bottom of the page, click on "Global configuration for all nodes" (to apply it everywhere, we will see how it works later).
And click on save. It will open a window asking for confirmation and an audit message, that is used for traceability. You can leave it empty for now, and confirm.
Why a confirmation window here? Because the change we are saving will change our machines state. Once you click on save, the Rudder server will start updating configuration policies for nodes it is applied to (we will see later which ones exactly).
Let’s create a second directive, based on the technique we have just created (our demo user). Use the filter to find it in the directive tree, still on the Directives page.
This time, we will change one of the general parameters by overriding the policy mode to Audit mode.
Then apply it to the "Global configuration for all nodes" and save.
You have just applied your first configuration policy. If you wait just a bit (at most 5 minutes), our expected state will be applied on our machine (actually only our Rudder server itself for now).
But we will have a closer look at what happens on the machine in the next section. | https://docs.rudder.io/get-started/current/configuration-policies/directive.html | 2019-01-16T09:46:44 | CC-MAIN-2019-04 | 1547583657151.48 | [array(['../_images/ssh.png', 'SSH technique'], dtype=object)
array(['../_images/ssh-password.png', 'SSH technique'], dtype=object)
array(['../_images/rule.png', 'SSH technique'], dtype=object)
array(['../_images/audit.png', 'Audit mode'], dtype=object)] | docs.rudder.io |
PyXWF currently requires a dedicated webserver with WSGI support. Many widely-used webservers (at least Apache and lighttpd) support WSGI using a module.
In addition to the python code, you’ll have a repository with content, which is where you put all the data which belongs to the site. We call this the data directory, while the directory where PyXWF resides is the PyXWF directory.
All those files referenced in this section are available in an example form in the PyXWF repository, in the subdirectory misc/example. You may use these as a reference to build your own customized website.
PyXWF requires at least one file, the so called sitemap.xml (although you can choose a different name). This is an XML file where the whole configuration for PyXWF goes. It usually resides in the root of the data directory.
This is how a bare sitemap.xml looks (it won’t work):
<?xml version="1.0" encoding="utf-8" ?> <site xmlns="" xmlns: <meta> <title>My fancy PyXWF site</title> </meta> <plugins> <p>PyXWF.Nodes.Directory</p> </plugins> <tweaks /> <dir:tree </dir:tree> <crumbs /> </site>
We’ll use this as a basement for our walk through the configuration process. Before we go on, I’ll leave a few words on how we at PyXWF think about XML namespaces, as what we’re doing here may seem a bit weird to you if you know XML (if you don’t, you probably should read up on XML anyways, cause most of PyXWF happens in XML and Python).
PyXWF is really modular, as you might guess from the snippet above, even the most basic elements like a directory in a site structure are plugins. To keep the plugins separated from each other in the sitemap XML, each plugin has to use its own XML namespace.
This brings you to a dilemma when it comes to find the root of the tree in the sitemap XML: because you _cannot_ know its namespace. Someone could decide to put a Transform node at the root or just a single page or a remote redirect.
So we have to scan through the whole <site /> node and find a child with the XML local-name tree. This is the only place in PyXWF itself where the local-name of a node matters.
The @template attribute on the <dir:tree /> node by the way states the default template used to render web pages. We’ll talk about templates later, namely when we create the file referenced there.
As mentioned previously, this sitemap will not work in PyXWF. You’ll recieve an error message which tells you that the dir:tree node needs an index node.
So let’s add one:
<dir:tree <page:node </dir:tree>
This won’t work on it’s own, we have to add two other things. First, the page namespace prefix needs to be resolved. We do that by adding the declaration to the <site /> node:
<site xmlns="" xmlns:
And we also need to load the plugin by adding another <p /> tag:
<plugins> <p>PyXWF.Nodes.Directory</p> <p>PyXWF.Nodes.Page</p> </plugins>
Now let’s add source file for the home page. As you might guess, the @src attribute of a <page:node /> references the file in which the node looks for the source of the document to display. All paths are relative to the data directory. So we create a home.xml file, which has to look like this to satisfy the application/x-pywebxml content type set in @type:
<?xml version="1.0" encoding="utf-8" ?> <page xmlns="" xmlns: <meta> <title>Home page</title> </meta> <body xmlns=""> <header> <!-- note that hX tags automatically get transformed to hX+1 tags according to the environment where they're used. --> <h1>Welcome to my website!</h1> </header> <p>I just set up PyXWF and want to play around.</p> </body> </page>
The root element of a PyWebXML document must be a <py:page /> node in the namespace given above. We usually choose the py: prefix to reference this namespace (for more elements and attributes which can be used in that namespace have a look at the respective documentation). For more information about PyWebXML documents see <py-namespace>. The only thing you need to know now is that you can use arbitary XHTML (and anything you can use in XHTML) inside the <h:body /> element. It will be displayed on page, a correct template presumed.
Templates in PyXWF are XSL transformations. If you don’t know anything about these, you’re probably lost. We cannot help you there, you maybe should get some resources on these.
I won’t paste a whole default template here. Instead, i’ll describe in short what the outermost template must do to create a proper website.
As mentioned before, PyXWF connects to the Web using WSGI. Here are the neccessary configuration steps to get it to work.
You may want to have a look at examples/start/pyxwf.py from the PyXWF repository to see how a WSGI script might look. Most important is to set up the path to the data directory properly, and make sure that PyXWF is in your python path.
We’ll go through it here anyways. It’s basically a normal python script, which is set up for use with WSGI (see the PEP-3333 for more info about WSGI itself). A very simplistic approach might look like this and we’ll call this file pyxwf.py:
#!/usr/bin/python2 # encoding=utf-8 from __future__ import unicode_literals, print_function import sys import os import logging # you can configure logging here as you wish. This is the recommended # configuration for testing (disable DEBUG-logging on the cache, it's rather # verbose and not particularily helpful at the start) logging.basicConfig(level=logging.DEBUG) logging.getLogger("PyXWF.Cache").setLevel(logging.INFO) conf = { "pythonpath": ["/path/to/pyxwf"], "datapath": "/path/to/site" } try: sys.path.extend(conf["pythonpath"]) except KeyError: pass os.chdir(conf["datapath"]) import PyXWF.WebBackends.WSGI as WSGI sitemapFile = os.path.join(conf["datapath"], "sitemap.xml") application = WSGI.WSGISite( sitemapFile, default_url_root=conf.get("urlroot") )
The datapath in the conf dictionary refers to the directory in which PyXWF will look for all files. In fact, all references to files inside the sitemap.xml are relative to that path. Later on in the snippet above, we also look for the sitemap.xml itself in that location. Note that you have to add the path to the PyXWF package to your pythonpath (if you have not already done this globally).
For this, PyXWF comes with a script called serve.py. It’ll help you to run test your website as soon as you have a WSGI script running. This will break though as soon as you have static content which is served from outside of PyXWF. But for the start, it’s fine. It’s basic use is pretty simple (and ./serve.py -h will tell you more). Just navigate to the PyXWF directory and do:
./serve.py /path/to/pyxwf.py
(you created the file pyxwf.py in the previous step!) This will spam some log messages. After it quiets down, you’ll be able to access your site using the URL.
As soon as you need static files (images, CSS, …), you’ll want to use a dedicated webserver for that, as PyXWF does not deliver such files by default. The next section deals with setting up PyXWF with Apache, but you’re free to skip this in favour of finding out how awesome PyXWF really is.
You can in fact also run the examples delivered with PyXWF using serve.py, for example:
./serve.py examples/start/pyxwf.py
We are using a configuration similar to this one for zombofant.net:
WSGIApplicationGroup %{GLOBAL} WSGIScriptAlias / /path/to/zombofant/data/pyxwf.py # access to static files via Apache, PyXWF won't do that Alias /css /path/to/zombofant/data/css Alias /img /path/to/zombofant/data/img
Actually, thats all you need. Read up on WSGI configuration to see how to adapt this to your needs if it doesn’t work out of the box.
Before you cheer in happiness, a word of warning. The directive WSGIApplicationGroup is required for PyXWF to work properly with lxml, but may also break having multiple PyXWF sites on one server. The solution is to use one WSGIProcessGroup for each site. I might write another section about this, but for the basic setup the above snippet is okay, so I’ll leave that for later. | https://docs.zombofant.net/pyxwf/devel/tutorial/setup/index.html | 2019-01-16T10:38:37 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.zombofant.net |
Step 5: Set up a Configuration Set
To set up Amazon SES to publish your email sending events to Amazon Kinesis Firehose, you first create a configuration set, and then you add a Kinesis Firehose event destination to the configuration set. This section shows how to accomplish those tasks.
If you already have a configuration set, you can add a Kinesis Firehose destination to your existing configuration set. In this case, skip to Adding a Kinesis Firehose Event Destination.
Creating a Configuration Set
The following procedure shows how to create a configuration set.
To create a configuration set
Sign in to the AWS Management Console and open the Amazon SES console at.
In the left navigation pane, choose Configuration Sets.
In the content pane, choose Create Configuration Set.
Type a name for the configuration set, and then choose Create Configuration Set.
Choose Close.
Adding a Kinesis Firehose Event Destination
The following procedure shows how to add a Kinesis Firehose event destination to the configuration set you created.
To add a Kinesis Firehose event destination to the configuration set
Choose the configuration set from the configuration set list.
For Add Destination, choose Select a destination type, and then choose Kinesis Firehose.
For Name, type a name for the event destination.
Select all Event types.
Select Enabled.
For Stream, choose the delivery stream that you created in Step 4: Create a Kinesis Firehose Delivery Stream.
For IAM role, choose Let SES make a new role, and then type a name for the role.
Choose Save.
To exit the Edit Configuration Set page, use the back button of your browser. | http://docs.aws.amazon.com/ses/latest/DeveloperGuide/event-publishing-redshift-configuration-set.html | 2017-06-22T14:17:51 | CC-MAIN-2017-26 | 1498128319575.19 | [] | docs.aws.amazon.com |
- MongoDB CRUD Operations >
- Bulk Write Operations
Bulk Write Operations¶
On this page
Overview¶¶¶
bulkWrite() supports the following write operations:
Each write operation is passed to
bulkWrite() as a
document in an array.
For example, the following performs multiple write operations:
The
characters collection contains the following documents:
{ "_id" : 1, "char" : "Brisbane", "class" : "monk", "lvl" : 4 }, { "_id" : 2, "char" : "Eldon", "class" : "alchemist", "lvl" : 3 }, { "_id" : 3, "char" : "Meldane", "class" : "ranger", "lvl" : 3 }
The following
bulkWrite() performs multiple
operations on the collection:
try { db.characters.bulkWrite( [ { insertOne : { "document" : { "_id" : 4, "char" : "Dithras", "class" : "barbarian", "lvl" : 4 } } }, { insertOne : { "document" : { "_id" : 5, "char" : "Taeln", "class" : "fighter", "lvl" : 3 } } }, { updateOne : { "filter" : { "char" : "Eldon" }, "update" : { $set : { "status" : "Critical Injury" } } } }, { deleteOne : { "filter" : { "char" : "Brisbane"} } }, { replaceOne : { "filter" : { "char" : "Meldane" }, "replacement" : { "char" : "Tanys", "class" : "oracle", "lvl" : 4 } } } ] ); } catch (e) { print(e); }
The operation returns the following:
{ "acknowledged" : true, "deletedCount" : 1, "insertedCount" : 2, "matchedCount" : 2, "upsertedCount" : 0, "insertedIds" : { "0" : 4, "1" : 5 }, "upsertedIds" : { } }
For more examples, see bulkWrite() Examples
Strategies for Bulk Inserts to a Sharded Collection¶¶). | https://docs.mongodb.com/v3.4/core/bulk-write-operations/ | 2017-06-22T14:02:35 | CC-MAIN-2017-26 | 1498128319575.19 | [] | docs.mongodb.com |
Similarities
The Similarities response group returns titles and ASINs of five items that are similar to the item specified in the request. This response group is often used with ItemLookup.
Relevant Operations
Operations that can use this response group include:
Response Elements
The following table describes the elements returned by Similarities.
Similarities also returns the elements that all response groups return, as described in Elements Common to All Response Groups.
Parent Response Group
The following response groups are parent response groups of Similarities.
None
Child Response Group
The following response groups are child response groups of Similarities.
None
Sample REST Use Case
The following request uses the Similarities response group.
Copy? Service=AWSECommerceService& AWSAccessKey=
[AWS Access Key ID]& AssociateTag=
[Associate ID]& Operation=ItemSearch& Condition=All& SearchIndex=Blended& Keywords=Mustang& Merchant=All& ResponseGroup=Similarities &Timestamp=[YYYY-MM-DDThh:mm:ssZ] &Signature=[Request Signature]
Sample Response Snippet
The following response snippet shows the elements returned by Similarities.
Copy
<SimilarProduct> <ASIN>B00004GJVO</ASIN> <Title>Minor Move</Title> </SimilarProduct> | http://docs.aws.amazon.com/AWSECommerceService/latest/DG/RG_Similarities.html | 2017-06-22T14:13:33 | CC-MAIN-2017-26 | 1498128319575.19 | [] | docs.aws.amazon.com |
Having explained in the previous section the how to run a Mercury driver code, we next explain the form of the data output, and describe how relevant information may be extracted from this data. Mercury produces data regarding a wide range of system parameters and, as such, there exist a variety of manners in which this data may be obtained and processed.
Running a Mercury executable produces three main output files in which we are interested. Each of the files produced will carry the name of the code used followed by one of the extensions ‘.data’, ‘.fstat’ and ‘.ene’.
For instance, building and running a file named ‘example.cpp’ will produce ‘example.data’, ‘example.fstat’ and ‘example.ene’ (in addition to several other files which will be discussed in later sections [CHECK THIS IS TRUE!]).
The simplest of the three file types is the ‘.ene’ file, which allows us to interpret the time evolution of the various forms of energy possessed by the system. Data is written at predefined time steps, with the system’s total gravitational (
’ene_gra’) and elastic (
’ene_ela’) potential energies and translational (
’ene_kin’) and rotational (
’ene_rot’) kinetic energies being shown alongside the system’s centre of mass position in the x, y and z directions (
’X_COM’,
’Y_COM’ and
’Z_COM’, respectively).
At each time step, the data is output as follows:
time ene_gra ene_kin ene_rot ene_ela X_COM Y_COM Z_COM
The next file type we will discuss — .data — although slightly more complicated, is perhaps the most useful and versatile of the three, as it provides full information regarding the positions and velocities of all particles within the system at each given time step.
The files are formatted as follows: at each time step, a single line stating the number of particles in the system (
N), the time corresponding to the current step (
time) and the maximal and minimal spatial boundaries defining the computational volume used in the simulations (
xmin, ymin, zmin, xmax, ymax, zmax) is first output. This first line is structured as below:
N, time, xmin, ymin, zmin, xmax, ymax, zmax
This output is then followed by a series of
N subsequent lines, each providing information for one of the
N particles within the system at the current point in time. For each particle, we are given information regarding its current position in three dimensions (
x, y, z), the magnitudes of the three components of its instantaneous velocity (
vx, vy, vz), the radius of the particle (
rad), its
angular position in three dimensions (
qx, qy, qz) and the three components of its instantaneous angular velocity (
omex, omey, omez). The term
xi represents an additional variable which can be specified by the user as described in section ??? [DO THIS!]. By default,
xi represents the species index, which stores information regarding the particle’s material properties.
These parameters are output in the following order:
x, y, z, vx, vy, vz, rad, qx, qy, qz, omex, omey, omez, xi
The sequence of output lines described above is then repeated for each time step.
It should be noted that the above is the standard output required for three-dimensional data; for two-dimensional data, only five items of information are given in the initial line of each time step:
N, time, xmin, zmin, xmax, zmax
and eight in the subsequent
N lines:
x, z, vx, vz, rad, qz, omez, xi
Finally, we discuss the .fstat file, which is predominantly used to calculate stresses.
The .fstat output files follow a similar structure to the .data files; for each time step, three lines are initially output, each preceded by a ‘hash’ symbol (#). These lines are designated as follows:
# time, info # info # info
where
time is the current time step, and the values provided in the spaces denoted ‘info’ ensure backward compatibility with earlier versions of Mercury.
This initial information is followed by a series of Nc lines corresponding to each of the Nc particle contacts (as opposed to particles) within the system at the current instant in time.
Each of these lines is structured as follows:
time, i, j, x, y, z, delta, deltat, fn, ft, nx, ny, nz, tx, ty, tz
Here,
i indicates the number used to identify a given particle and
j similarly identifies its contact partner. The symbols
x,
y and
z provide the spatial position of the point of contact between the two particles
i and
j, while
delta represents the overlap between the two and
deltat the length of the tangential spring (see section ??? [REFER TO CORRECT SECTION WHEN WRITTEN]). The parameters
fn and
ft represent, respectively, the absolute normal and tangential forces acting on the particles, with the relevant direction provided by the unit vectors defined by
nx, ny, nz for the normal component and
tx, ty, tz for the tangential component.
We begin by discussing the manner in which Mercury data can simply be ‘visualised’ - i.e. a direct, visual representation of the motion of all particles within the system produced.
ParaView may be downloaded from ??? [FIND WEBSITE] and installed by following the relevant instructions for your operating system.
Data may be visualised using the ‘data2pvd' tool, which converts the ‘.data' files output by Mercury into a Paraview file (.pvd) and several VTK (.vtu) files.
We will now work through an example, using data2pvd to visualise a simple data set produced using the example code
chute_demo . From your build directory, go to the ChuteDemos directory:
and run the
chute_demo code:
Note: if the code does not run, it may be necessary to first build the code by typing:
Once the code has run, you will have a series of files; for now, however, we are only interested in the '.data' files.
Since data2pvd creates numerous files, it is advisable to output these to a different directory. First, we will create a directory called
chute_pvd:
We will then tell data2pvd to create the files in the directory:
In the above, the first of the three terms should give the path to the directory in which the data2pvd program is found (for a standard installation of Mercury, the path will be exactly as given above); the second is the name of the data file (in the current directory) which you want to visualise; the third gives the name of the directory into which the new files will be output (‘chute_pvd’) and the name of the files to be created ('chute').
Once the files have been successfully created, we now start ParaView by simply typing:
Which should load a screen similar to the one provided below:
Note: for Mac users, ParaView can be opened by clicking 'Go', selecting 'Applications' and opening the file manually.
The next step is to open the file by pressing the folder icon circled in the above image and navigating to the relevant directory using the panel shown below.
Here, you can choose to open either the `.pvd' file, which loads the entire simulation, or the '.vtu' file, which allows the selection of a single timestep.
For this tutorial, we will select the full file - ’chute.pvd'.
On the left side of the ParaView window, we can now see chute.pvd, below the builtin in the Pipeline Browser.
Click ‘Apply' to load the file into the pipeline.
Now we want to actually draw our particles. To do so, open the 'filters' menu at the top of the ParaView window (or, for Mac users, at the top of the screen) and then, from the drop-down list, select the 'common' menu and click 'Glyph'.
In the current case, we want to draw all our particles, with the correct size and shape. In the left-hand menu, select 'Sphere' for the ‘Glyph Type’, 'scalar' as the Scale Mode (under the ‘Scaling’ heading) and enter a Scale Factor of 2.0 (Mercury uses radii, while ParaView uses diameters).
Select 'All Points' for the ‘Glyph Mode’ under the ‘Masking’ heading to make sure all of our particles are actually rendered. Finally press 'Apply' once again to apply the changes.
In order to focus on our system of particles, click the 'Zoom to data' button circled in the image above.
The particles can then be coloured according to various properties; for the current tutorial, we will colour our particles according to their velocities. To do this, with the 'Glyph1' stage selected, scroll down in the properties menu until you find 'Colouring' and select the 'Velocities' option.
The colouring can be rescaled for optimal clarity by pressing the ‘Rescale' button in the left hand menu under the ‘Colouring’ heading.
We are now ready to press the 'play' button near the top of the screen and see the system in motion!
The ParaView program has endless possibilities and goes way beyond the scope of this document. Please consult the ParaView documentation for further information.
In the MercuryCG folder (
MercuryDPM/MercuryBuild/Drivers/MercuryCG), type ‘
make fstatistics’ to compile the ‘fstatistics’ analysis package.
For information on how to operate fstatistics, type ‘
./fstatistics -help’.
The Mercury analysis package are due to be upgraded in the upcoming Version 1.1, at which point full online documentation and usage instructions will be uploaded.
If you experience problems in the meantime, please do not hesitate to contact a member of the Mercury team. | http://docs.mercurydpm.org/Beta/d1/d6b/analysing.html | 2017-06-22T14:07:36 | CC-MAIN-2017-26 | 1498128319575.19 | [] | docs.mercurydpm.org |
Frequently asked questions¶
How do I enable logging for OAuthLib?¶
What parts of OAuth 1 & 2 are supported?¶
OAuth 1 with RSA-SHA1 signatures says “could not import cryptography”. What should I do?¶
Install cryptography via pip.
$ pip install cryptography
OAuth 2 ServiceApplicationClient and OAuth 1 with RSA-SHA1 signatures say “could not import jwt”. What should I do?¶
Install pyjwt and cryptography with pip.
$ using Django should seek out django-oauth-toolkit and those using Flask flask-oauthlib. For other frameworks, please get in touch by opening a GitHub issue, on G+ or on IRC #oauthlib irc.freenode.net.. | http://oauthlib.readthedocs.io/en/latest/faq.html | 2017-04-23T05:25:52 | CC-MAIN-2017-17 | 1492917118477.15 | [] | oauthlib.readthedocs.io |
Creating Your First Pyramid Application¶
In this chapter, we will walk through the creation of a tiny Pyramid application. After we’re finished creating the application, we’ll explain in more detail how it works.
Hello World, Goodbye root URL (
/), the server
will simply serve up the text “Hello world!” When visited by a browser on
the URL
/goodbye, the server will serve up the text “Goodbye world!”
Ctrl-C.
The script uses the
pyramid.response.Response class later in the
script to create a response object.
Like many other Python web frameworks, Pyramid uses the WSGI
protocol to connect an application and a web server together. The
paste.httpserver server is used in this example as a WSGI server for
convenience, as the
paste package is a dependency of Pyramid
itself.
View Callable Declarations¶
The above script, beneath its set of imports, defines two functions: one
named
hello_world and one named
goodbye_world.
These functions don’t do anything very difficult. Both functions accept a
single argument (
request). The
hello_world function does nothing but
return a response instance with the body
Hello world!. The
goodbye_world function returns a response instance with the body
Goodbye world!.
Each of these functions upstream WSGI
server and sent back to the requesting browser. To return a response, each
view callable creates an instance of the
Response
class. In the
hello_world function, the string
'Hello world!' is
passed to the
Response constructor as the body of the response. In the
goodbye_world function, the string
'Goodbye world!' is passed.
Note
As we’ll see in later chapters, returning a literal response object from a view callable is not always required; we can instead use a renderer in our view configurations. If we use a renderer, our view callable is allowed to return a value that the renderer understands, and the renderer generates a response on our behalf.
Application Configuration¶
In the above script, the following code represents the configuration of
this simple application. The application is configured using the previously
defined imports and function definitions, placed within the confines of an
if statement:
Let’s break this down this piece-by-piece.
Configurator Construction¶
The
if __name__ == '__main__': line in the code sample above represents a
Python idiom: the code inside this if clause is not invoked unless the script
containing this code is run directly from the command line. For example, if
the file named
helloworld.py contains the entire script body, the code
within the
if statement will only be invoked when
python
helloworld.py is executed from the operating system command line.
helloworld.py in this case is a Python module. Using the
if
clause is necessary – or at least best practice – because code in any
Python module may be imported by another Python module. By using this idiom,
the script is indicating that it does not want the code within the
if
statement to execute if this module is imported; a application registry associated with
the application.
Adding Configuration¶
Each of these lines calls the
pyramid.config.Configurator.add_view()
method. The
add_view method of a configurator registers a view
configuration within the application registry. A view
configuration represents a set of circumstances related to the
request that will cause a specific view callable to be
invoked. This “set of circumstances” is provided as one or more keyword
arguments to the
add_view method. Each of these keyword arguments is
known as a view configuration predicate.
The line
config.add_view(hello_world) registers the
hello_world
function as a view callable. The
add_view method of a Configurator must
be called with a view callable object or a dotted Python name as its
first argument, so the first argument passed is the
hello_world function.
This line calls
add_view with a default value for the predicate
argument, named
name. The
name predicate defaults to a value
equalling the empty string (
''). This means that we’re instructing
Pyramid to invoke the
hello_world view callable when the
view name is the empty string. We’ll learn in later chapters what a
view name is, and under which circumstances a request will have a
view name that is the empty string; in this particular application, it means
that the
hello_world view callable will be invoked when the root URL
/ is visited by a browser.
The line
config.add_view(goodbye_world, name='goodbye') registers the
goodbye_world function as a view callable. The line calls
add_view
with the view callable as the first required positional argument, and a
predicate keyword argument
name with the value
'goodbye'.
The
name argument supplied in this view configuration implies
that only a request that has a view name of
goodbye should cause
the
goodbye_world view callable to be invoked. In this particular
application, this means that the
goodbye_world view callable will be
invoked when the URL
/goodbye is visited by a browser.
Each invocation of the
add_view method registers a view
configuration. Each predicate provided as a keyword argument to the
add_view method narrows the set of circumstances which would cause the
view configuration’s callable to be invoked. In general, a greater number of
predicates supplied along with a view configuration will more strictly limit
the applicability of its associated view callable. When Pyramid
processes a request, the view callable with the most specific view
configuration (the view configuration that matches the most specific set of
predicates) is always invoked.
In this application, Pyramid chooses the most specific view callable
based only on view predicate applicability. The ordering of calls to
add_view() is never very important. We can
goodbye_world first and
hello_world second; Pyramid
will still give us the most specific callable when a request is dispatched to
it. set,
however, you can learn more about it by visiting wsgi.org. two calls
to its
add_view method.
WSGI Application Serving¶
Finally, we actually serve the application to requestors by starting up a
WSGI server. We happen to use the. It. | http://docs.pylonsproject.org/projects/pyramid/en/1.1-branch/narr/firstapp.html | 2017-04-23T05:24:56 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.pylonsproject.org |
API.
Each data type in the Typed Data API is a plugin class (annotation class example: ); these plugins are managed by the typed_data_manager service (by default ). Each data object encapsulates a single piece of data, provides access to the metadata, and provides validation capability. Also, the typed data plugins have a shorthand for easily accessing data values, described in Tree handling.
The metadata of a data object is defined by an object based on a class called the definition class (see ). The class used can vary by data type and can be specified in the data type's plugin definition, while the default is set in the $definition_class property of the annotation class. The default class is ..
There are three kinds of typed data: primitive, complex, and list.
Primitive data types wrap PHP data types and also serve as building blocks for complex and list typed data. Each primitive data type has an interface that extends , with getValue() and setValue() methods for accessing the data value, and a default plugin implementation. Here's a list:
Complex data types, with interface , represent data with named properties; the properties can be accessed with get() and set() methods. The value of each property is itself a typed data object, which can be primitive, complex, or list data.
The base type for most complex data is the class, which represents an associative array. Map provides its own definition class in the annotation, , and most complex data classes extend this class. The getValue() and setValue() methods on the Map class enforce the data definition and its property structure.
The Drupal Field API uses complex typed data for its field items, with definition class .
List data types, with interface ,.
Typed data allows you to use shorthand to get data values nested in the implicit tree structure of the data. For example, to get the value from an entity field item, the Entity Field API allows you to call:
This is really shorthand for:
Some notes:
To define a new data type:
The data types of the Typed Data API can be used in several ways, once they have been defined: | http://drupal8docs.diasporan.net/d2/dd7/group__typed__data.html | 2017-04-23T05:33:47 | CC-MAIN-2017-17 | 1492917118477.15 | [] | drupal8docs.diasporan.net |
Interstitial Ads
Interstitial Ads are full screen ads that display over any of your application’s content. These ads often result in higher eCPM for developers. The best time to use Interstitial Ads is during a natural break in the application’s content, such as after completion of a level in a game.
Within interstitial ad placements developers have the option to display interactive video ads. These units often produce the highest eCPM. To ensure a placement is able to recieve these ads, check mMedia or speak to a Millennial Media Account Manager.
Basic Integration
NOTE: Items in source like
<YOUR_PLACEMENT_ID>must be replaced with your information.
1. Add the following to your view controller’s header (.h) file.
#import <UIKit/UIKit.h> #import <MMAdSDK/MMAdSDK.h> @interface ViewController : UIViewController <MMInterstitialDelegate> @property (strong, nonatomic) MMInterstitialAd *interstitialAd; @end
2. Add the following to your view controller’s .m file to fetch and display an interstitial ad.
If you plan to display multiple interstitials within your app, you will want to fetch an ad in other places within your app too. For games, a good time to fetch an interstitial ad is while the user is actively playing a level. Remember to replace
<YOUR_PLACEMENT_ID> with an Interstitial placement ID. Interstitial placement IDs can be created in mMedia or provided by an Account Manager.
- (void)viewDidLoad { self.interstitialAd = [[MMInterstitialAd alloc] initWithPlacementId:@"<YOUR_PLACEMENT_ID>"]; self.interstitialAd.delegate = self; [self.interstitialAd load:nil]; } - (void)showInterstitialAd { if (self.interstitialAd.ready) { [self.interstitialAd showFromViewController:self]; } } - (void)dealloc { _interstitialAd.delegate = nil; _interstitialAd = nil; }
Best Practice: Interstitial ads are cached to the user’s device in order to provide the best possible experience. It is recommended that publishers request interstitial ads at least 10-30 seconds before they are ready to be displayed. This helps ensure that the ad is fully cached and ready to display. | http://docs.onemobilesdk.aol.com/ios-ad-sdk/interstitial-ads.html | 2017-04-23T05:26:11 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.onemobilesdk.aol.com |
User Data Block
Contents
- 1 User Data Block
- 1.1 Important Notes
- 1.2 Name Property
- 1.3 Block Notes Property
- 1.4 Assign Data Property
- 1.5 Condition Property
- 1.6 Logging Details Property
- 1.7 Exceptions Property
- 1.8 Interaction ID Property
- 1.9 Log Level Property
- 1.10 Enable Status Property
- 1.11 ORS Extensions Property
- 1.12 Internal Key Prefix
- 1.13 Wait For Event Property
- 1.14 Timeout Property
You can use in a routing application to update an interaction's User Data and for attaching Business Attributes, Categories and Skills. Corresponds to function _genesys.ixn.setuData under Functions in the Orchestration Developer's Guide and.
Important Notes
- Do not assign the value of a variable named data to a key-value pair. This will not work since the generated code also declares a variable named data.
- When the Wait for Event property is set to true and User Data blocks are used in both the parallel legs, use Internal Key Prefix to reliably verify the user data attachment. key.
Condition Property
Find this property's details under Common Properties.
Logging Details Property
Find this property's details under Common Properties.
Exceptions Property
Find this property's details under Common Properties.Id.
Internal Key Prefix
Starting with 8.1.420.14, Composer attaches an internal key along with the configured user data. When the Wait for Event property is set to true and User Data blocks are used in both parallel legs, use Internal Key Prefix (the Show Advanced Properties button) to reliably verify the user data attachment. The value of the internal key is the time stamp of the application change. This key is used internally to verify whether the interaction.udata.changed event has been received. If parallel User Data blocks are used in a workflow, the internal keys might mismatch, which leads to a timeout of User Data blocks. You can configure the internal key prefix either directly through this property or through variables. The configured value will be attached as a prefix to the existing Composer-generated internal key; for example:
Composer_<Internal Key Prefix>_internal_key = <timestamp>
Wait For Event Property
This property allows you to choose whether to wait for the user data changed event before transitioning to the next block.
- Click Wait for Event under Property.
- Under Value, select one of the following:
- True
- False
Timeout Property
Select the variable that contains the timeout value for the user data change event..
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/Composer/8.1.5/Help/UserDataBlock | 2019-03-18T17:25:13 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.genesys.com |
Step-by-Step Guide to Managing the Active Directory
Abstract
This guide introduces you to administration of the Windows® 2000 Active Directory™ service. The procedures in this document demonstrate how to use the Active Directory Users and Computers snap-in to add, move, delete, and alter the properties for objects such as users, contacts, groups, servers, printers, and shared folders.
On This Page
Introduction
Using Active Directory Domains and Trusts Snap-in
Using the Active Directory Users and Computers Snap-in
Publishing a Shared Folder
Finding Specific Objects
Filtering a List of Objects
Introduction:
Click Start, point to Settings, click Control Panel, and then click Change or Remove Programs.
Click Start , point to Programs, point to Administrative Tools, and then click Active Directory Domains and Trusts. The Active Directory Domains and Trusts snap-in appears as in Figure 1 below.
Figure 1: Active Directory Domains and Trust snap.
Select Active Directory Domains and Trusts in the upper left pane, right-click it, and then click Properties.
Enter any preferred alternate UPN suffixes in the Alternate UPN Suffixes box and click
Right-click the domain object (in our example, reskit.com), and then click Properties.
Click Change Mode.
You receive a message requiring confirmation. Click Yes to continue. Click OK to proceed, or No to stop this action. If you plan to add Windows NT 4.0 domain controllers to your configuration, do not proceed.
Using the Active Directory Users and Computers Snap-in
To start the Active Directory Users and Computers snap-in, click Start, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers..
Click the + next to Accounts to expand it.
Right-click Accounts.
Right-click the Construction organizational unit, point to New, and then click User, or click New User on the snap-in toolbar.
Type user information as in Figure 4 below:
Figure 4: New User dialog
Note that the Full name is automatically filled in after you enter the First and Last names. Click Next to proceed.
Type a password in both the Password and Confirm password boxes and click Next.
Accept the confirmation in the next dialog box by clicking Finish.
You have now created an account for James Smith in the Construction OU To add additional information about this user:
Select Construction in the left pane, right-click James Smith in the right pane, and then click Properties..
Click the James Smith user account in the right pane, right-click it, and click Move.
Click the + next to Accounts to expand it as in Figure 6 below.
Figure 6: List of available OUs
Click the Engineering OU, and click OK.
If you upgrade from an earlier version of Windows NT Server, you might want to move existing users from the Users folder to some of the OUs that you create.
Creating a Group
Right-click the Engineering OU, click New, and then click Group..
Adding a User to a Group
Click Engineering in the left pane.
Right-click the Tools group in the right pane, and click Properties.
Click the Members Tab and click.
Use Windows Explorer to create a new folder called Engineering Specs on one of your disk volumes.
In Windows Explorer, right-click the folder name, and then click Properties. Click Sharing, and then click Share this folder.
In the New Object–Shared Folder dialog box, type ES in the Share name box and click OK. By default, Everyone has permissions to this shared folder. If you want, you can change the default by clicking the Permissions button.
Populate the folder with files, such as documents, spreadsheets, or presentations.
To publish the shared folder in the directory
In the Active Directory Users and Computers snap-in, right-click the Engineering OU, point to New, and click Shared Folder.
In the Name box, type Engineering Specs.
Double-click My Network Places on the desktop.
Double-click Entire Network, and then click Entire contents of the network.
Double-click the Directory.
Double-click the domain name, Reskit, and then double-click Engineering.
Click Start, point to Settings, click Printers, and then double-click Add Printer. The Add Printer Wizard appears. Click Next.
Click Local Printer, clear the Automatically detect and install my Plug and Play printer checkbox, and click Next.
Click the Create a new port option, then scroll to Standard TCP/IP Port, and click Next.
The Add Standard TCP/IP Printer Port Wizard appears. Click Next.
On the Add Port page, type the IP address of the printer in the Printer Name or IP Address box, type the port name in the Port name box, and click Next. Click Finish.
Select your printer's manufacturer and model in the Printers list box, and then click Next.
In the Printer name text box, type the name of your printer.
On the Printer Sharing page, type a name for the shared printer. Choose a name no more than eight characters long so computers running earlier versions of the operating system display it correctly.
Type in the Location and Comment in those text boxes.
Click Start, point to Settings, and then click on Printers.
Double-click the Add Printer icon.
In the Add Printer Wizard dialog box, click the Next button.
Select the Network printer button, and then click Next.
Select the Find a printer in the Directory button, and then click Next.
Click Start, click Run, and type cmd in the text box. Click OK.
Type cd\ winnt/system32 and press Enter.
Right-click the Marketing organizational unit, click New, and click Printer.
On the Desktop, click Start, click Search, and click For Printers..
Right-click the Engineering organizational unit, point to New, and then click Computer.
For the computer name, type Vancouver.
Every object in the directory can be renamed and deleted, and most objects can be moved to different containers.
To move an object, right-click the object, and then click Move..
Create a new group by right-clicking Engineering, pointing to New, and then clicking Group. Type All Engineering and then click OK.
Right-click the All Engineering Group, and click Properties.
Click the Members tab and click Add.
In the list box, select Tools, click Add, and then click OK.
Click Apply, and then click OK. You've now created a nested group.
To check the nested groups
Right-click All Engineering, click Properties, and then click Membership. You will see Press Liaison as a member of All Engineering..
Select the Engineering OU. Right-click Engineering, and then click Find.
In the Name box, type Smith..
In the Active Directory Users and Computers snap-in, click the View menu, click Filter Options.
Click the radio button for Show only the following types of objects, and then select Users and Groups..
Important Notes
The example company, organization, products, people, and events depicted in this step-by-step guide. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-2000-server/bb742437(v=technet.10) | 2019-03-18T17:27:51 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.microsoft.com |
Table of content
Change password
Reset password
Change password
Follow the steps below to modify your password. You should do this frequently to make your account secured.
Reset password
Sometimes you forgot your password and want to take it back. You can do those following steps:
- Click on Forgot password below the login form
- Type email and captcha
- Open your mailbox to check reset email from Subiz
- Enter new password | https://docs.subiz.com/manage-password/ | 2019-03-18T18:04:33 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.subiz.com |
- What are Spaces on Nexudus Spaces?
- Accessing Spaces
- General
- About the Space
- Users
- Invoices
- Using your own Domain - Additional Domains
- Multiple locations in Nexudus
What are Spaces on Nexudus Spaces?:
Accessing Spaces:
General
The General tab shows information about your Nexudus Spaces account. You can edit your invoicing and contact details, space address, etc.
Remember to click on Save every time you make any changes to the settings.
Spaces - Field References
- Billing
This section includes your coworking space's contact details, which will be shown in the "Contact" section on your Nexudus Spaces website.
- Settings
About the Space
In this section, you can include the space description, general terms and conditions and welcome message. Now, we are going to describe each of the fields:
- Details - Terms and conditions
- Details - Short Introduction
- Details - About Us
- Details - Welcome Email
Users
This tab shows the Users that are signed up to your coworking space. If you would like more information about how they work, go to the Users section on the Nexudus Spaces Knowledge Base.
Invoices.
Using your own Domain - Additional Domains:
- Redirecting a naked domain (for example: yourdomain.com or).
- Add a "A" record to your DNS settings pointing your root domain (usually referred to as "@") to the IP 174.129.25.170.
- Add "CNAME" record to your DNS settings pointing your "www" subdomain to spaces.nexudus.com.
- If, instead, you want to redirect just a subdomain to Nexudus, for example "members.yourdomain.com", you can just add a CNAME record:
- Add "CNAME" record to your DNS settings pointing your subdomain to spaces.nexudus.com. For example, if you are redirecting "members.yourdomain.com" add a CNAME record for "members."
Configuring an additional domain
Click on the Add Business Domain in the Additional Domains tab.
Fill in all of the required fields and click on Save to complete the process.
Important
IMPORTANT: add both "yourdomain.com" and "" as additional domains.
Once you have set it up, you'll see that it appears on the Additional Domains list.
Once you have done this, you must go to your domain host (1and1, GoDaddy, etc.) and adjust the settings:
- You need to create a CNAME record for the subdomain (without www) in the DNS settings, and enter the address spaces.nexudus.com
Why some pages still show the nexudus.com domain?.
Google Captcha, Google Maps and AddThisEvent plug-ing in Nexudus.
Related articles | http://docs.nexudus.com:8090/display/NSKE/Spaces | 2019-03-18T17:31:28 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.nexudus.com:8090 |
Book Creator
Add this page to your book
Book Creator
Remove this page from your book
This is an old revision of the document!
Sources section?
Shouldn't we include a sources section at the bottom of each page to the template for new articles? This will encourage users to attribute properly and give themselves credit for original articles. — Harishankar 2012/08/24 06:53 | https://docs.slackware.com/talk:slackdocs:tutorial?rev=1345816489 | 2019-03-18T17:59:26 | CC-MAIN-2019-13 | 1552912201521.60 | [array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png',
None], dtype=object)
array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png',
None], dtype=object) ] | docs.slackware.com |
Blender Documentation Contents¶
Welcome, this document is an API reference for Blender 2.77.1, built Unknown.
This site can be downloaded for offline use Download the full Documentation (zipped HTML files)
Blender/Python Documentation)
- Interpolation Utilities (mathutils.interpolate)
- Noise Utilities (mathutils.noise)
- OpenGL Wrapper (bgl)
- Font Drawing (blf)
- GPU functions (gpu)
- GPU Off-Screen Buffer (gpu.offscreen)
- Audio System (aud)
- Extra Utilities (bpy_extras)
- ID Property Access (idprop.types)
- BMesh Module (bmesh). | https://docs.blender.org/api/blender_python_api_2_77_1/ | 2019-03-18T17:35:27 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.blender.org |
Registers the User and Role types within the ITypesInfo object that supplies metadata on types used in an XAF application.
Namespace: DevExpress.ExpressApp.Security
Assembly: DevExpress.ExpressApp.Security.v18.2.dll
public override void CustomizeTypesInfo( ITypesInfo typesInfo )
Public Overrides Sub CustomizeTypesInfo( typesInfo As ITypesInfo )
Generally, you do not need to call this method from your code. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Security.SecurityModule.CustomizeTypesInfo(DevExpress.ExpressApp.DC.ITypesInfo) | 2019-03-18T17:34:50 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.devexpress.com |
Contents Now Platform Administration Previous Topic Next Topic Administering form annotations Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Administering form annotations Form annotations are additional pieces of information on a form, such as a line or paragraph of text. Use form annotations to provide on-screen instructions to your users. Form annotations are enabled by default in the base system. To disable them, set the glide.ui.form_annotations system property to false. Support multiple languages for a form annotation You can store multiple translations of form annotation text. Before you beginRole required: admin About this task To support multiple languages, use message records to translate annotation text. Procedure Navigate to System UI > Messages. Create a message record for each language you support. On the Message form, set the Key field to a unique identifier for the annotation text. The annotation text is a good key. The key must be the same for each translation message for the annotation. Select the appropriate Language. In the Message field, enter the translated annotation text. Edit the form annotation and reference the message key with a gs.getMessage call. For example, if the message key is Message key text, enter ${gs.getMessage("Message key text")} in the form annotation. Administer form annotation types You can define the form annotation types to control their appearance. Before you beginRole required: admin Procedure Navigate to System UI > Form Annotation Types. Set the Active field to false for any types you do not want to use. Click New to add a type. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-platform-administration/page/administer/form-administration/concept/c_FormAnnotation.html | 2019-03-18T18:21:56 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.servicenow.com |
Contents Now Platform Administration Previous Topic Next Topic Control access to history Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Control access to history You can give a role access to view audit history by setting a system property. Before you beginRole required: admin Procedure Navigate to System Properties > System. In the property List of roles (comma-separated) that can access the history of a record, enter the user roles you want to access history. Click Save. ResultAny changes to a field are omitted if a user without read-access views the history of a record. Related TasksChange the number of history entriesRelated ConceptsDifferences Between Audit and History SetsHistory TimelineTracking changes to reference fieldsTracking insertsRelated ReferenceHistory ListHistory CalendarTracking CI Relationships On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-platform-administration/page/administer/security/task/t_ControlAccessToHistory.html | 2019-03-18T18:20:19 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.servicenow.com |
Daily Organisation
Manage Daily Organisation, staffing, class cancellations, rooms and absences
- How relief teachers can use XUNO
- How to add Relief teachers to XUNO
- How to locate a student, teacher or room
- How to print relief teacher roles and daily classes list
- How to swap a room
- How to use Daily Organisation
- How to use Relief Teacher Timesheets
- How to view Daily Summary Report
- Managing Daily Organisation room changes and availability
- Managing Daily Organisation staff absences
- Managing Daily Organisation staffing preferences and relief teachers
- Video: How to locate a room, student or teacher
- Video: How to swap a room
- Video: How to use the Daily Organisation | https://docs.xuno.com.au/category/10-daily-organisation | 2019-03-18T17:27:41 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.xuno.com.au |
Package clientcredentials
Overview ▹
Overview ▾
Package clientcredentials implements the OAuth2.0 "client credentials" token flow, also known as the "two-legged OAuth 2.0".
This should be used when the client is acting on its own behalf or when the client is the resource owner. It may also be used when requesting access to protected resources based on an authorization previously arranged with the authorization server. describes a 2-legged OAuth2 flow, with both the client application information and the server's endpoint URLs.
type Config struct { // ClientID is the application's ID. ClientID string // ClientSecret is the application's secret. ClientSecret string // TokenURL is the resource server's token endpoint // URL. This is a constant specific to each server. TokenURL string // Scope specifies optional requested permissions. Scopes []string // EndpointParams specifies additional parameters for requests to the token endpoint. EndpointParams url.Values }
func (*Config) Client ¶
func (c *Config) Client(ctx context.Context) *http.Client
Client returns an HTTP client using the provided token. The token will auto-refresh as necessary. The underlying HTTP transport will be obtained using the provided context. The returned client and its Transport should not be modified.
func (*Config) Token ¶
func (c *Config) Token(ctx context.Context) (*oauth2.Token, error)
Token uses client credentials to retrieve a token. The HTTP client to use is derived from the context. If nil, http.DefaultClient is used.
func (*Config) TokenSource ¶
func (c *Config) TokenSource(ctx context.Context) oauth2.TokenSource
TokenSource returns a TokenSource that returns t until t expires, automatically refreshing it as necessary using the provided context and the client ID and client secret.
Most users will use Config.Client instead. | http://docs.activestate.com/activego/1.8/pkg/golang.org/x/oauth2/clientcredentials/ | 2019-03-18T18:28:04 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.activestate.com |
LATEST VERSION: 8.2.13 - RELEASE NOTES
Errors to the receiver where the problem occurs. This type of error includes:
- Unavailable entry value in the receiving cache, either because the entry is missing or its value is null. In both cases, there is nothing to apply the delta to and the full value must be sent. This is most likely to occur if you destroy or invalidate your entries locally, either through application calls or through configured actions like eviction or entry expiration.
- InvalidDeltaException. | http://gemfire82.docs.pivotal.io/docs-gemfire/latest/developing/delta_propagation/errors_in_delta_propagation.html | 2019-03-18T17:56:06 | CC-MAIN-2019-13 | 1552912201521.60 | [] | gemfire82.docs.pivotal.io |
Range
The Range feature is available in Line and Scatter charts. If you have upper and lower limit values for the values on Y-Axis such as confidence intervals, you can show them as ranges in the same chart.
You can control the Range setting on the Range Setting dialog. You can open the Range Setting dialog by clicking
Range from Y-Axis dropdown menu.
You can enable the Range by checking the
Show Range checkbox. Then you can assign columns for Upper Limit values and Lower Limit values.
Columns will be automatically picked and assigned for your convenience if you have one of following column pairs in the same data frame.
(Y-Axis Column Name)_highand
(Y-Axis Column Name)_low
(Y-Axis Column Name).highand
(Y-Axis Column Name).low
(Y-Axis Column Name)_upperand
(Y-Axis Column Name)_lower
(Y-Axis Column Name).upperand
(Y-Axis Column Name).lower
(Y-Axis Column Name)_higherand
(Y-Axis Column Name)_lower
(Y-Axis Column Name).higherand
(Y-Axis Column Name).lower
conf_highand
conf_low
conf.highand
conf.low
Range on Line Chart
If you assign Upper and Lower Limit of the range in the Line Chart, it shows the range band for each line.
Range on Scatter Chart
If you assign Upper and Lower Limit of the range in the Scatter Chart, it shows the error bar on each circle.
You can optionally set the Error Bar Width at the Range Setting dialog. The default width is
4 in pixel. | https://docs.exploratory.io/viz/range.html | 2019-03-18T17:37:11 | CC-MAIN-2019-13 | 1552912201521.60 | [array(['images/range-toggle.png', None], dtype=object)
array(['images/range-dialog.png', None], dtype=object)
array(['images/range-line.png', None], dtype=object)
array(['images/range-scatter.png', None], dtype=object)] | docs.exploratory.io |
Zapier is an app that lets you connect your Nexudus Spaces account with other external platforms like MailChimp, Slack, Zendesk, etc. Once linked with Zapier, you can use the software to create a Zap, which will let you automate lots of different processes. For example, you could have a Zap that generates a subscriber to your MailChimp account every time someone signs up to your Nexudus Spaces account. Another example could be that a support ticket is generated on your Zendesk account when someone writes you a message on your Nexudus Spaces Help Desk.
There is a free version of Zapier that lets you set up some Zaps. You can look at the different price plans on the following link:
What do I need to link my Nexudus Spaces account with Zapier?
Firstly, you'll need to open an account on both of these platforms. If you still don't have a Zapier account, you can sign up for one via the website.
On the other hand, the user that you're going to link Zapier with must have an API key set up via your Nexudus Spaces dashboard. To do so, go to System > Users and view the details of the User you want to link to Zapier.
In the Status tab, enable the API Key.
Connecting with Zapier
Now, you need to link your account with Zapier. To do so, go to Space Settings > Integrations on the dashboard.
IMPORTANT: You must bear in mind that you need to sign up to Zapier with the same email address that you use to log in to your Nexudus Spaces account.
- Next, you'll be sent to a screen where you'll have to accept the invitation to connect Nexudus and Zapier.
- The next step will ask you to enter your Zapier credentials. Then, you'll have to sign in to your Zapier account. The next step would be to set up your first Zap. Click on Make a Zap! on the navigation bar.
· In the next section, you have to choose the app that you want to connect with: Nexudus Spaces.
- You then have to choose a "Trigger", i.e. an action within Nexudus Spaces that will trigger a process in the app that you link. There are several triggers related to different Nexudus Spaces features, such as receiving a help desk message, a member becoming inactive, a new newsletter subscriber, etc. These actions let you sync the processes that occur on your Nexudus Spaces account with other external platforms. In this example, we're going to sync our MailChimp subscriber list every time someone signs up to your newsletter via your Nexudus Spaces website. Select the New subscriber in the newsletter trigger and click on Save+Continue.
- To continue, you must select your Nexudus Spaces account. If you've got several accounts set up, select the one you'd like to sync. Click on the Test button to test the integration. To continue to the next step, click on Save+Continue.
- You must be sure that you've added an element that corresponds with the trigger that you're going to use, in this case, a newsletter subscriber. If you don't have one, you can add one via the dashboard Content > Newsletter subscribers. Click on Fetch and Continue to continue.
Connecting with another app
In the next Zap creation step, you must choose the app that you want to link your Nexudus Spaces account to. In this example, as we've already mentioned, we're going to link it to MailChimp.
- Select the action that you want to take in MailChimp when the trigger is set off on your Nexudus Spaces account. In this example, we're going to Add/Update a Newsletter Subscriber list that we've got on our MailChimp account. Click on the Save+Continue button.
- Test the connection with the MailChimp account that you're going to sync by clicking on Test.
- Then, you must set up the template that will generate the data on MailChimp. For this example, in the List field, we've selected the list that we've got on MailChimp and that we want to sync. In the Email field, we've put the subscriber's email address that we have on Nexudus Spaces, so that these data are added to the MailChimp list. You can select more fields, however, in this example, we're going to limit ourselves to syncing the email. Click on Save+Continue to complete the process.
- Well done! We're just one step from finishing. Lastly, we need to check the sync by creating a test subscriber. Click on Create+Continue on the next screen.
If you don't want to take this step, you can select Skip Test and Continue.
- Click on Finish to complete the process.
- Congrats! You have created your first Zap between Nexudus Spaces and Zapier. Now, you only need to enable it by switching it ON and giving it a name (optional).
You can add as many Zaps and connect them to your Nexudus Spaces with other apps. Below, is a list of the triggers available on your Nexudus Spaces account to use with Zapier. We are updating this list frequently and adding new triggers.
List of triggers available on Nexudus Spaces that can be used on Zapier.
New subscriber in the newsletter: Triggered when a new subscriber is added to a newsletter list.
New Active Member: Triggered when a new ACTIVE member is signed up via the dashboard or the space website.
New Booking: Triggered when a new booking is made on your Nexudus Spaces account.
New Blog Post: Triggered when a new Blog Post is created. Note that this trigger will fire even if the blog post has not yet been published. Check the PublishDate field to see when the blog post will be published.
New Event in the Calendar: Triggered when a member posts a new message on the space wall.
New Invoice for Member: Triggered when a new invoice is raised for a member
New Event in the Calendar: Triggered when a new event is published on the calendar.
New Active Member: Triggered when a new ACTIVE member is signed up via the dashboard or the space website.
New help desk message: Triggered when a member sends a new help desk message.
New Inactive Member: Triggered when a new INACTIVE member is found. Inactive members are those where the "Active" field has been disabled.
New Paid Invoice: Triggered when an invoice is paid.
New message in the wall: Triggered when a member posts a new message in the space wall.
New Active Contact: Triggered when a new ACTIVE contact is registered from the administration panel or from the members website.
New Inactive Contact: Triggered when a new member is registered from the administration panel or from the members website.
New Contact (active or not): Triggered when a new contact (ACTIVE OR NOT ACTIVE) is registered from the administration panel or from the members website.
New Member (active or not): Triggered when a new member (ACTIVE OR NOT ACTIVE) is registered from the administration panel or from the members website.
Related articles
There is a free version of Zapier that lets you set up some Zaps. You can look at the different price plans on the following link: | http://docs.nexudus.com:8090/pages/viewpage.action?pageId=15532945 | 2019-03-18T17:57:32 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.nexudus.com:8090 |
.
The cleanup.bat file is located in the
<APPIAN_HOME>/server/_scripts/ directory. It is used to backup and delete engine files, log files, and process archives.
cleanup [action] [options] [arguments]
Run the batch file with the –help parameter for additional execution details.
Schedule a Windows Scheduled Task or a set a job-scheduler calendar to clean up your engine files, logs, and archives.
Cleanup Checkpointed Appian Engine Files
For example:
cleanup data –delete –keep 10.
Cleanup Appian Log Files
For example:
cleanup logs -target \<BACKUP_SERVER>\logs –keep 3
Appian log files are rolled over as needed. The cleanup script is used to move aging log files to a backup location and delete them. Be sure to create the directory at
\<BACKUP_SERVER>\logs before running the script.
Cleanup Process Archives
For example:
cleanup processes -target \<BACKUP_SERVER>\ap –keep 100.
You must configure a separate Appian data source (in a separate tablespace/database within your RDBMS, or in a separate RDBMS).
You can also configure a data source within the Query Database smart service properties.
See also: Configuring Relational Databases for details regarding how to configure these data sources so that they are available for use with the Query Database Smart Service and Data Stores.>://<SERVER_AND_PORT>/<APPLICATION_CONTEXT>
SCHEME
By default your site will use HTTPS for the
SCHEME. HTTPS is required for production systems. If you wish to run a non-production site over HTTP instead of HTTPS, set the following property in custom.properties.
conf.suite.SCHEME=http
By default, session cookies are marked as "secure" and "httpOnly" for JBoss. If you change your url scheme to HTTP you will also need to adjust these defaults by removing the following lines from the
session-config element in
<APPIAN_HOME>\ear\suite.ear\web.war\WEB-INF\web.xml.
<cookie-config> <http-only>true</http-only> <secure>true</secure> </cookie-config> <tracking-mode>COOKIE</tracking-mode>
If you are running an application server other than JBoss, you must consult the documentation provided by the application server vendor for how to configure a web application to use secure, HTTP-only, cookies and adjust web.xml accordingly.
SERVER_AND_PORT
Use the
SERVER_AND_PORT property to specify the fully qualified domain name of the site's URL.
conf.suite.SERVER_AND_PORT=
<FULLY_QUALIFIED_DOMAIN_NAME>:<PORT>in custom.properties.
For example:
conf.suite.SERVER_AND_PORT=
The default port numbers are
:80 when
SCHEME is set to
HTTP and
:443 when
SCHEME is set to
HTTPS. No port number setting is needed for these ports.
You can have your first system administrator user (other than the default Administrator account) created during application server startup by specifying the following properties in the
passwords.properties.<ENVIRONMENT> file:
conf.password.ADMIN_USERNAME= conf.password.ADMIN_FIRST_NAME= conf.password.ADMIN_LAST_NAME= conf.password.ADMIN_EMAIL= conf.password.ADMIN_TEMPORARY_PASSWORD=
These properties must be present in
<APPIAN_HOME>/ear/suite.ear/conf/passwords.properties.<ENVIRONMENT> during application server startup. You can create this file by copying
password.properties.example in the same directory and using it as a starting point.
conf.password.ADMIN_TEMPORARY_PASSWORD is the password that will be used to log in for the first time. The user will be prompted to change their password upon logging in.
A user will be created only if there are currently no active system administrator users on the system. The specified user information must follow all the same requirements as creating a user from within Appian to be created successfully.
The
passwords.properties file is read and then deleted from the file system during application server startup.
Add the Appian engine files (*.kdb) to the list of files that are not scanned by your server’s anti-virus software.
Restrict port access to Appian engines using a custom security token.
See also: Generating a Custom Security Token.
Appian logs important system events and sends alerts for errors. These settings are configurable.
If left unmanaged, log files can consume too much disk space. These files must be cleaned up at regular intervals. See Data Maintenance.
See also: Customizing Application Logging.
Appian processes are used for reporting. The process archives must be configured according to your historical reporting needs. Balance your reporting needs against the need to limit the size of your Execution Engines.
See also: Managing Process Archives.
On 64-bit systems, modify the
server.conf.processcommon.MAX_EXEC_ENGINE_LOAD_METRIC property value in the
custom.properties file to a minimum value of 120 or higher.
See also: Configuring the Process Engine Servers.
See also: Security for Expression Rules, Security for Query Rules and Security for Constants.
The following optional changes can be made to your installation. Example settings are described in the Appian
custom.properties.example file found in the
<APPIAN_HOME>/ear/suite.ear/conf/ directory.
The
appian-topology.xml.example file in
<APPIAN_HOME>/ear/suite.ear:
NOTE: Appian does not recommend modifying the Application Context. It requires making changes to shipped files that could slow down a future upgrade process.
The default application context is
suite. You can change this to a different name by setting the following property in
custom.properties:
conf.suite.APPLICATION_CONTEXT=index
You cannot set the application context to use more than one level of directories, such as
suite/apps/application. To set a default application, see Applications.
After modifying the application context value in
custom.properties, you also need to update the following files with any references to the old application context
suite property:
ear/suite.ear/META-INF/application.xml
ear/suite.ear/unattended-request-handler.jar/META-INF/jboss.xml
suite.ear/email-handler.jar/META-INF/jboss.xml
File names in the list above only apply to those using JBoss. If using a different application server, make sure to update the respective file.
As of Appian 6.7, users access actions from the Actions tab instead of the right-side of the Tempo interface. This change removed the application links in Tempo which took users to each application within the Portal interface.
The Portal interface is still accessible by using a direct URL (e.g.,), but if you want to provide your users with application links in the Tempo interface, you need to add the following setting to the
custom.properties file located in in the
<APPIAN_HOME>/ear/suite.ear/conf/ directory:
conf.navigation.SHOW_APPLICATIONS_MENU=
The following values are accepted by this property:
The logo is configured in the Appian Administration Console.
The name is configured in the Appian Administration Console.
If you have not already done so, refer to a link below specific to your application server for required and optional configurations.
The ability to modify the final retry interval has been deprecated.
The copyright holder listed in the footer is controlled by the following property:
resources.appian.ap.application.appian.ap.appianName=Appian Corporation
The years listed for the copyright in the footer are controlled by the following property:
resources.appian.ap.application.appian.ap.copyrightYear=2003-2014
You can specify the
From email address listed when the system sends an alert or a password reset email, using the following properties:
conf.mailhandler.ntf_sndr_addr= conf.mailhandler.ntf_sndr_name=:
conf.content.download.inline=false
This property must be set to true in order to do the following:
When set to false (the default) the user is prompted to download the document instead of viewing it inline, even if
inline=true is passed in the request.:
server.conf.exec.CHAINED_EXECUTION_NODE_LIMIT=: Identifying Process Memory Usage
To prevent accidental mass notifications, if a notification is generated for more than a certain number of recipients, the system sends a WARN message to the application server log and does not send the email or portal alert notification to the recipients.
This recipient limit is controlled by the following property in
custom.properties:
conf.notifications.MAX_RECIPIENTS=.
resources.appian.analytics.application.maxreportrows=.
Appian for Mobile Device applications give users the option to enable a passcode lock requiring them to enter a user-defined password before entering the application. This creates a separate level of security at the Appian server level in conjunction with the mobile device operating system level. You can require users to enable a passcode lock in the Appian Administration Console
By requiring a passcode, you also require users to upgrade to a version of the Appian for Mobile Devices application that supports this configuration feature.
See also:.
conf.plugins.poll-interval=60
The following maximum export rows property is no longer used.
See the section below for a replacement configuration option.
The maximum amount of time in seconds that a query waits for a response from the database before timing out is configured using the following property:
conf.data.query.timeout=10
10(ten seconds).
The amount of memory in bytes that will be consumed in the application server for a single query before the query is halted is configured using the following property:
conf.data.query.memory.limit=1048576
1048576bytes (1 MB).
NOTE: Before changing this value, consider using the query rules rule:
server.conf.processcommon.MAXIMUM_REPORT_MS=
MAXIMUM_REPORT_MS) is configured for all analytics engines. It is not possible to raise or lower the value for an individual process analytics engine.
News entries are indexed into a search index that is stored locally on the file system of each application server. News entries entered on the local application server are updated in the search index immediately. Those entered on other application servers are updated in the search index within a minute. This synchronization period is not configurable.
To troubleshoot or correct an issue with the news search, Appian Technical Support may instruct you to force the search index on a given application server to be re-created on the next automatic update. To do this, remove the value of the
lastSyncTs property in
<APPIAN_HOME>/_admin/search-local/index-primary/lastsync.properties. To recreate the search index on all application servers, this step must be carried out on each server.
See also: Searching News Entries:
conf.timezones.locales.[locale_code]=[List of time zones]:
conf.timezones.locales.en_US=America/Los_Angeles,America/Denver,America/Chicago.
conf.suite.USER_ACCOUNT_LIMIT=:
conf.node.webservice.connection.timeout=
60seconds.
You can adjust the number of seconds to wait for a response to a request, once a connection is established, using the following setting:
conf.node.webservice.socket.timeout=
60seconds.
You can specify the number of redirect responses (HTTP 304) the web service activity accepts.
conf.node.webservice.max.redirects=`
4.
On This Page | https://docs.appian.com/suite/help/17.1/Post-Install_Configurations.html | 2019-03-18T18:01:40 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.appian.com |
Predict Data - Survival
Predict with Survival Model.
How to Access This Feature
From + (plus) Button
From a step that creates a Survival Model, you can access this feature from 'Add' (Plus) button. Select one of the 3 "Predict" menus.
How to Use This Feature
With "Data" dropdown, select data to predict on from the following options.
- Training - Get predicted values on training data.
- Test - Get predicted values on test data.
- Data Frame - Get predicted values on other data frame.
Select "Type of Prediction" from the following options.
- Linear Predictor - lp - Risk Score on a log scale.
- Risk Score - exp(lp) - Score for the risk that the event happens within a unit time for the subject.
- Expected Number of Events - Number of events that would happen on average for a subject that has same condition as the subject during the time the subject has survived.
- Status at Specified Time - Predicted status of the subject at the specified time.
Select "Type of Residual" from the following options.
- Martingale
- Deviance
- Score
- Schoenfeld
- Schoenfeld (Scaled)
- DFBETA
- DFBETA (Scaled)
Click "Run" button to run the prediction. | https://docs.exploratory.io/ml/prediction_coxph.html | 2019-03-18T18:17:08 | CC-MAIN-2019-13 | 1552912201521.60 | [array(['images/prediction_coxph_menu.png', None], dtype=object)
array(['images/prediction_coxph_dialog.png', None], dtype=object)] | docs.exploratory.io |
Display the loop with all returned posts. Includes support for custom WP_Query and Custom Post Types..
When we’re talking about posts here, we’re referring to all different post types, like posts, pages, attachments and custom post types.
The Loop is a PHP code construct that goes through all returned posts and displays or processes each one of them.
This is how the Loop code looks like:
With Pinegrow we don’t have to write and maintain this code by hand. The Post & Loop – Smart action does all the work for us.
Let’s take a look at how it is used.
The Default Loop
The Post & Loop – Smart action without any parameters displays the default WordPress loop that goes through all posts returned by WordPress for the current template page.
We usually add this action on the element that represents the post, for example the <article> element.
Watch the video course to see how Smart actions are used in practice. | https://docs.pinegrow.com/docs/wordpress/actions/smart-actions/post-loop-smart/ | 2019-03-18T18:28:27 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.pinegrow.com |
Molecular Dynamics
This tutorial will introduce you to a basic molecular dynamics simulation in Gromacs on Rescale. You will be shown step-by-step how to setup and submit a job from scratch, if you follow the directions contained in this document. Alternatively, you can click Import Job Setup in the top right of the page to clone the job that we have already run for you. You are free to modify any of the settings in the cloned job to become familiar with the Rescale platform.
Please contact us if you're having trouble.
Visualization of a similar molecular dynamics simulation using DPPC membranes with an embedded protein. (Image courtesy of Justin Lemkul)
Import Job Setup Get Job Results
Description of the SimulationDescription of the Simulation
Duration: Approximately 23 minutes. Alternatively, you can also click the Get Job Results link above, and review the full setup and results for the molecular dynamics simulation with Gromacs.
The Simulation: A phospholipid membrane, consisting of 1024 dipalmitoylphosphatidylcholine (DPPC) lipids in a bilayer configuration with 23 water molecules per lipid, for a total of 121,856 atoms.
The Workflow: This tutorial will use one run of the Gromacs simulation to output the final results. The workflow for this tutorial consists of 2 parts:
- The Gromacs preprocessor (
grompp.mdp)
- The Computational chemistry engine which completes the molecular dynamics simulation (
mdrun)
Simulation software: Gromacs
Starting up RescaleStarting up Rescale
To start using Rescale, go to platform.rescale.com and log in using your account information. Using Rescale requires no download or implementation of additional software. Rescale is browser-based, which allows you to securely access your analyses from your workstation or from your home computer.
From the main screen of the platform, click on the +New Job button at the top left corner of your screen. This will take you to the first of five Setup pages.
Setup: Input FilesSetup: Input Files
First, you need to give the job a name. Because Rescale's platform saves all your jobs, we recommend you name it something specific so that you can find the job again later. (For example, Tutorial 2: Molecular Dynamics) To change the name of your project, click on the pencil (shown in blue, below) next to the current job name in the top left corner of the window.
Next upload the input file that you want to use by clicking the Choose File button which is highlighted in red below. For this tutorial, you will want to upload the
d.dppc.zip file that you downloaded at the start of the tutorial.
On completion, your Input Files setup page should look like that shown below:
Setup: Software SettingsSetup: Software Settings
Now, you need to select the software module you want to use for your analysis. For this demo, scroll down and click on Gromacs.
When you have selected Gromacs, the Analysis Options will be available.
- Select version 2016.1 (MPICH, Single Precision, AVX2) from the version selection drop-down menu
- Next, you need to add the analysis execution command for your project. This is a command that is specific to the software package and input file being used. For this input file and Gromacs, the execution command will be:
cd d.dppc; gmx grompp -v; gmx mdrun
The completed Software Settings should look like those in the image below:
Setup: Hardware SettingsSetup: Hardware Settings
The next step is to select the desired computing hardware for the job. Click on the Hardware Settings icon to do this.
On this page you must select your desired core type and how many cores you need. For this demo, use the drop-down menu to pick the Onyx core type and for Number of Cores, choose one core. Your Hardware Settings screen should look like that shown below:
Setup: ReviewSetup: Review
For this tutorial we omit the optional post processing step. The Review page provides a summary of the job you have set up. It should look like the example shown below. At this stage, you can save and submit the job.
StatusStatus
Now you can monitor the progress of your job from the Status tab. This tab is highlighted in red in the image below. The entire process for submitting this job should take about 22 minutes. Once the job is submitted, it will typically take anywhere from three to seven minutes for the cluster to boot up.
- Because this analysis is entirely run in the cloud, feel free to close your browser window or shut down your computer. You can check on the progress at any time by logging into Rescale and clicking on the Jobs tab. You will receive an email notifying you when the job is completed.
- Rescale's Live Tailing feature can also be seen below. In this particular example, the files
d.dppc/md.logand
process_output.logcan be used to monitor the job's progress.
ResultsResults
The Results tab, highlighted in red above, shows all the resulting files that are associated with your job. On this page you can:
- Click on Download to download all the files associated with this job
- View individual files, below a certain size, in your browser by clicking on the View icon in the Actions menu. An example is shown below
Finally, you might like to download the results to compare with the results of your job. | https://docs.rescale.com/articles/md/ | 2019-03-18T18:03:44 | CC-MAIN-2019-13 | 1552912201521.60 | [array(['https://d33wubrfki0l68.cloudfront.net/38d8abac40d0a9bdab818733b877132e30e0cdb3/a4b23/images/en/advanced-tutorials/md/dppc.8dd8c338.jpg',
'Visualization of a similar molecular dynamics simulation using DPPC membranes with an embedded protein. (Image courtesy of Justin Lemkul)'],
dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/6e216ac847bdfa86d3babc9943c9c0b403a0e29a/8ae27/images/en/advanced-tutorials/md/inputfiles.3fa18dc8.png',
'Input Files'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/925316f608fdc9a60c7358a8374896da0a21c9f3/c5453/images/en/advanced-tutorials/md/softwaresettings.4b9fd310.png',
'Software Settings'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/75bfa2f8f3a67a2add1406eb5c36a55cee1d6563/623e6/images/en/advanced-tutorials/md/hardwaresettings.7b77e95b.png',
'Hardware Settings'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/2c434931a1772423ea3c0b0baf66efbdc6f67c3a/9899c/images/en/advanced-tutorials/md/review.6bc9b043.png',
'Review'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/fa3792f164a735220ea120705e286774d10333f9/7be46/images/en/advanced-tutorials/md/livetailing.4b123936.png',
'Live Tailing'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/e66baedbc4198cafe725f08e12faf005a69cae21/57255/images/en/advanced-tutorials/md/results.b3136735.png',
'Results'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/d981178947d835e0f11c4b6ab660d867799dc259/85e5e/images/en/advanced-tutorials/md/viewlog.e536ea2c.png',
'View Log'], dtype=object) ] | docs.rescale.com |
Administration¶
Use the extension manager to install linkvalidator. It is not installed by default.
Apply the needed changes to the database.
You are advised to use cURL to check for links. Linkvalidator uses the HTTP request library shipped with TYPO3. Please have a look in the Install Tool at the section "All Configuration" which includes an HTTP section at the end.
There you may define a default timeout and you may change from using sockets to using the cURL library. | https://docs.typo3.org/typo3cms/extensions/linkvalidator/Administration/Index.html | 2019-03-18T18:50:58 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.typo3.org |
Same parameters won't work the same if we change the type of the elevons we use. So if we try a different model of elevon we will notice that the amplitude has change. The amplitude of the elevons in FBWA mode can be changed using this parameters:
If the plane doesn't fly because the elevons are different, don't worry, in these photos below you will see the amplitude of the wings the plane should have to fly properly without turbulence in FBWA mode. If your plane's wings amplitude is different try changing the parameteres I have mentioned until you get it.
Now we will see if the autopilot's stabilization works well. With this kind of control, the plane will try to fly always horizontal, so if the plane rolls left the control algorithm will move the left wing down and the right wing up to roll to the right and stabilize it. If the plane rolls right, the control will do the opposite. In the same way, if the plane goes up the control will move the wings down to stabilize it and to fly horizontal. For a good fly we should have those amplitudes when we roll and pitch the plane without moving the joysticks.
IMPORTANT: The control algorithm uses feedback of the sensors of the PXFmini, so it is really important to have the autopilot horizontal and well attached to the plane. Otherwise, the plane will crash.
Those parameters are not the perfect ones, so maybe with other amplitudes and other PIDs the plane will fly better. If you optimize them please share your results with us and we will try them. | http://docs.erlerobotics.com/erle_robots/projects/plane_x1/elevons_amplitude | 2019-03-18T17:35:43 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.erlerobotics.com |
LATEST VERSION: 8.2.13 - RELEASE NOTES
Troubleshooting .NET Applications
Troubleshooting .NET Applications
The .NET Framework does not find managed DLLs using the conventional PATH environment variable. In order for your assembly to find and load a managed DLL, it must either be loaded as a private assembly using assemblyBinding, or it must be installed into the Global Assembly Cache (GAC).
The GAC utility must be run on every machine that runs the .NET code.
If an assembly attempts to load the GemStone.GemFire.Cache.dll without meeting this requirement, you receive this System.IO.FileNotFoundException:
{{ Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'GemStone.GemFire.Cache, Version=8.0.0.0, Culture=neutral, PublicKeyToken= 126e6338d9f55e0c' or one of its dependencies. The system cannot find the file specified. File name: 'GemStone.GemFire.Cache, Version=8.0.0.0, Culture=neutral, PublicKeyT oken=126e6338d9f55e0c' at HierarchicalClient.Main() }} | http://gemfire82.docs.pivotal.io/docs-gemfire/gemfire_nativeclient/dotnet-caching-api/troubleshooting-dotnet-applications.html | 2019-03-18T17:49:02 | CC-MAIN-2019-13 | 1552912201521.60 | [] | gemfire82.docs.pivotal.io |
Code Reviewing Hyperlambda
One of our partners has just finished the initial implementation of a medium complex FinTech project using Magic and Hyperlambda, and they wanted me to code review the product before putting the finishing touches on it. The system has 15 tables, it’s got 169 files, 165 of which are Hyperlambda, and the project is of medium complexity. There are two interesting points to this story, which is as follows.
- It took me 15 minutes to do a complete code review on the entire thing
- I could only find a handful of issues, in fact 7 to be exact
The reasons why this is spectacular is because first of all, if I was to code review C# code or PHP code for a system of the same complexity, I would probably need at least a whole working day, maybe even half a week. Due to the degree of simplicity in Hyperlambda, I could read through its entire codebase in 15 minutes. The system’s backend was implemented by one developer, whom had never created any Hyperlambda code before, still I could only find 7 issues with it. Every time I code review C# code of the same complexity, I can easily pinpoint dozens and sometimes hundreds of issues.
Anyways, this illustrates a couple of crucial unique selling points with Hyperlambda, being of course that the thing is pathetically easily understood, ridiculously easy to read, and preposterously simple to maintain. Imagining reading through a C# project, or a Java project, that is created on top of a database with 15 different tables, doing multiple integrations with 3rd party systems, with the same degree of complexity as this system has in 15 minutes, and understanding the system in its entirety in 15 minutes, is of course simply madness. With Hyperlambda, I spent no more than 15 minutes, and after those 15 minutes, I had created a deep and thorough understanding of 90% of the codebase, what it does, and how it does what it does.
BTW, if you’re interested in code reviewing your Hyperlambda code, and or tutoring in any ways, this is a services we are providing for our partners - And you can send us an email at [email protected] if you want to start the discussion in these regards. Our philosophy is that the foundation of our success is our clients’ success and happiness. When you sleep at night, we sleep at night … ;) | https://docs.aista.com/blog/code-reviewing-hyperlambda | 2022-09-25T02:43:06 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.aista.com |
Frame on Nutanix AHV¶
You can host your applications, desktops, and user data within your own private cloud using Nutanix AHV infrastructure with Frame. To use your Nutanix AHV cluster, you will deploy a Cloud Connector Appliance (CCA) which enables Frame Platform to communicate with Prism Central and Prism Element on your Nutanix AHV cluster. The Frame Platform leverages Nutanix AHV infrastructure to enable a rich array of enterprise-grade features:
Announcement
Support for Cloud Connector Appliance (CCA) 2.x will be deprecated as of October 31, 2020. Existing CCA 2.x customers will need to upgrade to CCA 3.0 as soon as possible. Instructions for upgrading the CCA can be found here.
The ISO file for CCA 3.0 can be downloaded from the Nutanix Downloads Portal.
As part of the upgrade process, customers will also need to configure new firewall rules and proxy servers to allow for the new FQDNs required by CCA 3.0. A complete list of required FQDNs for CCA 3.0 can be found in our network requirements documentation.
Use the links below or in the sidebar to navigate to a specific point of the Frame on AHV documentation, if needed:
- Setup
- Management
- CCA/WCCA Configuration
- Manual Cloud Connector Appliance Setup | https://docs.frame.nutanix.com/infrastructure/byo/frame-AHV/frame-AHV.html | 2022-09-25T02:36:06 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.frame.nutanix.com |
-
Facebook Ads
Facebook Ads allows marketers to retrieve statistics about their ads, ad sets, and campaigns running on Facebook.
Hevo uses the Facebook Marketing API to replicate your Facebook Ads data into the desired Destination database or data warehouses for scalable analysis. For this, you must authorize Hevo to access data from your Facebook Ads account.
You can also ingest data from your Instagram Ads account using the Facebook Ads connector. Instagram Ads follows the same schema as the Facebook Ads connector and offers the same data replication settings. Facebook account from which data is to be ingested.
Access to Facebook Ads console and related statistics.
Note: Facebook’s authentication system uses pop-ups that may encounter issues if ad blockers are not disabled during the setup. Read Authorization.
Configuring Facebook Ads as a Source
Perform the following steps to configure Facebook Ads as the Source in your Pipeline:
Click PIPELINES in the Asset Palette.
Click + CREATE in the Pipelines List View.
In the Select Source Type page, select Facebook Ads.
In the Configure your Facebook Ads Account page, click ADD FACEBOOK ACCOUNT.
Log in to your Facebook account.
Click Done to authorize Hevo to access your Facebook Ads and related statistics.
In the Configure your Facebook Ads Source page, specify the following:
Pipeline Name: A unique name for your Pipeline, not exceeding 255 characters.
Select Ad accounts: The Facebook Ads account(s) from where you want to replicate the data. One Facebook account can contain multiple Facebook Ads accounts.
Report Type: Select one of the following report types to ingest data from your Facebook Ads:
Predefined Reports: Hevo automatically selects all the reports and their respective fields and columns for ingestion.
Custom Reports: Hevo allows you to manually select the aggregation level, the aggregation time, and the fields for the Facebook Ads report that you want to replicate. Refer to the section, Custom Reports to know how to configure them.
Ads Action Report Time: The reported time of an action by a Facebook user. The available options are:
Impression: (Default value) The time at which your ad was watched.
Conversion: The time when an action was taken after watching your ad.
Mixed.
Historical Sync Duration: The duration for which the existing data in the Source must be ingested. Default value: 1 Year.
Advanced Settings
Breakdowns: The filters to further narrow down the results of your retrieved reports. For example, you can break down the results by Date or Age Group.
Attribution Windows: You can specify either one or both of the following:
Click Attribution Window: The number of days between a person clicking your ad and taking an action such as install or subscribe. Default value: 7d_click (7 days).
View Attribution Window: The number of days between a person viewing your ad and taking an action such as install or subscribe. Default value: 1d_view (1 day).
Click TEST & CONTINUE.
Proceed to configuring the data ingestion and setting up the Destination.
Custom Reports
With custom reports, you can manually select the parameters such as the aggregation settings, breakdown settings, and fields for the Facebook Ads reports that you want to replicate.
Note: The data included in the custom report is based on the set of parameters you select.
To configure custom reports, specify the following Custom Report Settings:
Fields: The fields or metrics that you want to include in the report such as Ad name or Campaign ID.
Aggregation Level: The primary filter on which the report data is aggregated, such as the Ad ID or the Account ID. Default value: Ad ID.
Aggregation Time: The time interval for which the report data is aggregated. This category shows you the trend and performance based on 1 Day, 1 Week, 1 Month, or 90 days period. Default value: 1 Day. For example, if the aggregation time is 1 Month, the report contains data for the past one month from the time the Pipeline runs.
Data Replication
Note: The custom frequency must be set in hours, as an integer value. For example, 1, 2, 3 but not 1.5 or 1.75.
For predefined creates a replication task for Facebook Ads reports. It also creates a data refresher task per report type to fetch the attributing results of an older date. By default, 30 days old data is fetched once per day.
For custom resyncs the data of the past 30 days.
Schema and Primary Keys
Hevo uses the following schema to upload the predefined reports’ data in the Destination:
Note:
The schema for custom reports is derived dynamically based on the settings you provide.
From Release 1.87 onwards, with JDBC-based Destinations, Hevo creates the Destination schema for predefined reports based on the data retrieved from Facebook Ads. This ensures that the limit on the number of Destination columns is not exceeded, and only fields that contain data are created in the Destination tables. However, this could lead to some Source fields being mapped to incompatible Destination columns. Read Resolving Incompatible Schema Mappings.
Data Model
The following is the list of tables (objects) that are created at the Destination when you run the Pipeline for predefined reports:
Note: For custom reports, the data model is derived dynamically based on the custom report settings you provide.
Additional Information
Read the detailed Hevo documentation for the following related topics:
Source Considerations
- Facebook Ads allows fetching data up to the past three years. Hence, you can change the offset using the Change Position action only within the past three years; beyond this, the API fails.
Limitations
- This integration does not support replicating data for reviews and pages. Hevo has a separate integration for Facebook Page Insights that can be used to fetch page and post insights.
See Also
Revision History
Refer to the following table for the list of key updates made to this page: | https://docs.hevodata.com/sources/mkt-analytics/facebook-ads/ | 2022-09-25T02:41:01 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.hevodata.com |
Querying status
The ProL2TP daemons can be queried using CLI utilities.
In this section, we explain how these can be used to query status.
prol2tpd
Status summary
The prol2tpd daemon is usually managed with the Linux system's init subsystem.
Most Linux distributions have converged on using systemd, but some use alternatives, e.g. upstart and rc-sysvinit. The service command wraps these, and can be used to determine whether prol2tpd is running:
root@lns:~# service prol2tp status ● prol2tp.service - ProL2TP L2TPv2/L2TPv3 network protocol daemon Loaded: loaded (/lib/systemd/system/prol2tp.service; disabled; vendor preset: Active: active (running) since Thu 2018-11-29 11:45:30 UTC; 6min ago Process: 12245 ExecStart=/usr/sbin/prol2tpd $PROL2TPD_OPTIONS (code=exited, status=0/SUCCESS) Process: 12238 ExecStartPre=/bin/sh -c modprobe -q -a $MODULES || true (code=exited, status=0/SUCCESS) Main PID: 12246 (prol2tpd) Tasks: 6 Memory: 1.9M CPU: 232ms CGroup: /system.slice/prol2tp.service ├─12246 /usr/sbin/prol2tpd -d -o /var/log/prol2tpd.log └─12251 /usr/sbin/prol2tp-scriptd -f
It is also possible to query prol2tpd state directly using the prol2tp tool. The
show system command summarises the current state and shows some configuration values that may be derived, for example the hostname and the router id value used for L2TPv3:
root@lns:~# prol2tp show system ProL2TP V2.0.0 (c) Copyright 2004-2019 Katalix Systems Ltd. L2TP configuration: listening on: 192.168.211.10:1701 hostname: jackdaw router id: 3232235848 log level: NOTICE L2TP service status: tunnels: 1, sessions: 3
L2TP tunnels
To obtain lists of tunnels, use the
show tunnels command.
root@lns:~# prol2tp show tunnels TunId Name State Time Peer N 27774 - UP 00:41:50 192.168.211.20
In this example, there is one tunnel, assigned id 27774.
The 'N' in the first column indicates it is a net instance, that is, it is created by a network request. The L2TP peer is 192.168.211.20, which is our LAC. The tunnel doesn't have a name because it is created by the network.
Details of any tunnel can be seen with the
show tunnel command.
root@lns:~# prol2tp show tunnel id 27774 Tunnel 27774 from 192.168.211.25:20142 to 192.168.211.20:1962 created at: Nov 29 11:56:36 2018 origin: net tunnel mode: LNS version: L2TPv2 encapsulation: UDP state: ESTABLISHED time since state change: 00:02:26 number of sessions: 3 log level: INFO local tunnel id: 27774 transport ns: 4 transport nr: 10 transport cwnd: 5 transport ssthres: 10 transport tx window: 10 tunnel profile name: lac1 peer tunnel id: 24377 peer host name: lac peer vendor name: prol2tp 1.8.6 Linux-4.4.0-139-generic (x86_64)
For more details refer to the prol2tp man page.
L2TP sessions
Sessions can be listed with the
show sessions command.
root@lns:~# prol2tp show sessions TunId SessId TunName SessName Type State Time Identifier N 27774 34877 - - PPP UP 00:41:49 [email protected] N 27774 2129 - - PPP UP 00:41:49 [email protected] N 27774 57634 - - PPP UP 00:41:49 [email protected]
With L2TPv2, sessions are assigned 16-bit ids, scoped by the tunnel in which they are created. The session list shows the assigned ID of the session and its tunnel.
Like the
show tunnels output, the 'N' in the first column indicates that a session is created by a network request.
For PPP sessions, the Identifier column shows the PPP username, if known.
root@lcce1:~# prol2tp show sessions TunId SessId TunName SessName Type State Time Identifier C 2008991623 762631622 one one ETH UP 00:00:02 demo-l2tpv3-eth-1
With L2TPv3, tunnel and session ids are 32-bit values.
For L2TPv3 ethernet/VLAN pseudowires, the Identifier column shows the session's L2TPv3 Remote End ID. A 'C' in the first column indicates taht a session was created by a local request based on the prol2tpd config file.
Details of a session instance are shown using the
show session command. If the optional
stats keyword is used, the output includes dataplane statistics.
Sessions are identified by their id and their parent tunnel's id.
root@lns:~# prol2tp show session id 34877 in tunnel id 27774 stats Session 34877 on tunnel 27774: created at: Nov 29 11:56:36 2018 state: ESTABLISHED origin: net time since state change: 00:04:24 pseudowire type: PPP log level: INFO tunnel id: 27774 session id: 34877 peer session id: 42991 session profile name: lac pseudowire profile name: ppp data tx pkts/bytes/errors: 19 / 575 / 0 data rx pkts/bytes/errors: 18 / 404 / 0 data rx oos pkts/discards: 0 / 0
For more details refer to the prol2tp man page.
propppd
Status summary
Like prol2tpd, the propppd daemon is managed by the system's init subsystem.
root@lns:~# service proppp status ● proppp.service - ProPPP Scalable PPP network protocol daemon Loaded: loaded (/lib/systemd/system/proppp.service; disabled; vendor preset: Active: active (running) since Thu 2018-11-29 11:45:30 UTC; 7min ago Process: 12200 ExecStart=/usr/sbin/propppd $PROPPPD_OPTIONS (code=exited, status=0/SUCCESS) Main PID: 12204 (propppd) Tasks: 5 Memory: 1.6M CPU: 134ms CGroup: /system.slice/proppp.service └─12204 /usr/sbin/propppd -d -l /var/log/propppd.log
It is also possible to query propppd directly using the propppctl tool. The
propppctl status command shows a status summary:
root@lns:~# propppctl status ProPPP v2.0.0 support: [email protected] License: unlicensed PPP: ppp instance count: 3 create requests: 3, failures: 0 destroy requests: 0, failures: 0 RADIUS: access requests: 3, accepts: 3, rejects: 0, challenges: 0 accounting starts: 3, stops: 0, updates: 21, responses: 24 disconnect requests: 0, responses: 0 retransmits: 0, timeouts: 0 auth requests in progress: 0, accounting requests in progress: 0 Events: created: 3, destroyed: 0, up: 3, down: 0 Config: config updates: 1, failures: 0
PPP sessions
To obtain a list of PPP sessions, use the
propppctl list command.
root@lns:~# propppctl list Name Interface Duration State User session-1 ppp0 0:08:07 UP [email protected] session-2 ppp1 0:08:07 UP [email protected] session-3 ppp2 0:08:07 UP [email protected]
It is possible to filter the list to show only PPP instances that are either up or down.
root@lns:~# propppctl list down Name Interface Duration State User root@lns:~# propppctl list up Name Interface Duration State User session-1 ppp0 0:08:21 UP [email protected] session-2 ppp1 0:08:21 UP [email protected] session-3 ppp2 0:08:21 UP [email protected]
Details of a session are displayed with the
propppctl show command.
root@lns:~# propppctl show session-1 interface name: ppp0 created: 2018-11-29 11:56:39 type: PPPoL2TP debug: 7 connect delay: 1000 state: RUNNING connect time: 8.7 minutes link mtu: 1500, peer mru: 1500 run count: 84 lcp: echo interval: 30, max echo failures: 0 next echo: 18 want: pap chap eap asyncmap magic mru pcomp accomp got: pap chap asyncmap magic mru pcomp accomp allow: chap eap asyncmap magic mru pcomp accomp his: asyncmap magic pcomp accomp.5.1.100 peer ip: 10.5.1.1 pap: auth timeout: 30, retransmit interval: 3 our state: CLOSED, peer state: CLOSED chap: timeout: 3, rechallenge time: 0 local state: lowerup started done transmits: 1 peer state: lowerup eap: local: state: Closed requests: 0, responses: 0 timeout: 3, max requests: 10 peer: state: Closed requests: 0, responses: 0 timeout: 20, max requests: 20 radius: status: AUTH_CHAP_ACK acct updates: 8 auth: remote name: '[email protected]' config: local: , peer: done: local: chap, peer: pppol2tp: protocol version: L2TPv2 (LNS) id: 27774/2129 peer id: 24377/57566
For more details refer to the propppctl man page.
proacd
Status summary
As with prol2tpd and propppd, proacd is managed by the system's init subsystem.
root@lac~# service proac status ● proac.service - ProL2TP Access Concentrator network protocol daemon Loaded: loaded (/lib/systemd/system/proac.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2019-10-17 19:00:49 UTC; 5min ago Main PID: 14851 (proacd) CGroup: /system.slice/proac.service └─14851 /usr/sbin/proacd -o /var/log/proacd.log
It is also possible to query proacd directly using the proac_info tool. Passing the
-s argument displays a summary of system state:
root@lac:~# proac_info -s Routing: Routes created: 6 Route create failures: 0 Insufficient license denials: 0 Routes deleted: 4 Route delete failures: 0 Destination open failures: 4 Destination close failures: 0 Source close failures: 0 Routes closed by destination: 0 Routes closed by source: 2 PPPoE: PADI received: 4 Invalid PADI received: 0 PADO sent: 4 PADR received: 4 Invalid PADR received: 0 Resent PADR received: 0 PADS sent: 4 PADT received: 2 Invalid PADT received: 0 PADT sent: 2 L2TP: Tunnels opened: 3 Sessions opened: 2 Sessions closed: 0 DNS lookup failures: 0 RADIUS: Successful requests: 3 Failed requests: 0 DNS lookup failures: 0 Server timeouts: 0 PPPD: Local pppd started: 0 Local pppd terminated: 0 Exit codes: OK: 0 Fatal error: 0 Option error: 0 No kernel support: 0 User request: 0 Connect failed: 0 Option 'pty' command failed: 0 Negotiation failed: 0 Peer failed to authenticate: 0 Idle timeout: 0 Connect time exceeded: 0 Peer dead: 0 Hangup: 0 Init failed: 0 Failed to authenticate with peer: 0 Other: 0
AC routes
A list of all the routes currently instantiated in proacd can be displayed using proac_info using the
-r argument:
root@lac:~# proac_info -r Route 'r2' 13564: Source : PPPoE interface enp0s8 session 5051 service name 'dynamic' (advertised) peer MAC 08:00:27:4E:71:75 Destination: L2TP tunnel 35352 session 41053 Route 'r1' 33975: Source : PPPoE interface enp0s8 session 15932 service name 'static' (advertised) peer MAC 08:00:27:64:4A:2B Destination: L2TP tunnel 64947 session 32602
If the system has many instantiated routes, proac_info can select just one of them using either the route name or route ID rather than showing the full list:
root@lac:~# proac_info -r 33975 Route 'r1' 33975: Source : PPPoE interface enp0s8 session 15932 service name 'static' (advertised) peer MAC 08:00:27:64:4A:2B Destination: L2TP tunnel 64947 session 32602 root@lac:~# proac_info -r r1 Route 'r1' 33975: Source : PPPoE interface enp0s8 session 15932 service name 'static' (advertised) peer MAC 08:00:27:64:4A:2B Destination: L2TP tunnel 64947 session 32602
For more details refer to the proac_info man page. | https://docs.prol2tp.com/manual_querying.html | 2022-09-25T00:53:41 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.prol2tp.com |
Connect to your cloud service provider 🔗
Connect to your cloud service provider to start sending data to Splunk Observability Cloud.
See the following topics to learn how to connect each of these cloud service providers:
Note
Splunk is not responsible for data availability, and it can take up to several minutes (or longer, depending on your configuration) from the time you connect until you start seeing valid data from your account. | https://docs.signalfx.com/en/latest/gdi/get-data-in/connect/connect.html | 2022-09-25T02:07:46 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.signalfx.com |
D-Sheet Piling
VIKTOR's D-Sheet Piling integration requires a specific D-Sheet Piling worker which can be downloaded here.
Analyzing a D-Sheet Piling model in VIKTOR can be done using the
DSheetPilingAnalysis class (worker required).
No binding is provided by VIKTOR for this module, which means that the input file has to be generated manually or
by using the GEOLIB:
from viktor.external.dsheetpiling import DSheetPilingAnalysis# Generate the input SHI file.input_file = ...# Run the analysis and obtain the output file.analysis = DSheetPilingAnalysis(input_file)analysis.execute(timeout=10)output_file = analysis.get_output_file() | https://docs.viktor.ai/docs/create-apps/software-integrations/dsheetpiling/ | 2022-09-25T02:50:28 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.viktor.ai |
API reference#
This section describes the DPF-Post API, which provides three main classes:
DpfSolution: Provides the
Solutionobject, which is the entry point to the result file, its metadata, and contents.
Result: Provides access to
Resultsobject, in accordance with the result type and analysis type.
ResultData: Provides access to the actual data values and closely relates to the fields container concept in DPF-Core. | https://post.docs.pyansys.com/api/index.html | 2022-09-25T02:02:04 | CC-MAIN-2022-40 | 1664030334332.96 | [] | post.docs.pyansys.com |
Set Alpha Node¶
The Set Alpha Node adds an alpha channel to an image.
Inputs¶
- Image
Standard image input.
- Alpha
The amount of Alpha can be set for the whole image by using the input field or per pixel by connecting to the socket.
Outputs¶
- Image
Standard image output.
Bemerkung
This is not, and is not intended to be, a general-purpose solution to the problem of compositing an image that does not contain Alpha information. You might wish to use „Chroma Keying“ or „Difference Keying“ (as discussed elsewhere) if you can. This node is most often used (with a suitable input being provided by means of the socket) in those troublesome cases when you cannot, for some reason, use those techniques directly.
Example¶
Fade To Black¶
To transition the audience from one scene or shot to another, a common technique is to „fade to black“. As its name implies, the scene fades to a black screen. You can also „fade to white“ or whatever color you wish, but black is a good neutral color that is easy on the eyes and intellectually „resets“ the viewer’s mind. The node map below shows how to do this using the Set Alpha node.
In the example above, the alpha channel of the swirl image is ignored. Instead, a Time node introduces a factor from 0.00 to 1.00 over 60 frames, or about 2 seconds, to the Set Alpha node. Note that the time curve is exponentially-shaped, so that the overall blackness will fade in slowly and then accelerate toward the end. The Set Alpha node does not need an input image; instead, the flat (shadeless) black color is used. The Set Alpha Node uses the input factor and color to create a black image that has an alpha set which goes from 0.00 to 1.00 over 60 frames, or completely transparent to completely opaque. Think of alpha as a multiplier for how vivid you can see that pixel. These two images are combined by the Alpha Over node completely (a Factor of 1.00) to produce the composite image. The Set Alpha node will thus, depending on the frame being rendered, produce a black image that has some degree of transparency. Setup and Animate, and you have an image sequence that fades to black over a 2-second period.
Bemerkung
No Scene Information Used
This example node map does not use the Render Layer node. To produce this 2-second animation, no Blender scene information was used. This is an example of using Blender’s powerful compositing abilities separate from its modeling and animation capabilities. (A Render Layer could be substituted for the Image layer, and the „fade-network“ effect will still produce the same effect.)
Fade In a Title¶
To introduce your animation, you will want to present the title of your animation over a background. You can have the title fly in, or fade it in. To fade it in, use the Set Alpha node with the Time node as shown below.
In the above example, a Time curve provides the Alpha value to the input socket. The current Render Layer node, which has the title in view, provides the image. As before, the Alpha Over node mixes (using the alpha values) the background swirl and the alpha title to produce the composite image.
Colorizing a BW Image¶
In the example above, notice how the blue tinge of the render input colors the swirl. You can use the Set Alpha node’s color field with this kind of node map to add a consistent color to a BW image.
In the example map to the right, use the Alpha value of the Set Alpha node to give a desired degree of colorization. Thread the input image and the Set Alpha node into an Alpha Over node to colorize any black-and-white image in this manner. | https://docs.blender.org/manual/de/dev/compositing/types/converter/set_alpha.html | 2020-02-17T06:21:34 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.blender.org |
Service:
Documentation:
Overview:
Bugs:
Blueprints:
Wiki:
Release notes:): | https://docs.openstack.org/networking-sfc/latest/index.html | 2020-02-17T06:40:52 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.openstack.org |
Spec
Concurrent Suspendable QueueThread-safe suspendable queue for multiple producers and multiple consumers.
Thread-safe suspendable queue for multiple producers and multiple consumers.
Template Parameters
Interface Function Overview
Interface Functions Inherited From ConcurrentQueue
Detailed Description
In contrast to the standard @Class.ConcurrentQueue@ this queue suspends the caller if it pops a value when the queue was empty or appends a value to a full fixed-size queue.
The implementation uses Mutexes and Events to optionally suspend the calling thread and uses a @Class.AllocString@ as ring buffer. | http://docs.seqan.de/seqan/2.3.0/specialization_ConcurrentSuspendableQueue.html | 2020-02-17T06:03:27 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.seqan.de |
This is an old revision of the document!
Bojan Popović (bocke)
Hi! My name is Bojan and I am not good in introductions, so I will (try to?) keep it to minimum. :)
My first contact with Linux was in 1999 or 2000. It was an article in a local computer magazine. I first got a chance to try it in 2001. It was something called Winlinux. Very soon I switched to Red Hat. I was unlucky to not have a PC between 2003 and 2005. In 2005 I got a new one and continued exploring. At first I tried Suse, than Debian, and finally at the end of the same year, settled with Slack. Same year I became a moderator of the Linux section of a large internet forum/community in Serbia. That is how I first tried Slack - by the recommendation of another moderator. It seems Slackware was invisible for computer media. They mostly covered Red hat, Mandrake, Suse and Knoppix. The history is repeating today, only the names changed. Anyways, I was hooked and have been a Slacker ever since.
I don't have a serious writing background, but I did write an article for a print edition of "Digital" computer magazine (november of 2009). it was a 3-page article on Linux desktop environments.
From 2011 and onwards I am one of the coordinators of Serbian Slackware community. | http://docs.slackware.com/wiki:user:bocke?rev=1352075710 | 2020-02-17T07:52:42 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.slackware.com |
Set Up Test Data for an Entire Test Class
Test setup methods can reduce test execution times especially when you’re working with many records. Test setup methods enable you to create common test data easily and efficiently. By setting up records once for the class, you don’t need to re-create records for each test method. Also, because the rollback of records that are created during test setup happens at the end of the execution of the entire class, the number of records that are rolled back is reduced. As a result, system resources are used more efficiently compared to creating those records and having them rolled back. If a test method changes those records, such as record field updates or record deletions, those changes are rolled back after each test method finishes execution. The next executing test method gets access to the original unmodified state of those records.
Syntax
Test setup methods are defined in a test class, take no arguments, and return no value. The following is the syntax of a test setup method.
@testSetup static void methodName() { }
Example
The following example shows how to create test records once and then access them in multiple test methods. Also, the example shows how changes that are made in the first test method are rolled back and are not available to the second test method.
@isTest private class CommonTestSetup { @testSetup static void setup() { // Create common test accounts List<Account> testAccts = new List<Account>(); for(Integer i=0;i<2;i++) { testAccts.add(new Account(Name = 'TestAcct'+i)); } insert testAccts; } @isTest static void testMethod1() { // } }
Test Setup Method Considerations
- Test setup methods are supported only with the default data isolation mode for a test class. If the test class or a test method has access to organization data by using the @isTest(SeeAllData=true) annotation, test setup methods aren’t supported in this class. Because data isolation for tests is available for API versions 24.0 and later, test setup methods are also available for those versions only.
- Multiple test setup methods are allowed in a test class, but the order in which they’re executed by the testing framework isn’t guaranteed.
- If a fatal error occurs during the execution of a test setup method, such as an exception that’s caused by a DML operation or an assertion failure, the entire test class fails, and no further tests in the class are executed.
- If a test setup method calls a non-test method of another class, no code coverage is calculated for the non-test method. | http://docs.releasenotes.salesforce.com/en-us/spring15/release-notes/rn_apex_test_setup_methods.htm | 2018-07-15T20:59:49 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.releasenotes.salesforce.com |
All content with label as5+gridfs+infinispan+lock_striping+nexus.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, archetype, jbossas, guide, schema, listener, cache, amazon,
s3, grid, jcache, api, xsd, ehcache, maven, documentation, ec2, 缓存, hibernate, aws, interface, custom_interceptor, setup, clustering, eviction, concurrency, out_of_memory, jboss_cache, import, index, events, hash_function, configuration, batch, buddy_replication, loader, write_through, cloud, mvcc, tutorial, notification, read_committed, jbosscache3x, xml,, snapshot, webdav, hotrod, repeatable_read, docs, consistent_hash, batching, store, jta, faq, 2lcache, jsr-107, jgroups, lucene, locking, hot_rod
more »
( - as5, - gridfs, - infinispan, - lock_striping, - nexus )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/as5+gridfs+infinispan+lock_striping+nexus | 2018-07-15T21:23:27 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.jboss.org |
What you will need:
- A 3rd Party SMTP Account (please see supported options below)
- Access to your domain registrar's DNS Settings.
**Please note: the Startup Plan of $97/month only allows for 1 SMTP. The Etison Full Suite allows for 3 SMTP integrations. If you're looking for more, please contact billing support.**
Step One:
Send13: Learn to Integrate Send13 SMTP with ClickFunnels
SparkPost: Learn to Integrate SparkPost SMTP with ClickFunnels
Actionetics MD: Learn how to setup your ClickFunnels Mail SMTP
Step Two: Presently Unsupported SMTP Setups
- Amazon AWS: Please find instructions here, but know that our support staff with be unable to provide assistance. Amazon AWS also does not include tracking for bounces and unsubscribes, so we recommend using a supported integration.
If you have any questions about this, please contact our support team by clicking the support icon in the bottom right-hand corner of this page. | http://docs.clickfunnels.com/smtp-email-integrations/general-faq/smtp-integrations | 2018-07-15T21:19:59 | CC-MAIN-2018-30 | 1531676588972.37 | [array(['https://downloads.intercomcdn.com/i/o/62327410/23319f2240247acebbb13274/rule.png',
None], dtype=object) ] | docs.clickfunnels.com |
SHOW ALL CONTENT
Table of contents
Procedure Management within the CMDB
You can manage your procedures through CI type Procedure. This CI type is part of Octopus original installation, but the actual article suggests a more specific way to manage efficiently your procedure using the CMDB.
Step 1 : Planning a Procedure Structure
A good structure will ease procedure management and research. Examples below suggest types, but each customer has to define his. Establish a structure that is simple, and includes all types of procedures you may need to manage.
- Identify which type of procedures you want to manage. Examples are :
Installations
Configurations
Architecture Designs
Application Designs
Database Designs
Processes
Templates
Others...
- Identify attributes for each Procedure CI type. Examples are :
Description
Last Update
Owner (role)
Key Words
Others...
Step 2 : Configuration of "Procedure" CI type
- From Tools \ Reference Data Management, open CI \ Types
- If Procedure Type exists, select it. If not, right-click on Types and click Add"
Configuration Tab
- Make sure that Is a document option is selected. If you want.
Relation Tab
- Identify relationships supported by this CI type. When you create a procedure, only supported relationships will be accessible.
Attributs Tab
- Add attributes you identified in Step 1.
Categories Tab
- Add categories you identified in Step 1.
Step 3 : Creation of "Procedures" CI"
- From Configurations Module, click on Create CI"
- Enter information from drop down lists (if applicable), and give a name to the new CI (Note: the CI name must be unique)
- Click OK"
- Complete missing information
- Identify Main Contact, which could represent the procedure owner or the person responsible to update it (according to your internal context)
Relations Tab
To establish relationship(s)between the procedure and other CI :
- Click on
- Search for the CI to link and specify relationship type
- Click OK, and repeat to add a relationship to another CI, if required
Configuration Tab
- Complete the attribute information for this procedure
Costs Tab
- Enter the cost information, if this is pertinent for this procedure
Requests Tab
- You will see all requests from any type created using this CI
Note Tab
- If needed
Attached Files
- You can add a file, a link to a file, or an attachment from the content of the clipboard
History Tab
- Keeps the modification history done on this CI
Procedure Search
- Select Configurations module and access to the Advanced Search. You can make a search based on multiple criteria.
- An interesting search is by category. To do so, you have to select first the CI type in General Tab to be able to access category list for this CI type.
Visual Explanation
X
Thank you, your message has been sent.
Help us improve our articles | https://docs.octopus-itsm.com/en/articles/how-manage-procedures-octopus | 2018-07-15T21:23:47 | CC-MAIN-2018-30 | 1531676588972.37 | [array(['/sites/all/files/wiki1/fr/ProcedureMgmt/CI_Type_Procedure_EN.png',
None], dtype=object)
array(['/sites/all/files/wiki1/fr/ProcedureMgmt/Relation_Tab_EN.png',
None], dtype=object)
array(['/sites/all/files/wiki1/fr/ProcedureMgmt/Attribut_Tab_EN.png',
None], dtype=object)
array(['/sites/all/files/wiki1/fr/ProcedureMgmt/Categorie_Tab_EN.png',
None], dtype=object)
array(['/sites/all/files/wiki1/en/ProcedureMgmt/CreateProcedureCI1.png',
None], dtype=object)
array(['/sites/all/files/wiki1/en/ProcedureMgmt/Procedure_CI_EN.png',
None], dtype=object)
array(['/sites/all/files/wiki1/en/ProcedureMgmt/Relation_Tab_CI_EN.png',
None], dtype=object)
array(['/sites/all/files/wiki1/en/ProcedureMgmt/Recherche_CI_EN.png',
None], dtype=object)
array(['/sites/all/files/wiki1/en/ProcedureMgmt/Recherche_1_EN.png', None],
dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/wiki1/en/ProcedureMgmt/Recherche_2_EN.png',
None], dtype=object) ] | docs.octopus-itsm.com |
The
.gitattributes file must be encoded in UTF-8 and must not contain a
Byte Order Mark. If a different encoding is used, the file's contents will be
ignored.
Syntax Highlighting
The
.gitattributes file can be used to define which language to use when
syntax highlighting files and diffs. See "Syntax
Highlighting" for more. | https://docs.gitlab.com/ee/user/project/git_attributes.html | 2018-07-15T21:24:49 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.gitlab.com |
LogChatHistory Property
This content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release.
Gets.
Namespace: Microsoft.Rtc.Collaboration.GroupChat
Assembly: Microsoft.Rtc.Collaboration.GroupChat (in Microsoft.Rtc.Collaboration.GroupChat.dll)
Syntax
'Declaration Public Property LogChatHistory As Nullable(Of Boolean) Get Set
'Usage Dim instance As ChatRoomCategorySettings Dim value As Nullable(Of Boolean) value = instance.LogChatHistory instance.LogChatHistory = value
public Nullable<bool> LogChatHistory { get; set; }
Property Value
Type: System.Nullable<Boolean>
true if chat history should be logged for chat rooms in this category; false if chat rooms should not log history; or null if this value should be inherited from the parent category. (default)
See Also
Reference
ChatRoomCategorySettings Class
ChatRoomCategorySettings Members
Microsoft.Rtc.Collaboration.GroupChat Namespace | https://docs.microsoft.com/en-us/previous-versions/office/developer/communication-server-2007-R2/ff755143(v=office.13) | 2018-07-15T21:23:44 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.microsoft.com |
sim3(speciesData)
Randomizes a binary matrix speciesData by reshuffling elements within each column equiprobably.
This algorithm assumes species are equiprobable, but preserves differences among sites (= column sums).
This algorithm preserves differences in species richness among sites (= colsums), but assumes that all species are equiprobable. This assumption is usually unrealistic, and sim3 has a high frequency of Type I errors with random matrices, so it is not recommended for co-occurrence analysis.
Gotelli, N.J. 2000. Null model analysis of species co-occurrence patterns. Ecology 81: 2606-2621.
randomMatrix <- sim3(speciesData=matrix(rbinom(40,1,0.5),nrow=8)) | http://docs.ecosimr.org/sim3.html | 2018-07-15T20:40:26 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.ecosimr.org |
Table of contents
Related articles
Introduction
Information provided by the user within the request submit form is important and we have worked on conditions on forms and tasks to better exploit it. The conditions allow forms to be dynamic and to ask questions to the user based on his previous responses. In addition, with the conditions it is possible to adapt the workflow of the request by selecting which tasks will be generated.
Conditions apply on two types of actions:
Fields to display in a form.
The generation of a task in a service request.
Basis for the conditions
Conditions can be based on:
- The value of a field in the custom form.
- The value of a field in the request.
- Whether or not a field was filled.
How it works
To add conditions to a field in a form or a task, you need to use the F3 key to see available fields and to add the description of a condition.
Configuration of a condition in a form
In the dynamic form creation, the form fields and those from the request are the criteria that will be used as conditions to determine if the field is to be displayed or not to the user.
Follow these steps to add a Display condition:
- Go to Options section.
- Place the mouse in the Display condition field.
- Press F3 on the keyboard.
- Choose the field on which the condition will be based on.
- Add the displayed condition.
- Test the form to make sure everything works.
- Make corrections if needed.
- Finish with OK in the field and proceed with the configuration of the rest of the form.
Configuration of a condition for a task
When generating tasks, fields from request and form can be used as variables. These variables can then be put into the Condition field control task creation.
Follow these steps to add a condition for task creation:
- Place the mouse cursor in the Condition field.
- Press F3 on the keyboard to display the custom and request fields available.
- Make your selection and confirm with OK.
How to build conditions and the available operators
Whether the conditions are used in a form or a task, the method to configure the condition is the same. As it is easier to see and validate on the visualization screen, the following examples are shown in forms.
When you prepare task conditions, test the conditions as a field display condition first, since it is the easiest way to determine if the conditions work.
In bilingual environments, it is important to build the value list tied to combo list or radio buttons field types in both languages. The use of a comma is very useful in this mode.
Be careful when renaming a field or a choice in a value list, since it could well render a condition invalid. Use the display screen to properly validate conditions.
The comma
- To add more then one choice to a condition, they must be separated by a comma.
- Blue, White, Red
- In a bilingual mode, this is how to account for both languages.
- Yes, Oui
Quotation marks
- Quotation marks encloses the response for text, large text, or value list value containing a comma in it. As explained in the previous point, the comma is used to separate the different valid responses in a condition. They will be used in elements of a list like:
- iPhone 6, grey
- iPhone 6, silver
- LG G5, black
- Samsung Galaxy S7, black
- A condition based on these fields will need quotation marks to work properly:
- NameOfTheVariable="iPhone 6, grey", "iPhone 6, silver"
Is equal to
- Use the equal sign = in a condition based on a Combo list or Radio button type field.
- To indicate the choice or choices that will make the condition true.
A choice is presented to the user to have some paper prepared for a service.
There are 3 options:
For the choice with logo, the user will need to add an image.
The image only needs to be asked for the logo, so the question will only appear if the user's choice is equal to Parchment paper + Service name + Logo.
From the Add the logo file field, add a display condition.
- With F3 you can obtain the name of the variable:
- $Paperorder.Chooseaccordingtotheneed
- Then add the equal = sign.
- And add the choice that will make the condition true.
- Parchment paper + Service name + Logo
The result will be to make the Add the logo file field appear if the condition is met.
- Use the equal = sign in a condition based on fields of the following types Check box, Date, Date and Time, File Attachment or CI.
- = True, indicates that the condition will be met if the field is checked or filled.
- = False, indicates that the condition will be met if the field stays empty.
As part of a mobile phone request, we offer the user to check the options he needs.
There are no conditions related to the first three choices, but if the user checks 64GB we want a comment to appear along with a question asking for the user's Cost Center.
Here is the condition for the Cost Center:
And the result if the condition is true:
Is different from
- Use smaller than and greater than signs as follows <> in a condition based on a Combo list or Radio button type field.
- To indicate the choice or choices that will make the condition true.
A choice is presented to the user to have some paper prepared for a service.
There are 3 options:
In two of the three choices, we will need to ask the Service name to the user.
The service name will only be requested if the user's choice <>Parchment paper.
Here is the Service name condition:
And the result when the condition is met:
The field was filled or selected
Conditions can be based on the fact of adding information or using a field.
- Use the =True combination in a condition based on using one of the following types Check box, Date, Date and Time, File Attachment or CI.
- =False will indicate the opposite, meaning that the field was left empty.
In a move request, a message needs to appear to the user if there are boxes to move.
The comment will appear only if there is information added to the Other items to move field.
Here is the condition for the comment:
And the result when the condition is true:
Values contained in a specific field of a form
- A condition can be based on the text entered into a text or large text field.
- The text searched needs to be in quotes.
- Octopus does not distinguish between upper and lower case letters.
- Wild cards variables are not supported.
The user is asked via a text field the name of his favourite hockey team.
A comment will appear based on the answer. To do this, there will be multiple comment field types to personalize the message to the user following his answer.
Here is the condition of one of the comments:
And the different results when the condition is true:
Value contained in a field from the request
- A condition can be based on the fields of a request instead of the ones from the form, for example:
- The Site of the user.
- The Department of the user.
- The VIP field of the user.
Certain fields, like Site or Department, can be associated with diverse elements like the Requester, the User or the CI. It is hence very important to make the selection from the correct node.
In our scenario, because we are replacing equipment, we know that the information available to users is not the same from one site to the other.
The new printers, installed in Quebec, have a label that clearly identifies them, but this is not the case elsewhere.
The condition will then be based on the site of the user. If the user is from Quebec, and reports a printer problem, additional fields will be used to identify the equipment in question.
From the Identification Comment, Printer Identification and Indicate where the printer is located fields, we add the display conditions.
- With F3 we open the window to make the selection:
- Open the Incident / SR node.
- Open the User node.
- Make the selection of the Site field.
@Incident.User.SiteRootName
- We finish the condition with the site equal or different from:
- =Quebec
- <>Quebec
Here is the condition to add the comment:
The form will be as follows for a user that is not from Quebec:
And for a user from Quebec:
The combination of fields
Conditions can be combined to define a condition on more than one field at a time. This works with fields from the form as well as fields from the request.
Use the following variables to combine more than one field to a condition:
- @OR
- For a OR type condition, Field1=ABC@ORField2=ABC, DEF.
- Everything can be on the same line or there can be a carriage return (Enter) to separate the first field of the variable.
- Only one of the conditions must be true for the condition to work.
In a form to request equipment, we ask the user to check the items that he needs. However, if he asks for a laptop or a second screen, we want him to justify his need. There is only one justification field, but two field can trigger the condition.
Choices presented to the user:
Here is the condition for the justification field:
$Equipmentrequest.Laptop]=Yes@OR$$Equipmentrequest.Secondscreen=Yes
Results if one or the other is checked:
- @AND
For a AND type condition, Field1= ABC @AND Field2 = DEF.
Everything can be on the same line or there can be a carriage return (Enter) to separate the first field of the variable.
Every variable must be met for the condition to work.
In a form to request equipment, we want to offer users from Montreal that are part of the Communications department the choice of an iPad to check on the content of the company's Web pages. The condition will need to be based on the site and the department.
Here is the condition for the iPad choice :
@Incident.User.SiteRootName=Montreal@[email protected]=Communications
Results depend on the site and department:
For a user from Montreal from the Marketing department, only one condition is true.
For a user from Quebec from the Communications department, only one condition is true.
For a user from Montreal from the Communications department, both conditions are true.
- Parentheses
- Allows combining multiple operators in a single logical expression.
- Respects operator priority by resolving expressions contained within parentheses first.
- Parentheses must be in equilibrium on either side of an operator, e.g.: ( expr1 ) @And ( expr2 @Or expr3 ) presents the same amount of parentheses on either side of the @And operator.
For example, if the email or extension fields are empty.
The condition for an empty email field would be @Incident.User.Email=False
Thank you, your message has been sent. | https://docs.octopus-itsm.com/en/articles/configuration-conditions-forms-and-tasks | 2018-07-15T21:09:42 | CC-MAIN-2018-30 | 1531676588972.37 | [array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/F3PourChamps_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Création de formulaires personnalisés dans Octopus - APRES 4.1.130/ConditionAffichage1_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Création de formulaires personnalisés dans Octopus - APRES 4.1.130/ConditionAffichage2_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Création de formulaires personnalisés dans Octopus - APRES 4.1.130/ConditionAffichage3_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Création de formulaires personnalisés dans Octopus - APRES 4.1.130/ConditionAffichage4_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration avancée des tâches/AjoutVariablePersoDansConditionV2_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ChoixCommandePapier_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionLogo_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionLogoVraie_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ChoixTelephoneMobile_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionCentreCout_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionCentreCoutVraie_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ChoixCommandePapier_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionNomService_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionNomServiceVraie_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ChoixDemenagement_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionAutresItems_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionAutresItemsVraie_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ChoixEquipeHockey_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionCanadiens_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionCanadiensVraie_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionNordiquesVraie_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionBruinsVraie_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/F3PourSiteUtilisateur_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionSite_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/UtilisateurDifferentDeQuebec_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/UtilisateurDeQuebec_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ChoixEquipement_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionEquipement_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionEquipementPortableVraie_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionEquipementEcranVraie_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionEquipement_iPad_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionMontrealMarketing_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionQuebecCommunications_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Configuration des conditions pour les formulaires Web et les tâches/ConditionMontrealCommunications_EN.png',
None], dtype=object)
array(['https://wiki.octopus-itsm.com/sites/all/files/Truc.gif', None],
dtype=object) ] | docs.octopus-itsm.com |
An installation might not succeed when IaaS time servers are not synchronized with the vRealize Appliance and the Identity Appliance.
Problem
You cannot log in after installation, or the installation fails while it is completing.
Cause
Time servers on all servers might not be synchronized.
Solution
For each server (Identity Appliance, vRealize Appliance, and all Windows servers where the IaaS components will be installed), enable time synchronization as described in the following topics:
Enable Time Synchronization on the Identity Appliance
Enable Time Synchronization on the vRealize Appliance
Enable Time Synchronization on the Windows Server
For an overview of timekeeping for vRealize Automation, see Time Synchronization. | https://docs.vmware.com/en/vRealize-Automation/6.2/com.vmware.vra.install.doc/GUID-8FAAA7F3-528B-4945-8DC4-B5A7F9001F35.html | 2018-07-15T21:32:29 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.vmware.com |
The Particles Falloff
This is an additional falloff for the X-Particles particle modifiers.
Note: this falloff may not work correctly other than with the X-Particle modifiers; for example, it may not do anything in Cinema 4D deformers or Mograph effectors. This is currently a limitation.
Most falloffs use a defined area for the falloff, such as the volume of a cube or sphere. The particle falloff, however, treats each particle from an emitter as its own individual falloff. This enables a modifier to affect particles only when they are within the required distance of a particle from another emitter.<<
Parameters
Radius
This is the sphere surrounding each particle from the falloff emitter within which the modifier will have an effect on other particles. The strength of the effect is determined by how close the particles are. For example, if the 'Radius' is set to 10 units, and the distance between two particles is 9 units, the strength of the effect is multiplied by 0.1 (1.0 - distance/radius is the actual formula used, in this case that is 1.0 - 9/10). The closer the particles, the stronger the effect, up to 100% strength.
A particle may be within the 'Radius' distance of more than one particle from the falloff emitter. In this case, the strongest possible effect is the one used (i.e. the closest of all possible particles from the falloff emitter).
Show Radius
Check this box for a visual representation of the radius value around each particle from the falloff emitter.
Emitter
Drag the emitter which is to be the source of the falloff particles in here.
Groups
The Particles Falloff supports particle groups. If there are no groups in this list, all particles from the emitter are used. Otherwise, the falloff will return zero for all particles not in one of the groups in the list. | http://docs.x-particles.net/html/particlesfalloff.php | 2021-10-16T01:53:05 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['../images/falloff_particles_1.jpg', None], dtype=object)] | docs.x-particles.net |
Client Setup
Client Setup
📝 NOTE
This document is for Version 1 of the Quick Deployment Environment. If you're using a newer version of QDE, please refer to the newer QDE Client Setup page.
Summary
Once you have QDE set up in your environment, you'll need to get clients talking to it. To do that, we'll need to do the following:
- Setting up DNS on the client to access QDE.
- Install the QDE SSL/TLS certificate so clients can access HTTPS components
- Install Chocolatey and friends
DNS
Typically in your environment, onces you've added QDE, it should be able to start talking to the QDE. In some situations, you may need to add the host name with the IP address to your HOSTS file to reach your environment.
Client Installation
On your client machines, you will be running the following script in an administrative context:-g
Quick Deployment Environment | https://docs.chocolatey.org/en-us/quick-deployment/v1/client-setup | 2021-10-16T03:40:21 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.chocolatey.org |
fast_events_email_api_result
This action is called after an order email has been send to the email-provider (Host-email, SMTP, Amazon SES, Mailgun, …). This call is made with both a successful email and an incorrect email (submission failed).
Parameters
- $http_code
(int) The http resultcode. Consult the Mail Provider API for the right codes. The code may be in the 2xx-range (processing went ok) or an error (usually in the 4xx-range). For
Host-emailand
SMTPthis is 200 (Success) or 400 (Failure)
- $order_id
(int) The id of the order.
(string) The emailaddress of the recipient.
- $result
(string) The result body after the Mail Provider API call. Consult your Mail Provider API for the format and content. | https://docs.fast-events.eu/en/latest/hooks/email_api_result.html | 2021-10-16T02:27:23 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.fast-events.eu |
Network Interface Drivers with ALTQ Traffic Shaping Support¶
The intention of this page is to provide information regarding FreeBSD’s ALTQ drivers, what they do, and how they work.
Information¶
The ALTQ framework is used for queuing/traffic shaping. In pfSense® software, this is utilized by the Shaper Wizard and the Queues/Interfaces tabs under Firewall > Traffic Shaper.
See the altq(4) or the altq(9).
On that page, select the version of FreeBSD that corresponds to the pfSense version being run.
In addition to the drivers listed as supporting ALTQ in FreeBSD, pfSense software also includes support for ALTQ on vlan(4) and IPsec enc(4) interfaces.
If the NIC being used does not support ALTQ, Limiters may be used instead. | https://docs.netgate.com/pfsense/en/latest/trafficshaper/altq-hardware-support.html | 2021-10-16T02:03:23 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.netgate.com |
Opening and Closing Delays
The RadMenu allows you to specify delays for its closing and opening actions. This means that you can make the RadMenu wait for a specific amount of time before opening or closing a menu. In order to specify these delays you can set the ShowDelay and HideDelay properties. They are of type Duration and have the following format in XAML "0:0:0.00".
Note that when the ClickToOpen property is set to True, the delays don't affect the RadMenu's behavior.
Here is an example of a RadMenu with a delay before opening a menu equal to one second and a delay before closing a menu also equal to one second.
<telerik:RadMenu ... </telerik:RadMenu> | https://docs.telerik.com/devtools/silverlight/controls/radmenu/features/opening-and-closing-delays | 2021-10-16T04:02:36 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.telerik.com |
Creating Views
To create a View
Load your model in Trimble Connect for Browser's 3D Viewer.
Adjust the model to any position that you would like to capture a View of.
Click the Add View button in the toolbar.
The New View panel opens.
Enter the desired information.
Click Save.
Best practices for creating views for mobile
If you are going to be creating Views that are meant to be viewed on Trimble Connect for Mobile, it is recommended that you utilize clip planes so that the View only contains the relevant model objects for that View.
This will help improve performance on the mobile application as it will require less data to be loaded on the device.
When creating a View, the 3D view and its content is saved. This includes the following:
Camera angle and zooming
Color and transparency changes
Object visibility changes
Measurements
Markups
Clip planes
Grid visibility
Clashes visibility
Ghost mode
Orthogonal/Perspective mode | https://docs.3d.connect.trimble.com/views/creating-views | 2021-10-16T02:12:03 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.3d.connect.trimble.com |
@Target(value={TYPE,FIELD,METHOD,PARAMETER}) @Retention(value=RUNTIME) public @interface CqlName
Entityclass or one of its properties (field or getter), to specify a custom CQL name.
This annotation can also be used to annotate DAO method parameters when they correspond to
bind markers in custom
WHERE clauses; if the parameter is not annotated with
CqlName, its programmatic name will be used instead as the bind marker name.
Beware that if the DAO interface was pre-compiled and is located in a jar, then all its bind
marker parameters must be annotated with
CqlName; failing to do so will result in a
compilation failure because class files do not retain parameter names by default. If you intend
to distribute your DAO interface in a pre-compiled fashion, it is preferable to always annotate
such method parameters.
Example:
@Entity public class Product { @PartitionKey @CqlName("product_id") private int id; ... } @Dao public class ProductDao { @Query("SELECT count(*) FROM ${qualifiedTableId} WHERE product_id = :product_id") long countById(@CqlName("product_id") int id); ... }This annotation takes precedence over the
naming strategydefined for the entity. | https://docs.datastax.com/en/drivers/java/4.3/com/datastax/oss/driver/api/mapper/annotations/CqlName.html | 2021-10-16T01:39:10 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.datastax.com |
Upgrade vs. Migration: TFS Integration Platform
This is the seventh blog post in a series about Upgrade and Migration for TFS.
In some migration scenarios, neither upgrading servers nor losing history are viable options:
“I have Team Projects I need to migrate from one existing TFS 2008 server to another TFS 2008 server, and I need to keep the history”
In scenarios such as this one, using the TFS Integration Platform to preserve some history may be an option. This platform was designed to facilitate the development of migration tools that connect other systems to TFS, and all of the logic to interface with TFS and move data between systems is provided with the platform.
Using the TFS Integration Platform to move data does allow historical versions of version control and work item tracking data to be preserved, but it has some of the same disadvantages of the Tip Migration approach: loss of date-time information, and changeset and work item IDs are not preserved. Furthermore, migrations involving the TFS Integration Platform also suffer from the problem of time compression, that is, actions that occurred over a long period of time in the source system are replayed in a short amount of time in the destination. The result of this time compression is that many of the reporting metrics of the destination TFS server are inaccurate.
The cost of using the TFS Integration Platform to migrate Team Projects is not trivial, and should be carefully weighed against the benefit of preserving some of the historical revisions. Typically, this option should be reserved only when the other options previously discussed have been exhausted. For more information, please visit the TFS Integration Platform CodePlex project page. | https://docs.microsoft.com/en-us/archive/blogs/mitrik/upgrade-vs-migration-tfs-integration-platform | 2021-10-16T03:21:57 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.microsoft.com |
Developing the code insights backendDeveloping the code insights backend
- Beta state of the backend
- Architecture
- Life of an insight
- Debugging
- Creating DB migrations
Beta state of the backendBeta state of the backend
The current code insights backend is a beta version contributed by Coury (based on previous work by Stephen) - it:
- Supports running search-based insights over all indexable repositories on the Sourcegraph installation.
- Is backed by a TimescaleDB instance. See the database section below for more information.
- Optimizes unnecessary search queries by using an index of commits to query only for time periods that have had at least one commit.
- Supports regexp based drilldown on repository name.
- Provides permissions restrictions by filtering of repositories that are not visible to the user at query time.
- Does not yet support synchronous insight creation through an API. Read more below in the Insight Metadata section.
The current version of the backend is an MVP to achieve beta status to unblock the feature request of “running an insight over all my repos”.
ArchitectureArchitecture
The following architecture diagram shows how the backend fits into the two Sourcegraph services “frontend” (the Sourcegraph monolithic service) and “worker” (the Sourcegraph “background-worker” service), click to expand:
Deployment StatusDeployment Status
Code Insights backend is currently disabled on
sourcegraph.com until solutions can be built to address the large indexed repo count.
Feature FlagsFeature Flags
Code Insights is currently an experimental feature, and ships with an “escape hatch” feature flag that will completely disable the dependency on TimescaleDB (named
codeinsights-db). This feature flag is implemented as an environment variable that if set true
DISABLE_CODE_INSIGHTS=true will disable the dependency and will not start the Code Insights background workers or GraphQL resolvers. This variable must be set on both the
worker and
frontend services to remove the dependency. If the flag is not set on both services, the
codeinsights-db dependency will be required.
Implementation of this environment variable can be found in the
frontend and
worker services.
This flag should be used judiciously and should generally be considered a last resort for Sourcegraph installations that need to disable Code Insights or remove the database dependency.
With version 3.31 this flag has moved from the
repo-updater service to the
worker service.
Sourcegraph SettingSourcegraph Setting
Code Insights is currently behind an experimental feature on Sourcegraph. You can enable it in settings.
"experimentalFeatures": { "codeInsights": true },
DatabaseDatabase
Currently, Code Insights uses a TimescaleDB database running on the OSS license. The original intention was to use some of the timeseries query features, as well as the hypertable. Many of these are behind a proprietary license that would require non-trivial work to bundle with Sourcegraph.
Additionally, we have many customers running on managed databases for Postgres (RDS, Cloud SQL, etc) that do not support the TimescaleDB plugin. Recently our distribution team has started to encourage customers to use managed DB solutions as the product grows. Given entire categories of customers would be excluded from using Code Insights, we have decided we must move away from TimescaleDB.
A final decision has not yet been made, but a very likely candidate is falling back to vanilla Postgres. This will simplify our operations, support, and likely will not present a performance problem given the primary constraint on Code Insights is search throughput.
It is reasonable to expect this migration to occur some time during the beta period for Code Insights.
Insight MetadataInsight Metadata
Historically, insights ran entirely within the Sourcegraph extensions API on the browser. These insights are limited to small sets of manually defined repositories since they execute in real time on page load with no persistence of the timeseries data. Sourcegraph extensions have access to settings (user / org / global) , so the original storage location for extension based insight metadata (query string, labels, title, etc) was settings.
This storage location persisted for the backend MVP, but is in the process of being deprecated by moving the metadata to the database. Given roadmap constraints an API does not currently exist to synchronously interact with the database for metadata. A background process attempts to sync insight metadata that is flagged as “backend compatible” on a regular interval.
As expected, this async process causes many strange UX / UI bugs that are difficult or impossible to solve. An API to fully deprecate the settings storage is a priority for .
As an additional note, extension based insights are read from settings for the purposes of sending aggregated pings.
Life of an insightLife of an insight
(1) User defines insight in settings(1) User defines insight in settings
A user creates a code insight using the creation UI, and selects the option to run the insight over all repositories. The Code Insights will create a JSON object
in the appropriate settings (user / org) and place it in the
insights.allrepos dictionary. Note: only insights placed in the
insights.allrepos dictionary are considered eligible for
sync to prevent conflicts with extensions insights.
An example backend-compatible insight definition in settings:
"insights.allrepos": { "searchInsights.insight.soManyInsights": { "title": "So many insights", "series": [ { "name": "\"insights\" insights", "stroke": "var(--oc-blue-7)", "query": "insights" } ] }, }
Unique IDUnique ID
An Insight View is defined to have a globally unique referencable ID. For the time being to match feature parity with extensions insights the ID is generated as the
chart title prefixed with
searchInsights.insight..
In the above example, the ID is
searchInsights.insight.soManyInsights.
Read more about Insight Views
Sync to the databaseSync to the database
The settings sync job will execute and attempt to migrate the defined insight. Currently, the sync job does not handle updates and will only sync if the insight view unique ID is not found.
Until the insight metadata is synced, the GraphQL response will not return any information if given the unique ID. Temporarily, the frontend treats all
404 errors
as a transient “Insight is processing” error to solve for this weird UX.
Once the sync job is complete, the following database rows will have been created:
1. An Insight View (
insight_view) with UniqueID
searchInsights.insight.soManyInsights
2. An Insight Series (
insight_series) with metadata required to generate the data series
3. A link from the view to the data series (
insight_view_series)
A note about data seriesA note about data series
Currently, data series are defined without scope for specific repositories or any other subset of repositories (all data series iterate over all repos). Data series are uniquely identified
by hashing the query string, with the
s: prefix.
This field is known as the
series_id. It must be globally unique, and any collisions will be assumed to be the same exact data series.
In the medium term this semantic will change to include repository scopes (assigning specific repos to a datseries), and may possibly change entirely. This is one important area of design and work for .
The
series_id for the example Insight data above series would be
s:7F1FE30EF252BF75FAB0C9680C7BCFFF648154165AFE718155091051255A0A99
The
series_id is how the underlying data series is referenced throughout the system; however, it is not currently exposed in the GraphQL API. The current model
prefers to obfuscate the underlying data series behind an Insight View. This model is not highly validated, and may need to change in the future
to expose more direct functionality around data series.
(2) The insight enqueuer (indexed recorder) detects the new insight(2) The insight enqueuer (indexed recorder) detects the new insight
The insight enqueuer (code) is a background goroutine running in the
worker service of Sourcegraph (code), which runs all background goroutines for Sourcegraph - so long as
DISABLE_CODE_INSIGHTS=true is not set on the
worker container/process.
Its job is to periodically schedule a recording of ‘current’ values for Insights by queuing a snapshot recording using an indexed query. This only requires a single query per insight regardless of the number of repositories,
and will return results for all the matched repositories. Each repository will still be recorded individually. These queries are placed on the same queue as historical queries (
insights_query_runner_jobs) and can
be identified by the lack of a revision and repo filter on the query string.
For example,
insights might be an indexed recording, where
insights repo:^codehost\.com/myorg/[email protected]$ would be a historical recording for a specific repo / revision.
You can find these search queries for queued jobs on the (primary postgres) table
insights_query_runner_jobs.search_query
Insight recordings are scheduled using the database field (codeinsights-db)
insight_series.next_recording_after, and will only be taken if the field time is less than the execution time of the job.
Recordings are currently always scheduled to occur on the first day of the following month, after
00:00:00. For example, if a recording was taken at
2021-08-27T15:29:00.000Z the next
recording will be scheduled for
2021-09-01T00:00:00.000Z. The first indexed recording after insight creation will occur on the same interval.
Note: There is a field (codeinsights-db)
insight_series.recording_interval_days that was intended to provide some configurable value to this recording interval. We have limited
product validation with respect to time intervals and the granularity of recordings, so beta has launched with fixed
first-of-month scheduling.
This will be an area of development throughout and into .
(3) The historical data enqueuer (historical recorder) gets to work(3) The historical data enqueuer (historical recorder) gets to work
If we only record one data point per repo every month, it would take months or longer for users to get any value out of backend insights. This introduces the need for us to backfill data by running search queries that answer “how many results existed in the past?” so we can populate historical data.
Similar to the insight enqueuer, the historical insight enqueuer is a background goroutine (code) which locates and enqueues work to populate historical data points.
The most naive implementation of backfilling is as follows:
For each relevant repository: For each relevant time point: Execute a search at the most recent revision
Naively implemented, the historical backfiller would take a long time on any reasonably sized Sourcegraph installation. As an optimization, the backfiller will only query for data frames that have recorded changes in each repository. This is accomplished by looking at an index of commits and determining if that frame is eligible for removal. Read more below
There is a rate limit associated with analyzing historical data frames. This limit can be configured using the site setting
insights.historical.worker.rateLimit. As a rule of thumb, this limit should be set as high as possible without performance
impact to
gitserver. A likely safe starting point on most Sourcegraph installations is
insights.historical.worker.rateLimit=20.
Backfill compressionBackfill compression
Read more about the backfilling compression in the proposal RFC 392
We maintain an index of commits (table
commit_index in codeinsights-db) that are used to filter out repositories that do not need a search query. This index is
periodically refreshed
with changes since its previous refresh. Metadata for each repositories refresh is tracked in a table
commit_index_metadata.
To avoid race conditions with the index, data frames are only filtered
out if the
commit_index_metadata.last_updated_at is greater than the data point we are attempting to compress.
Currently, we only generate 12 months of history for this commit index to keep it reasonably sized. We do not currently do any pruning, but that is likely an area we will need to expand in - .
Limiting to a scope of repositoriesLimiting to a scope of repositories
Naturally, some insights will not need or want to execute over all repositories and would prefer to execute over a subset to generate faster. As a trade off to reach beta we made the decision that all insights will execute over all repositories. The primary justification was that the most significant blocker for beta was the ability to run over all insights, and therefore unlocking that capability also unlocks the capability for users that want to run over a subset, they will just need to wait longer.
This is non-trivial problem to solve, and raises many questions: 1. How do we represent these sets? Do we list each repository out for each insight? This could result in a very large cardinality and grow the database substantially. 2. What happens if users change the set of repositories after we have already backfilled? 3. What does the architecture of this look like internally? How do we balance the priority of backfilling other larger insights with much smaller ones?
This is also a blocker to migrate all functionality away from extensions and to the backend, because the extesions do support small numbers of repositories at this time.
This will be an area of work for - .
Detecting if an insight is completeDetecting if an insight is complete
Given the large possible cardinality of required queries to backfill an insight, it is clear this process can take some time. Through dogfooding we have found on a Sourcegraph installation with ~36,000 repositories, we can expect to backfill an average insight in 20-30 minutes. The actual benchmarks of how long this will take vary greatly depending on the commit patterns and size of the Installation.
One important piece of information that needs to be surfaced to users is the answer to the question
is my insight still processing?
This is a non-trivial question to answer:
1. Work is processed asynchronously, so querying the state of the queue is necessary
2. Iterating many thousands of repositories can result in some transient errors causing individual repositories to fail, and ultimately not be included in the queue issue
3. The current shared state between settings and the database leaves a lot of intermediate undefined states, such as prior to the sync
As a temporary measure to try and answer this question with some degree of accuracy, the historical backfiller applies the following semantic:
Flag an insight as
completed backfill if the insight was able to complete one full iteration of all repositories without any
hard errors (such as low level DB errors, etc).
This flag is represented as the database field
insight_series.backfill_queued_at and is set at the end of the complete repository iteration.
This semantic does not fully capture all possible states. For example, if a repository encounters a
soft error (unable to fetch git metadata, for example)
it will be skipped and ultimately not populate in the data series. Improving this is an area of design and work for - .
(4) The queryrunner worker gets work and runs the search query(4) The queryrunner worker gets work and runs the search query
The queryrunner (code) is a background goroutine running in the
worker service of Sourcegraph (code), it is responsible for:
- Dequeueing search queries that have been queued by the either the indexed or historical recorder. Queries are stored with a
priorityfield that dequeues queries in ascending priority order (0 is higher priority than 100).
- Executing a search against Sourcegraph with the provided query. These queries are executed against the
internalGraphQL endpoint, meaning they are unauthorized and can see all results. This allows us to build global results and filter based on user permissions at query time.
- Flagging any error states (such as limitHit, meaning there was some reason the search did not return all possible results) as a
dirty query. These queries are stored in a table
insight_dirty_queriesthat allow us to surface some information to the end user about the data series. Not all error states are currently collected here, and this will be an area of work for .
- Aggregating the search results, per repository (and in the near-future, per unique match to support capture groups) and storing them in the
series_pointstable.
The queue is managed by a common executor called
Worker (note: the naming collision with the
worker service is confusing, but they are not the same).
Read more about
Worker and how it works in this search notebook.
These queries can be executed concurrently by using the site setting
insights.query.worker.concurrency and providing
the desired concurrency factor. With
insights.query.worker.concurrency=1 queries will be executed in serial.
There is a rate limit associated with the query worker. This limit is shared across all concurrent handlers and can be configured
using the site setting
insights.query.worker.rateLimit. This value to set will depend on the size and scale of the Sourcegraph
installations
Searcher service.
(5) Query-time and rendering!(5) Query-time and rendering!
The webapp frontend invokes a GraphQL API which is served by the Sourcegraph
frontend monolith backend service in order to query information about backend insights. (cpde)
- A GraphQL series resolver returns all of the distinct data series in a single insight (UI panel) (code)
- A GraphQL resolver ultimately provides data points for a single series of data (code)
- The series points resolver merely queries the insights store for the data points it needs, and the store itself merely runs SQL queries against the TimescaleDB database to get the datapoints (code)
Note: There are other better developer docs which explain the general reasoning for why we have a “store” abstraction. Insights usage of it is pretty minimal, we mostly follow it to separate SQL operations from GraphQL resolver code and to remain consistent with the rest of Sourcegraph’s architecture.
Once the web client gets data points back, it renders them! For more information, please contact an @codeinsights frontend engineer.
User PermissionsUser Permissions
We made the decision to generate data series for all repositories and restrict the information returned to the user at query time. There were a few driving factors behind this decision: 1. We have split feedback between customers that want to share insights globally without regard for permissions, and other customers that want strict permissions mapped to repository visibility. In order to possibly support both (or either), we gain the most flexibility by performing query time limitations. 2. We can reuse pre-calculated data series across multiple users if they provide the same query to generate an insight. This not only reduces the storage overhead, but makes the user experience substantially better if the data series is already calculated.
Given the large possible cardinality of the visible repository set, it is not practical to select all repos a user has access to at query time. Additionally, this data does not live in the same database as the timeseries data, requiring some network traversal.
User permissions are currently implemented by negating the set of repos a user does not have access to. This is based on the assumption that most users of Sourcegraph have access to most repositories. This is a fairly highly validated assumption, and matches the premise of Sourcegraph to begin with (that you can search across all repos). This may not be suitable for Sourcegraph installations with highly controlled repository permissions, and may need revisiting.
Storage FormatStorage Format
The code insights time series are currently stored entirely within Postgres.
As a design, insight data is stored as a full vector of match results per unique time point. This means that for some time
T, all of the unique timeseries that fall under
one insight series can be aggregated to form the total result. Given that the processing system will execute every query at-least once, the possiblity of duplicates
exist within a unique timeseries. A simple deduplication is performed at query time.
Read more about the history of this format.
DebuggingDebugging
This being a pretty complex, high cardinality, and slow-moving system - debugging can be tricky.
In this section, I’ll cover useful tips I have for debugging the system when developing it or otherwise using it.
Accessing the TimescaleDB instanceAccessing the TimescaleDB instance
Dev and docker compose deploymentsDev and docker compose deployments
docker exec -it codeinsights-db psql -U postgres
Kubernetes deploymentsKubernetes deployments
kubectl exec -it deployment/codeinsights-db -- psql -U postgres
- If trying to access Sourcegraph.com’s DB:
kubectl -n prod exec -it deployment/codeinsights-db -- psql -U postgres
- If trying to access k8s.sgdev.org’s DB:
kubectl -n dogfood-k8s exec -it deployment/codeinsights-db -- psql -U postgres
Finding logsFinding logs
Since insights runs inside of the
frontend and
worker containers/processes, it can be difficult to locate the relevant logs. Best way to do it is to grep for
insights.
The
frontend will contain logs about e.g. the GraphQL resolvers and TimescaleDB migrations being ran, while
worker will have the vast majority of logs coming from the insights background workers.
Docker compose deploymentsDocker compose deployments
docker logs sourcegraph-frontend-0 | grep insights
and
docker logs worker | grep insights
Inspecting the Timescale databaseInspecting the Timescale database
Read the initial schema migration which contains all of the tables we create in TimescaleDB and describes them in detail. This will explain the general layout of the database schema, etc.
The most important table in TimescaleDB is
series_points, that’s where the actual data is stored. It’s a hypertable.
Querying dataQuerying data
SELECT * FROM series_points ORDER BY time DESC LIMIT 100;
Query data, filtering by repo and returning metadataQuery data, filtering by repo and returning metadata
SELECT * FROM series_points JOIN metadata ON metadata.id = metadata_id WHERE repo_name_id IN ( SELECT id FROM repo_names WHERE name ~ '.*-renamed' ) ORDER BY time DESC LIMIT 100;
(note: we don’t actually use metadata currently, so it’s always empty.)
Query data, filter by metadata containing
{"hello": "world"}
SELECT * FROM series_points JOIN metadata ON metadata.id = metadata_id WHERE metadata @> '{"hello": "world"}' ORDER BY time DESC LIMIT 100;
(note: we don’t actually use metadata currently, so it’s always empty.)
Query data, filter by metadata containing Go languagesQuery data, filter by metadata containing Go languages
SELECT * FROM series_points JOIN metadata ON metadata.id = metadata_id WHERE metadata @> '{"languages": ["Go"]}' ORDER BY time DESC LIMIT 100;
(note: we don’t actually use metadata currently, so it’s always empty. The above gives you some ideas for how we intended to use it.)
See for more metadata
jsonb operator possibilities. Only
?,
?&,
?|, and
@> operators are indexed (gin index)
Query data the way we do for the frontend, but for every seriesQuery data the way we do for the frontend, but for every series
SELECT sub.series_id, sub.interval_time, SUM(sub.value) AS value, sub.metadata FROM ( SELECT sp.repo_name_id, sp.series_id, sp.time AS interval_time, MAX(value) AS value, NULL AS metadata FROM series_points sp JOIN repo_names rn ON sp.repo_name_id = rn.id GROUP BY sp.series_id, interval_time, sp.repo_name_id ORDER BY sp.series_id, interval_time, sp.repo_name_id DESC ) sub GROUP BY sub.series_id, sub.interval_time, sub.metadata ORDER BY sub.series_id, sub.interval_time DESC
Inserting dataInserting data
Upserting repository namesUpserting repository names
The
repo_names table contains a mapping of repository names to small numeric identifiers. You can upsert one into the database using e.g.:
WITH e AS( INSERT INTO repo_names(name) VALUES ('github.com/gorilla/mux-original') ON CONFLICT DO NOTHING RETURNING id ) SELECT * FROM e UNION SELECT id FROM repo_names WHERE name='github.com/gorilla/mux-original';
Upserting event metadataUpserting event metadata
Similar to
repo_names, there is a separate
metadata table which stores unique metadata jsonb payloads and maps them to small numeric identifiers. You can upsert metadata using e.g.:
WITH e AS( INSERT INTO metadata(metadata) VALUES ('{"hello": "world", "languages": ["Go", "Python", "Java"]}') ON CONFLICT DO NOTHING RETURNING id ) SELECT * FROM e UNION SELECT id FROM metadata WHERE metadata='{"hello": "world", "languages": ["Go", "Python", "Java"]}';
Inserting a data pointInserting a data point
You can insert a data point using e.g.:
INSERT INTO series_points( series_id, time, value, metadata_id, repo_id, repo_name_id, original_repo_name_id ) VALUES( "my unique test series ID", now(), 0.5, (SELECT id FROM metadata WHERE metadata = '{"hello": "world", "languages": ["Go", "Python", "Java"]}'), 2, (SELECT id FROM repo_names WHERE name = 'github.com/gorilla/mux-renamed'), (SELECT id FROM repo_names WHERE name = 'github.com/gorilla/mux-original') );
You can omit all of the
*repo* fields (nullable) if you want to store a data point describing a global (associated with no repository) series of data.
Inserting fake generated data pointsInserting fake generated data points
TimescaleDB has a
generate_series function you can use like this to insert one data point every 15 days for the last year:
INSERT INTO series_points( series_id, time, value, metadata_id, repo_id, repo_name_id, original_repo_name_id) SELECT time, "my unique test series ID", random()*80 - 40, (SELECT id FROM metadata WHERE metadata = '{"hello": "world", "languages": ["Go", "Python", "Java"]}'), 2, (SELECT id FROM repo_names WHERE name = 'github.com/gorilla/mux-renamed'), (SELECT id FROM repo_names WHERE name = 'github.com/gorilla/mux-original') FROM generate_series(TIMESTAMP '2020-01-01 00:00:00', TIMESTAMP '2021-01-01 00:00:00', INTERVAL '15 day') AS time;
Creating DB migrationsCreating DB migrations
Since TimescaleDB is just Postgres (with an extension), we use the same SQL migration framework we use for our other Postgres databases.
migrations/codeinsights in the root of this repository contains the migrations for the Code Insights Timescale database, they are executed when the frontend starts up (as is the same with e.g. codeintel DB migrations.)
Currently, the migration process blocks
frontend and
worker startup - which is one issue we will need to solve. | https://docs.sourcegraph.com/dev/background-information/insights/backend | 2021-10-16T02:57:33 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.sourcegraph.com |
Welcome to Narupa’s documentation!¶
Interact with molecules in virtual reality¶
Narupa lets you enter virtual reality and steer molecular dynamics simulations in real time. Explore rare events and try docking poses.
Use insightful visuals¶
Use python to control molecular representations.
Explore with others¶
Collaborate in a shared virtual space while sharing or not a physical space. Host a session in the cloud to share the experience with others across the world.
Customise your workflow¶
Narupa uses a customisable python server. Using the API you can integrate different physics engine and integrate Narupa into your existing workflow.
Narupa is free, open source and distributed under the GNU GPLv3 license. You can look at and contribute to the code for building server applications here, and our VR applications such as iMD-VR and Narupa Builder.
Citation¶
If you find Narupa useful, please cite the following paper:
Jamieson-Binnie, A. D., O’Connor, M. B., Barnoud, J., Wonnacott, M. D., Bennie, S. J., & Glowacki, D. R. (2020, August 17). Narupa iMD: A VR-Enabled Multiplayer Framework for Streaming Interactive Molecular Simulations. ACM SIGGRAPH 2020 Immersive Pavilion. SIGGRAPH ’20: Special Interest Group on Computer Graphics and Interactive Techniques Conference.
Bib file:
@inproceedings{10.1145/3388536.3407891, author = { Jamieson-Binnie, Alexander D and O'Connor, Michael B. and Barnoud, Jonathan and Wonnacott, Mark D. and Bennie, Simon J. and Glowacki, David R. }, title = { Narupa IMD: A VR-Enabled Multiplayer Framework for Streaming Interactive Molecular Simulations }, year = {2020}, isbn = {9781450379687}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {}, doi = {10.1145/3388536.3407891}, abstract = { Here we present Narupa iMD, an open-source software package which enables multiple users to cohabit the same virtual reality space and interact with real-time molecular simulations. The framework utilizes a client-server architecture which links a flexible Python server to a VR client which handles the rendering and user interface. This design helps ensure utility for research communities, by enabling easy access to a wide range of molecular simulation engines for visualization and manipulation of simulated nanoscale dynamics at interactive speeds. }, booktitle = {ACM SIGGRAPH 2020 Immersive Pavilion}, articleno = {13}, numpages = {2}, location = {Virtual Event, USA}, series = {SIGGRAPH '20} }
Contents:
- Installation & Getting Started
- Tutorials
- Concepts
- narupa | https://narupa.readthedocs.io/en/latest/ | 2021-10-16T03:21:27 | CC-MAIN-2021-43 | 1634323583408.93 | [] | narupa.readthedocs.io |
Table of Contents
Product Index
Zalir is a charming girl who hides a very dark secret: she is part-demon fresh from the underworld.
Zalir comes with many customizable options including multiple Eyes, Lips, Blush and Brows, but where she's most unique is her Demon Horns and Bonus Makeup options!
Don't miss the opportunity to enjoy Zalir in your. | http://docs.daz3d.com/doku.php/public/read_me/index/69295/start | 2021-10-16T02:43:55 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.daz3d.com |
Media statistics
From XMBdocs
Statistics about uploaded file types. This only includes the most recent version of a file. Old or deleted versions of files are excluded.
Bitmap images
Total file size for this section: 351,887 bytes (344 KB; 100%).
All files
Total file size for all files: 351,887 bytes (344 KB). | https://docs.xmbforum2.com/index.php?title=Special:MediaStatistics | 2021-10-16T03:21:18 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.xmbforum2.com |
' ); /**************************************************/ // Your code starts here... // Remember that the Site application isn't running, so you cannot access $mainframe or any of its methods. /**************************************************/ | http://docs.joomla.org/Initializing_the_Joomla!_1.5_Framework_in_an_external_script | 2012-05-24T07:20:14 | crawl-003 | crawl-003-010 | [] | docs.joomla.org |
.
The following terms are used commonly in the remainder of this section and are outlined here for clarification:
AnonymousUserclass to represent the security context of the current request.
Clearspace relies on common security patterns established in the Spring Security (formerly Acegi Security) library. By leveraging Spring Security, Clearspace uses terminology familiar to Spring users in an effort to standardize integration and leverage existing Spring libraries and idioms.
Fundamentally, authentication in Clearspace is performed by a series of Spring Security filter (implementations of J2EE Servlet Filters) chains, linked together. Each element in a given chain has a dedicated responsibility, while each chain is responsible for accomplishing high-level goals towards the handling of a request. Ultimately, these chains must prepare a request to fulfill a single contract enforced by the last link in the primary security filter chain.
Each thread of execution in Clearspace, including background jobs and asynchronous tasks, is associated with a Spring Security
SecurityContext instance. The
SecurityContext holds information about the Authentication associated with the request.
Internally within Clearspace code, Jive extends the Spring Security
Authentication interface with the
JiveAuthentication class. This class serves a number of purposes, including directly exposing a Clearspace User implementation representing the current user through a strongly-typed contract as well as exposing meta data about the user such as whether or not the user is anonymous.
Each URI handled by the Clearspace system passes through a series of J2EE Servlet Filters at the Clearspace Security Layer before entering the Clearspace Application Layer. The following URI contexts are defined in a standard Clearspace installation:
The series of filters handling each request can be altered through the Clearspace Plugin system when customization of authentication behavior is needed (see below).
Clearspace defines several Security Filter Chains, each mapped to a specific URL pattern described above. The default filter chain is defined in spring-securityContext.xml as the following set of filters:
The authentication contract is a fundamental set of assumptions made by application-level code about the security context of any given request. In a standalone Clearspace configuration (one in which Clearspace is the system of record for user information), the authentication contract is met by out of the box Clearspace functionality. Likewise, for LDAP-based authentication Clearspace fufills the contract. In the case of custom authentication, third-party code must meet the terms of the contract in order to perform a successful authentication.
The authentication contract is enforced by the last filter in the Clearspace Security Filter Chain, the
JiveAuthenticationTranslationFilter. This ensures that the authentication associated with the
SecurityContext is a valid
JiveAuthentication before transferring control of the request handling to the Clearspace application layer downstream.
The contract between the Clearspace security layer and the Clearspace application layer requires that one of the following is true before control is passed from the security layer to the application layer:
SecurityContextof the request contains an instance of the
JiveAuthenticationinterface (established through the
SecurityContext.setAuthenticationmethod).
Authenticationassociated with the
SecurityContextreturns
truefor
isAuthenticated()and an implementation of Jive's
Userinterface is present in either the
getPrincipal()method or in the
getDetails()method.
As part of the authentication contract, if no authentication is present when the
JiveAuthenticationTranslationFilter is invoked, the
AnonymousAuthentication will be set to the
SecurityContext prior to transferring control to the application layer. As a result, application-level code needn't check to see if user references obtained from the
SecurityContext are null.
Clearspace includes several implementations of the
JiveAuthentication interface, a subclass of Spring Security's
Authentication interface. Most commonly used is
JiveUserAuthentication which requires an implementation of the Jive
User interface as it's sole constructor argument.
As an example, once a handle to a
User implementation has been obtained (directly created or through the
UserManager API), that implementation instance can fulfill the authentication contract by creating an instance of
JiveUserAuthentication and setting that instance to the
SecurityContext.
UserTemplate ut = new UserTemplate();
SecurityContextHolder.getContext().setAuthentication(new JiveUserAuthentication(ut));
Authorization in Clearspace is addressed via three constructs in the Clearspace Application Layer.
Permissions behavior is governed by the
PermissionsManager API, group membership by the
GroupManager API. Proxies are used to secure access to Clearspace API methods and domain objects as they move through the system. Proxies enforce security based on the Acegi
SecurityContext associated with a request. Clearspace associates instances of an Acegi subclass —
JiveAuthentication — with each request by the time the servlet stack leaves the filter chain. That
JiveAuthentication contains the effective user for the current call stack, which is in turn used to drive proxy authorization checks. | http://docs.jivesoftware.com/clearspace_community/latest/AuthenticationandAuthorization.html | 2012-05-24T12:24:12 | crawl-003 | crawl-003-010 | [] | docs.jivesoftware.com |
You can monitor the following critical SQL services to determine whether the service is available (Up) or is disabled (Down).
Select this process:
If you want to:
MSSQLSERVER
This is the database engine. It controls processes all SQL functions and manages all files that comprise the databases on the server.
SQLSERVERAGENT
This service works with the SQL Server service to create and manage local server jobs, alerts and operators, or items from multiple servers.
Microsoft Search
A full-text indexing and search engine.
Distributed Transaction Coordinator
The MS DTC service allows for several sources of data to be processed in one transaction. It also coordinates the proper completion of all transactions to make sure all updates and errors are processed and ended correctly.
SQL Server Analysis Services
Implements a highly scalable service for data storage, processing, and security.
SQL Server Reporting Services
Used to create/manage tabular, matrix, graphical, and free-form reports.
SQL Server Integration Services
A platform for building high performance data integration solutions.
SQL Server FullText Search
Issues full-text queries against plain character-based data in SQL Server tables.
SQL Server Browser
Listens for incoming requests for SQL Server resources and provides information about SQL Server instances installed on the computer.
SQL Server Active Directory Helper
View replication objects, such as a publication, and, if allowed, subscribe to that publication.
SQL Server VSS Writer
Added functionality for backup and restore of SQL Server 2005. | http://docs.ipswitch.com/NM/90_WhatsUp%20Gold%20v12.4/03_Help/microsoft_sql_server_services.htm | 2012-05-23T23:57:26 | crawl-003 | crawl-003-010 | [] | docs.ipswitch.com |
Description
The Mirah plugin enables compiling Ruby-like code straight to Java bytecode.
Installation
The current version of griffon-mirah-plugin is 0.1
To install just issue the following command
Usage
Mirah source code should be placed in
$appdir/src/mirah using
.mirah as file extension. Sources will be compiled first before any sources available on
griffon-app or
src/main which means you can't reference any of those sources from your Mirah code, while the Groovy sources can. You will be able to use the LangBridge Plugin facilities to communicate with other JVM languages.
It is possible to implement Griffon artifacts using Mirah. Here's a trivial Model implementation as an example
This plugin does not add runtime dependencies to Mirah nor JRuby. | http://docs.codehaus.org/display/GRIFFON/Mirah+Plugin | 2012-05-24T12:03:21 | crawl-003 | crawl-003-010 | [] | docs.codehaus.org |
Sharing
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Contents
Sharing
Splunk provides useful ways to share knowledge and information. You can create new eventtypes, save them, and put them into Splunk bundles. You can create saved searches and schedule alerts. And you can tap into SpunkBase and search for help, share your experiences, or share your bundles with the Splunk community.
Creating, tagging, and sharing eventtypes
Creating an eventtype
- In the SplunkWeb user interface: create a search to save for the event type that you want to make.
- Click the down arrow next to the search bar.
- From the drop-down menu, choose "Save As Event Type".
- In the name text box, enter a descriptive name for the event type.
- Apply a tag to your eventtype (see below).
- Click Save.
To use your saved eventtype, start a search with:
eventtype::
Tagging an eventtype
Tagging is useful when sharing an eventtype. You can assign tags to the new eventtype in the Tags text box before you save your created eventtype.
(You can make changes to the search at any time. Just make sure to run your changes through the search and re-save each time.)
Sharing an eventtype
To share saved eventtypes, you'll have to make a bundle. The Admin Manual will have a more advanced explanation of bundles, and how to make bundles. For now we'll go through a simple explanation on how to create a bundle.
Saved searches
You can save searches like you save eventtypes. Saved searches allow you to create alerts for certain events, or amounts of a certain event based on a threshold value. Alerts tied to to saved searches allow you to trigger events such as a scripts, sending an email, or even trigger an RSS feed.
SplunkBase
Full information on SplunkBase can be found in the Admin Manual. For our purposes as users, the SplunkBase is a helpful community to obtain answers from Splunk professionals, or other Splunk users. SplunkBase is also where you can share your bundles, or obtain useful bundles from other members of the community. Any content available in SplunkBase is findable through searches, as well as through the site's menus.
Looking up events
You can look up any event on SplunkBase. This is a helpful tool for gaining more insight into various events you might not be so familiar with.
From within the Splunk interface, click on the drop-down arrow underneath the timestamp:
You will see an option to Search SplunkBase:
Click this link and you will be redirected to the SplunkBase page associated with that event.
Getting Q & A
A large part of SplunkBase is devoted to Questions & Answers. You can focus the Q&A around your needs by using the Categories list on the left to narrow down to the technology you're interested in. Then, click a Question to see the list of Answers associated with it.
Using the How-to guides
Another large section of SplunkBase is devoted to HOWTOs. HOWTOs are documents that explain how to understand or accomplish something. Just as with Q&As, you can focus onto the technology you're looking for by using the Categories links on the side. Click through a HOWTO's name to see its contents.
Add-ons in SplunkBase
SplunkBase is teeming with bundles you can add to your Splunk installation. From the Add-ons page, you can narrow down the listing either through Categories, or through Types (the types of content within the add-on). Click through a bundle's name to read more about it, see ratings and comments, rate it yourself, view what's inside it by clicking View Contents, or download it by clicking the Download button. Once you have a bundle downloaded, to add it to your Splunk instance, place it into your Splunk server's $SPLUNKHOME/etc/bundles directory and extract the tarball or zip file.
This documentation applies to the following versions of Splunk: 3.0 , 3.0.1 , 3.0.2 , 3.1 , 3.1.1 , 3.1.2 , 3.1.3 , 3.1.4 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/3.1.2/User/Sharing | 2012-05-24T12:03:20 | crawl-003 | crawl-003-010 | [] | docs.splunk.com |
3 x normal: 2,5,6 CB2
5 x abnormal: All may be attributed to abnormal cleavage & separation of chromosones correctly.
Chromosomes 13, 16, 18, 21, 22, X, and Y as markers of embryo viability Homo
Sapiens has 46 chromosomes: 22 pairs of autosomes, and two sex chromosomes, X+X or
X+Y. PGD tests human pre-embryos for the presence of the correct number of chromosomes 13,18,21,X
and Y; any combination other than 13,13,18,18,21,21,X,X (Normal female) or
13,13,18,18,21,21,X,Y (Normal male) is considered ABNORMAL.
A monosomy (absence of one chromosome) of any autosome is lethal in
humans, only a few babies with autosomal monosomy have survived beyond birth, with severe
abnormatilites. TIley all had monosomy of chromosome 21.
A trisomy (one extra chromosome) can occur with any of the chromosomes. However,
the most common is a trisomy of chromosome 21, leading to Down's syndrome. According to the latest
National Vital Statistics report, in the year 2002 out of 3,993,973 births, 1,850 had Down syndrome
and 1,253 had "other chromosomal anomalies". Among these "other chromosomal anomalies" the most
common is an abnormal number of sex chromosomes. Per each 100,000 recognized human
pregnancies, around 1,400 abort due to an abnormal number of sex chromosomes, about 100 boys born
with Kleinfelter syndrome (instead of XY have XXY, XXXY, XXYY, or even XXXXY sets of sex
chromosomes) and about 50 girls born with Turner syndrome (instead of XX have X, XXX, or XXXX).
Apart trom chromosome 21, the only other autosomal trisomies with any significant frequency in
newborn are trisomies 18 (Edward's syndrome) and 13 (Patau's syndrome); frequencies of either
syndrome range from one in 2,000 to one in 15,000.
Apart from these five chromosomes (13, 18, 21, X, Y), the only other chromosomes noticeably
affecting the outcome of established pregnancies are chromosomes 16 (trisomy 16 found in
1,229 among 15,000 spontaneous abortions) and 22 (extra copy was present in 424 out of 15,000
spontaneous abortions). These embryos never reach term, but they affect the outcome of hurnan
pregnancies.
Aneuploidy for any other chromosome is lethal at the very first stages of embryo development,
before a pregnancy can even be established.
Triploid, tetraploid, or polyploid embryos are those
having full extra sets of all 23 chromosomes. These embryos originate from an oocyte fertilized by
two spennatozoa, by diploid spermatozoon, or from an oocyte which failed to extrude the second polar
body. Polyploidization also occurs during embryo cleavage; at the blastocyst stage it is a normal
step in trophectoderm formation. Haploid embryos originate trom parthenogenetically activated
oocytes; they have only one set of23 maternal chromosomes.
Any combination of auto somes other than normal (disomy), monosomy or
trisomy will be called complex abnormality. These reveal some very
serious errors in oocyte maturation and/or embryo cleavage. Chaotic embryo cleavage is the
most probable mechanism by which complex chromosomal abnormalities may originate. Chaotic cleavage
means that during mitosis, when the zygote and, subsequently, blastomeres divide into two daughter
cells, the chromosomes segregate between two sister blastomeres randomly or chaotically. Most
of these embryos are also morphologically abnormal and very few of them progress beyond the cleavage
stage. Such embryos should not be considered for transfer.
If more than one cell is analyzed from an individual embryo (marked in PGD Report as a),
b), ...), and conflicting results are obtained, this may be an indication of a FISH error
(see below) or embryo mosaicism. Embryo mosaicism may be considered as a "mild" case of
chaotic cleavage. Mosaicism is a result of mitotic error and as such arises during embryo
cleavage, when one of the blastomeres divides into two genetically unequal 'daughter blastomeres'.
This may lead to an embryo having both normal and abnormal cells, i.e., mosaic embryo. If an
embryo is suspected of having genetically normal cells and cells with autosomal monosomy,
such embryos should be AVOIDED during embryo transfer. Monosomic cells may be selected out
during further embryo development; however, if their population rises above some critical level, the
embryo dies before or shortly after implantation, like any other embryo with autosomal monosomy.
Since chromosome 21 is an exception (see above), mosaic embryos with monosomy 21 should not be
considered for transfer.
If an embryo is suspected to have genetically normal cells and cells with autosomal
trisomy, such embryos may develop into an abnormal mosaic baby. Some newborns with
Down's, Edward's, and Patau's syndrome are actually mosaics. Mosaic embryos with trisomies should
never be considered for transfer.
Some embryos may be revealed as abnormal even prior to FISH, during embryo biopsy or after
blastomere fixation. If a single blastomere has more then one nucleus, it is called a
multinucleate blastomere. Even if each nucleus (separated in PGD Report by [...]) is
genetically normal, or if they add to a normal set of chromosomes, the corresponding embryo may be
genetically abnormal. Multinucleation indicates some gross abnormalities in the timing between cell
division, cytokinesis and nuclear division, karyokinesis. If cleavage results in one
blastomere retaining both nuclei, then it's 'sister blastomere', will have none. Absence of a
nucleus (anucleate blastomere) may be similar in its origins to multinucleation, but it may
also be considered as an extreme example of embryo fragmentation. Although anucleate blastomeres
cannot give any indications as to the embryo genetic background, it should be noted that their
presence lowers embryo viability. Ifmultiple morphologically normal blastomeres were analyzed, and
none of them had a nucleus, such an embryo may be considered 'not viable' due to gross errors in
embryo cleavage.
Current estimates of FISH errors fluctuate around 10% for the detection of autosomal numerical
abnormalities, and around 0% for sex determination. Due to the standard practice: "if not proven
normal means abnormal", the rate of actual misdiagnosis is significantly lower. However, PGD
technique cannot reveal all cases of embryo mosaicism, and for this reason, prenatal
diagnosis via CVS or amniocentesis is strongly recommended. | http://www.fertility-docs.com/PGD_report.phtml | 2009-07-03T20:21:37 | crawl-002 | crawl-002-018 | [] | www.fertility-docs.com |
Sperm Evaluation
|
Sperm Tests
|
Advanced Tests
|
Azoospermia & Related Syndromes
|
Donor Sperm Bank
Azoospermia (No Sperm) & Related Syndromes
Zero Sperm Counts & Genetic Links
Congenital Absence of the Vas Deferens
Y Chromosome Microdeletions
Klinefelter's Syndrome
Zero Sperm Counts & Genetic Links
What has become evident at our Centers over the last several years is that our ability to diagnose and successfully treat severe male infertility problems has surpassed our ability to understand the basic causes of these problems. In the recent past, it was considered that nearly 20% of men with extremely low or "zero" sperm counts had no known medical reason for their fertility problems. Most recently, major advances in molecular biology and genetics have provided the "reasons" for severe infertility (very low or zero sperm counts) in many men whose fertility problems were previously poorly understood. We now know that 20-30% of men with such low (under 10 million/ml) or zero sperm counts have a now identifiable genetic cause for their problem. While we are now able to assist many, many men previously thought to be "hopelessly" infertile achieve pregnancy, it remains very important to not only treat these men, but to provide such couples with genetic information related to the problem causing the low or zero count. This is important because many of these genetic characteristics may potentially be passed along to children conceived with the help of modern male infertility treatments. Genetic disorders that would previously not have been able to be "passed along" due to the male's infertility are now being retained in the "gene pool" as a result of new procedures that overcome most of these previously untreatable male conditions.
Congenital Absence of the Vas Deferens
Congenital absence of the vas deferens (CAVD) is a syndrome in which a portion or all of the reproductive ducts (including the epididymis, vas and seminal vesicles) are missing. This causes an obstruction to the passage of sperm. These sperm, which are being produced normally in the testicle become "trapped" in the testicle for lack of a pathway to the ejaculate. CAVD may be associated with several diseases, including cystic fibrosis (CF) and malformations of the kidneys (renal malformations). 65% of men with CAVD will have a detectable genetic mutation in one of the cystic fibrosis genes, and 15% will have a missing or misplaced kidney. This does not imply that the man has or will develop cystic fibrosis but it means that he could be a carrier of the gene. If his spouse is also a carrier, this means that there is a 25% chance of a child born to them having cystic fibrosis. It is a standard treatment policy in our Centers that all couples in which the man has CAVD undergo cystic fibrosis carrier testing of both the man and his spouse/partner. Once the genetic testing is completed (testing takes about 10 days), an in vitro fertilization cycle may be planned for the wife, and a "MESA" (microsurgical epididymal sperm aspiration) procedure planned for the man to obtain viable sperm. These sperm may then be gently, microsurgically inserted inside the eggs of the wife (ICSI) that have been obtained from an aspiration carried out through the vagina. The resulting embryos may then be placed into the uterus of the female to establish a pregnancy. Success rates remain very high with this technique, even in men with "zero" sperm counts.
Y Chromosome Microdeletions
The human genome consists of 23 pairs or 46 chromosomes. There are 44 autosomes and two sex chromosomes. The sex chromosomes are called "X" and "Y". Each genetically normal human has two sex chromosomes. A woman has "X" and "X" (2X) and a man has one "X" and one "Y". The reproductive gametes (eggs and sperm) get one or the other of each partner's sex chromosomes. Because women have 2 "X" chromosomes, each of her eggs will have an "X" sex chromosome. Because men have one "X" and one "Y" chromosome, half of their sperm will carry the "X" chromosome and half will carry the "Y" chromosome. It can therefore be soon that men are responsible for sex determination. If an "X" egg is fertilized by an "X" sperm, an "XX" female will result. If an "X" egg is fertilized by a "Y" sperm, an "XY" male will result.
If the "Y" chromosome from the sperm that fertilizes the "X" egg carries a small mutation or deletion affecting sperm production in part of it's genetic make-up, a male child resulting from a pregnancy may have the same sperm production problem. We have found that 10-13% of men with an absent, or "zero" sperm count will have a mutation or deletion on the "Y" chromosome. We have also detected "Y" chromosome microdeletions in men with low, but not zero counts. We offer a very sophisticated blood analysis (test) to determine if these genetic conditions are present before undertaking therapy that will lead to pregnancy.
Klinefelter's Syndrome
K.
Advanced Tests
Donor Sperm Bank
Search
What's New
SINGLE CYCLE IVF LOS ANGELES: $5,800
MICROSORT LIMITED AVAILABILITY
ALTERNATIVES TO VASECTOMY REVERSAL
More News...
Press Release
Fertility Institutes Announce Expansion to New York City
9/10/08
Fertility Services
Procedure Fees
Surrogacy Solutions
Fertility Procedures
IVF Procedures
Reversal Procedures
Sex Selection
Patient Registration
Register Online
Registration Forms
Financing Plans
Insurance Info.
|
ART
|
Fertility Tests
|
Sperm Tests
|
Egg Donors
|
Egg Freezing
|
Surrogacy
|
Clinical Trials
|
Glossary
|
Español
©
Disclaimer | http://www.fertility-docs.com/sperm_eval_azoospermia.phtml | 2009-07-03T20:22:29 | crawl-002 | crawl-002-018 | [] | www.fertility-docs.com |
Analyzing data backed up using inSync
About inSync Data Analytics feature
The Data Analytics feature of inSync provides you insight into the data that is being backed up by inSync users. The Data Analytics page displays three graphs, Summary, Data By File Type, and # Files By Sizes, that provides you a high level view of the data that has been backed up using inSync..
Note: The graphs refresh after every ten minutes. | https://docs.druva.com/003_inSync_Enterprise/5.3.1/060_Add-ons%3A_Analytics%2C_DLP%2C_Share/010_inSync_Analytics/010_Analyzing_data | 2018-08-14T15:11:51 | CC-MAIN-2018-34 | 1534221209165.16 | [] | docs.druva.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.