Introduction
What is Alluxio
Alluxio is the world’s first open source data orchestration technology for analytics and AI in the cloud. It bridges the gap between data-driven applications and storage systems, bringing data from the storage tier closer to the applications and making it easily accessible, and it enables applications to connect to numerous storage systems through a common interface. Alluxio’s memory-first tiered architecture enables data access at speeds orders of magnitude faster than existing solutions.
In the data ecosystem, Alluxio lies between data driven applications, such as Apache Spark, Presto, Tensorflow, Apache HBase, Apache Hive, or Apache Flink, and various persistent storage systems, such as Amazon S3, Google Cloud Storage, OpenStack Swift, HDFS, GlusterFS, IBM Cleversafe, EMC ECS, Ceph, NFS, Minio, and Alibaba OSS. Alluxio unifies the data stored in these different storage systems, presenting unified client APIs and a global namespace to its upper layer data driven applications.
The Alluxio project originated from the UC Berkeley AMPLab (see papers) as the data access layer of the Berkeley Data Analytics Stack (BDAS). It is open source under Apache License 2.0. Alluxio is one of the fastest growing open source projects that has attracted more than 1000 contributors from over 300 institutions including Alibaba, Alluxio, Baidu, CMU, Google, IBM, Intel, NJU, Red Hat, Tencent, UC Berkeley, and Yahoo.
Today, Alluxio is deployed in production by hundreds of organizations, with the largest deployment exceeding 1,500 nodes. Alluxio’s tiered storage, which can utilize memory, SSD, or disk, makes it cost effective to elastically scale data-driven applications. Alluxio also supports industry-common APIs, such as the HDFS API, S3 API, FUSE API, and REST API.
Also try our getting started tutorials for Presto & Alluxio via the Downloads and Useful Resources page, or reach out on our Community Slack Channel.
Using the ADO.NET Database Profile Setup
To define a connection using the ADO.NET interface, you must create a database profile by supplying values for at least the basic connection parameters in the Database Profile Setup -- ADO.NET dialog box. You can then select this profile at any time to connect to your data in InfoMaker.
For information on how to define a database profile, see Using database profiles.
Specifying connection parameters
You must supply a value for the Namespace and DataSource connection parameters and for the User ID and Password. When you use the System.Data.OleDb namespace, you must also select a data provider from the list of installed data providers in the Provider drop-down list.
The Data Source value varies depending on the type of data source connection you are making. For example, on the DataDirect with OLE DB page, you can double-click the button next to the File Name box to select a data link file. (You can also launch the Data Link API in the Database painter.)
Modifying the default Styles of the Search Criteria
It is possible to modify the Layout of the search criteria and also the Styles used. You can keep the default layout and just change the styles. Styles are stored in a Cascading Style Sheet (CSS) file. If you are interested in changing the Layout, please see
Customize Search Criteria Layout.
On the Look and Feel Settings page accessed from the Preferences section of the Web Part Settings, click Modify CSS to access the style sheet for this instance of the web part.
It’s a good idea to save the contents of the default style sheet to your local PC before making any changes. Once you make changes, there isn’t a reset button.
For specific information about the different types of search criteria you may want to style, see the sample search below.
- Column Labels
- Text Box Controls
- Choice & Lookup Columns
- Multi-line Text Controls
- Date Columns
- Person or Group Columns
- Number Columns
- Currency Columns
- Search in all columns for Control
- Buttons
To modify the overall style of the default search criteria table (i.e., borders, padding, background color), see Customize the Overall Style for the Search Criteria.
To see some examples, check these:
Example of Customized Styles in Search Criteria
Video: Change the Style of the Search Results in Simple Search Web Part
[Figure: default Simple Search UI with numbered search criteria]
AWS Elastic Transcoder

Introduction

Amazon Elastic Transcoder converts media files that are stored in Amazon Simple Storage Service (Amazon S3) into media files in the formats required by consumer playback devices. For example, you can convert large, high-quality digital media files into formats that you can play back on mobile devices, tablets, web browsers, and connected televisions.

Note: Use the OpsRamp AWS public cloud integration to discover and collect metrics against the AWS service.

Setup

To set up the OpsRamp AWS integration and discover the AWS service, go to AWS Integration Discovery Profile and select Elastic Transcoder.

Metrics

| OpsRamp Metric | Metric Display Name | Unit | Aggregation Type | Description |
|---|---|---|---|---|
| aws_elastictranscoder_Billed_HD_Output | Billed.HD.Output.transcoder | Seconds | Average | Number of billable seconds of HD output for a pipeline. |
| aws_elastictranscoder_Billed_SD_Output | Billed.SD.Output.transcoder | Seconds | Average | Number of billable seconds of SD output for a pipeline. |
| aws_elastictranscoder_Billed_Audio_Output | Billed.Audio.Output.transcoder | Seconds | Average | Number of billable seconds of audio output for a pipeline. |
| aws_elastictranscoder_Jobs_Completed | Jobs.Completed.transcoder | Count | Average | Number of jobs completed by this pipeline. |
| aws_elastictranscoder_Jobs_Errored | Jobs.Errored.transcoder | Count | Average | Number of jobs that failed because of invalid inputs, such as a request to transcode a file that is not in the given input bucket. |
| aws_elastictranscoder_Outputs_per_Job | Outputs.per.Job.transcoder | Count | Average | Number of outputs Elastic Transcoder created for a job. |
| aws_elastictranscoder_Standby_Time | Standby.Time.transcoder | Count | Average | Number of seconds before Elastic Transcoder started transcoding a job. |
| aws_elastictranscoder_Errors | Errors.transcoder | Count | Average | Number of errors caused by invalid operation parameters, such as a request for a job status that does not include the job ID. |
| aws_elastictranscoder_Throttles | Throttles.transcoder | Count | Average | Number of times that Elastic Transcoder automatically throttled an operation. |

Event support

- CloudTrail event support: Supported. Configurable in OpsRamp AWS Integration Discovery Profile.
- CloudWatch alarm support: Supported. Configurable in OpsRamp AWS Integration Discovery Profile.

External reference

What is Amazon Elastic Transcoder?
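For readers who want to spot-check the metrics listed above outside OpsRamp, the same datapoints can be pulled straight from CloudWatch. The sketch below is illustrative only: it assumes boto3 is configured with credentials, uses the standard AWS/ElasticTranscoder namespace and PipelineId dimension, and the region, pipeline ID, and metric name are placeholders you would substitute for your own.

```python
# Illustrative sketch (not part of the OpsRamp integration): pull one Elastic
# Transcoder metric from CloudWatch with boto3 to spot-check collected values.
# Region, pipeline ID, and metric name below are placeholders.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ElasticTranscoder",
    MetricName="Jobs Completed",
    Dimensions=[{"Name": "PipelineId", "Value": "1111111111111-abcd11"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```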
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
Install the package with:
install.packages("rtimicropem")
Or install the development version using devtools with:
library("devtools") install_github("ropensci/rtimicropem") # Introduction
This package aims at supporting the analysis of PM2.5 measurements made with the RTI MicroPEM. RTI MicroPEMs are personal monitoring devices (PM2.5 and PM10) developed by RTI International.
The goal of the package functions is to help in two main tasks:
Checking individual MicroPEM output files after, say, one day of data collection.
Building a database from the output files, and cleaning and transforming the data for further analysis.
For more information check out the package website, in particular the introductory vignette.
Integrating BMC Digital Workplace with BMC Virtual Chat
With BMC Virtual Chat, users can chat with a virtual or a live agent to get answers to their questions. Support agents use the Support Agent Console to chat with business users. They can chat with multiple business users simultaneously, and their conversation is logged to an incident request.
Configuring knowledge articles to launch in BMC MyIT in the BMC Virtual Chat 9.1 documentation
Using the Self Service Portal in the BMC Virtual Chat 9.1 documentation
From the BMC Remedy IT Service Management Suite documentation:
Adding people without using templates
To integrate BMC Digital Workplace with BMC Virtual Chat, perform the following tasks:
The following video demonstrates the steps to integrate BMC Digital Workplace with BMC Virtual Chat.
Note
The video shows an older version of BMC Digital Workplace. The previous product name was MyIT. Although there might be minor changes in the user interface, the overall functionality remains the same.
Description
Displays an area that can contain multiple lines of styled text. Just double-click to enter inline editing mode and type directly on stage, or use the selection inspector to edit properties of the entire text box.
Text objects fully support writing in right-to-left and non-alphabetic languages as well; read this article for more information on this topic.
Understanding HTML Text
PubCoder exports are based on web technology, thus text in PubCoder is HTML text.
HTML text is really powerful, but if you have experience using word processing software like Microsoft Word, Apple Pages or even Google Docs, there are some important differences in how it works.
The main difference is that text styling in traditional word processors is “flat”: you highlight some text and assign a style (a combination of font, size and variations like bold or italic) and that’s it. In HTML, text styling is based on Cascading Style Sheets (CSS), and for this reason styles are hierarchical.
Specifically, in PubCoder you can stylize text in at least 4 ways:
1. Define default text style for the entire project
Using the project inspector panel, at the right of your project window, you can specify the default text font, size, line height and colors. These settings will be assigned to every text box in the various pages of the project, and can of course be overridden by text-specific styles.
Here’s a list of Project Properties related to text, that you can find in the Writing Mode and Text and Colors sections of the project inspector:
2. Define text style of a single text box
Using the selection inspector panel, at the right of your project window, you can define font, size, alignment, color and a lot of other properties for the entire text box. This is the best way to stylize your text in PubCoder, since the styles are assigned to the entire text box, and changing the text inside the box, either manually or via the Switch Text action, will retain those styles.
Text box styles override the default project settings (1), and may in turn be overridden by inline styles (3).
3. Inline styling
Double-clicking a text box will enter inline text editing mode and display a toolbar full of controls to modify inline styling. Please use inline styling only when strictly necessary: it inserts style information together with the text itself, so the styling is lost if you later change the text, either manually or via the Switch Text action.
The best way to use inline styling is by combining it with text box styling, for example you can define text font, size and color for the entire text box, then use inline styles to assign bold or italic variations or different colors to portions of your text.
4. CSS
If you know how to write CSS code, you can write it at both project and page levels using the Code Editor, then apply CSS classes to either the text box or inline text using the text editor. See the Code section to learn how to write Paragraph and Character styles in CSS that will be automatically detected by the text editor.
Using the Inline Text Editor
PubCoder inline text editor is a really powerful WYSIWYG (What You See Is What You Get) editor based on TinyMCE, which is the very same editor used by WordPress. Simply double-click a text object to enter the text editor: you can then type to see your text directly on-stage and apply styling using the text editor toolbar that will show on top of the stage:
Let’s see the functionalities of the various buttons in detail.
Undo/Redo
Allows you to undo or redo the latest changes to the text, with support for multiple undo steps.
Section Type
Allows you to select the type of section for the current selection, either a paragraph or a heading.
Paragraph and Character Styles
Allow you to assign a paragraph or character style to the selection. The list of styles is based on the Paragraph and Character styles defined in the Project Custom CSS Code.
Font Menu
Allows you to assign a font to the current selection. See the Fonts section for more information.
Font Size
Allows you to assign a text size to the current selection. A menu with specific pixel sizes is displayed, but you can assign a custom font size using any CSS-valid size definition (e.g. in pt, px, em, or percentage), though using pixel sizes is strongly advised to avoid differences when displaying your publications in different browsers / readers.
Color and Emphasis
A set of buttons that allow you to assign a color or emphasis (bold, italic, underline, strikethrough) to the current text selection. The last button of the set clears every style from the selection, going back to the project or text-box default style.
Lists
This menu allows you to insert ordered or unordered lists in various styles, apply a different list style to the current selection, or indent/outdent the current selection.
Insert Image
Allows you to insert an inline image in the text, or to modify the settings of the selected image.
You can specify the source of the image manually (e.g. to reference an image online), or choose an image asset from the ones in your project.
Insert Link
Allows you to add a link to the current selection or modify the link in the current selection.
You can specify the URL of the link manually or use the Link List menu to select a page in your project as the destination. Doing this will result in PubCoder filling the Link field with a placeholder that is resolved into a real link when you export your project; this allows linking to actual pages rather than page numbers: if you link to page 3 and then move that page so that it becomes page 4, the link will always point to that page, regardless of the page number (so it will point to page 4 in this example).
Insert Table
Allows you to insert a table or modify table, row, column, or cell settings for the current selection.
You can use percentage values, e.g. 100% as the table width.
Line Height
Allows you to define the line height of the current selection, expressed as a relative decimal number (e.g. 1 means the line height is exactly the vertical height of the text, 1.5 means one and a half rows of text, and so on) or using a valid CSS line-height definition, like 20px.
Code Button
Click to display and edit HTML code for the text object directly using the Code Editor
Additional Functionalities
The last menu provides various additional functions, including defining the writing direction (left-to-right or right-to-left), using Find/Replace, and inserting special placeholders such as the Read Aloud splitter, page number, page count, and others.
Fonts
When working with offline or print documents or publications, using a specific font simply means installing that font on your machine so that you can use it in the various applications. But with PubCoder, you will create publications that will be displayed on other users’ devices, so to ensure that your end users will be able to display your publication with the same font you see, you will need to embed the font in your publication.
For this reason, the Font Menu in PubCoder does not display fonts installed on your machine, but rather fonts embedded in your project. You can use the font menu to Import Fonts From Disk or Import Fonts From Internet.
After importing a font into your project, it will automatically appear in the Fonts menu and you will be able to use it throughout the software. The font menu also displays a “CSS Font Definitions” section, but we strongly advise against using items in this section unless you have a deep understanding of HTML and CSS.
Properties
Text object supports most of the Generic Object Properties, plus the ones listed below. To edit properties of a text object, select it and use the selection inspector at the right of your project window.
Events
Text object triggers all Generic Events. To edit event handlers for a text object, select it and use the Interactivity Panel on the right side of the project window.
Actions
Text object can be used as target of most Generic Actions, plus:
Switch Text
Applies another text to the text object, using the original text box styling. A fade effect can be used as a transition while switching the text.
Properties
Code
Here’s an example of how a text object with an “Hello World” string is exported on the html page:
<div id="obj4" class="SCPageObject SCText"> <div id="obj4_content" class="SCTextContainer SCTextVAlignTop"> <p>Hello World</p> </div> </div>
You can also switch the content of a text object via JavaScript using:
PubCoder.switchText("#obj4", "Hello, World!");
This example defines a transaction as a group of events that have the same session ID, JSESSIONID, and come from the same IP address, clientip, and where the first event contains the string, "view", and the last event contains the string, "purchase".
sourcetype=access_* | transaction JSESSIONID clientip startswith="view" endswith="purchase" | where duration>0
The search defines the first event in the transaction as events that include the string, "view", using the
startswith="view" argument. The
endswith="purchase" argument does the same for the last event in the transaction.
This example then pipes the transactions into the
where command and the
duration field to filter out all of the transactions that took less than a second to complete. The
where filter cannot be applied before the
transaction command because the
duration field is added by the
transaction command. The values in the
duration field show the difference, in seconds, between the timestamps for the first and last events in the transaction.
You might be curious about why the transactions took a long time, so viewing these events might help you to troubleshoot.
robupy
robupy is an open-source Python package for finding worst-case probabilites in
the context of robust decision making. It implements an algorithm, which reduces the
selection to a one-dimensional minimization problem. This algorithm was developed and
described in:
Nilim, A., & El Ghaoui, L. (2005). Robust control of Markov decision processes with uncertain transition matrices.. Operations Research, 53(5): 780–798.
You can install
robupy via conda with
$ conda config --add channels conda-forge $ conda install -c opensourceeconomics robupy
Please visit our online documentation for tutorials and other information.
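As an illustration of the intended workflow (not taken from the package docs): given the values of the successor states, a nominal transition distribution, and the size of the ambiguity set, the package returns the worst-case distribution. The function name and argument order below are assumptions made for sketching purposes; check the online documentation for the actual entry points.

```python
# Hypothetical usage sketch: the entry point and argument order are assumed,
# not confirmed; consult the robupy documentation before relying on this.
import numpy as np
from robupy import get_worst_case_probs  # assumed import

v = np.array([1.0, 2.0, 3.0])   # values of the successor states
q = np.array([0.3, 0.4, 0.3])   # nominal transition probabilities
beta = 0.1                      # size of the ambiguity set

p_worst = get_worst_case_probs(v, q, beta)
print(p_worst)                  # worst-case distribution over successor states
```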
Citation
If you use robupy for your research, do not forget to cite it with
@Unpublished{The robupy team,
  Author = {The robupy team},
  Title = {robupy - A Python package for robust optimization},
  Year = {2019},
  Url = {},
}
Dense optical flow algorithms compute motion for each point.
Calculate an optical flow using the “SimpleFlow” algorithm. See [Tao2012] and the project site.
DeepFlow optical flow algorithm implementation.
The class implements the DeepFlow optical flow algorithm described in [Weinzaepfel2013].
Parameters - class fields - that may be modified after creating a class instance:
- float alpha: Smoothness assumption weight
- float delta: Color constancy assumption weight
- float gamma: Gradient constancy weight
- float sigma: Gaussian smoothing parameter
- int minSize: Minimal dimension of an image in the pyramid (next, smaller images in the pyramid are generated until one of the dimensions reaches this size)
- float downscaleFactor: Scaling factor in the image pyramid (must be < 1)
- int fixedPointIterations: How many iterations on each level of the pyramid
- int sorIterations: Iterations of Successive Over-Relaxation (solver)
- float omega: Relaxation factor in SOR
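As a usage illustration that is not part of the original page: the same algorithms are exposed through the optflow contrib module in the Python bindings, so a minimal DeepFlow run might look like the sketch below, assuming an opencv-contrib build is installed and the two frame paths are placeholders.

```python
# Minimal sketch of running DeepFlow from the Python bindings. Assumes an
# opencv-contrib build (cv2.optflow); the frame file names are placeholders.
import cv2

prev_frame = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

deepflow = cv2.optflow.createOptFlow_DeepFlow()
flow = deepflow.calc(prev_frame, next_frame, None)  # H x W x 2 (dx, dy) field

print(flow.shape, flow.dtype)
```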
Covered in this doc
List of supported CI services
Overview of how our CI integrations work
Percy works best when integrated into your CI workflow, running continuously alongside your test suite. We integrate with all common CI providers and can be configured for custom environments.
Supported CI integrations
Read the documentation for your CI service to get step-by-step instructions:
- AppVeyor
- Azure Pipelines
- Bitbucket Pipelines
- Buildkite
- CircleCI
- CodeShip
- Drone
- GitHub Actions
- GitLab CI
- Jenkins
- Semaphore
- Travis CI
- Local development
Don't see your CI service? We're constantly adding support for CI services. Reach out to support to see if yours is on the way.
How it works
Percy is designed to integrate with your tests and CI environment to run continuous visual reviews. Once you've added Percy to your tests and your CI environment, Percy will start receiving and rendering screenshots every time a CI build runs.
Configure CI environment variables
To enable Percy, the environment variable,
PERCY_TOKEN, must be configured in your CI service. This is our write-only API token unique for each Percy project and should be kept secret.
You can find
PERCY_TOKEN in your project settings.
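As a small convenience (not part of Percy's own tooling), a test suite can fail fast with a clear message when the token is missing from the CI environment, for example:

```python
# Convenience sketch, not part of Percy's tooling: abort early if the CI job
# was not configured with the write-only project token.
import os
import sys

if not os.environ.get("PERCY_TOKEN"):
    sys.exit("PERCY_TOKEN is not set; add it as a secret environment "
             "variable in your CI service before running the visual suite.")
```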
Parallel test suites
Percy automatically supports most parallelized test environments. Snapshots are pushed from your tests to Percy and rendered for you to review grouped in the same Percy build, no matter if your tests are run in different processes or even on different machines. You can also simply configure Percy to support complex parallelization setups.
What's next
Learn more about configuring CI environment variables and Percy's parallelization capabilities.
Contents
- Starting the Installer
Read the License Agreement. You must select «I agree» to continue the installation. Click «Next».
Select the SYSTEMZ Platform installation folder for SharePoint 2013; the default location is «C:\Program Files\i-Sys\Platform». Then click on «Next».
Confirm the installation by clicking «Next», then wait until the installation is complete.
After the installation is completed, you need to run the SYSTEMZ Platform Configurator application, which allows you to deploy installed Platform family products to SharePoint farms. Click «Close» to launch the Platform Configurator automatically, or launch it later manually from the Windows application menu.
app-configuration_manager Change Logs
2020.2.7 Maintenance Release [2021-09-07]
Overview
- 4 Bug Fixes
- 4 Total Tickets
Bug Fixes
- app-configuration_manager:3.67.1-2020.2.59 [08-26-2021] - Object type parameters are now properly displayed when editing task instances.
- app-configuration_manager:3.67.1-2020.2.58 [08-17-2021] - Added a check to prevent crash of getTreesForDevice task, when devices are not available due to slow NSO connection.
- app-configuration_manager:3.67.1-2020.2.57 [08-02-2021] - Optional parameters are now supported in task instances.
- app-configuration_manager:3.67.1-2020.2.56 [07-14-2021] - Golden Configuration will now accept asterisks in interface names for Junos configurations.
2020.2.6 Maintenance Release [2021-07-06]
Overview
- 3 Bug Fixes
- 3 Total Tickets
Bug Fixes
- app-configuration_manager:3.67.1-2020.2.55 [06-29-2021] - Fixed a crash that occurred when creating a new Golden Configuration while Template Builder was down.
- app-configuration_manager:3.67.1-2020.2.54 [06-23-2021] - Updated colors for accessibility. Configuration Parsers now handle dark and light mode.
- app-configuration_manager:3.67.1-2020.2.53 [06-11-2021] - The accordion menu is displayed correctly when all menu options are closed.
2020.2.5 Maintenance Release [2021-06-01]
Overview
- 11 Bug Fixes
- 11 Total Tickets
Bug Fixes
- app-configuration_manager:3.67.1-2020.2.52 [06-04-2021] - Fixed an issue that caused the editor to restrict user input.
- app-configuration_manager:3.67.1-2020.2.51 [06-04-2021] - Tasks in JSON Golden Configurations now reflect accurate compliance data.
- app-configuration_manager:3.67.1-2020.2.50 [05-27-2021] - The updated Golden Config Tree Version task now takes an object instead of an array for the variables parameter.
- app-configuration_manager:3.67.1-2020.2.49 [05-25-2021] - Device counts updated to report correct number of devices. Device accordion menu will now list the total number of devices.
- app-configuration_manager:3.67.1-2020.2.48 [05-18-2021] - Users can now page forward when there are more than 25 device groups in the accordion menu.
- app-configuration_manager:3.67.1-2020.2.47 [05-17-2021] - Fixed a crash that occurred when invalid syntax was provided for the createConfigSpec API.
- app-configuration_manager:3.67.1-2020.2.46 [05-12-2021] - The previous offline device status will no longer override the current selected device status.
- app-configuration_manager:3.67.1-2020.2.45 [05-11-2021] - Icons are only displayed when the sidebar menu has been collapsed.
- app-configuration_manager:3.67.1-2020.2.44 [05-10-2021] - Removed the Groups field from the Devices card since it had no data and is hard coded for No Groups Found.
- app-configuration_manager:3.67.1-2020.2.43 [05-07-2021] - Sidebar elements will now update appropriately whenever the user creates or deletes items.
- app-configuration_manager:3.67.1-2020.2.42 [04-30-2021] - Removed the FixMode button from the toolbar in Golden Configuration. Button is no longer required due to Jinja2 variable handling capabilities.
2020.2.4 Maintenance Release [2021-05-04]
Overview
- 3 Bug Fixes
- 1 Security Fixes
- 4 Total Tickets
Bug Fixes
- app-configuration_manager:3.67.1-2020.2.41 [04-27-2021] - Pinned header links no longer take users to blank pages.
- app-configuration_manager:3.67.1-2020.2.39 [04-05-2021] - Corrected the issue with interface IP address validation causing the /undefined display error.
- app-configuration_manager:3.67.1-2020.2.38 [04-01-2021] - Enhanced the backup details dialog to meet design specifications.
Security Fixes
- app-configuration_manager:3.67.1-2020.2.40 [04-23-2021] - Fixed Axios security vulnerability.
2020.2.3 Maintenance Release [2021-04-06]
Overview
- 1 New Features
- 2 Improvements
- 10 Bug Fixes
- 1 Security Fixes
- 1 Chores
- 15 Total Tickets
New Features
- app-configuration_manager:3.67.1-2020.2.28 [03-08-2021] - Added Golden Configuration support for ArubaOS.
Improvements
- app-configuration_manager:3.67.1-2020.2.37 [03-26-2021] - Updated the Configuration Manager navigation bar to match the latest design.
- app-configuration_manager:3.67.1-2020.2.26 [03-01-2021] - Added support for regular expressions in JSON Golden Configurations.
Bug Fixes
- app-configuration_manager:3.67.1-2020.2.36 [03-26-2021] - Fixed the color-bar in configuration editor; node colors are now mapped correctly in the GC tree configuration lines, even if a node configuration is missing in the tree.
- app-configuration_manager:3.67.1-2020.2.35 [03-23-2021] - Enhanced the visual design of the overall application to meet current design requirements.
- app-configuration_manager:3.67.1-2020.2.34 [03-22-2021] - Resolved issue with user seeing a forever spinner after running a task with invalid device name; task dialog now displays the error returned by the southbound system.
- app-configuration_manager:3.67.1-2020.2.33 [03-19-2021] - Adapter tasks are now sorted in the tasks dialog of JSON Golden Configuration.
- app-configuration_manager:3.67.1-2020.2.32 [03-12-2021] - Junos configuration parser no longer halts when parsing an empty line ending with a semicolon.
- app-configuration_manager:3.67.1-2020.2.31 [03-12-2021] - Resolved an issue with adding a task that has $ref in the schema definition.
- app-configuration_manager:3.67.1-2020.2.29 [03-09-2021] - Fixed a crash that occurred due to an uncaught exception when running auto-remediation.
- app-configuration_manager:3.67.1-2020.2.27 [03-05-2021] - Enhanced the visual design of the navigation to meet current design requirements.
- app-configuration_manager:3.67.1-2020.2.24 [02-22-2021] - Enhanced the visual design of the Devices node to meet current design requirements.
- app-configuration_manager:3.67.1-2020.2.23 [02-19-2021] - Device names are now validated before being added to a group.
Security Fixes
- app-configuration_manager:3.67.1-2020.2.30 [03-11-2021] - Updated Axios and Lodash libraries to fix security vulnerabilities.
Chores
- app-configuration_manager:3.67.1-2020.2.25 [02-26-2021] - Moved maintenance jobs from apollo CI back to argo CI.
2020.2.2 Maintenance Release [2021-03-02]
Overview
- 2 Improvements
- 13 Bug Fixes
- 1 Chores
- 16 Total Tickets
Improvements
- app-configuration_manager:3.67.1-2020.2.18 [02-18-2021] - Reworked variables in Golden Configuration to fully support Jinja2 templates.
- app-configuration_manager:3.67.1-2020.2.11 [02-04-2021] - A visual indication is now provided for each node that an inherited configuration belongs to.
Bug Fixes
- app-configuration_manager:3.67.1-2020.2.17 [02-11-2021] - The code editor now maintains the user's selected theme state throughout the current session.
- app-configuration_manager:3.67.1-2020.2.22 [02-19-2021] - Enhanced the visual design of the Device Backups node to meet current design requirements.
- app-configuration_manager:3.67.1-2020.2.21 [02-18-2021] - Enhanced the visual design of the Device Groups detail page to meet design requirements.
- app-configuration_manager:3.67.1-2020.2.19 [02-18-2021] - Backups are now normalized when imported to prevent breaking behavior.
- app-configuration_manager:3.67.1-2020.2.17 [02-16-2021] - Enhanced the visual design of the Golden Configuration node; moved the tree compliance button to the left of the version button to create a toolbar.
- app-configuration_manager:3.67.1-2020.2.16 [02-11-2021] - The vertical scrollbar is now always visible in the backup diff comparison view.
- app-configuration_manager:3.67.1-2020.2.15 [02-11-2021] - Changed text font in search box to show more contrast and resolve accessibility issue in Configuration Manager.
- app-configuration_manager:3.67.1-2020.2.14 [02-10-2021] - Task instances will now order the parameters properly to prevent invalid inputs.
- app-configuration_manager:3.67.1-2020.2.13 [02-09-2021] - Underscores are now visible when editing configurations in the Configuration Manager editor.
- app-configuration_manager:3.67.1-2020.2.12 [02-08-2021] - The Golden Configuration menu option is no longer missing when devices are loading.
- app-configuration_manager:3.67.1-2020.2.10 [02-04-2021] - Added node name validation to prevent crashes due to invalid node names.
- app-configuration_manager:3.67.1-2020.2.9 [02-02-2021] - Updated Rodeo version to resolve missing menu items.
- app-configuration_manager:3.67.1-2020.2.8 [01-29-2021] - Resolved missing menu items.
Chores
- app-configuration_manager:3.67.1-2020.2.20 [02-18-2021] - Moved project to master pipeline.
2020.2.1 Maintenance Release [2021-02-02]
Overview
- 2 Improvements
- 4 Bug Fixes
- 6 Total Tickets
Improvements
- app-configuration_manager:3.67.1-2020.2.7 [01-26-2021] - Updated styling and copy to clipboard feature in the editor to support Jinja.
- app-configuration_manager:3.67.1-2020.2.3 [01-14-2021] - The getConfigTemplate API now also returns templates for each node individually.
Bug Fixes
- app-configuration_manager:3.67.1-2020.2.6 [01-26-2021] - Deleting a task instance in a new GC tree version no longer deletes the tasks in the original version.
- app-configuration_manager:3.67.1-2020.2.5 [01-26-2021] - JSON Golden Configurations can now handle null values when running compliance.
- app-configuration_manager:3.67.1-2020.2.4 [01-26-2021] - Fixed an issue causing the Golden Configuration to crash when attempting to save an invalid Jinja2 configuration.
- app-configuration_manager:3.67.1-2020.2.2 [01-07-2021] - Resolved issue where the manual remediation task dialog displayed blank results.
2020.2.0 Feature Release [2021-01-05]
Overview
- 13 New Features
- 31 Improvements
- 54 Bug Fixes
- 1 Security Fixes
- 5 Chores
- 104 Total Tickets
New Features
-.
Improvements
-.
Bug Fixes
-.
Security Fixes
- app-configuration_manager:3.58.4 [10-20-2020] - Updated the 'lodash' dependency.
Chores
-.
Table of Contents
Product Index
Environment
Poses
All poses have been carefully adjusted for Genesis 8 Female and Victoria 8, Genesis 8 Male and Michael 8, and Genesis 3 Male & Female. There are High Heel and Flat feet versions included for Genesis 8 Female and Victoria 8.
The set includes 20 poses.
Table of Contents
Product Index
Particle Accelerator Crisis Poses for Genesis 8 Male is a set of 20 dynamic poses for use with the Particle Accelerator. Your Genesis 8 Male will interact perfectly with this environment with Running, Laying Down, Standing, Kneeling, and Falling Over Poses. Get Particle Accelerator Crisis Poses for Genesis 8 Male for your crisis or industrial accident scenes.
ImportFeeds Plugin
This plugin helps you keep track of newly imported music in your library.
To use the
importfeeds plugin, enable it in your configuration
(see Using Plugins).
Configuration
To configure the plugin, make an
importfeeds: section in your
configuration file. The available options are:
absolute_path: Use absolute paths instead of relative paths. Some applications may need this to work properly. Default:
no.
dir: The output directory. Default: Your beets library directory.
formats: Select the kind of output. Use one or more of:
- m3u: Catalog the imports in a centralized playlist.
- m3u_multi: Create a new playlist for each import (uniquely named by appending the date and track/album name).
- link: Create a symlink for each imported item. This is the recommended setting to propagate beets imports to your iTunes library: just drag and drop the dir folder on the iTunes dock icon.
- echo: Do not write a playlist file at all, but echo a list of new file paths to the terminal.
Default: None.
m3u_name: Playlist name used by the m3u format. Default: imported.m3u.
relative_to: Make the m3u paths relative to a folder other than the one where the playlist is being written. If you’re using importfeeds to generate a playlist for MPD, you should set this to the root of your music library. Default: None.
Here’s an example configuration for this plugin:
importfeeds:
    formats: m3u link
    dir: ~/imports/
    relative_to: ~/Music/
    m3u_name: newfiles.m3u
(Associates at Market Basket demonstrate their support for their CEO Arthur T. Demoulas)
For me the Market Basket story of the summer of 2014 is somewhat personal. I had the pleasure of teaching two of Arthur T. Demoulas children and had numerous interactions with the family. They treated my family as if we were part of theirs and their generosity and support when needed was always present. The concept of “family” is also the core of how the Market Basket supermarket chain has always been operated. This concept forms the basis of Daniel Korschun and Grant Welker’s new book WE ARE MARKET BASKET which relates the story of the amazing relationship between management and labor, describing the behind the scenes events and analysis that accompanied the firing of Arthur T. Demoulas (Arthur T.) as company CEO in 2014, bringing to a head an ongoing family dispute that had existed for years. The dispute has become a case study for many business classes as in this instance labor supported management in the person of Arthur T., when his cousin Arthur S. Demoulas (Arthur S.) sought to destroy the company’s successful business model by squeezing every last dime out of Market Basket to the detriment of the loyal workers and customers of the chain.
The ongoing battle for the leading supermarket chain in New England was between two different corporate views. The first was followed by Arthur T. who continued the principles laid down by his father Telemachus and his uncle George, the sons of the chain’s founder in creating a sense of family and empowerment among the company’s labor force. Treating workers as associates with generous profit sharing and other benefits, and keeping prices down for middle and lower consumers whereby helping balance the socioeconomic divide in a given community. For Arthur S., George’s son, the goal was quite different. After an earlier court decision, Arthur S. and his faction controlled 50.5% of the company’s stock and a majority of its corporate board. They sought to implement a plan to shift as much of the company’s liquidity to shareholders as possible, this involved an immediate and continuous dividend of all excess cash, beginning with a $300 million payout in the fall of 2013. Further, it appeared that Arthur S. and his cohorts were going to sell the company to Delhaize Group, that a few years earlier had also purchased Hannafords. To achieve this goal, Arthur T. had to be fired as company CEO. In a nutshell that is the background that Daniel Korschun, a marketing professor at Drexel University; and Grant Welker, a journalist with the Lowell News present in their new book. However, the detail presented goes much deeper and upon completion what emerges is the family background to the business dating from its founding by Greek immigrants in 1917, a detailed discussion of the company’s philosophy and business model, and the nasty corporate war that raged inside the family until Arthur T. was finally restored as company CEO in August, 2014.
Market Basket is a $4.5 billion corporation that retains the mom and pop feel that its founder, Athanasios Demoulas, and his sons Telemachus and George cultivated from the outset. The authors detail the course of the company’s evolution as it caught the American supermarket phase of the 1950s to create the success that it has become. Once its founder died, the two sons’ success was built on their ability to serve families on fixed and limited incomes as the textile mills closed down in Lowell, MA, where the first store was opened. They kept their prices low, which in effect raised their customers’ standard of living. Further, the Demoulas brothers were open to local producers and did not charge the high slotting fees that other chains did. They relied on offering high quality products, fully staffed and stocked their stores on a level not matched by their competitors, and treated their employees well so they would have a vested interest in the company’s success.
The authors acknowledge that Arthur T. possessed personal attributes that were almost “cult” like during the ensuing strike following his dismissal as CEO, but they argue there is much more to this complex man than is often presented in the media. He is a perfectionist who demands excellence and an extremely tough negotiator. He believes in having almost complete control in implementing his vision, but he is an astute individual who has a good “heart” and has developed a strong and loyal management team that has been with him for years. He believes workers, known as associates have to learn the business from the ground up and promoting from within, not hiring the latest MBA. Like his father, Arthur T. “overarching goal is to grow the company, and his personal goal is to be a good merchant,” which is in marked contrast to his cousin, Arthur S. For Arthur T., “Market Basket has a moral obligation to the communities we serve,” which explains the amazing support he received from customers during the 2014 strike and how they returned as shoppers once he was able to buy out the opposition and return as CEO.
The authors stress the culture that has evolved at Market Basket over the years: loyalty, family, and community. The sense of family transcends traditional boundaries, as is described in detail throughout the narrative. The culture of the company rests on empowerment, as “associates believe that their job is important and that they as individuals have roles in the success of the company.”
By 2013 following the death of George, the family conflict over the company’s philosophy could no longer be contained once his widow and son shifted their support to Arthur S. The authors had access to Market Basket board meetings as part of their research that provides a unique view into corporate conflict. The strategy of Arthur S. and his board allies to remove his cousin are laid out, in addition to the birth of the movement that would support Arthur T. Once the firing took place fear spread among associates that there company was about to be sold and felt that their lives that were totally integrated into the Market Basket family were about to be destroyed. A detailed chronological description of events from the perspective of the opposition to Arthur S. and his board actions is presented, as is a perceptive analysis of the strategic errors they made.
(Market Basket CEO Arthur T. Demoulas)
To gain the feel of what the firing meant to Market Basket associates, the authors included numerous interviews in the text, and the relationship between Arthur T. and his employees is clearly one of deep emotion and support. The authors spend a major part of the book analyzing the strike that was implemented to save Arthur T. and their vision of their jobs from the warehouse and supply stoppages, the use of social media to gain outside support, as well as the economic and political ramifications that probably would have taken place had Arthur T. not been able to purchase control of the company. The narrative and dialogue presented is often breezy, but in a very serious manner because of what was at stake. It is a fine effort by the authors and fully explains why so many people were “honking their horns” throughout the summer of 2014 as they drove by their local Market Basket store.
Dynamic Brush Menus
Provides access to commonly used settings and tools for painting/sculpting.
Activation
Open Blender and go to Preferences then the Add-ons tab.
Click Interface then Dynamic Brush Menus to enable the script.
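If you prefer to enable the add-on from a script (for example, a studio startup file) rather than through the Preferences UI, a sketch like the following should work; the module name is taken from the File entry in the Reference section below, and saving preferences afterwards is optional.

```python
# Sketch: enable the bundled add-on from Blender's Python console or a
# startup script instead of clicking through Preferences. The module name
# matches the "File" entry in the Reference section.
import bpy

bpy.ops.preferences.addon_enable(module="space_view3d_brush_menus")
bpy.ops.wm.save_userpref()  # optional: persist the change across restarts
```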
Description
Features of Note:
Pop-ups to pick colors and edit curves.
A brush menu that supports user created brushes.
Sliders included at the top of submenus like Radius, Strength, and Weight to allow for precise adjustment.
Fast creation of UV maps and Texture Paint Slots in Texture Paint Mode if they are not already present.
Integrates well with the Dynamic Context Menu add-on.
A Preference for the number of columns shown in the Brush and Brush Mode menus can be found for this add-on by going to the Add-ons tab in Preferences and expanding the add-on.
Reference
- Category: Interface
- Description: Fast access to brushes & tools in Sculpt and Paint Modes.
- Location: Spacebar in Sculpt/Paint Modes
- File: space_view3d_brush_menus folder
- Author: Ryan Inch (Imaginer)
- Maintainer: Ryan Inch (Imaginer)
- License: GPL
- Support Level: Community
- Note: This add-on is bundled with Blender.
Remote PC Access
Remote PC Access uses the same Citrix Virtual Apps and Desktops components that deliver virtual desktops and applications. As a result, the requirements and process of deploying and configuring Remote PC Access are the same as those required for deploying Citrix Virtual Apps and Desktops for the delivery of virtual resources. This uniformity provides a consistent and unified administrative experience. Users receive the best user experience by using Citrix HDX to deliver their office PC session.
The feature consists of a machine catalog of type Remote PC Access that provides this functionality:
- Ability to add machines by specifying OUs. This ability facilitates the addition of PCs in bulk.
- Automatic user assignment based on the user that logs into the office Windows PC. We support single user and multiple users assignments.
Citrix Virtual Apps and Desktops can accommodate more use cases for physical PCs by using other types of machine catalogs. These use cases include:
- Physical Linux PCs
- Pooled physical PCs (that is, randomly assigned, not dedicated)
Notes:
For details on the supported OS versions, see the system requirements for the VDA for single-session OS and Linux VDA.
For on-premises deployments, Remote PC Access is valid only for Citrix Virtual Apps and Desktops Advanced or Premium licenses. Sessions consume licenses in the same way as other Citrix Virtual Desktops sessions. For Citrix Cloud, Remote PC Access is valid for the Citrix Virtual Apps and Desktops Service and Workspace Premium Plus.
ConsiderationsConsiderations
While all the technical requirements and considerations that apply to Citrix Virtual Apps and Desktops in general also apply to Remote PC Access, some might be more relevant or exclusive to the physical PC use case.
Deployment considerationsDeployment considerations
While planning the deployment of Remote PC Access, make a few general decisions.
- You can add Remote PC Access to an existing Citrix Virtual Apps and Desktops deployment. Before choosing this option, consider the following:
- Are the current Delivery Controllers or Cloud Connectors appropriately sized to support the additional load associated with the Remote PC Access VDAs?
- Are the on-premises site databases and database servers appropriately sized to support the additional load associated with the Remote PC Access VDAs?
- Will the existing VDAs and the new Remote PC Access VDAs exceed the number of maximum supported VDAs per site?
- You must deploy the VDA to office PCs through an automated process. The following are two of options available:
- Electronic Software Distribution (ESD) tools such as SCCM: Install VDAs using SCCM.
- Deployment scripts: Install VDAs using scripts.
- Review the Remote PC Access security considerations.
Machine catalog considerationsMachine catalog considerations
The type of machine catalog required depends on the use case:
- Remote PC Access
- Windows dedicated PCs
- Windows dedicated multi-user PCs
- Single-session OS
- Static - Dedicated Linux PCs
- Random - Pooled Windows and Linux PCs
Once you identify the type of machine catalog, consider the following:
- A machine can be assigned to only one machine catalog at a time.
- To facilitate delegated administration, consider creating machine catalogs based on geographic location, department, or any other grouping that eases delegating administration of each catalog to the appropriate administrators.
- When choosing the OUs in which the machine accounts reside, select lower-level OUs for greater granularity. If such granularity is not required, you can choose higher-level OUs. For example, in the case of Bank/Officers/Tellers, select Tellers for greater granularity. Otherwise, you can select Officers or Bank based on the requirement.
- Moving or deleting OUs after being assigned to a Remote PC Access machine catalog affects VDA associations and causes issues with future assignments. Therefore, make sure to plan accordingly so that OU assignment updates for machine catalogs are accounted for in the Active Directory change plan.
- If it is not easy to choose OUs to add machines to the machine catalog because of the OU structure, you don’t have to select any OUs. You can use PowerShell to add machines to the catalog afterward. User auto-assignments continue to work if the desktop assignment is configured correctly in the Delivery Group. A sample script to add machines to the machine catalog along with user assignments is available in GitHub.
- Integrated Wake on LAN is available only with the Remote PC Access type machine catalog.
Linux VDA considerationsLinux VDA considerations
These considerations are specific to the Linux VDA:
- Use the Linux VDA on physical machines only in non-3D mode. Due to limitations on NVIDIA’s driver, the local screen of the PC cannot be blacked out and displays the activities of the session when HDX 3D mode is enabled. Showing this screen is a security risk.
- Use machine catalogs of type single-session OS for physical Linux machines.
- The integrated Wake on LAN functionality is not available for Linux machines.
Technical requirements and considerationsTechnical requirements and considerations
This section contains the technical requirements and considerations for physical PCs.
- The following are not supported:
- KVM switches or other components that can disconnect a session.
- Hybrid PCs, including All-in-One and NVIDIA Optimus laptops and PCs.
- Connect the keyboard and mouse directly to the PC. Connecting to the monitor or other components that can be turned off or disconnected, can make these peripherals unavailable. If you must connect the input devices to components such as monitors, do not turn those components off.
- The PCs must be joined to an Active Directory Domain Services domain.
- Secure Boot is supported on Windows 10 only.
- The PC must have an active network connection. A wired connection is preferred for greater reliability and bandwidth.
- If using Wi-Fi, do the following:
- Set the power settings to leave the wireless adapter turned on.
- Configure the wireless adapter and network profile to allow automatic connection to the wireless network before the user logs on. Otherwise, the VDA does not register until the user logs on. The PC isn’t available for remote access until a user has logged on.
- Ensure that the Delivery Controllers or Cloud Connectors can be reached from the Wi-Fi network.
- You can use Remote PC Access on laptop computers. Ensure the laptop is connected to a power source instead of running on the battery. Configure the laptop power options to match the options of a desktop PC. For example:
- Disable the hibernate feature.
- Disable the sleep feature.
- Set the close lid action to Do Nothing.
- Set the “press the power button” action to Shut Down.
- Disable video card and NIC energy-saving features.
- Remote PC Access is supported on Surface Pro devices with Windows 10. Follow the same guidelines for laptops mentioned previously.
If using a docking station, you can undock and redock laptops. When you undock the laptop, the VDA reregisters with the Delivery Controllers or Cloud Connectors over Wi-Fi. However, when you redock the laptop, the VDA doesn’t switch to use the wired connection unless you disconnect the wireless adapter. Some devices provide built-in functionality to disconnect the wireless adapter upon establishing a wired connection. The other devices require custom solutions or third-party utilities to disconnect the wireless adapter. Review the Wi-Fi considerations mentioned previously.
Do the following to enable docking and undocking for Remote PC Access devices:
- In the Start menu, select Settings > System > Power & Sleep, and set Sleep to Never.
- Under the Device Manager > Network adapters > Ethernet adapter go to Power Management and clear Allow the computer to turn off this device to save power. Ensure that Allow this device to wake the computer is checked.
- Multiple users with access to the same office PC see the same icon in Citrix Workspace. When a user logs on to Citrix Workspace, that resource appears as unavailable if already in use by another user.
- Install the Citrix Workspace app on each client device (for example, a home PC) that accesses the office PC.
Configuration sequenceConfiguration sequence
This section contains an overview of how to configure Remote PC Access when using the Remote PC Access type machine catalog. For information on how to create other types of machine catalogs, see the Create machine catalogs.
On-premises site only - To use the integrated Wake on LAN feature, configure the prerequisites outlined in Wake on LAN.
If a new Citrix Virtual Apps and Desktops site was created for Remote PC Access:
- Select the Remote PC Access Site type.
- On the Power Management page, choose to enable or disable power management for the default Remote PC Access machine catalog. You can change this setting later by editing the machine catalog properties. For details on configuring Wake on LAN, see Wake on LAN.
- Complete the information on the Users and Machine Accounts pages.
Completing these steps creates a machine catalog named Remote PC Access Machines and a Delivery Group named Remote PC Access Desktops.
If adding to an existing Citrix Virtual Apps and Desktops site:
- Create a machine catalog of type Remote PC Access (Operating System page of the wizard). For details on how to create a machine catalog, see Create machine catalogs. Make sure to assign the correct OU so that the target PCs are made available for use with Remote PC Access.
- Create a Delivery Group to provide users access to the PCs in the machine catalog. For details on how to create a Delivery Group, see Create Delivery Groups. Make sure to assign the Delivery Group to an Active Directory group that contains the users that require access to their PCs.
Deploy the VDA to the office PCs.
- We recommend using the single-session OS core VDA installer (VDAWorkstationCoreSetup.exe).
- You can also use the single-session full VDA installer (VDAWorkstationSetup.exe) with the
/remotepcoption, which achieves the same outcome as using the core VDA installer.
- Consider enabling Windows Remote Assistance to allow help desk teams to provide remote support through Citrix Director. To do so, use the
/enable_remote_assistanceoption. For details, see Install using the command line.
- To be able to see logon duration information in Director, you must use the single-session full VDA installer and include the Citrix User Profile Manager WMI Plugin component. Include this component by using the
/includeadditionaloption. For details, see Install using the command line.
- For information about deploying the VDA using SCCM, see Install VDAs using SCCM.
- For information about deploying the VDA through deployment scripts, see Install VDAs using scripts.
After you successfully complete steps 2–4, users are automatically assigned to their own machines when they log in locally on the PCs.
Instruct users to download and install Citrix Workspace app on each client device that they use to access the office PC remotely. Citrix Workspace app is available from the application stores for supported mobile devices.
Features managed through the registryFeatures managed through the registry
Caution:.
Disable multiple user auto-assignments
On each Delivery Controller, add the following registry setting:
HKEY_LOCAL_MACHINE\Software\Citrix\DesktopServer
- Name: AllowMultipleRemotePCAssignments
- Type: DWORD
- Data: 0
Sleep mode (minimum version 7.16)
To allow a Remote PC Access machine to go into a sleep state, add this registry setting on the VDA, and then restart the machine. After the restart, the operating system power saving settings are respected. The machine goes into+ALT+DEL). To prevent this automatic action, add the following registry entry on the office PC, and then restart the machine.
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\PortICA\RemotePC
- Name: SasNotification
- Type: DWORD
- Data: 1
By default, the remote user has preference over the local user when the connection message is not acknowledged within the timeout period. To configure the behavior, use this setting:
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\PortICA\RemotePC
- Name: RpcaMode
- Type: DWORD
- Data:
- 1 - The remote user always has preference if he or she does not respond to the messaging UI in the specified timeout period. This behavior is the default if this setting is not configured.
- 2 - The local user has preference.
The timeout for enforcing the Remote PC Access mode is 30 seconds by default. You can configure this timeout but do not set it lower than 30 seconds. To configure the timeout, use this registry setting:
HKLM\SOFTWARE\Citrix\PortICA\RemotePC
- Name: RpcaTimeout
- Type: DWORD
- Data: number of seconds for timeout in decimal values. The prompt asks whether to allow or deny the local user’s connection. Allowing the connection disconnects the remote user’s session.
Wake on LANWake on LAN
Integrated Wake on LAN is available only in on-premises Citrix Virtual Apps and Desktops and requires Microsoft System Center Configuration Manager (SCCM).. For example, because of a power outage.
The Remote PC Access Wake on LAN feature is supported with PCs that have the Wake on LAN option enabled in the BIOS/UEFI.
SCCM and Remote PC Access Wake on LAN
To configure the Remote PC Access Wake on LAN feature, complete the following before deploying the VDA.
- Configure SCCM 2012 R2, 2016, or 2019 within the organization. Then deploy the SCCM client to all Remote PC Access machines, allowing time for the scheduled SCCM inventory cycle to run (or force one manually, if necessary).
- For SCCM Wake Proxy or magic packet support:
- Configure Wake on LAN in each PC’s BIOS/UEFI settings.
- For Wake Proxy support, enable the option in SCCM. For each subnet in the organization that contains PCs connection and the machine catalog.
- If you enable power management in the catalog, specify connection details: the SCCM address, access credentials, and connection name. The access credentials must have access to collections in the scope and the Remote Tools Operator role.
- If you do not enable power management, you can add a power management (Configuration Manager) connection later and then edit a Remote PC Access machine catalog to enable power management.
You can edit a power management connection to configure advanced settings. You can enable:
- Wake-up proxy delivered by SCCM.
-.
TroubleshootTroubleshoot
Monitor blanking not working
If the Windows PC’s local monitor is not blank while there is an active HDX session (the local monitor displays what’s happening in the session) it is likely due to issues with the GPU vendor’s driver. To resolve the issue, give the Citrix Indirect Display driver (IDD) higher priority than the graphic card’s vendor driver by setting the following registry value:
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Graphics\AdapterMerits
- Name: CitrixIDD
- Type: DWORD
- Data: 3
For more details about display adapter priorities and monitor creation, see the Knowledge Center article CTX237608.
Session disconnects when you select Ctrl+Alt+Del on the machine that has session management notification enabled
The session management notification controlled by the SasNotification registry value only works when Remote PC Access mode is enabled on the VDA. value:
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\PortICA
- Name: ForceEnableRemotePC
- Type: DWORD
- Data: 1
Restart the PC for the setting to take effect.
Diagnostic information
Power management
If power management for Remote PC Access is enabled, subnet-directed broadcasts might fail to start machines that are.
The active remote session records the local touchscreen input
When the VDA enables Remote PC Access mode, the machine ignores the local touchscreen input during an active session. setting:
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\PortICA
- Name: ForceEnableRemotePC
- Type: DWORD
- Data: 1
Restart the PC for the setting to take effect.
More resourcesMore resources
The following are other resources for Remote PC Access:
- Solution design guidance: Remote PC Access Design Decisions.
- Examples of Remote PC Access architectures: Reference Architecture for Citrix Remote PC Access Solution. | https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/1912-ltsr/install-configure/remote-pc-access.html | 2021-10-16T00:53:24 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.citrix.com |
You can use the following methods to access HDFS metrics using the Java Management Extensions (JMX) APIs.
Use Access". | https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.0/bk_hdfs-administration/content/ch_jmx_metrics_apis_hdfs_daemons.html | 2021-10-16T00:30:01 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.cloudera.com |
dask.array.divmod¶
- dask.array.divmod(x1, x2, [out1, out2, ]/, [out=(None, None), ]*, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])[source]¶
This docstring was copied from numpy.divmod.
Some inconsistencies with the Dask version may exist.
Return element-wise quotient and remainder simultaneously.
New in version 1.13.0.
np.divmod(x, y)is equivalent to
(x // y, x % y), but faster because it avoids redundant work. It is used to implement the Python built-in function
divmodon NumPy arrays.
-1ndarray
Element-wise quotient resulting from floor division. This is a scalar if both x1 and x2 are scalars.
- out2ndarray
Element-wise remainder from floor division. This is a scalar if both x1 and x2 are scalars.
See also
floor_divide
Equivalent to Python’s
//operator.
remainder
Equivalent to Python’s
%operator.
modf
Equivalent to
divmod(x, 1)for positive
xwith the return values switched.
Examples
>>> np.divmod(np.arange(5), 3) (array([0, 0, 0, 1, 1]), array([0, 1, 2, 0, 1]))
The divmod function can be used as a shorthand for
np.divmodon ndarrays.
>>> x = np.arange(5) >>> divmod(x, 3) (array([0, 0, 0, 1, 1]), array([0, 1, 2, 0, 1])) | https://docs.dask.org/en/latest/generated/dask.array.divmod.html | 2021-10-16T00:46:22 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.dask.org |
Software Download Directory
frevvo was designed to integrate with your back end systems. For example, you can connect to a database, Google Drive and sheets, PaperVision - Image Silo document management systems and more. There are several connectors available to help with this integration. The connectors are listed below.
The Database Connector makes it very easy to connect frevvo to most databases. You can save form submissions in your database or initialize forms from a database. Note that this connector uses XML schema.
See Database Connector for help installing and using this connector.
Connecting your frevvo to Google sheets and drive is very easy to do with the frevvo Google Connector. See the Google Connector chapter for help installing and using this connector.
The frevvo Add-on for Confluence is available as an add-on to either frevvo' Online service or In-house installations. You need to install it into Confluence before you can add forms and submissions pages to Confluence. You will also need to download and install frevvo for Confluence.
See frevvo ™ for Confluence documentation for help installing and using this plugin.
frevvo supports form submissions sent directly into Digitech Systems PaperVision® and ImageSilo® document management products.
See Connecting to PaperVision® / ImageSilo® for help installing and using this connector.
The Filesystem Connector saves frevvo submissions to a local or remote filesystem or an Enterprise Content Management system (ECM). See Filesystem Connector for easy installation and configuration information.
Store documents and information to a secure Microsoft SharePoint website. Configure the frevvo SharePoint Connector for your frevvo tenant then use the SharePoint wizard to connect your forms/flows to the SharePoint website for document storage. . Refer to the SharePoint Connector topic for the details.
frevvo connectors installed on-premise should only be accessible to the frevvo server and should not be remotely accessible. frevvo recommends only allowing HTTPS access to the server (not external HTTP access). Since the connector(s) is not exposed over HTTPS, remote code execution vulnerability can be mitigated (a remote attacker cannot exploit this vulnerability as it is not exposed).
If you choose to allow external access to HTTP, you should only allow requests with paths starting with /frevvo for port 8082 (or the port you are using for frevvo and the Connector(s)).
See also Database Connector Security. | https://docs.frevvo.com/d/display/frevvo90/Connectors | 2021-10-15T23:21:41 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.frevvo.com |
Grid
View Automation Peer. IView Automation Peer. View Detached Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Called when the custom view is no longer applied to the control.
virtual void System.Windows.Automation.Peers.IViewAutomationPeer.ViewDetached() = System::Windows::Automation::Peers::IViewAutomationPeer::ViewDetached;
void IViewAutomationPeer.ViewDetached ();
abstract member System.Windows.Automation.Peers.IViewAutomationPeer.ViewDetached : unit -> unit override this.System.Windows.Automation.Peers.IViewAutomationPeer.ViewDetached : unit -> unit
Sub ViewDetached () Implements IViewAutomationPeer.ViewDetached
Implements
Remarks
This member is an explicit interface member implementation. It can be used only when the GridViewAutomationPeer instance is cast to an IViewAutomationPeer interface. | https://docs.microsoft.com/en-gb/dotnet/api/system.windows.automation.peers.gridviewautomationpeer.system-windows-automation-peers-iviewautomationpeer-viewdetached?view=windowsdesktop-5.0&viewFallbackFrom=net-5.0 | 2021-10-16T00:55:31 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.microsoft.com |
The advent of the Trump presidency has wreaked havoc with the traditional American approach to foreign policy that has been in place roughly for the last seventy years. Under the leadership of former Secretary of State Rex Tillerson the Foreign Service has been gutted as have the careers of life long diplomats leaving the United States with a lack of qualified personnel to conduct the daily work of the State Department, an essential component for an effective foreign policy. This is in large part due to the paucity of regional experts, professional negotiators, and has resulted in the rising lack of trust in American foreign policy worldwide. A case in point is the current American-North Korean nuclear talks and announced summit for June 12. One day it is on, one day it has been cancelled, a process that should be based on months of preparation seems to be evolving around the whims and/or transactional nature of President Trump’s decision making. Another example is the American withdrawal from the Iran Nuclear deal, with no thoughtful policy to replace it. Department influence and the diplomatic community in general did not begin with Trump, but has evolved over the last two decades and it is a bipartisan problem, not to be blamed on one party.
(Former Secretary of State Rex Tillerson and President Trump)
Farrow’s thesis is very clear in that the reduction of the role of diplomats at the State Department was underway during the tenure of Secretary of State James Baker under President George H. W. Bush, continued under Bill Clinton as the need to achieve budget savings was paramount as we refocused on domestic economic issues. During the 1990s the international affairs budget declined by 30% employing the end of the Cold War as a means of rationalizing the closing of consulates, embassies, and rolling important autonomous agencies into the State Department. By the time of the Islamic State twenty years later many experts in that region and subject matter were gone. After 9/11 the State Department was short staffed by 20%. Those who remained were undertrained and under resourced at a time we were desperate for information and expertise which were nowhere to be found.
Farrow is correct in arguing that the Trump administration brought to a new extreme a trend that had gained momentum after 9/11. With crisis around the world the US “cast civilian dialogue to the side, replacing the tools of diplomacy with direct, tactical deals between our military and foreign forces.” In areas that diplomats formally where at the forefront in policy implementation, now they were not invited into the “room where it happened.” “Around the world, uniformed officers increasingly handled negotiation, economic reconstruction, and infrastructure development for which we once had a devoted body of specialists.” The United States has changed who they bring to the table, which also affects who the other side brings to negotiate.
(Former Secretaries of State, Colin Powell and Condi Rice)
Restaffing under Secretary of State Colin Powell during George W. Bush’s presidency saw the repackaging of traditional State Department programs under the umbrella of “Overseas Contingency Operations” and counter terrorism. Since 2001 the State Department has ceded a great deal of its authority to the Defense Department whose budget skyrocketed, while the budget at State declined. As a result diplomats slipped into the periphery of the policy process especially in dealing with Iraq as Powell and his minions at State were squeezed to the sidelines by Vice President Dick Cheney who ran his own parallel National Security Council. Interestingly, the process would continue under President Obama who liked to “micromanage” large swaths of American foreign policy. Obama also favored military men as appointees, i.e.; Generals Jim Jones, David Petraeus, James Clapper, Douglas Lute to name a few.
Farrow’s book is an in depth discussion of how US foreign policy has been militarized over the last twenty years. He discusses how this situation evolved, who the major players were and how they influenced policy. Further, he explores how it has effected US foreign policy in the past, currently, and its outlook for the future, particularly when Washington leaves behind the capacity for diplomatic solutions as it confronts the complexities of settling the world’s problems.
Farrows is a wonderful story teller who draws on his own government experience and his ability to gain access to major policy makers – a case in point was his ability to interview every living Secretary of State including Rex Tillerson. At the core of Farrows narrative is the time he spent with Richard Holbrooke who brokered the Dayton Accords to end the fighting in the Balkans in the 1990s, and was a special representative working on Afghanistan and Pakistan under President Obama. Holbrooke was a driven man with an out sized ego but had a history of getting things done. From his early career in Vietnam through his work at State with Hillary Clinton, who held the job he coveted. Holbrooke saw many parallels between Vietnam and Afghanistan. First, we were defeated by a country adjacent to the conflict. Secondly, we relied on a partner that was corrupt. Lastly, we embraced a failed counterinsurgency policy at the behest of the military. These are the types of views that at times made Holbrooke a pariah in government, but also a man with expertise and experience that was sorely needed. His greatest problem that many historians have pointed out is that he was not very likeable.
(Nuclear talks with Iran)
During the Obama administration Holbrooke butted heads with most members of the National Security Council and the major figures at the Pentagon. He worked assiduously to bring about negotiations with the Taliban to end the war in Afghanistan. No matter how hard he tried he ran into a brick wall within the Obama administration. Secretary of State Clinton would finally come around, but the military refused to partake, and lastly his biggest problem was that President Obama saw him as a relic of the past and just did not like him.
An important aspect of the book is devoted to the deterioration of American-Pakistani relations, particularly after the capture and killing of Osama Bin-Laden and the episode involving CIA operative Raymond Davis. The lack of trust between the two governments was baked in to policy, but events in 2011 took them to a new level. Farrow’s monograph makes for an excellent companion volume to that of Steve Coll’s recent DIRECTORATE S which is an in depth study of our relationship with Pakistan concentrating on the ISI. Like Coll, Farrow hits the nail right on the head in that Pakistan reflected the difficulties of leaning on a military junta, which had no strategic alignment with the United States, particularly because of India.
Once Trump took over the “fears of militarization” Holbrooke had worried over had come to pass on a scale he could never have imagined. Trump concentrated more power in the Pentagon, granting nearly total authority in areas of policy once orchestrated across multiple agencies. The military made troops deployment decisions, they had the power to conduct raids, and set troop levels. Diplomats were excluded from decision making in Afghanistan as 10 of 25 NSC positions were held by current or retired military officials, i.e., White House Chief of Staff General John Kelly; Secretary of Defense Jim Mattis; until recently National Security advisor H.R. McMaster among a number of other former or serving military in his cabinet. However, one member of Trump’s military cadre is dead on, as Secretary of Defense Mattis has pointed out that “if you don’t fully fund the State Department, then I need to buy more ammunition.”
Farrow zeroes in on US, Syria, Afghanistan, the Horn of Africa, and policies toward Egypt and Columbia to support his thesis. The US had a nasty policy of allying with warlords and dictators in these regions and negotiations were left to the military and the CIA. Obama’s approach was simple; conduct proxy wars, he described our foreign military or militia allies as our partners who were doing the bidding of the United States. Yemenis and Pakistanis could do our work, why send our own sons and daughters to do it was his mantra. The Trump administration has continued this policy and closed the Office of the Special Representative for Afghanistan and Pakistan and has left the position of Assistant Secretary for Southern and Central Asia vacant – makes it difficult to engage in diplomacy/negotiations. As in Afghanistan with the Northern Alliance and other warlord groups, the US approach in Somalia was similar. First, we “contracted” the Ethiopian military in Eritrea to invade Somalia and allied with a number of warlords. In both cases, military and intelligence solutions played out, but the US actively sabotaged opportunities for diplomacy and it resulted in a destabilizing effect “continents and cultures away.” One wonders if American policy contributed to the growth of al-Shabaab in the region – for Farrow the answer is very clear.
(North Korean Leader Kim Jong Un)
Farrow accurately lays out a vicious cycle; “American leadership no longer valued diplomats, which led to the kind of cuts that made diplomats less valuable. Rinse, repeat.” Farrow’s thesis is accurate, but at times perhaps overstated as in most administrations there are diplomatic successes (at this time we are waiting for North Korean negotiations – which all of a sudden has gone from a demand for total denuclearization to a getting to know you get together); Obama’s Iran Nuclear deal, Paris climate deal, opening relations with Cuba are all successes, despite Trump’s mission to destroy any accomplishments by the former president. Farrow’s book is a warning that new Secretary of State Mike Pompeo should take to heart, if not all future negotiations will rest with people who have not studied the cultures and societies of the countries they would be dealing with. Dean Acheson wrote PRESENT AT CREATION detailing his diplomatic career and the important events following World War II, I wonder what a diplomat might entitle a memoir looking back decades from now as to what is occurring. | https://docs-books.com/2018/06/02/war-on-peace-the-end-of-diplomacy-and-the-decline-of-american-influence-by-ronan-farrow/ | 2021-10-15T22:42:10 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs-books.com |
XlThemeColor Enum
Lists the base colors of the document theme.
Namespace: DevExpress.Export.Xl
Assembly: DevExpress.Printing.v21.1.Core.dll
Declaration
Related API Members
The following properties accept/return XlThemeColor values:
Remarks
The values listed by this enumeration are used by following methods:
- XlCellFormatting.Themed - specifies themed formatting for a cell (for details, refer to the How to: Apply Themed Formatting to a Cell example).
- XlColor.FromTheme - creates an XlColor object, which can be used to set the color of different spreadsheet elements (e.g., cell background color, border line color, etc.). To obtain the theme color used to create an XlColor instance, use the XlColor.ThemeColor property.
If you change the document theme by using the IXlDocument.Theme property, the new set of colors corresponding to the selected theme, will be used.
See Also
Feedback | https://docs.devexpress.com/CoreLibraries/DevExpress.Export.Xl.XlThemeColor | 2021-10-15T23:35:44 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.devexpress.com |
Branding
Create an authentication experience that feels like "you" from start to finish: Inject your own branding into the Swoop flow with easy controls.
Swoop comes out of the box with a branded experience that includes our logo and color. But in a few simple steps, you can replace the Swoop logo and color with your own.
Default Branding
Custom Logo and Color
Updating the logo and color is managed from the Swoop dashboard.
- Within your app, navigate to Branding for the property you'd like to update.
- Click the color picker to select the primary color and text color of your choice.
- Upload the logo or image you'd like to include.
- Click 'Update Branding'.
What Gets Customized?
The Swoop authentication service and Magic Link email will be updated with your user-defined styling. Below is an example of the end results.
Customized Authentication Service Page
Customized Magic Code email
Updated 25 days ago
Swoop offers a couple of authentication flow options. Customize the user experience with different auth flows.
Did this page help you? | https://docs.swoopnow.com/docs/customization | 2021-10-15T22:31:33 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['https://files.readme.io/d371dbf-screen_1.png',
'screen 1.png Default Branding'], dtype=object)
array(['https://files.readme.io/d371dbf-screen_1.png',
'Click to close... Default Branding'], dtype=object)
array(['https://files.readme.io/d55f855-branding.png', 'branding.png'],
dtype=object)
array(['https://files.readme.io/d55f855-branding.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/bfd190e-branded_screen_1.png',
'branded screen 1.png Customized Authentication Service Page'],
dtype=object)
array(['https://files.readme.io/bfd190e-branded_screen_1.png',
'Click to close... Customized Authentication Service Page'],
dtype=object)
array(['https://files.readme.io/1bf5841-Magic_Code_email.png',
'Magic Code email.png Customized Magic Code email'], dtype=object)
array(['https://files.readme.io/1bf5841-Magic_Code_email.png',
'Click to close... Customized Magic Code email'], dtype=object)] | docs.swoopnow.com |
Response represents the response of an yii\base\Application to a yii\base\Request.
For more details and usage information on Response, see the guide article on responses.
The exit status. Exit statuses should be in the range 0 to 254. The status 0 means the program terminates successfully.
public integer $exitStatus = 0
Removes all existing output buffers.
Sends the response to client.
© 2008–2017 by Yii Software LLC
Licensed under the three clause BSD license. | https://docs.w3cub.com/yii~2.0/yii-base-response | 2021-10-16T00:26:22 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.w3cub.com |
The import feature allows you to import posts or pages in bulk to a WordPress site.
Currently it only supports importing posts or pages that are exported with WPvivid Backup plugin.
It supports two import methods:
- Upload and import an export directly from your computer.
- Scan an uploaded export to import: You can upload the exported files to /ImportandExport folder via FTP, then scan the files to import.
The /ImportandExport folder refers to /wp-content/wpvividbackups/ImportandExport, which is a folder created by our plugin for storing exported and imported post or page files on a site. The Import page also comes with an option for deleting all the exported files in the /ImportandExport folder.
Note: To properly display the imported content, please make sure the sites for exporting and importing posts have the same environment, e.g., same theme or pages built with the same page builder. | https://docs.wpvivid.com/import-content.html | 2021-10-16T00:04:29 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.wpvivid.com |
-154254 · Issue 632638
Correct Email Bot training text highlighted
Resolved in Pega Version 8.4.6
When a piece of text was selected and tagged against an entity while training the Email Bot, the entity selection was misplaced and partially covered the actual text selected. The incorrect selection was then carried forward to the training data spreadsheet. To resolve this, rule changes have been made that will update HTML entities to HTML encoded forms.
INC-175994 · Issue 667483
Removed redundant Microsoft Outlook email interaction chain
Resolved in Pega Version 8.4.6.
INC. | https://docs.pega.com/platform/resolved-issues?f%5B0%5D=%3A29991&f%5B1%5D=resolved_capability%3A28506&f%5B2%5D=resolved_version%3A31236&f%5B3%5D=resolved_version%3A34256&f%5B4%5D=resolved_version%3A34331&f%5B5%5D=resolved_version%3A36921 | 2022-01-29T05:18:07 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.pega.com |
Funnelback 14.2 patches Patches Type Release version Description 3 Bug fixes 14.2.3.40 Fixed an issue where the user editing interface for a user with no permitted collections would be presented with all collections selected, rather than none. 3 Bug fixes 14.2.3.39 Fixes a cross site scripting vulnerability when unescaped HTML was provided to the CheckBlending macro’s linkText attribute. 3 Bug fixes 14.2.3.38 Corrected the XSS Vulnerability in Anchors.html 3 Bug fixes 14.2.3.37 Fixes a bug where configs would not be reloaded in some multi server environments. 3 Bug fixes 14.2.3.36 Restore the Message Of The Day (MOTD) feature which was erroneously removed. MOTD file can be placed under $SEARCH_HOME/share/include/motd.txt and its content will be displayed at the top of the administration interface. 3 Bug fixes 14.2.3.35 A few improvements for content auditor templates. 3 Bug fixes 14.2.3.34 Fixes a bug where data loss could occur in Push collections if commits failed. 3 Bug fixes 14.2.3.34 Fixes a bug on Windows where commits could fail if index files in a snapshot are held opened. 3 Bug fixes 14.2.3.34 Fixes various DLS security flaws. 3 Bug fixes 14.2.3.33 Fixes a bug where data loss could occur in push on Windows. The problem is more likely to occur when Push is used in a meta collection. 3 Bug fixes 14.2.3.32 Fixes an issue where the submit button does not submit the form after valid user input has been checked and passed upon creation in the curator edit screen. The button now submits the form and the curator ruleset list is displayed. 3 Bug fixes 14.2.3.31 Fixes a problem with content auditor display which was introduced in patch 14.2.3.28 3 Bug fixes 14.2.3.30 Fixes a potential web server crash when autoc files contain no suggestions. 3 Bug fixes 14.2.3.29 Fixes an issue where after a Curator Rule has been set, the rule will not be shown in the Curator Rulesets list. The Curator Rule is now shown immediately after it has been created. 3 Bug fixes 14.2.3.28 Introduces the ability to disable links to WCA from within content auditor. To disable it, set ui.modern.content-auditor.search_results.show_wcag_link=false within the relevant collection.cfg file. 3 Bug fixes 14.2.3.27 Fixes an issue where meta dependencies fails when a meta collection contains a Push collection. 3 Bug fixes 14.2.3.26 Fixes an issue where the wrong collapsing results are shown. 3 Bug fixes 14.2.3.25 Fixes a issue where cache copies for modern UI would fail on Windows. 3 Bug fixes 14.2.3.24 Fixes a issue where refresh updates may keep documents which no longer exists. 3 Bug fixes 14.2.3.23 Fixes a issue where a partial match limit was applied when DAAT was larger than the total number of documents. 3 Bug fixes 14.2.3.22 Fixes a issue where some characters in query logs would cause analytics to fail. 3 Bug fixes 14.2.3.22 Fixes a issue where cache copies for warc would be dependent upon the offline view. 3 Bug fixes 14.2.3.22 Improves groovy hook scripts such that they can load classes under $SEARCH_HOME/lib/java/all as well as the collection @groovy folder. 3 Bug fixes 14.2.3.21 Makes the reporting update process more error-tolerant of invalid log files. 3 Bug fixes 14.2.3.20 Fixes a issue where explore queries would have negative weights on query terms, causing the query processor to generate warnings. 3 Bug fixes 14.2.3.20 Fixes a issue where the query processors was not stemming query terms to all available terms in a meta collection. 
3 Bug fixes 14.2.3.19 Upgrades the Jetty web server to the latest 9.2.x version to fix a buffer bleed vulnerability. 3 Bug fixes 14.2.3.18 Fixes issue where Facebook gather would stop mid-gather 3 Bug fixes 14.2.3.18 Default script now supplies fields to request from Facebook 3 Bug fixes 14.2.3.17 Fixes a issue where index log files would not be emptied before being re-used, this could result in large log files. Affects Meta and push collections. 3 Bug fixes 14.2.3.16 Fixes issues with the session features with long metadata names and metadata values longer than 4096 characters. 3 Bug fixes 14.2.3.15 Switching to instead of means not as much information is returned as was previously. This patch fetches causes comments to be fetched and attached to posts. 3 Bug fixes 14.2.3.14 Fixes Facebook collections, by reconfiguring restfb to point to instead of 3 Bug fixes 14.2.3.13 Fixes an issue where the crawler could store a document outside its include_patterns when following redirects 3 Bug fixes 14.2.3.12 Fixes a concurrency issue with Push when used in meta collections. 3 Bug fixes 14.2.3.12 Reduces the time Push commits are delayed when Push is used in a meta collection. 3 Bug fixes 14.2.3.11 Fixed a Modern UI bug where a response would be wrapped in the JSONP callback function twice. 3 Bug fixes 14.2.3.10 Fixed a bug where crawler did not read the crawler.accept_cookies setting correctly 3 Bug fixes 14.2.3.9 Fixes a bug in the crawler where cookies having a domain starting with "." were not kept by the crawler, breaking parts of form interaction. 3 Bug fixes 14.2.3.9 Implemented experimental support for defining click-through actions for confirmation pages which follow forms. 3 Bug fixes 14.2.3.8 Fixes a bug in how missing cached documents and meta collections are handled in SEO auditor. 3 Bug fixes 14.2.3.7 Adds support for debugging files which can not be replaced on Windows within Push collections. Details on how to enable this debugging is described in the Push documentation which is updated with this patch. 3 Bug fixes 14.2.3.6 Significantly reduces memory requirements of the query processor when run over a meta or push collection. 3 Bug fixes 14.2.3.6 Fixes a issue where the index merger did not preserve the geo location data when merging indexes in a push collection which had geospatial search enabled, as the merger would disable geospatial search causing issues when the query processor ran. 3 Bug fixes 14.2.3.6 Fixes a issue where the index merger would not correctly merge indexes which had security metadata class names that where longer than 1 character. 3 Bug fixes 14.2.3.5 Fixes a bug where metadata class names of at least 2 characters which started with the letter 'd' would not work when indexing xml. 3 Bug fixes 14.2.3.5 Fixes a bug where indexing would fail when using long metadata class names to index CJKT characters 3 Bug fixes 14.2.3.5 Fixes a bug where the query processor may fail when stemming is used in conjunction with gscopes. 3 Bug fixes 14.2.3.4 Fixes a concurrency issue when push collections are used in one or more meta collections. 3 Bug fixes 14.2.3.3 Fixes a cross site scripting bug in the error page displayed when an exception occurs within the modern UI’s cached results page. Also hides completely the underlying error message to prevent leaking backend information. 3 Bug fixes 14.2.3.2 Fixes a problem in content auditor where the links to accessibility auditor did not respond unless manually opened in a new window. 
3 Bug fixes 14.2.3.1 Fixes a concurrency issue with snapshots where, under heavy load, Push might miss the latest generation in the snapshot. 3 Bug fixes 14.2.2.5 Fixed an issue where the user editing interface for a user with no permitted collections would be presented with all collections selected, rather than none. 3 Bug fixes 14.2.2.4 Upgrades the Jetty web server to the latest 9.2.x version to fix a buffer bleed vulnerability. 3 Bug fixes 14.2.2.3 Fixes a bug which caused collections with @groovy directories in conf to have the collection root directories removed when being updated (every third time gathering occurs). 3 Bug fixes 14.2.2.2 Fixes a bug with Push where it would create errant Vaccum tasks while the push collection was shutting down. 3 Bug fixes 14.2.2.1 Fixes a bug where a small number of autocompletion possibilities were not considered. 3 Bug fixes 14.2.2.1 Provides substantial improvements to the Push API response times. 3 Bug fixes 14.2.1.7 Fixed an issue where the user editing interface for a user with no permitted collections would be presented with all collections selected, rather than none. 3 Bug fixes 14.2.1.6 Upgrades the Jetty web server to the latest 9.2.x version to fix a buffer bleed vulnerability. 3 Bug fixes 14.2.1.5 Fixes a bug with Push where it would create errant Vaccuum tasks while the push collection was shutting down. 3 Bug fixes 14.2.1.4 Fixes a bug where the Index Merger sometimes fails. 3 Bug fixes 14.2.1.4 Improves the logging of the indexer specifically for logging when external metadata is used as well as for storing metadata in the index. 3 Bug fixes 14.2.1.4 Improves the indexer so that metadata that is associated with internally defined metadata classes are not stored in the index unless the metadata class is defined in metamap.cfg or xml.cfg. This is closer to previous versions of the indexer. 3 Bug fixes 14.2.1.4 Improved the indexer so that values in XML elements that are not defined in xml.cfg do not default to metadata class k. 3 Bug fixes 14.2.1.3 Fixes a issue with the indexer reading external_metadata.cfg, which contained :. 3 Bug fixes 14.2.1.3 Fixes a issue with Push collections, where filtering was unable to use Tika to filter documents. 3 Bug fixes 14.2.1.3 Improves Push so that the reason commits are disabled is remembered. 3 Bug fixes 14.2.1.3 Improves Push so that the log folder for the last failed commit and merge is not deleted through log rotation. 3 Bug fixes 14.2.1.3 Adds a feature to Push so that if push.create-snapshot-on-merge-failure is set to true, Push will create a snapshot of the Push collection if a merge fails for further investigation. 3 Bug fixes 14.2.1.2 Fixes an issue with click logs processing during indexing, where a invalid click log line can cause indexing to fail. 3 Bug fixes 14.2.1.1 Allows update-configs to update a single collection, rather than the entire server. To use this new capability provide update.configs.pl with the directory for the collection (e.g. /opt/funnelback/conf/collection_name/) and the upgrade process will be run only on that directory. Note that running this will not create an empty updates statistic database in admin/reports/collection_name which might be required when upgrading from some old versions. 3 Bug fixes 14.2.0.1 Fixed an issue where the user editing interface for a user with no permitted collections would be presented with all collections selected, rather than none. 
| https://docs.squiz.net/funnelback/docs/latest/release-notes/patches/14.2/index.html | 2022-01-29T05:06:30 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.squiz.net |
Caution
Buildbot no longer supports Python 2.7 on the Buildbot master.
Caution
This page documents the latest, unreleased version of Buildbot. For documentation for released versions, see.
2.5.17.1. FailingBuildsetCanceller¶
The purpose of this service is to cancel builds once one build on a buildset fails.
This is useful for reducing use of resources in cases when there is no need to gather information from all builds of a buildset once one of them fails.
The service may be configured to be track a subset of builds.
This is controlled by the
filters parameter.
The decision on whether to cancel a build is done once a build fails.
The following parameters are supported by the
FailingBuildsetCanceller:
name
(required, a string) The name of the service. All services must have different names in Buildbot. For most use cases value like
buildset_cancellerwill work fine.
filters
(required, a list of three-element tuples) The source stamp filters that specify which builds the build canceller should track. The first element of each tuple must be a list of builder names that the filter would apply to. The second element of each tuple must be a list of builder names that will have the builders cancelled once a build fails. Alternatively, the value
Noneas the second element of the tuple specifies that all builds should be cancelled. The third element of each tuple must be an instance of
buildbot.util.SourceStampFilter. | http://docs.buildbot.net/latest/manual/configuration/services/failing_buildset_canceller.html | 2022-01-29T03:47:50 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.buildbot.net |
GetSocial iOS SDK Upgrade Guide¶
For a minor version update, please check version-specific upgrade instructions in the Changelog.
If you are integrating from scratch, please follow Getting Started with GetSocial iOS SDK guide.
Upgrade from SDK 6 to SDK 7¶
- In your Xcode project go to Project Settings → select target you want to modify → Build Phases tab.
Create new Run Script phase with the content:
"$PROJECT_DIR/getsocial-sdk7.sh --app-id=[your-getsocial-app-id] --framework-version=[latest]"
Build your project.
Configuration¶
Most of the GetSocial settings are moved to a single
getsocial.json file which could be used across all platforms. You can read more about this configuration file.
Open
getsocial.json, which should reside in the root directory of your project and generated automatically by our script.
. in the key name means it is inner json object, e.g.
pushNotifications.autoRegister in json file should be translated to:
// getsocial.json { ... "pushNotifications": { ... "autoRegister": true } }
iOS Installer script¶
framework-version,
use-ui and
app-id are still used as parameter for
getsocial-sdk7.sh.
CocoaPods¶
- Update GetSocial SDK version to
7.0.0-alpha-0.0.2in your
Podfile
- Execute
pod update
Manual Integration¶
- Download the latest GetSocial frameworks from the Downloads page.
- Overwrite your existing instances with the new version.
Methods¶
All methods that had a
CompletionCallback,
Callback or other callback mechanism now have two different parameters for callbacks:
ResultCallback and
SuccessCallback.currentUser(). This method returns
nil if SDK is not initialized. When you update user properties like avatar or display name, the object is automatically updated after operation succeeded. If you switch or reset user, this object is changed to a new one. You will receive the new object in OnCurrentUserChangedListener or you can call
GetSocial.currentUser() again and get the new instance.
All getters should be called on
GetSocial.crrent. | https://docs.getsocial.im/libraries/ios/upgrade/ | 2022-01-29T05:36:18 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.getsocial.im |
ControlType.IpAddress Field
[This documentation is for preview only, and is subject to change in later releases. Blank topics are included as placeholders.]
Namespace: Microsoft.VisualStudio.TestTools.UITesting
Assembly: Microsoft.VisualStudio.TestTools.UITest.Extension (in Microsoft.VisualStudio.TestTools.UITest.Extension.dll)
Syntax
'Declaration Public Shared ReadOnly IpAddress As ControlType 'Usage Dim value As ControlType value = ControlType.IpAddress
public static readonly ControlType IpAddress
public: static initonly ControlType^ IpAddress
public static final var IpAddress : ControlType
static val IpAddress: ControlType
.NET Framework Security
- Full trust for the immediate caller. This member cannot be used by partially trusted code. For more information, see Using Libraries from Partially Trusted Code.
See Also
Reference
Microsoft.VisualStudio.TestTools.UITesting Namespace | https://docs.microsoft.com/en-us/previous-versions/dd580186(v=vs.100) | 2022-01-29T06:09:57 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.microsoft.com |
# Understanding the Maintainability of Your Code Base
The maintainability of your code base plays an important role in keeping your system future-proof. Getting an overview and being able to assess it correctly are important preconditions for steering development and maintenance activities in a meaningful way.
# How (Not) to Measure and Assess Maintainability
Maintainability cannot be expressed in terms of a single number while still retaining valuable and actionable insights [1]. Rather, we suggest to determine maintainability by investigating and assessing different quality criteria, such as how the code is structured or how much redundancy is present in the code base. Differentiating between such quality criteria allows you to get a more thorough, actionable, and objective understanding of the underlying problems and to develop an appropriate and targeted improvement strategy.
In general, many quality metrics can be calculated on the level of individual files or classes and turned into a weighted average. Often, however, this raw average value is of little use: Are all the problems concentrated in a single file, or are they spread evenly across the system? What should be done concretely to improve the situation? We recommend always examining the distribution of a metric over the entire system. In Teamscale, you can easily get such an overview with assessment charts or treemaps.
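As a minimal, invented example of why the raw average hides exactly this information, consider two hypothetical systems with the same average method length but completely different distributions:

```python
# Two hypothetical systems, each listed as the lengths (in lines) of their methods.
system_a = [22, 25, 18, 30, 25]   # evenly structured methods
system_b = [5, 5, 5, 5, 100]      # one huge method hiding behind the same average

def average(lengths):
    return sum(lengths) / len(lengths)

def share_in_long_methods(lengths, threshold=50):
    """Fraction of all method lines that sit in methods longer than the threshold."""
    return sum(l for l in lengths if l > threshold) / sum(lengths)

for name, lengths in [("A", system_a), ("B", system_b)]:
    print(f"System {name}: average {average(lengths):.0f} lines, "
          f"{share_in_long_methods(lengths):.0%} of the code in overlong methods")
# Both systems average 24 lines per method, but in system B more than 80% of the
# code sits in a single overlong method -- the distribution reveals what the average hides.
```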
# Prerequisite: Determine the Correct Scope
To obtain meaningful results, it is important to set the right measurement scope, especially in a grown code base. In addition to the actual system code, whose maintainability you usually want to assess, a typical code base often contains other source code files, such as code written by a third party or generated code. Since you do not manually maintain those files, they should not be considered when assessing maintainability. The system code itself typically comprises application code and test code, which you might want to evaluate differently. For instance, how well exception handling is done might be of less importance in test code than it is in application code. Further, there might be "legacy" areas in the code base that are outside of the focus of current development efforts.
In order to measure and assess the maintainability of the relevant areas of your code base, you have to make sure the analysis scope is set accordingly. This includes both setting up your project in a meaningful way for correct measurement, e.g. excluding generated code, and looking at the right portion of the code base, e.g. the application code or a specific folder, when examining and assessing the results.
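A minimal sketch of how such a scope split could look programmatically; the path patterns are assumptions for the example and need to be adapted to your repository layout (in Teamscale itself, this is typically configured via the project's include/exclude settings rather than in code):

```python
import fnmatch

# Assumed path patterns; adapt them to your repository layout.
# Note that fnmatch's "*" also matches across "/" here, which is good enough for a sketch.
EXCLUDED  = ["*/generated/*", "*/third_party/*", "*.min.js"]
TEST_CODE = ["*/test/*", "*Test.java"]

def scope_of(path: str) -> str:
    """Assign a file to the measurement scope it belongs to."""
    if any(fnmatch.fnmatch(path, pattern) for pattern in EXCLUDED):
        return "excluded"          # not manually maintained, ignore for maintainability
    if any(fnmatch.fnmatch(path, pattern) for pattern in TEST_CODE):
        return "test code"         # may be assessed with relaxed criteria
    return "application code"      # the primary scope of the assessment

for path in ["src/billing/Invoice.java",
             "src/test/InvoiceTest.java",
             "build/generated/InvoiceStub.java"]:
    print(path, "->", scope_of(path))
```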
# Measurement vs. Assessment
It is important to note that there is a difference between measurement and assessment:
- Measurement denotes collecting quantitative information about certain properties of a study object.
- Assessment refers to the evaluation of the measurement results. Assessment requires an interpretation of the measured information.
Tools like Teamscale measure different aspects of maintainability, such as the length of the methods in the code base. Comparing the measured results with coding standards and commonly accepted best practices allows us to deliver a first interpretation. However, a thorough assessment typically requires a lot of contextual knowledge, such as about the history of the system, the development and maintenance process, the system's goal, its criticality, its architecture, field problems and so on. This is why assessments should be carefully crafted by experts based on the obtained measurement results.
# What Makes Maintainable Code?
A single maintainability problem, such as a single long method or a copied file, does not present a serious issue for a system. But without effective countermeasures, the quality of any system will gradually decay. Especially grown code bases often already contain a large number of quality problems. Chances are you already find yourself far from an "ideal" target.
As continuous progress is the key to successfully improving software quality, it is especially important to observe the trends of relevant quality criteria. However, you also need to understand how far you are off regarding the different indicators in order to assess the overall situation.
The quality criteria we apply in practice [2] generally estimate how frequently developers are faced with code that contains specific maintainability problems in their daily work, or, as one customer coined it, the quality of life of the developers. To this end, we measure how prevalent a specific problem is in the code base and apply a threshold for its evaluation.
The following sections explain some of the most important quality criteria for maintainability.
# Code Structure
With "Code structure" we refer to different aspects of how code is organized. Insufficient structure makes it harder to navigate the code base and to identify the relevant locations when modifying the code. Moreover, how well the code is structured affects how far a change ripples through the system and how many additional modifications it requires.
In the following we describe three important quality criteria that assess code structure in more detail: File Size, Method Length and Nesting Depth. They capture how well code is distributed across files, within a file, and within a method.
In each case, thresholds are used to classify the code into three classes corresponding to good structure (Green), improvable structure (Yellow), and poor structure (Red), as shown in the following illustration.
Note
The illustration shows typical thresholds for higher-level languages such as Java, C#, or C++. Depending on the language in your code base and your coding guidelines, you might use different thresholds. Teamscale comes with pre-configured thresholds for all supported languages, but you can modify these to fit your needs.
In practice, it is very difficult to completely avoid code with improvable or poor structure, especially in grown code bases. There will always be some structural problems, such as a long method that creates UI objects or an algorithm with some deep nesting. But these problems are not a threat to maintainability unless they get out of hand.
For this reason, rather than judging single problems, we assess how much of the code is affected by structural problems:
Code Structure: Target Best Practice
Ideally, no more than 5% of the code should be classified as red, and no more than 25% of the code should be classified as red or yellow for each of the code structure quality criteria [2]. In other words, in a healthy code base, at least three quarters of the code should be well structured and at most 5% may be structured poorly, as illustrated with the following distribution charts:
In the following we elaborate on the three quality criteria for code structure and explain why they are meaningful indicators.
# File Size
When looking for the location to perform a code change, the first step is normally to find the right file. The larger this file is, the more effort is required on average to locate the relevant code within it. Overly long files hamper program comprehension and have a negative influence on maintenance.
Teamscale measures the length of source files in lines of code (LOC) or source lines of code (SLOC), i.e. lines of code without whitespace and comments.
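A minimal sketch of the distinction for a language with `//` line comments and `/* ... */` block comments; a real implementation would use a proper lexer, for example to handle comment markers inside string literals:

```python
def loc_and_sloc(source: str):
    """Count total lines (LOC) and source lines without blanks/comments (SLOC)."""
    loc = sloc = 0
    in_block_comment = False
    for line in source.splitlines():
        loc += 1
        stripped = line.strip()
        if in_block_comment:
            in_block_comment = "*/" not in stripped
            continue
        if not stripped or stripped.startswith("//"):
            continue                                  # blank line or line comment
        if stripped.startswith("/*"):
            in_block_comment = "*/" not in stripped   # (ignores code after a closing */)
            continue
        sloc += 1
    return loc, sloc

example = """\
/* Utility class. */
class Greeter {

    // Returns a greeting.
    String greet(String name) {
        return "Hello, " + name;
    }
}
"""
print(loc_and_sloc(example))  # -> (8, 5): 8 lines overall, 5 of them actual code
```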
# Method Length
Many programming guidelines recommend limiting the length of a method to a single screen page (typically 20 to 50 lines) [3]. The reason behind this rule of thumb is that, in order to modify a method, a developer normally has to read and understand the entire method. This is easier if the entire method, or at least most of it, is visible on the screen at the same time without scrolling. Additionally, long methods require developers to comprehend long statement sequences even if only simple changes are performed. Therefore, long methods make code overall harder to comprehend [4].
Further, long methods often lead to reuse of code fragments through copy-and-paste, thus causing multiple additional problems. Beyond that, long methods are also harder to test, which is why it is not surprising that long methods are more error-prone than short methods [5].
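To make the classification and the distribution-based assessment from above concrete, here is a minimal sketch; the thresholds of 30 and 75 lines are assumptions for the example, not Teamscale's built-in values:

```python
# Method lengths in lines, as a parser might report them for one component.
method_lengths = [12, 8, 45, 23, 130, 17, 60, 9, 210, 14]

def classify(length, yellow_from=31, red_from=76):
    """Assumed thresholds: up to 30 lines green, up to 75 yellow, longer is red."""
    if length >= red_from:
        return "red"
    return "yellow" if length >= yellow_from else "green"

def distribution(lengths):
    """Share of all method lines (not of the methods themselves) per class."""
    shares = {"green": 0, "yellow": 0, "red": 0}
    for length in lengths:
        shares[classify(length)] += length
    total = sum(lengths)
    return {cls: lines / total for cls, lines in shares.items()}

dist = distribution(method_lengths)
print({cls: f"{share:.0%}" for cls, share in dist.items()})
# Target from above: at most 5% red and at most 25% red + yellow.
print("meets target:", dist["red"] <= 0.05 and dist["red"] + dist["yellow"] <= 0.25)
```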
# Nesting Depth
Most programming languages provide block or nesting mechanisms that allow developers to control the scope of certain statements.
Blocks are used in the bodies of methods and classes, but also in control flow operations such as conditional execution (e.g. `if`) or loop statements (e.g. `for`, `while`).
The level of nesting affects code understanding, as each nesting level extends the context required to understand the contained code. Therefore, code with little nesting is easier to understand than code with deep nesting [3]. The general assumption is that two nesting levels of control structures can still be understood with reasonable effort by developers (where counting starts at the method level), while from nesting level three onwards, comprehension requires disproportionately more effort.
Furthermore, deep nesting also makes testing a method more complex, as more branches of the control flow have to be covered.
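To make nesting levels concrete, here is a small Python sketch: the first variant reaches nesting level three inside the method body, while the second expresses the same logic with guard clauses and stays at level one (control structures are counted starting from the method body, as described above):

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    is_premium: bool

@dataclass
class Item:
    price: float

@dataclass
class Order:
    customer: Customer
    items: list = field(default_factory=list)

def apply_discount_nested(order):
    if order is not None:                    # nesting level 1
        if order.customer.is_premium:        # nesting level 2
            for item in order.items:         # nesting level 3 -- harder to read and test
                item.price *= 0.9

def apply_discount_flat(order):
    if order is None:                        # guard clause, nesting level 1
        return
    if not order.customer.is_premium:        # guard clause, nesting level 1
        return
    for item in order.items:                 # nesting level 1
        item.price *= 0.9

order = Order(Customer(is_premium=True), [Item(100.0)])
apply_discount_flat(order)
print(order.items[0].price)  # 90.0 (10% discount applied)
```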
# Redundancy
Most programming languages provide abstraction mechanisms that allow developers to reuse existing functionality. Nevertheless, copy-and-paste (and possibly modify) is widely used to reuse existing code. In practice, this approach often leads to a multitude of source code duplicates—so-called "clones"—which are typically very similar on a syntactic level and hamper the maintenance and evolution of a system in various ways.
# Why Redundancy Impedes Maintainability
Clones unnecessarily increase the amount of code and hence the effort needed for program comprehension and quality assurance activities [6, 7]. Any change, including a bug fix, that affects a clone usually has to be propagated to all of its copies. Localizing and modifying these copies creates significant overhead.
Besides the higher effort for maintenance and testing, duplicated code is also more error-prone, because copies that are out of sight are easily missed when changing code that has been cloned. Often, developers do not even know that a piece of code has been copied elsewhere. In practice, this can lead to inconsistencies resulting in non-uniform behavior or errors [8].
# How we Measure Redundancy
Teamscale uses clone detection to analyze the redundancy in your system and find duplicated code. To get significant results, by default only clones with at least 10 common consecutive statements are flagged.
To quantify the overall extent of redundancy, Teamscale calculates the clone coverage metric, which measures the fraction of statements in a system that are part of at least one copy. Hence, clone coverage can be interpreted as the probability that a randomly selected statement in your code base has at least one copy.
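A minimal sketch of how clone coverage could be computed once clones have been detected; the actual clone detection, i.e. finding the duplicated regions in the first place, is the hard part and is left out here, and the numbers are invented:

```python
# Assume a clone detector has already reported clones as (first_stmt, last_stmt)
# ranges over statement indices; every listed range is part of at least one clone class.
clone_ranges = {
    "Billing.java":  [(10, 29), (120, 139)],   # two copies of a 20-statement fragment
    "Invoice.java":  [(55, 74)],               # a third copy in another file
    "Report.java":   [],                       # no clones found here
}
statements_per_file = {"Billing.java": 400, "Invoice.java": 250, "Report.java": 150}

def clone_coverage(clone_ranges, statements_per_file):
    """Fraction of statements that are covered by at least one clone."""
    cloned = 0
    for file, ranges in clone_ranges.items():
        covered = set()
        for first, last in ranges:
            covered.update(range(first, last + 1))   # overlapping ranges counted once
        cloned += len(covered)
    return cloned / sum(statements_per_file.values())

print(f"{clone_coverage(clone_ranges, statements_per_file):.1%}")  # 7.5%
```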
Similar to structural problems, it is very difficult to completely avoid redundancy, especially in grown code bases. Therefore, rather than assessing single clones, we assess how much of the code is affected by duplication:
Redundancy: Target Best Practice
Ideally, the clone coverage of your code base should not exceed 5% [2]. Based on our experience, if clone coverage is between 5% and 15%, developers are regularly confronted with clones. As a rule of thumb, a clone coverage value of over 15% indicates that developers have to deal with redundancy on a daily basis.
# Other Relevant Criteria
Code structure and redundancy are two very important quality criteria for assessing the maintainability of your code base. However, for a comprehensive examination of maintainability we recommend to look into the following criteria as well.
# Commenting
In addition to the actual program statements, source code typically also contains comments. Their purpose is to document the system and help fellow developers to understand the code and the rationale behind a specific solution. High quality comments are widely considered to be crucial for program comprehension and thus for the maintainability of a system [9]. Nonetheless, in practice, code is often undocumented due to time pressure during development.
Consequently, two criteria for code comments should be investigated: Comment Completeness, i.e. are the relevant portions of the code documented, and Comment Quality, i.e. are the existing comments concise, coherent, consistent, and useful? Teamscale provides checks for both criteria.
# Naming
About 70% of source code in software systems consist of identifiers [10]. Identifiers provide names for important source code entities, such as classes, enums, interfaces, methods, attributes, parameters, and local variables. Consequently, meaningful, correct identifiers are crucial for reading and understanding your source code [10].
When investigating identifier names, two aspects should be examined:
- Are the Naming Conventions from your coding guidelines being followed consistently throughout the system?
- Is the Naming Quality acceptable, i.e. are identifiers easy to read, expressive, and suitable for the domain?
Teamscale can check naming conventions pretty extensively, while assessing the quality of identifier names usually requires manual inspection.
# Code Anomalies
The term code anomalies broadly refers to violations of commonly accepted best practices.
In many cases, such best practices are programming language specific—consider for instance PEP-8 (opens new window) for Python.
Nevertheless, they often cover similar best practices regarding maintainability, such as commented out code, unused code, task tags, missing braces, empty blocks, usage of
goto, formatting and so on.
The following Teamscale screenshot shows an unused variable finding, one of the typical code anomalies often found in Java code:
When investigating the maintainability of your code base, make sure to examine the number of code anomalies per thousand lines of code, i.e. the Findings Density. This gives you a measure for how often developers are confronted and hindered by code anomalies. Choose an appropriate threshold based on the number of best practices you are checking.
In Teamscale, you can use a Quality Indicator when configuring the analysis profile to specify which findings should be counted as code anomalies. Make sure to enable calculating Findings Density, when configuring the Quality Indicator.
# Exception Handling
Many programming languages provide exception handling to deal with deviations from the normal program execution.
Possible deviations range from programming mistakes, (e.g. dereferencing a
NULL pointer), over limitations of the execution environment, which in turn can hint at a programming mistake (e.g. stack overflow, running out of heap memory), to environment limitations out of the program’s control (e.g. write errors due to full disc, timed-out network connections).
In addition, exceptions are often used to indicate an erroneous result of a function call, such as parsing errors or invalid user input.
Improper exception handling can harm the maintainability of your code base in several ways. Consequently, when investigating the maintainability of your code base, you should also analyze if exceptions are handled adequately. Make sure to at least check the following handful of guidelines:
- If generic exceptions are thrown generously, it gets difficult to handle deviations selectively, which often leads to more complicated control flow. As a rule of thumb, only the most specific exception possible should be thrown and caught.
- It is good practice to introduce specific custom exceptions for your system, if this is doable in your programming language. Custom exceptions represent specific problems and allow to handle domain specific deviations from the normal execution selectively. If possible, custom exceptions should be grouped into a suitable exception hierarchy.
- Empty catch blocks should be avoided at all costs. Ignored exceptions do not influence the control flow, are not presented to the user, and—most importantly— are not logged. As a consequence, empty catch blocks might actually hide erroneous states.
- Loosing stack traces vastly complicates finding problems. Hence, the stack trace should always be kept, even when throwing other exceptions. Simply printing the stack trace is usually not enough. At the very least, the stack trace should be logged with the system's logging mechanism. When throwing from a catch block, the original exception should be wrapped, if possible.
- Instead of catching
NULLpointer exceptions, developers should check whether a specific object is
NULL.
Teamscale provides checks to analyze exception handling for several programming languages, including Java. The following screenshot shows a finding for a lost stack trace:
# Maintainability Overview in Teamscale
The quality criteria described above play a central role in Teamscale. You can access them in various places to get an overview of the maintainability of your code base:
- The Dashboard perspective allows you to configure widgets that visualize any of the abovementioned criteria, including assessment charts and treemaps. You can configure thresholds for the different metrics and observe trends over time.
- The Metrics perspective shows an overview of all metrics configured for your project, including the maintainability metrics described above:
It allows you to drill down into each metric for every folder in your code base. It also provides a quick way to view a treemap visualizing the distribution of the metric across the files in the code base as well as the history how the metric developed over time.
# Further Reading:
- Demystifying Maintainability
M. Broy, F. Deissenboeck, M. Pizka, In: Proceedings of the Workshop on Software Quality (WOSQ’06), 2006
-
- Clean Code: A Handbook of Agile Software Craftsmanship
R. C. Martin, In: Robert C. Martin Series. Prentice Hall, Upper Saddle River, NJ, Aug. 2008
- Measuring program comprehension: A large-scale field study with professionals
X. Xia, L. Bao, D. Lo, Z. Xing, A. E. Hassan, S. Li, In: IEEE Transactions on Software Engineering (TSE). 2017
- Mining Metrics to Predict Component Failures
N. Nagappan, T. Ball, A. Zeller, In: Proceedings of the 28th International Conference on Software Engineering (ICSE). 2006
- Survey of research on software clones
R. Koschke, In: Proceedings of the Dagstuhl Seminar on Duplication, Redundancy, and Similarity in Software. 2007
- A survey on software clone detection research
C. K. Roy, J. R. Cordy, In: Technical report, Queen’s University, Canada. 2007
- Do code clones matter?
E. Juergens, F. Deissenboeck, B. Hummel, S. Wagner, In: Proceedings of the 31st International Conference on Software Engineering (ICSE). 2009
- Quality Analysis of Source Code Comments
D. Steidl, B. Hummel, E. Juergens, In: Proceedings of the 21st IEEE Internation Conference on Program Comprehension (ICPC’13). 2013
- Concise and Consistent Naming
F. Deissenboeck, M. Pizka, In: Proceedings of the International Workshop on Program Comprehension (IWPC’05). 2005 | https://docs.teamscale.com/introduction/understanding-the-maintainability-of-your-code-base/ | 2022-01-29T04:13:14 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['/assets/img/structure_quality_criteria.fb99282f.png',
'Code Structure: Quality Criteria'], dtype=object)
array(['/assets/img/structure_distribution.4581101c.png',
'Code Structure: Target'], dtype=object)
array(['/assets/img/redundancy.eb8544a8.png',
'Redundancy: Quality Criteria'], dtype=object)
array(['/assets/img/unused_code.d53a5d81.png', 'Unused code'],
dtype=object)
array(['/assets/img/lost_stacktrace.80b7ed5e.png', 'Lost stack trace'],
dtype=object) ] | docs.teamscale.com |
Network Tunnel Configuration
You can establish an IPsec (Internet Protocol Security) IKEv2 (Internet Key Exchange, version 2) tunnel from a network device to Umbrella. IPsec tunnels created for the cloud-delivered firewall (CDFW) automatically forward HTTP/HTTPS traffic on ports 80 and 443 to the Umbrella secure web gateway (SWG). You can use IPsec tunnels to deploy the secure web gateway even if you choose not to use the IP, port, and protocol controls in the cloud-delivered firewall.
.
Table of Contents
Prerequisites
- Umbrella SIG data center (DC) public IP address, to which the tunnel will connect. For the latest Umbrella SIG DC locations and their IPs, see Connect to Cisco Umbrella Through Tunnel.
- An Umbrella organization ID. For more information, see Find Your Organization ID.
-.
- Allow ports on any upstream device: UDP ports 500 and 4500.
Note: Organizations have a default limit of 50 network tunnels. To increase this limit, contact support or your account manager.
Establish a Tunnel
With the certificate or passphrase credentials generated in the Umbrella portal, establish an IPsec IKEv2 tunnel to the Umbrella head-end
<umbrella_dc_ip> (
<umbrella_dc_ip> represents the public IP address in sample commands). Umbrella recommends setting your MTU size to 1350 to optimize performance.
Throughput and Multiple Tunnels
Each tunnel is limited to approximately 250 Mbps. To achieve higher throughput, you can establish multiple tunnels. If you set up multiple tunnels, we recommend that you divide the traffic between the tunnels either through load balancing with ECMP (Equal-cost multi-path routing) or assigning traffic through policy-based routing. For information about ECMP, see RFC 2991.
Network Tunnel Identities
A unique set of Network Tunnel credentials must be used for each IPsec tunnel. Two IPsec tunnels cannot connect to the same datacenter with the same credentials. Using unique credentials for every tunnel prevents inadvertent outages should one tunnel get rerouted to a nearby datacenter through anycast failover.
Network Tunnel and Secure Web Gateway
For web traffic sent through the Network Tunnel to the secure web gateway (SWG), we do not require that you exclude certain destinations.
If you choose to exclude traffic through the Network Tunnel, follow these general guidelines:
- You can not exclude a destination for the IPsec tunnel in the Umbrella dashboard. Instead, exclude a destination on the network device which establishes the IPsec tunnel to Umbrella.
- You must not exclude traffic to
146.112.255.200(
gateway.id.swg.umbrella.com) in the Network Tunnel. You must route SAML traffic through the same path as the secure web gateway traffic.
Traffic sent through the IPsec tunnel to the secure web gateway functions in two modes: Transparent and Explicit.
Transparent Mode
The secure web gateway (SWG) transparent mode is the default mode. Umbrella transparently filters web traffic that crosses the IPSec tunnel.
Explicit Mode
- In explicit mode, Umbrella does not require configuration changes to send traffic through the IPsec tunnel to the secure web gateway.
- If you use a PAC file, you must host a copy of the PAC file downloaded from Umbrella on an internal web server. You cannot use the secure web gateway in explicit mode with Umbrella's hosted PAC file.
- If you exclude the secure web gateway ingress destination range (
155.190.0.0/16) from the IPsec tunnel, you can choose not to send web traffic through the IPsec tunnel. As a result, traffic sent to the secure web gateway is not affected by the bandwidth of the IPsec tunnel.
Configure the IPsec tunnel to exclude secure web gateway traffic
- On the network device, exclude the IP address range
155.190.0.0/16to the IPsec tunnel.
- You must control web traffic with a PAC file, proxy chaining, or AnyConnect secure web gateway (SWG) security module.
- If you configure web traffic with a PAC file, you must not bypass
gateway.id.swg.umbrella.comin the PAC file. Traffic configured with a PAC file must follow the same route as the secure web gateway traffic.
- Umbrella only supports proxy chain traffic for Network deployments. You should not send proxy chain traffic through IPsec tunnels as features such as XFF are not supported.
Note: Umbrella sends SAML authentication requests to the
gateway.id.swg.umbrella.com domain through the secure web gateway using a PAC file or an on-premise proxy chaining configuration. For more information, see Manage Umbrella's PAC File and Manage Proxy Chaining.
Configuration Guides
We provide configuration guides for various network devices. For devices in which the setup is not documented, we cannot guarantee that the device can establish an IPsec tunnel to Umbrella.
Supported IPSec Parameters < Network Tunnel Configuration > Manual: vEdge
Updated 18 days ago | https://docs.umbrella.com/umbrella-user-guide/docs/tunnels | 2022-01-29T04:56:30 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['https://files.readme.io/1fc5e72-cdfw.png', 'cdfw.png'],
dtype=object)
array(['https://files.readme.io/1fc5e72-cdfw.png', 'Click to close...'],
dtype=object) ] | docs.umbrella.com |
Translations¶
Indico comes with a number of languages by default. In release 2.3, those are: English (default), French, Portuguese, Spanish and Chinese (in the order of integration). Additional languages are being prepared on the Transifex platform.
In order to use (partially) existing translations from Transifex or to contribute translations, you need to register with the Indico project on the Transifex platform.
Additional Translations¶
This is a guide to set up an Indico instance with a new language. It is useful for translators to verify how the translation looks in production or for administrators who just want to lurk at the incubated translation embryos.
Alternatively, you may use this guide to expose a translation we do not officially support, in your production version.
1. Setup an Indico dev environment¶
This should usually be done on your own computer or a virtual machine.
For creating your own Indico instance, we provide two different guides: The first one is for a production system, it will prepare Indico to be served to users and used in all the different purposes you may have besides translations. The second is development a light-weight, easier to set up, version oriented to testing purposes, that should not be exposed to the public.
For the purpose of translation development or testing we recommend using the development version.
2. Install the transifex client¶
Follow the instructions on the transifex site.
3. Get an API token¶
Go to your transifex settings and generate an API token.
Afterwards, you should run the command
tx init --skipsetup.
It will request the token you just copied from the previous settings and save it locally so you can start
using transifex locally.
If you do not know how to run this command, please refer to the
transifex client guide.
4. Install the translations¶
Navigate to
~/dev/indico/src (assuming you used the standard locations from the dev setup guide).
Run
tx pull -f -l <language_code>.
Languages codes can be obtained here.
For example, Chinese (China) is
zh_CN.GB2312.
5. Compile translations and run Indico¶
Run the commands
indico i18n compile-catalog
and
indico i18n compile-catalog-react
and:
- launch Indico, or
- build and deploy your own version of Indico, if you wish to deploy the translation in a production version.
The language should now show up as an option in the top right corner.
In case you modified the
.js resources, you also need to delete the cached
files in
~/dev/indico/data/cache/assets_i18n_*.js.
Why isn’t Indico loading my language?¶
Some languages in transifex use codes that Indico is not able to recognize.
One example is the Chinese’s
zh_CN.GB2312.
The easy fix for this is to rename the folder
zh_CN.GB2312 (inside
indico/translations/) to the extended locale code
zh_Hant_TW.
Unfortunately, there is no list with mappings for all the languages.
So if by any reason it doesn’t work for you, feel free to ask us.
Contributing¶
As a translator, you should have a good knowledge of the Indico functions (from the user side at least). Then you can subscribe to the abovementioned Transifex site for Indico and request membership of one of the translation teams. You should also contact the coordinators; some languages have specific coordinators assigned. They may point you to places, where work is needed and which rules have been agreed for the translations.
The glossary is usually of big help to obtain a uniform translation of all technical terms. Use it!
As a programmer or developer, you will have to be aware of the needs and difficulties of translation work. A Wiki page for Internationalisation is available from github (slightly outdated and we should eventually move it to this documentation). It describes the interface between translating and programming and some conventions to be followed. Everyone involved in translating or programming Indico should have read it before starting the work.
Whenever translaters spot difficult code (forgotten pluralization, typos), they should do their best to avoid double (or rather: multiple) work to their fellow translators. What is a problem for their translation, usually will be a problem for all translations. Don’t hesitate to open an issue or pull request on GitHub. Repair first, then translate (and be aware that after repair, the translation has to be made again for all languages).
Note
The codebase also contains legacy code, which may not follow all rules.
File Organisation¶
The relationship between
- transifex resources names (core.js, core.py, core.react.js)
- PO file names (messages-js.po, messages.po, messages-react.po) and
- the actual place, where the strings are found
is not always obvious. Starting with the resource names, the files ending in
.pyrefer to translations used with python and jinja templates,
.jsrefer to translations used with generic or legacy javascript,
react.jsrefer to translations used with the new react-based javascript.
These contain a relationship to PO files, as defined in the following example extracted
from
src/.tx/config.
[indico.<transifex resource slug>] file_filter = indico/translations/<lang>/LC_MESSAGES/<PO file name>.po source_file = indico/translations/<source file name>.pot source_lang = en type = PO
Note
The transifex resource slug is a name-like alias that identifies a particular file.
For more information regarding this subject a thread has started here. | https://docs.getindico.io/en/3.1.x/installation/translations/ | 2022-01-29T04:07:15 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.getindico.io |
Title
Collaborative knowledge management practices: Theoretical development and empirical analysis
Document Type
Article
Publication Title
International Journal of Operations and Production Management
Publication Date
3-1-2012
Abstract
Purpose: The purpose of this paper is to develop and empirically test a framework analyzing the relationship of collaborative knowledge management practices (CKMP) with supply chain integration and supply chain knowledge quality. Design/methodology/approach: The design of the study is based on a survey of 411 firms from eight manufacturing industries that are actively involved in inter-firm knowledge management practices with supply chain partners. First a measurement instrument for CKMP was statistically validated with confirmatory factor analysis. Then the structural equation modeling (SEM) path analysis was used to assess the structural relationship of CKMP with supply chain knowledge quality and supply chain integration. Findings: The study found that engagement in CKMP can lead to better integration between supply chain partners and increased organizational knowledge quality. Research limitations/implications: The study was conducted at the firm level for activities involving inter-firm knowledge sharing. Some measurement inaccuracy might be generated with a single respondent from each organization answering questions about both supply chain management issues and knowledge management-related issues. Practical implications: By identifying collaborative knowledge generation, storage, access, dissemination and application as the major components of CKMP, this study advises organizations on how to collaborate with partner firms on sharing supply chain knowledge. CKMP's positive relationship with knowledge quality and supply chain integration provides organizations with practice-related motivation for engaging in collaborative knowledge management and alerts them to the possibility of other potential benefits from it. Originality/value: As one of the first large-scale empirical efforts to systematically investigate collaborative knowledge management processes in a supply chain management context, this paper can be used as basis for enhanced homological understanding of this domain, by exploring antecedents and consequences of collaborative knowledge management. © Emerald Group Publishing Limited.
Volume
32
Issue
4
First Page
398
Last Page
422
DOI
10.1108/01443571211223077
Recommended Citation
Li, Y., Tarafdar, M., & Rao, S. (2012). Collaborative knowledge management practices: Theoretical development and empirical analysis. International Journal of Operations and Production Management, 32 (4), 398-422.
ISSN
01443577 | https://docs.rwu.edu/gsb_fp/107/ | 2022-01-29T04:44:45 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.rwu.edu |
Splunk MINT is no longer available for purchase as of January 29, 2021. Customers who have already been paying to ingest and process MINT data in Splunk Enterprise will continue to receive support until December 31, 2021, which is End of Life for all MINT products: App, Web Service (Management Console), SDK and Add-On.
Fixed Issues for Splunk MINT SDK for iOS 5.2.7
The following issues are fixed in the Splunk MINT SDK for iOS version 5.2.7.
Last modified on 23 January, 2020
This documentation applies to the following versions of Splunk MINT™ SDK for iOS (Legacy): 5.2.x
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/MintIOSSDK/latest/DevGuide/FixedIssues | 2022-01-29T05:25:25 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Moving Documents Between Buckets
If you have the proper permission from your XDOC Administrator, there are two ways to move documents between buckets:
1. Drag Method - Highlight the document(s) and drag them to another bucket. The destination bucket will be highlighted in yellow. Use the Ctrl+click and Shift+click functions to highlight multiple documents.
2. Toolbar Method - Highlight the document(s) in question then click the Bucket selector drop down icon on the Document List toolbar. Then choose destination bucket in question.
| https://docs.xdocm.com/6104/user/the-xdoc-document-viewer/the-document-list-and-doc-list-toolbar/moving-documents-between-buckets | 2022-01-29T05:11:24 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['moving-documents-between-buckets/drag-method.png', 'Drag Method'],
dtype=object)
array(['moving-documents-between-buckets/toolbar-method.png',
'Toolbar Method'], dtype=object) ] | docs.xdocm.com |
Install OpenCAD on DirectAdmin
This guide assumes you are using a paid hosting service and have configured your domain and web hosting appropriately.
- Download OpenCAD
- Upload files to Website File Manager
- Configure Database and Users
- Run Installation Wizard
Proceed to the Using the OpenCAD Installer guide if the web host environment is prepared.
Last update: 2021-12-15 | https://opencad-docs.readthedocs.io/en/latest/install/directadmin-welcome/ | 2022-01-29T05:21:14 | CC-MAIN-2022-05 | 1642320299927.25 | [] | opencad-docs.readthedocs.io |
check_point.mgmt.cp_mgmt_show_tasks – Retrieve all tasks and show their progress and details._show_tasks.
New in version 2.9: of check_point.mgmt
Synopsis
Retrieve all tasks and show their progress and details.
All operations are performed over Web Services API.
Examples
- name: show-tasks cp_mgmt_show_tasks: from_date: '2018-05-23T08:00:00' initiator: admin1 status: successful
Return Values
Common return values are documented here, the following are the fields unique to this module: | https://docs.ansible.com/ansible/latest/collections/check_point/mgmt/cp_mgmt_show_tasks_module.html | 2022-01-29T03:39:48 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.ansible.com |
e-commerce Select Forwarders page
When you select "Forward Data" from the "Add Data" page, the following page appears.
You can define server classes and add forwarders to those classes. Server classes are logical groupings of hosts based on things such as architecture or host name.
This page only displays forwarders that you configured to forward data and act as deployment clients to this instance. If you have not configured any forwarders, the page warns you of this.
- In Select Server Class, click one of the options.
- New to create a new server class, or if an existing server class does not match the group of forwarders that you want to configure an input for.
- Existing to use an existing server class.
- In the Available host(s) pane, choose the forward.
- (Optional) You can add all of the hosts by clicking the add all link, or remove all hosts by selecting the remove all link.
- If you chose New in "Select server class", enter a unique name for the server class that you will remember. Otherwise, select the server class you want from the drop-down list.
- Click Next. The "Select Source" page shows source types that are valid for the forwarders that you selected.
- Select the data sources that you want the forwarders to send data to this instance.
- Click Next to proceed to the Set Sourcetype page.
Next! | https://docs.splunk.com/Documentation/Splunk/7.0.5/Data/Forwarddata | 2022-01-29T05:12:57 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Funnelback 8.0.2 release notes
Released: 2nd April 2009
Upgrade issues
Any existing tagging databases will need to be exported, and then reimported into the new database format.
New CSS files will be copied to search.css.dist, instead of to search.css. Changes can be merged manually if desired.
New Features
Allow throttling of filecopy requests (to reduce load on the target server).
Allow ignoring of text copy protection on gathered PDF files (off by default).
Selected bugfixes and improvements
Include updated Davisor text filtering framework (Publishor 5.2)
Thesaurus tag expansion feature fixed.
Use lower priority for text extraction on Windows to reduce load on search server.
Filecopy user authentication was moved to the core filecopy settings.
Documentation improvements.
Fix filecopy gathering of documents with uppercase extensions.
Fix fluster "by site" listing for filecopy collections.
Fix XML parsing error with cached pages for docx files.
Ignore Microsoft Office temporary files when filecopy gathering.
Report accurate document sizes for gathered binary documents in filecopy collections.
Fix click feedback for filecopy collections.
More useful result summaries for filecopy collections.
Neaten the display of filecopy URLs.
Neaten the display of filecopy titles.
Search and tagging look and feel updates.
Improve summaries for docx documents.
Do not display spelling suggestions for expanded queries.
Fix warning output by updating shipped Regexp::Common.
Fix reporting bug where data displayed was for a particular month only.
Remove blank lines from CSV exported query reports.
Prevent htpasswd.pl creating multiple accounts with the same name.
Fix tag feedback for filecopy collections. | https://docs.squiz.net/funnelback/docs/latest/release-notes/8.0.2.html | 2022-01-29T04:02:07 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.squiz.net |
Setting up Vyne locally
Getting Vyne running locally is simple, whether you're doing your first API integration, or building streaming data solutions
There are a few ways to get Vyne working on your local machine and which you choose depends on what you're interested in.
This guide walks you through common use cases for running Vyne, and getting things going locally. For all our examples, you'll need Docker installed - so i fyou haven't done that already, head over to the Docker Docs and get docker set up.
From here, getting started depends on what you'd like to achieve. We have guides to help you get started locally for the following:
- Exploring Vyne's automated API integrations
- Building a standalone taxonomy
- Building streaming data solutions
- Ingesting and querying flat-based data (CSV, Json, Xml, etc)
Automating API integration
Getting Vyne deployed locally to automate API integration is simple:
docker run -p 9022:9022 --env PROFILE=embedded-discovery,inmemory-query-history,eureka-schema vyneco/vyne
This will launch an instance of Vyne running on your local machine - just head over to to see an empty instance of Vyne waiting for you.
This gives you a local developer envirnoment for Vyne, which is everything you need to start integrating APIs.
The next step is to get Microservices to start and publish their schemas to Vyne. We walk through this in detail in our Hello World tutorial - now's a good time to head over there.
Building a standalone Taxonomy
A common usecase for Vyne is to build and share an enterprise taxonomy, as part of a data governance strategy.
Taxi provides an excellent way to build a shared taxonomy, and Vyne is a fantastic way to view and explore it.
To get started, you can use our Docker Compose file for this configuration, which is available here.
Download the docker compose file locally, then simply run:
docker-compose up -d
Wait a bit, then head to.
We have a dedicated guide to building and publishing a taxonomy, which you can follow along with here.
Ingesting and querying file based data
Vyne doesn't just work with API's - we make any data discoverable and queryable.
For flat-file data that isn't served by an API (such as CSV data, or just JSON / XML files), we provide Casks - a way of storing file data, applying a taxonomy, and then querying / enriching with Vyne.
To get started, you can use our Docker Compose file for this configuration, which is available here.
Download the docker compose file locally, then simply run:
docker-compose up -d
Wait a bit, then head to.
We have a dedicated guide to storing and querying flat-file data, which you can follow along with here.
Building streaming data solutions
With Vyne you can transform and enrich streaming data using our pipelines, either publishing data to another topic, or storing it in a cask to query later.
[TODO - Provide docker compose file]
Sharing API schemas on your local machine
Vyne needs services to provide API schemas (Taxi) so that it can understand the data they provide, and the APIs that they publish.
There are different ways to configure services and Vyne to share schema data - Distributed or Centralised. - which we discuss in detail in Publishing & sharing schemas.
Using HTTP polling to share API schemas
The examples so far have leveraged a centralised, polling mechanism, with the Vyne stack running inside docker containers, and developer services (the one's you build) running locally.
This is the easiest way to get started, and it's how Vyne ships out-of-the-box. However, the drawback of this approach is that as services start and stop, there can be a lag for everything to sync up, which can be frustrating.
Using TCP multicasting to share API schemas
An alternative approach is to use TCP multicasting to share services. This makes updates instant between services as they start and stop, but requires a little extra config.
Getting Multicast running
First steps will be to update our local host IP. This step is required because we run Vyne in Docker. To update your IP:
- edit the .env file and set DOCKER_HOST_IP_=<_your_local_host_ip>
- edit hazelcast.yml and replace member-list: _<_your_local_host_ip> with your local IP
Once this is done navigate into your project directory and run:
docker-compose up
This will start the Vyne stack.
To view our service running in Vyne, navigate to. Here we can see if our service has successfully published schema types and operations to Vyne.
To stop the stack, run:
docker-compose down /
ctrl+c | https://docs.vyne.co/developers-guide/setting-up-vyne-locally/ | 2022-01-29T04:34:22 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['/assets/documentation-images-4-.png', None], dtype=object)
array(['/assets/documentation-images-6-.png', None], dtype=object)
array(['/assets/documentation-images-7-.png', None], dtype=object)
array(['/assets/documentation-images-9-.png', None], dtype=object)
array(['/assets/documentation-images-10-.png',
'Using HTTP Polling to distribute schema changes locally'],
dtype=object)
array(['/assets/documentation-images-11-.png', None], dtype=object)] | docs.vyne.co |
- Citrix Offline Plug-in Overview
- Specifying Trusted Servers for Streamed Services and Profiles
- Using the Merchandising Server and Citrix Receiver Updater to Deploy the Plug-ins
- To install the Offline Plug-in
- To deliver the AppHubWhiteList to user devices
- To configure the cache size of the Offline Plug-in
- To deploy the Offline Plug-in using the command-line
- To configure an .MSI package for the Offline Plug-in using transforms
- To deploy the Offline Plug-in to user devices through Active Directory
- To deploy applications to user devices
- To clear the streamed application cache on user devices
- To clear merged rules for linked profiles on user devices | https://docs.citrix.com/ko-kr/app-streaming/6-7/ps-stream-plugin-67-landing-page/ps-stream-plugin-radecache.html | 2018-06-18T05:38:32 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.citrix.com |
DataWeave Tutorial
Enterprise
This document walks you step by step through some basic transformations you can easily do with the Transform Message element in Anypoint Studio.
For more advanced examples written directly in DataWeave code, see DataWeave Examples.
Renaming Fields and Calculating Expressions
In this example, the data you get as an input is not only in a different format from what you need, you also need to pick out only a couple of the values in it, rename them and even infer some values from others.
Suppose that you want to send a SOAP request to a web service that places an order for a t-shirt, based on information you get from a sales tracking software you use (which outputs data in a different structure). The external service needs a JSON input with values for the following set of keys:
size
name
address1
address2
city
stateOrProvince
country
In this example, suppose that the output that you get from your sales tracking software follows the structure of this sample JSON file:
You then need to select only a few of these fields, rename some, and assume a value for "size" (as it’s not provided by the input).
Download the above example as a file.
Drag an HTTP Connector into a new flow, select the connector configuration that you created for the previous example. Set the path of the connector to
ex1
Select the Metadata tab in the HTTP Connector, then click Add Metadata. Select the Output:Payload, then click the edit icon next to it.
Select the Create new type radio button at the top, pick the type JSON, assign it any ID you wish, and browse for the example JSON file you just downloaded above.
Note that now when the HTTP Connector is selected in your canvas, the Output tab of the metadata explorer should show the fields that will be present in the outgoing payload.
Drag a Web Service Consumer to your flow. Create a global configuration for it by clicking the green plus sign in its properties editor. In the WSDL Location field, paste the following URL:. Note that all of the other fields in the configuration element are completed automatically when you do this. Then click Ok.
Back in the Web Service Consumer’s properties editor, select OrderTshirt in the Operation field. Note that while the Web Service Consumer element is selected in your canvas, the metadata explorer should now show the set of fields that are expected in the payload as inputs.
Drag a Transform Message element in between the HTTP connector and the Web Service Consumer. You will see that its properties editor is divided into two sections: on the left the graphical UI – which includes representations of the input, output and mappings between them – on the right the output, and on the right, the actual DataWeave code that this produces.
Note that, as there’s metadata available about both the input and expected output of this component, you can see a tree view of each, with all of the fields involved.
Right-click on the input section and select Edit Sample Data, a new tab opens up that shows a sample input built from the JSON example you provided the HTTP Connector:
This sample input will be used to build a sample output, however, as the body of your DataWeave code doesn’t refer to any elements of the input yet, you still won’t see anything changing in the output if you edit it now.
Use the GUI to create the actual mapping between the input and output fields. Simply click and drag a field in the input to a field in the output. Match all of the names that are identical, as well as those that are similar, such as
state&
stateOrProvinceor
nationality&
country.
Notice how each of these actions you performed creates a line in the DataWeave code. By now your DataWeave code should look like this:
As the input doesn’t provide a value for
sizeor for
address2, you can provide a literal expression for these. Double click on the
sizeand on the
address2fields in the output, note how this creates a line for each in your DataWeave code that loads them with the fixed value
null. Edit the DataWeave code directly to assign the value "M" to
size, leave
address2as null.
We can make this a little more interesting by changing the literal expression that populates "size" into a conditional expression. See in the code below how the line that defines "size" has changed, it sets it to "M" unless the buyer’s state is Texas, then it makes the shirt "XXL".
Click the
Previewbutton on the top right corner of the editor. This will open a section that displays a preview of your output data based on the sample data you provide in the input. Note that right now most of the values will simply have the
????placeholder.
Select the
payloadtab in your input section and replace the
????placeholders in the relevant fields with test values. → Mule Application.
Using a tool like Postman (chrome extension), send an HTTP POST request to with a JSON body like the one below:
You should get a response with an XML body that has a single value, this is the order ID for the shirt order you just placed.
Mule XML Code:
DataWeave Code:
Rearranging your Input
In this example, you obtain an input with several entries, and you want to regroup those entries into different categories based on the values found in one of the fields. Here you take contacts stored on a Salesforce account, and regroup them according to their role. If you don’t have a Salesforce account to carry out all of the steps here, note that there is a workaround for loading that same metadata manually into Studio.
Drag an HTTP Connector into a new flow, select the connector configuration that you created for the previous example. Set the path of the connector to
ex2
Drag a Salesforce Connector into your flow, after the HTTP Connector. Create a global configuration for it by clicking the green plus sign in its properties editor. If you own a Salesforce account, fill in your Salesforce Username, password and security token (which you should be able to find in the email you got from Salesforce when you first registered). Click Test Connection to make sure your credentials are accepted, then click ok.
Back in the properties editor of the Salesforce connector, select the operation Query. In the Query Text field below, write the following simple query:
This will retrieve every one of the contacts linked to your Salesforce account, each of them with four fields of data. Notice how now – when the Salesfoce connector is selected in your canvas – the metadata explorer’s Out tab shows that the output payload contains a list of contacts, each with these four fields. If it doesn’t, you may need to click the Refresh metadata button under the metada explorer.
Add a Transform Message element to your flow after the Salesforce connector, and open its properties editor.
In the input section of the editor, right-click and select Edit Sample Data, a new tab opens up that shows a sample input with placeholders for its fields. As the type of the input is a POJO, the object is displayed as described through a DataWeave transform:
This sample gives you a clear reference of how the incoming data is structured and how you can reference each value. This sample input is also used to produce a sample output in the output section. Flesh it out to make it into more helpful data, for example paste this in its place:
As in the previous case, in the input section you can see a tree that describes the data structure. As there’s no metadata about the desired output, there isn’t anything specified in the output section. In this example we will build the DataWeave code manually, as what we need to do requires more advanced features than what the UI can provide. In the DataWeave code, change the output directive from the default
application/javato
application/json.
In the transform section, Write the following DataWeave code:
The output you’re creating is an object. When objects have a single element, there’s no need to wrap it in curly brackets, as is necessary when it has multiple elements. Through this you’re creating a top level object with a single element in it named "roles" which in turn holds an object that contains everything else. Its contents are gouped by the "$.Title" field, which is an expression evaluated in the context of every contact in the input array.
Open the
Previewsection of the editor to see the produced output. It should display this:
Each different available value for "title" will have a corresponding element inside the "roles" object, each holding an array of objects with every contact that matches that value for title.
Save your Mule project and Deploy it to Studio’s virtual server to try it out by right-clicking on the project and selecting
Run As → Mule Application.
Using any browser you want, make a request to. You should get a response with an JSON body that contains a top level object, and inside it the object "roles" that has each different title as an element, each of these containing an array of objects with each contact in your Salesforce Account that matches its title.
Mule XML Code:
DataWeave Code:
One to one JSON to XML Conversion
Suppose you want to transform any JSON payload to XML, retaining the original data structure regardless of what attributes and nested objects or arrays the input might contain.
To achieve this, follow the steps below:
Drag an HTTP Connector into a new flow, create a new global element for it by clicking the green plus sign in its properties editor. Set its host to
localhostand leave its port as the default
8081, then click Ok. Back in the properties editor of the connector, set the path to
ex3.
Drag a Transform Message Transformer to your flow, right after your HTTP Connector, then open its properties editor.
In the transform section of the editor, change the DataWeave code so that it looks like this:
The directives in the Header of this transform define the output as being of type XML. The body of this transform simply references the payload, which is implicitly an input directive of this transform, as are all of the components of the inbound Mule message. Whatever exists in the payload – including any child elements at any depth – is transformed directly into XML without changing any of its structure.
Save your Mule project and Deploy it to Studio’s virtual server to try it out by right-clicking on the project and selecting
Run As → Mule Application.
Using a tool like Postman (chrome extension), send an HTTP POST request to with any JSON content you want in the request body. You should get a response with an XML body that has the same data and structure as the input.
For example, if you send a request with this body:
You should get this in the body of the response:
Mule XML Code:
DataWeave Code:
Also See
See our DataWeave Reference Documentation
See more advanced examples in DataWeave Examples
Migrate your old DataMapper components automatically using DataWeave Migrator Tool | https://docs.mulesoft.com/mule-user-guide/v/3.7/dataweave-tutorial | 2018-06-18T05:30:54 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.mulesoft.com |
Adirondack Rehabilitation Medicine Partners with The Center For Rheumatology for Joint Facility Grand Reopening
- Posted By: Marc Miranda
- Category: Featured, Articles
Adirondack Rehabilitation Medicine Partners with The Center For Rheumatology for Joint Facility Grand Reopening
Adirondack Rehabilitation Medicine and The Center for Rheumatology, LLP announce the grand reopening of a joint facility to serve patients in the North Country, Saratoga County, and Capital District areas.
Queensbury, NY - December 15, 2016 - Determined to prevent, diagnose, and effectively treat musculoskeletal and neurological disorders, Adirondack Rehabilitation Medicine (ARM) and The Center for Rheumatology, LLP (TCFR) join forces. The widely-praised healthcare facilities announce the grand reopening of a joint facility in Queensbury, NY. Utilizing cutting-edge technology such as EMG/NCV testing, as well as nerve and musculoskeletal ultrasound, the new facility will help patients increase their quality of life by enhancing their mobility. Serving patients with everything from simple exercise to surgical referrals the comprehensive facility and its practitioners have one thing in mind - a dedication to long-term results.
Patients will enjoy the convenience of the completely renovated healthcare facility with an additional 3,000 square feet of space. To that end, numerous new tenants have been added to the building to provide physical therapy services as well. Other services offered by practitioners in the newly renovated healthcare facility include specialties in Prosthetics and Orthotics. Handicap accessible, the joint facility also features additional parking.
ARM services include:
- Sports and Musculoskeletal Medicine
- Diagnostic Musculoskeletal Ultrasound and Guided Injections
- GMG/NCV and Neuromuscular Medicine
- Botulinum Toxin Injections and Spasticity Management
- Stroke, Spinal Cord, and Head Injury Medicine
- Prosthetic and Orthotic Evaluations and Prescriptions
TCFR Services include:
- Medical consultations and evaluations from highly trained rheumatologists and medical staff.
- Proficient diagnosing and treatments of a broad range of rheumatological disorders such as arthritis and other autoimmune diseases as well as treatments of musculoskeletal diseases like osteoporosis.
- Clinical research studies for the discovery of new treatments that benefit TCFR patient care.
Van T. Jackson, Jr., Practice Administrator at Adirondack Rehabilitation Medicine said of the grand reopening, “We couldn’t be more thrilled that the facility has so many new features for our patients. With the expansion we are able to see more patients, reduce wait times and service a larger part of our community. I encourage everyone to visit our website to see the list of services that we provide and our large footprint across the capital district. We look forward to continuing to service the community.”
Marc Miranda, Business Manager at The Center for Rheumatology, LLP said that “We are excited to expand our practice in the Queensbury area and the surrounding community to provide easier access to quality care from Dr. Ellen Cosgrove and the rest of our medical staff at The Center for Rheumatology. We are looking forward to expanding our presence in the new facility to meet the healthcare needs of community”.
For more information visit and.
About Adirondack Rehabilitation Medicine:
Adirondack Rehabilitation Medicine is a healthcare facility based in Queensbury, New York. Specialties include Sports Medicine, Electrodiagnostic Medicine, Neuromuscular Medicine and Musculoskeletal Ultrasound.
About The Center for Rheumatology, LLP:.
Van T. Jackson, Jr
Practice Administrator, Adirondack Rehabilitation Medicine
[email protected]
Marc Miranda
Business Manager, The Center for Rheumatology, LLP
[email protected]
518-489-4471
Websites:
Social Media: | http://joint-docs.com/news/Adirondack-Rehabilitation-Medicine-Partners-with-The-Center-For-Rheumatology-for-Joint-Facility-Grand-Reopening_71-blog.htm | 2018-06-18T05:29:42 | CC-MAIN-2018-26 | 1529267860089.11 | [array(['../img/news/CenterForRheumatology_Logo_4C-RGB-463481.jpg',
'Adirondack Rehabilitation Medicine Partners with The Center For Rheumatology for Joint Facility Grand Reopening'],
dtype=object) ] | joint-docs.com |
Uninstall Phoenix agent and remove your server from Phoenix
Uninstall the agent
Before you uninstall Phoenix agent, ensure that you have administrator privileges on the server.
To uninstall Phoenix agent
- Log on to the server on which you installed Phoenix agent.
- For Windows 2012 Server and Windows 2008 Server, from the Start menu, navigate to Control Panel > Programs > Uninstall a program.
- From the list of programs, uninstall Druva Phoenix Agent.
Complete the uninstallation
After the uninstallation, the Phoenix agent configuration file and logs are retained on your servers. To complete the uninstallation, you must delete these files manually.
Note: The configuration files and logs are retained as a safeguard against inadvertent uninstallation operations. If you want to install Phoenix agent again, do not delete these files.
Delete the following folders (containing the log files and configuration files):
- Windows 2012 Server
C:\ProgramData\Phoenix
- Windows 2008 Server
C:\ProgramData\Phoenix
To remove the server from Phoenix, see Delete a SQL server.
See also: | https://docs.druva.com/Phoenix/030_Configure_Phoenix_for_Backup/050_Backup_and_Restore_MS_SQL_Server_databases/040_manage_servers/080_Uninstall_Phoenix_agent | 2018-06-18T05:57:57 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.druva.com |
You can induce a failover situation for a selected Primary VM to test your Fault Tolerance protection.
About this task
This option is unavailable (grayed out) if the virtual machine is powered off.
Prerequisites
Launch the vSphere Client and log in to a vCenter Server system.
Procedure
- In the vSphere Client, select the Hosts & Clusters view.
- Right-click the fault tolerant virtual machine and select .
Results
This task induces failure of the Primary VM to ensure that the Secondary VM replaces it. A new Secondary VM is also started placing the Primary VM back in a Protected state. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.hostclient.doc/GUID-812356A5-A718-4D2F-A406-95E2DBD3B3D3.html | 2018-06-18T06:03:20 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.vmware.com |
Package management¶
In the Free and Open Source Software world, most software is released in source code format by developers. This means that generally, if you want to install a piece of software, you will find the source code on the website of the project. As a user, you will have to find and install all the other bits of software, that this particular piece depends on (the dependencies) and then install the software. To solve this painful issue, all Linux distributions have something called a package management system. Volunteers (mostly) all across the world help make binary software packages out of source code released by the developers, in such a way that users of the Linux distribution can easily install, update or remove that software.
It’s generally recommended, we use the package management system that comes with the distribution, to install software for the users. If you are really sure about what you’re doing in the system, you can install from the source files too; but that can be dangerous.
dnf command¶
dnf is the package management system in Fedora. The actual packages come in the rpm format. dnf helps you search, install or uninstall any package from the Fedora package repositories. You can also use the same command to update packages in your system.
Searching for a package¶
$ dnf search pss Fedora 25 - x86_64 34 MB/s | 50 MB 00:01 Fedora 25 - x86_64 - Updates 41 MB/s | 23 MB 00:00 Last metadata expiration check: 0:00:07 ago on Sun Jun 25 04:14:22 2017. =========================================== N/S Matched: pss ============================================ pss.noarch : A power-tool for searching inside source code files pssh.noarch : Parallel SSH tools
First the tool, downloads all the latest package information from the repository, and then gives us the result.
Finding more information about a package¶
dnf info gives us more information about any given package.
$ dnf info pss Last metadata expiration check: 0:04:59 ago on Sun Jun 25 04:14:22 2017. Available Packages Name : pss Arch : noarch Epoch : 0 Version : 1.40 Release : 6.fc25 Size : 58 k Repo : fedora Summary : A power-tool for searching inside source code files URL : License : Public Domain Description :.
Installing a package¶
The dnf install command helps us install any given package. We can pass more than one package name as the argument.
$ sudo dnf install pss wget Last metadata expiration check: 0:37:13 ago on Sun Jun 25 03:44:07 2017. Package wget-1.18-3.fc25.x86_64 is already installed, skipping. Dependencies resolved. ===================================================================================================================================================== Package Arch Version Repository Size ===================================================================================================================================================== Installing: pss noarch 1.40-6.fc25 fedora 58 k Transaction Summary ===================================================================================================================================================== Install 1 Package Total download size: 58 k Installed size: 196 k Is this ok [y/N]: y Downloading Packages: pss-1.40-6.fc25.noarch.rpm 969 kB/s | 58 kB 00:00 ----------------------------------------------------------------------------------------------------------------------------------------------------- Total 118 kB/s | 58 kB 00:00 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Installing : pss-1.40-6.fc25.noarch 1/1 Verifying : pss-1.40-6.fc25.noarch 1/1 Installed: pss.noarch 1.40-6.fc25 Complete!
apt command¶
apt is the package management system for the Debian Linux distribution. As Ubuntu is downstream of the Debian distribution, it also uses the same package management system.
apt-get update¶
$ apt-get update ... long output
The apt-get update command is used to update all the package information for the Debian repositories. | https://lym.readthedocs.io/en/latest/packages.html | 2018-06-18T06:01:45 | CC-MAIN-2018-26 | 1529267860089.11 | [] | lym.readthedocs.io |
BO.
Glossary
Contents
How Do I Install GI2?
After you have installed SAP BusinessObjects Business Intelligence Platform (BI 4.1) software, use the instructions on this page to install Genesys Interactive Insights (GI2)
Before You Begin
GI2 requires a connection to a Genesys Info Mart release 8.5 database. Although the regular population of the database is not required for GI2 installation, GI2 can provide meaningful reports only if the database is regularly populated by a Genesys Info Mart 8.5 application. Genesys Info Mart must be properly configured and installed before GI2 runs the aggregation process.
- The Genesys Info Mart documentation set describes how to deploy and configure Genesys Info Mart.
- The Reporting and Analytics Aggregates (RAA) documentation set describes the aggregation.
- The BO/BI Documentation provides information about how to use the various BO/BI tools.
Overview of the Installation Routine
The installation routine offers you choices:
- copy-only: Copies the installation script and supporting files to a designated location for manual deployment at a later time. Choose this option if you require greater control and more visibility into the inner workings of the installation routine. In addition to copying the installation script and supporting files, the copy-only option:
- Copies the \agg folder, which contains RAA, to the Interactive Insights root folder.
- Updates the program registry on Microsoft Windows platforms.
- deploy-now: Installs GI2 immediately. In addition to copying the installation script and supporting files, the deploy-now option:
- Copies the \agg folder, which contains RAA, to the Interactive Insights root folder.
- Imports the GI2 universe, folder, reports, connection, measure maps, and PDF documents into the BI repository.
- Defines different users and groups within BI 4.1.
- Updates the program registry on Microsoft Windows platforms.
Refer to What Application Files Are Installed? for the names of the application files that are deployed.
Prerequisites
You can install multiple instances of GI2 on the same host. To install GI2, the following prerequisites must be met:
- BI software must be installed on the same host where GI2 will be installed. Refer to What BI Components Must I Install? for additional information.
- You must connect to the BusinessObjects Central Management Server (CMS) as Administrator and the BI servers must be running. Consider running the Repository Diagnostic Tool and addressing any issues encountered.
- You must have the GI2 8.5 installation package and the BI installation package.
- Your operating system version must be one of the following (to support BI 4.1):
- Sun SPARC: Solaris SPARC 64-bit 10
- Red Hat: Red Hat Enterprise Linux 64-bit 5.0, Red Hat Enterprise Linux 64-bit 6.x
- Microsoft: Windows Server 64-bit 2008, Windows Server 64-bit 2012
- IBM: AIX Power PC 64-bit 6.1, AIX Power PC 64-bit 7.1
In addition, before you operate the GI2 reports, you must have access to an Info Mart schema that is populated by Genesys Info Mart. Refer to the Genesys Info Mart Deployment Guide or the Genesys Migration Guide for information that pertains to configuring, installing, or upgrading these products. Although the installation routine does not check for access to the Info Mart database, the BI license that Genesys provides requires such access for use of the software.
Backing Up Prior Universes
To preserve any customizations that you made to a pre-existing GI2 universe, Genesys recommends that you back up any GI2 universes that might exist in the BI repository before you install GI2; the installation routine might overwrite a pre-existing GI2_Universe, regardless of the folder in which it resides. One way to accomplish this is to export the universe to an LCMBIAR file and store it for safekeeping. Refer to the BO/BI documentation for instructions on how to export the universe to these formats.
Installing GI2
Use the procedures in this section to install GI2.
Installing GI2 on UNIX
Purpose: Use this procedure to install GI2 on UNIX. Note that BI 4.1 software must already be installed so that the GI2 universe, folder, and reports can be deployed to the BI repository.
Steps
- In the directory into which you copied the GI2 installation package (or from the Genesys Interactive Insights DVD), locate the install.sh shell script.
- Run the following script from the command line: sh install.sh
- When prompted, choose whether to copy only the GI2 installation files: y or n. If you select y, the installation routine copies the files that are needed to install GI2 manually to the current directory. You will have to manually deploy the GI2 universe and reports to the BI repository and set up groups and user permissions.
If you select n, continue to Step 4. If you select y, proceed to Step 9.
- When prompted, choose whether to deploy GI2: y or n.
If you selected n at Step 3, select y now.
- Type the path where BI 4.1 is installed. This path cannot contain spaces.
- Specify the host for the BI CMS or accept the default, which is the name of the local computer.
The host name you specify cannot contain underscores (_), periods (.), or slashes (/ or \)
- Type the CMS port, or accept the default (6400).
- Type the password for the BI Administrator.
- Type the full path of the destination directory for GI2 installation files. This path must not contain spaces.
The installation routine verifies that a valid path was entered (and creates the path, if does not exist), extracts GI2 archives using the destination directories that were specified, and loads the GI2 reports and universe into the BI repository. If you selected y in step 3, and the installation routine fails to connect, the installation halts, and you must resume the installation manually as described in Manually Running the GI2 Installation Script.
See What Application Files Are Installed? for a description of the files that are deployed. For greater security, Genesys recommends that you edit the gi2_setenv.sh file to remove the password.
Installing GI2 on Windows
Purpose: Use this procedure to install GI2 on Windows platforms. Note that BI 4.1 software must already be installed so that the GI2 universe, folder, and reports can be deployed to the BI repository.
Steps
- From the Genesys Interactive Insights CD or image, double-click the setup.exe file. The installation routine checks the Windows registry for an existing GI2 installation before it displays the Welcome page.
- On the Welcome page, click Next.
- On the Installation Mode page, choose one of the following options: Deploy Genesys Interactive Insights or Copy Genesys Interactive Insights files only.
- Click Next. If you selected Copy Genesys Interactive Insights files only, skip to Step 6.
- On the BusinessObjects Enterprise Central Management Server page, type the password of the CMS Administrator, and click Next. The installation routine prepopulates default values in the Host name and Port fields. Enter appropriate values, or accept the default values. The host name that you specify cannot contain underscores (_), periods (.), or slashes (/ or \).
- On the Choose Destination Location page, specify where the installation routine is to install GI2, or accept the default location, and click Next. The default location is:
C:\Program Files (x86)\GCTI\Genesys Interactive Insights
- On the Ready to Install page, click Install.
The installation routine:
- Extracts the GI2 archives from the destination directory that you specified in Step 5.
- Adds keys to the registry.
- Loads the GI2 reports, universe, users, groups, and rights (if you selected the Deploy Genesys Interactive Insights option).
- Scans the BI repository for existing components.
- Exits.
Next StepsSee What Application Files Are Installed? for a description of the files that are deployed. For improved security, Genesys recommends that you edit the gi2_setenv.bat file to remove the password.
Manually Running the GI2 Installation Script
You can use the files that are deployed by the installation routine to import the GI2 universe, folder, reports, users, groups, and rights. Manually run the installation in any of the following circumstances:
- To deploy the universe and reports to a different BI environment.
- If you selected the Copy Genesys Interactive Insights files only option from the Genesys Installation Wizard.
- To re-import the universe, folder, and/or reports into your environment.ImportantBefore re-importation, delete the GI2 universe, folder, connection, user groups, and default users (that is, perform Step 3 of the procedure Additional Manual Steps to Finish the Uninstall).
- If the installation of GI2 using the Genesys Installation Wizard was unsuccessful. In this case, follow the procedure Before Manually Installing GI2 after a Failed Installation Attempt.
Before Manually Installing GI2 after a Failed Installation Attempt
Purpose: If you attempted to install GI2 using the Genesys Installation Wizard, and the installation was not successful, complete this procedure before attempting to manually install GI2.
Steps
- Verify that the Prerequisites are met.
- Check for errors using the Central Management Console (Promotion Management > Promotions Jobs > 8.5.0).
- Correct the error.
If the Genesys Installation Wizard was unable to access CMS, ensure that you have specified correct connectivity parameters:
- Open either gi2_setenv.bat (on Windows platforms) or gi2_setenv.sh (on UNIX platforms).
- Set the correct parameters.
- Save the file.
Manually Installing GI2
Purpose: To manually deploy GI2. For information about situations where you might use this method, see Manually Running the GI2 Installation Script.
Steps
- Specified connectivity parameters:
- Open either gi2_setenv.bat (on Windows platforms) or gi2_setenv.sh (on UNIX platforms).
- Set the connectivity parameters.
- Save the file.
- At the command prompt, run the script: gi2_deploy_unv_rep.bat (Windows) or gi2_deploy_unv_rep.sh (Unix).
The GI2 universe, reports, users, user groups, and permissions are deployed.
- For improved security, Genesys recommends that you edit the gi2_setenv.bat file to remove the password.
Manually Importing Objects and Data Elements
The GI2 universe contains the data elements and business objects required for running the GI2 reports. These data elements include classes, dimensions, measures, and conditions, as well as the reports themselves. Please refer to the Genesys Interactive Insights Universe Guide for descriptions of universe elements. The installation routine automatically imports the GI2 universe, folders, reports, users, groups, and rights into your BI environment. Use the information in this section to re-import these objects and data elements. This is useful, for instance, if:
- The installation does not finish.
- You delete the universe from the Central Management Console in error.
- To redeploy the same universe, reports, and folders to your existing environment.
- To deploy the universe, reports, folders, users, groups, and rights to more than one environment.
Importing Objects and Elements using CMC
Purpose: This section describes how to manually import the GI2 universe, folders, reports, users, groups, and rights into BI 4.1 using the Central Management Console (CMC). For more information about using CMC, see the BI Documentation.
Steps
- In CMC, in the Manage list, click Promotion Management.
- Click New Job.
- On the New Job dialog box, enter a Name, and optionally enter text in the Description and Keywords fields.
- In the Source drop-down list, choose Login to a new CMS and specify the system from which to load the LCMBIAR file, and enter the associated (Administrator) User Name and Password, and click Login.
- In the Destination drop-down list, choose Login to a new CMS and specify the destination CMS where you wish to save the new LCMBIAR file, enter the associated (Administrator) User Name and Password, and click Login.
- Click Create.
- On subsequent pages, select the objects to import.
Next StepsYou can optionally migrate your universe from UNV to UNX using the steps described in Migrating Custom Universe and Reports.
Universe Contents
If you have imported the GI2 universe successfully, your BI environment will contain the following:
Refer to the "Promotion Management" chapter in the relevant Business Intelligence Platform Administrator Guide for information on how to use Promotion Management. Refer to the Genesys Interactive Insights Universe Guide for a complete listing and descriptions of the reports.
Verifying Elements
You can invoke the BusinessObjects Central Management Console (CMC) to verify that these elements have been deployed (under Connections, Folders, Access Levels, and Universes, and Users and Groups). The LCMBIAR file also contains objects (such as measures, dimensions, and filters) which are imported using the Import Wizard, but the deployment of these elements is not reported on the wizard’s summary page. The following section (Viewing the GI2 Reports and Universe) describes an alternate way to verify the import.
Viewing the GI2 Reports and Universe
When you have successfully installed GI2, you can view the GI2 universe in the Information Design Tool and the GI2 reports in BI Launch Pad to confirm that all the options you selected during installation were installed. Keep in mind, however, that additional setup is required to actively use the report and universe elements. The additional steps for setting up the environment are described in After Installation, What Additional Steps Do I Perform?.
Viewing the Universe
The figure The GI2 Universe in the Information Design Tool shows a cutaway of the GI2 universe Information Design Tool. Refer to BI Documentation to learn more about the Information Design Tool.
Viewing the GI2 Reports
The figure The GI2 Reports in BI Launchpad shows the Interactive Insights folder, its subfolders, and some of the queue-based GI2 reports in BI Launch Pad when expanded. The Documentation folder contains the Genesys Interactive Insights Universe Guide and several maps that illustrate relationships between measures:
- If you manually imported the universe, the contents of the report subfolders varies depending on the selections you made during importation otherwise.
- If the installation routine imported the universe and reports, all of the reports will be imported to the repository.
You can access GI2 reports using BI Launch Pad (BI 4.1 deployments). Refer to BI Documentation for information about how to use these tools.
The report and documentation subfolders are stored within a release-specific subfolder of the Interactive Insights root folder (such as Interactive Insights > 8.5.0). This folder structure enables you to retain any customizations that you applied to previous GI2 universes. Text references and screen shots depicted throughout this documentation may not show the complete path, or may show a release number that differs from your release of GI2.
GI2 Versioning
Both the GI2 universe and all reports are labeled with a version number, which serves to identify the GI2 version of reports and the version of the universe. This can be important in the event that you initiate requests to Genesys Customer Care or have correspondence with other Genesys departments. This versioning might be further useful to your universe and report designers in distinguishing reports from other GI2 releases.
Determining the Version of the Universe
The GI2 universe defines the GI2_UNIVERSE_VERSION parameter to identify the version of the universe, which you can find in the Query Script Parameters dialog box:
- In the Information Design Tool, retrieve the universe, and click Business Layer.
- Select GI2_Universe on the Business Layer objects tree.
- Click Parameters.
Genesys recommends that you not change the version-number parameter value.
The version number that is shown in the figure Checking the Universe Version might not match your version of the GI2 universe.
Determining the Version of a Report
The Description tab of each GI2 report provides the GI2 version associated with the report, as shown in the Figure Checking the Report Version. This version number also appears after the descriptions of the measures that are used within the report. Genesys recommends that you not change its value.
The version number that is shown in the figure Checking the Report Version might not match your version of the GI2 universe.
Next Steps
After you have installed GI2, perform the additional steps that are described in After Installation, What Additional Steps Do I Perform? to ready the universe for report users.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/GI2/latest/Dep/GI2Install | 2018-06-18T05:56:12 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.genesys.com |
Does the online version of OA LodgeMaster run on my computer?
OA LodgeMaster provides cross platform access to your lodge data wherever you have an internet connection. For the best results we recommend using the latest version of one of the following web browsers, which are all available for free:
- Microsoft Internet Explorer (Windows) (Windows 10 note: Edge (the default browser in Windows 10) does not work with Silverlight, you must use IE, which can be found under Windows Accessories in the application menu)
- Mozilla Firefox ESR (Windows, Mac OS X) (note: the consumer release of Firefox does not work with Silverlight, you must use the ESR version)
- Safari (Mac OS X) (note: you'll need to make this settings change to make it work)
Does the offline version of OA LodgeMaster run on my computer?
In order to run the offline version of OA LodgeMaster your computer must meet the following minimum requirements:
- Intel© 32-bit or 64-bit 1.6 GHz or faster processor.
- Microsoft© Windows XP Service Pack 2, Windows Vista, Windows 7, Windows 8, or Windows 10.
- 512MB of RAM or greater.
- 200MB of available hard-disk space, plus space required for data.
- 1024x768 or higher-resolution monitor.
- Broadband Internet access (for first install and when syncing)
Can I use OA LodgeMaster on a Macintosh computer?
The online version of OA LodgeMaster is available to Macintosh users with one of the web browsers listed at the top. The offline version of OA LodgeMaster is PC-based software - Macintosh users need to use a program such as VMware Fusion, Parallels, or VirtualBox to run the offline version.
Do I need an Internet connection to run OA LodgeMaster?
An internet connection is required to use OA LodgeMaster. Data is synchronized with the OA LodgeMaster servers so that you can take advantage of the online features. Users can still use the offline version of OA LodgeMaster if they are disconnected from the Internet. For example, a lodge may want to bring a laptop with OA LodgeMaster to an event such as an ordeal, and use the ordeal management features throughout the event. A user can make changes to the database while at the event (not connected to the Internet), and then once an Internet connection is re-established, all of the new data is automatically synchronized to the online database.
How fast of a connection do I need?
A broadband connection (such as DSL or Cable) is not required, but the faster your connection, the better. | https://docs.oa-bsa.org/display/OALMLC3/System+Requirements | 2018-06-18T05:28:07 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.oa-bsa.org |
Smart Select commands simplify the process of selecting common blocks of code. From the current selection or caret position, you can easily select the encompassing block, expand the selection to a larger block, or shrink the selection to a narrower one. Visual Assist extends and shrinks only to common blocks so the process of selecting requires few, if more than one, keystroke.
Access
Primary access to the Smart Select commands is via keyboard shortcuts assigned during installation of Visual Assist.
For mouse users, access is available in the VAssistX menu.
Extend Block Selection
An initial selection using Extend Block Selection (Alt+]) is either the current statement or innermost block of code containing the caret. Blocks beyond the current statement are typically logical groups of them, e.g. all statements that comprise a method, or compounds statements—including their conditions and guards.
If you initiate Extend Block Selection when the caret is at the first non-white space character of a block, e.g. at the 'i' of "if", the initial selection is trimmed.
Successive presses of Extend Block Selection cause encompassing blocks to be selected.
Extend Selection
Alternatively, initial selections can be made with Extend Selection (Shift+Alt+]), which causes less code to be selected.
Successive presses of Extend Selection cause the selection to grow by small increments.
Extend Selection often follows Extend Block Selection to make a small increase to a large selection, e.g. to select braces of a compound statement but not its condition or loop statement.
If you initiate Extend Selection when the caret is in leading white space of a line, the first word or entire line is selected, including white space.
If you initiate Extend Selection inside a line, successive extends will select a trimmed version.
Shrink Block Selection
Once a selection has been made, Shrink Block Selection (Alt+[) will decrease the selection by a large increment.
Shrink Selection
With a selection made—by any means—Shrink Selection (Shift+Alt+[) will decrease the selection by a small increment.
Shrink Selection often follows Extend Block Selection to make a small decrease to a large block selection, e.g. to omit a condition or loop statement from a compound statement.
Extend Selection in Comments and Strings
Extend Selection and Shrink Selection work well within comments and strings.
Note: The block variants—Extend Block Selection and Shrink Block Selection—continue to operate at the block level when executed in comments and strings.
Extend to Start of Selection
A selection is scrolled into view as it grows. If a selection grows beyond the height of the text editor, a small window opens briefly to identify the start of the selection.
Adjust Size of Selections
All four of the Smart Select commands—extend and shrink by block and non-block—create initial selections. Successive executions of the commands grow and shrink a selection by additional elements. You can adjust the size of the initial selections and the granularity of subsequent changes in the options dialog for Visual Assist.
Start and grow selections by relatively small increments
When checked, if you begin a selection with non-block extend (Shift+Alt+]) or non-block shrink (Shift+Alt+[), the first selection is the current word or logical element and successive executions grow/shrink the selection by small, logical elements—until the current statement is selected.
If you begin a selection with block extend (Alt+]) or block shrink (Alt+[), the first selection is the current statement and successive executions grow/shrink the selection by blocks.
When unchecked, if you begin a selection with non-block extend (Shift+Alt+]) or non-block shrink (Shift+Alt+[), the first selection is the current word or logical element and the next selection is the current statement. If you begin a selection with block extend (Alt+]) or block shrink (Alt+[), the first selection is the current block.
Start non-block selections with the current word
When checked, the non-block extend (Shift+Alt+]) and shrink (Shift+Alt+[) commands begin every selection with the current word.
When unchecked, the commands choose an initial selection based on caret location and context. (The setting does not affect the block commands.)
Changes in case delimit current word
When checked, changes in case delimit the current word. (The setting is effective only if non-block selections start with the current word.)
Underscore delimit the current word
When checked, underscores delimit the current word. (The setting is effective only if non-block selections start with the current word.)
Default Shortcuts
Visual C++ 6.0
Smart Select is not available. | http://docs.wholetomato.com/default.asp?W560 | 2017-01-16T12:46:52 | CC-MAIN-2017-04 | 1484560279176.20 | [array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=47676&sFileName=smartSelectMenu.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37224&sFileName=smartSelectExtendBlock.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37385&sFileName=smartSelectExtendBlockTrimmed.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37225&sFileName=smartSelectExtendBlock2.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37228&sFileName=smartSelectExtend1.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37229&sFileName=smartSelectExtend2.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37230&sFileName=smartSelectExtend3.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37231&sFileName=smartSelectExtend4.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37234&sFileName=smartSelectExtendAfterBlock.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37232&sFileName=smartSelectExtendWhitespace.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37384&sFileName=smartSelectExtendTrimmed.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37233&sFileName=smartSelectShrinkAfterBlock.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37386&sFileName=smartSelectExtendComment.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=37387&sFileName=smartSelectExtendString.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=45288&sFileName=smartSelectPeekWindow.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=54432&sFileName=smartSelectOptions.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=52334&sFileName=SmartSelectEnableGranularStart1.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=52364&sFileName=SmartSelectEnableGranularStart0Block.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=52327&sFileName=smartSelectEnableWordStart1.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=52326&sFileName=smartSelectEnableWordStart0.png',
None], dtype=object)
array(['https://wholetomato.fogbugz.com/default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=52329&sFileName=SmartSelectSplitWordByCase1.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=52332&sFileName=SmartSelectSplitWordByUnderscore1.png',
None], dtype=object) ] | docs.wholetomato.com |
...
It is compatible with the Issues Report plugin to run pre-commit local analysis.
...
Maven and Ant can also be used to launch analysis on Flex projects.
...
Code Coverage Reports
If you want to display unit test results on dashboards, code coverage data, prior to the SonarQube analysis execute your unit tests before running the SonarQube analysis and set theand generate the Cobertura report. Import this report while running the SonarQube analysis by setting:
sonar.dynamicAnalysisproperty to
reuseReports
sonar.surefire.reportsPathproperty to the path to the directory containing the XML reports
sonar.cobertura.reportPathproperty to the path to the Cobertura XML report
...: | http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=136675686&selectedPageVersions=103&selectedPageVersions=102 | 2014-04-16T10:52:10 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.codehaus.org |
Ticket #1582 (closed task: fixed)
create bottom bar for launcher
Description.
Attachments
Change History
Changed 6 years ago by will
- Attachment new_home.png added
comment:3 Changed 6 years ago by will
- Keywords must have added
adding must have for query references.
raster, we really want to get this in..
Note: See TracTickets for help on using tickets.
new_home | http://docs.openmoko.org/trac/ticket/1582 | 2014-04-16T11:07:15 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.openmoko.org |
Message-ID: <771064452.82998.1397644183183.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_82997_1885850075.1397644183182" ------=_Part_82997_1885850075.1397644183182 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
The following is needed when upgrading to 2.3:=20
FeatureResults was deprecated for the 2.1 release, and it is passing out=
of existence for 2.3. All methods were deprecated and their
re= placements documented.
public example(FeatureSource featureSource) throws IOException ){ FeatureResults results =3D featureSource.getFeatures(); int count =3D results.getCount(); FeatureCollection collection =3D results.collection(); FeatureReader reader =3D results.reader(); try { while( reader.hasNext() ){ Feature feature =3D reader.next(); =20 } } finally { reader.close(); } }=20
FeatureCollection results =3D featureSource.getFeatures(); int count =3D results.size(); FeatureCollection collection =3D results; FeatureIterator reader =3D results.features(); try { while( reader.hasNext() ){ Feature feature =3D reader.next(); =20 } } finally { reader.close(); } }=20
You may need to remove IOException handling code.=20
You can also transition directly to use of java.util.Iterator; this will= put you in better standing for the transition to GeoAPI FeatureCollection.==20
Iterator reader =3D results.iterator(); try { while( reader.hasNext() ){ Feature feature =3D (Feature) reader.next(); =20 } } finally { results.close( reader ); } }=20
It seems we did not meet our guidlines here and the GridCoveargeAPI has = been moved and completly changed!=20
import org.geotools.renderer.lite.GridCoverageRenderer; ... GridCoverageRenderer renderer =3D new GridCoverageRenderer(gc, crs); renderer.paint(graphics);=20
import org.geotools.renderer.lite.gridcoverage2d.GridCoverageRenderer; ... GridCoverageRenderer renderer =3D new GridCoverageRenderer( viewportCRS, im= ageAreaBBox, paintArea ); StyleFactory factory =3D CommonFactoryFinder.getStyleFactory(null); RasterSymbolizer rasterSymbolizer =3D factory.createRasterSymbolizer(); =20 renderer.paint( destination, gc, rasterSymbolizer );=20 | http://docs.codehaus.org/exportword?pageId=66270 | 2014-04-16T10:29:43 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.codehaus.org |
Document Bank Main Page >
The working materials in the NRDC Document Bank are listed in reverse chronological order. For additional policy materials including reports and issue papers, see the Issues section of the main NRDC site.
New Science On Impacts of Navy Sonar
A letter to the California Coastal Commission that details recent scientific studies, which confirm the harmful impacts of Navy sonar training and testing on Southern California marine mammals.
Waste in Our Water: The Annual Cost to California Communities of Reducing Litter That Pollutes Our Waterways
This August 2013 report, prepared by Kier Associates for NRDC, analyzes the full costs that California communities bear to keep litter and waste from polluting waterways.
NRDC v. Jewell Settlement Agreement
The agreement filed in federal district court today, in the case of NRDC v. Jewell.
Mid-Atlantic offshore wind and right whale agreement
A letter to the Bureau of Ocean Energy Management announcing an agreement between environmental groups and leading offshore wind developers to incorporate measures to protect right whales during Mid-Atlantic offshore wind development.
USWTR Summary Judgment Order
Judge Wood's order denying plaintiffs' motion for summary judgment and granting defendant's motion for summary judgment in a lawsuit that challenged the Navy's decision to build and operate a 500-square-mile undersea warfare training range next to the only known calving ground for critically endangered North Atlantic right whales.
Industry Letter in Support of the National Ocean Policy
Letter to congressional leadership in support of a NOP from more than a dozen representatives of ocean and coastal-based industries (from fishing to offshore wind).
E2 Supports the National Ocean Policy
Letter to Senate leadership in support of a NOP from more than 200 business and professional leaders.
The Joint Ocean Commission Initiative Supports the National Ocean Policy
Letter to congressional leadership in support of a NOP from the Joint Ocean Commission Initiative.
Environmental Groups Support the National Ocean Policy
Letter to congressional leadership in support of a NOP from more than a dozen leading environmental groups..
Global Goal and Commitments to End Plastic Pollution
NRDC will be working with other NGOs to get Member Nations and other relevant bodies to sign on to this commitment document at the Rio+20 United Nations Conference on Sustainable Development this year.
Global Declaration on Plastic Pollution, submitted to the UN Secretariat in advance of the Rio+20 Earth Summit
NRDC joined members of the Catto Fellowship program of the Aspen Institute and 24 other NGOs in submitting a call for immediate action on marine plastic pollution, leading up to the Rio+20 Earth Summit..
First Amended Nitrogen Consent Judgment
Citizen Groups reach legal settlement with NYC and State to clean up Jamaica Bay
Report.
NRDC & Other Group Comment Letter to California State Water Board on Proposed Amendment to Once-Through Cooling Policy, November 19, 2010
Comments of NRDC and 21 other groups to California State Water Board OTC Policy Amendment, November 19, 2010
Testimony by Lisa Suatoni at House of Representatives hearing on “The BP Oil Spill: Accounting for the Spilled Oil and Ensuring the Safety of Seafood from the Gulf”
Testimony by Lisa Suatoni, Senior Scientist in NRDC’s Oceans Program, before the House Committee on Energy and Commerce, Energy and Environment Subcommittee, Hearing on “The BP Oil Spill: Accounting for the Spilled Oil and Ensuring the Safety of Seafood from the Gulf,” addressing dispersants, the government “oil budget,” and seafood safety.
NRDC and Gulf Coast Community Letter to Department of Labor
Letter requests U.S. Department of Labor to take specific steps to protect the health of fishermen and workers working in oil-contaminated areas post BP oil spill.. | http://docs.nrdc.org/oceans/ | 2014-04-16T10:10:03 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.nrdc.org |
scipy.sparse.linalg.spilu¶
- scipy.sparse.linalg.spilu(A, drop_tol=None, fill_factor=None, drop_rule=None, permc_spec=None, diag_pivot_thresh=None, relax=None, panel_size=None, options=None)[source]¶
Compute an incomplete LU decomposition for a sparse, square matrix.
The resulting object is an approximation to the inverse of A.
Notes
To improve the better approximation to the inverse, you may need to increase fill_factor AND decrease drop_tol.
This function uses the SuperLU library. | http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.sparse.linalg.spilu.html | 2014-04-16T10:15:08 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.scipy.org |
Illustrations¶
Illustrations let you visualize your data and create figures for slides and publications.
Illustrations can be either used in single-page mode, or in batched layout mode..
Illustration Components¶
A variety of visualization types are available:
Flow Plots Individual 1D and 2D flow plots (histograms and biaxials). Ideal for creating gating hierarchies when used in conjunction with arrows and text boxes.
Pivot Tables Grids of flow plots, optionally overlaid.
Heatmaps A grid of statistics.
Bar and Line Charts Summary figures that display channel statistics or population frequencies.
Box Plots Display information about the distribution of datapoints, such as the median and interquartile range. Ideal for looking at experiments with many replicate conditions.
Dose Response Curves Calculate the relationship between the dosage of a drug and a readout.
along with basic objects, including text boxes, images, lines and arrows.
Illustration Interface¶
Illustration functionality is accessible from the menus, toolbars and sidebar.
The Pivoting Model¶
Pivot tables, bar charts, line charts, heatmaps and box plots all use a pivoting model to organize the data that they display. After selecting FCS file annotations to display along the X and Y dimensions (rows and columns, or axis labels and data series), files are automatically selected to populate the figure. You can also select populations or channels as X/Y dimensions, or explicitly select FCS files by filename.
Filter annotations may also be selected to limit the files displayed. For example, if you select "Plate Row" for your "rows" dimension and "Plate Column" for your "column" dimension, but have several 96-well plates in your experiment, you can use "Plate: plate 1" as a filter annotation to show data from just "plate 1."
When you select annotations, populations and channels, they will be automatically sorted. However, you can override the default sorting by following the instructions in annotation, population and channel sorting.
Exporting Illustrations¶
Downloading an illustration as a PDF¶
Howto
- From your illustration, click the File menu, then select Print to PDF.
See also:
Downloading parts of an illustration as SVG, PNG or PDF¶
Some types of components can be downloaded as SVG, PNG and/or PDF files.
Howto
- In your illustration, select one component.
- In the right-hand sidebar, click Download PNG/PDF/SVG.
Copying and pasting to other applications¶
You can copy illustrations or parts of illustrations to other programs, such as Microsoft Office and Apple iWork applications. All selected components will be rendered into a single image when they are copied.
Howto
- Select one or more components in an illustration.
- Use the keyboard shortcut ctrl+C (Windows, Linux) or ⌘+C (MacOS) to copy the selection.
- In the destination application, use the keyboard shortcut ctrl+V (Windows, Linux) or ⌘+V (MacOS) to paste. | http://docs.cellengine.com/illustrations/index.html | 2022-01-16T20:05:49 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['main-toolbar.png', 'main illustration toolbar'], dtype=object)
array(['components-toolbar.png', 'components toolbar'], dtype=object)] | docs.cellengine.com |
Package Installation - pip¶
For the full reference see: Pip
Python has a package repository: the Python Package Index or pypi which you can visit at
Even if we are developing for the web and this is new territory for Python, it would be nice if at least part of the vast arsenal of available packages
That’s the role of
anpylar-pip, which:
-
Instructs
pipto install packages in a private directory
-
Scans the packages for Python purity
-
Installs the packages in the application
Pure Python¶
Only packages which are pure Python can be installed. Those relying on
C
extensions are not supported.
Furthermore: NOT all pure Python packages can be used. See The technology for the description of the underlying technology and the constraints.
Installing a package¶
Let’s use a known pure Python package which provides a framework for working with
parameters in classes in a declarative manner:
metaparams
Let’s recall a standard project layout:
myapp ├── app │ ├── __init__.py │ ├── app_component.css │ ├── app_component.html │ ├── app_component.py │ └── app_module.py ├── anpylar.js ├── index.html ├── package.json └── styles.css
And the content of our
package.json before the installation
{ "packages": [ "app" ], "app_name": "", "version": "", "author": "", "author_email": "", "license": "", "url": "" }
Change into the
myapp directory and run
$ anpylar-pip install metaparams Target for pip installation is: . Processing package.json Collecting metaparams Collecting metaframe (from metaparams) Installing collected packages: metaframe, metaparams Successfully installed metaframe-1.0.1 metaparams-1.0.4 Moving pip packages to final destination
And the following happens to the file structure:
myapp ├── app │ ├── __init__.py │ ├── app_component.css │ ├── app_component.html │ ├── app_component.py │ └── app_module.py ├── metaframe │ ├── __init__.py │ └── metaframe.py ├── metaparams │ ├── __init__.py │ ├── metaparams.py │ └── version.py ├── anpylar.js ├── index.html ├── package.json └── styles.css
We have two new directories containing the packages
metaparams (as
expected) and a dependency which was pulled:
metaframe
And the content of our
package.json before the installation
{ "packages": [ "app", "metaframe", "metaparams" ], "app_name": "", "version": "", "author": "", "author_email": "", "license": "", "url": "" }
Our new pip packages have been added to
package.json and they will
therefore be collected when generating a webpack (see: Webpack)
The newly added packages can now be used during testing and deployed for production scenarios. | https://docs.anpylar.com/cli/pip.html | 2022-01-16T18:54:57 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.anpylar.com |
–
Setting up the IVA for Facebook Messenger
Provide users with the ability to interact with your Pega Intelligent Virtual Assistant™ (IVA) in Facebook Messenger. By interacting with your chatbot on Facebook Messenger, users have a convenient, everyday messaging app available to resolve issues or request more help in your application.For example, for a travel agency application, customers can interact with your chatbot in Facebook Messenger to cancel their flight.
- Configure Digital Messaging channel security settings. For more information, see Configuring Digital Messaging channel security.
- If you do not have an IVA for Digital Messaging, create a Digital Messaging channel. For more information, see Creating a Digital Messaging channel.
- If you do not have a Facebook Admin account and one or more Facebook pages, create them. For more information, refer to the Facebook developer portal.
For relevant training materials, see the Configuring Digital Messaging Manager module on Pega Academy.
- In the navigation pane of App Studio, click Channels.
- In the Current channel interfaces section, click the icon that represents your existing Digital Messaging channel.
- In the Digital Messaging channel, click the Connection tab.
- Click Manage connections.
- In the Digital Messaging Manager window, click Add Connection, and then click the Facebook icon.
- If you are not signed on to your Facebook account in the browser, log in to your account:
- On the Facebook login page, enter the email address and password for your Facebook account.
- Click Log In.
Note: The account you use must be an admin of the Facebook page that you want to authorize.
- In the displayed window, click Continue as <YOUR NAME>.
- In the page selection window for your Facebook page application, select the check box for the Facebook page that you will use with your Pega Platform application, and then click Next.
- Review the permissions for your Facebook page, and then click Done.This action ensures that you give permission to your Facebook page app to send and receive messages on behalf of the IVA channel.
Result: The window displays information that your Facebook page is now linked to your Digital Messaging channel.
- Click OK.
Result: The system displays information that the authorization for your Facebook page was successful, and displays a list of all of your configured messaging platforms.
- Close the browser window, and then in the Digital Messaging channel, click Save. | https://docs.pega.com/conversational-channels/86/setting-iva-facebook-messenger | 2022-01-16T19:31:22 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
–
Creating a field value
You can create field values to restrict the values of a property to a list of allowed values. Configure values that are meaningful to users so that the purpose of each value in the list is clear. For example, to prepare a list of values that a hospital administrator uses to select a medical procedure code, use a simple, readable name, such as appendix removal.A field value has three key parts: a class name, a field name, and a field value. Two or more field value instances where the first two key parts are identical are known as a subset.
- In the header of Dev Studio, click .
- In the Label field, enter a description for the field value.
- In the Field Name field, enter the name of a single-value property.If this field value supports language localization by using a language-specific ruleset, enter language-specific descriptive text for the value.
For example: The standard field value with class name Work-, field name pyRootCause, and field value Facilities makes a "Facilities" value available as a selection choice for the value of the Work-.pyRootCause property. To present a French-language version of facilities, copy this standard rule into a language-specific ruleset (ending in _FR to match a locale) and enter "Équipements" as the Localized Label.
- Click View additional configuration options.
- In the Translated value field, enter a literal constant that is an acceptable value for the field name property, using only letters, numbers, single spaces, and a hyphen.The value might display in a selection list or other aspects of the application user interface, so choose text that is meaningful to your user community. Enter no more than 64 characters. If the class and field name key parts together identify a
Single Valueproperty that has a maximum length value of less than 64, that limit applies.
- In the Context section, in the Apply to field, select the class in which you want to apply the allowed value.
- Click Create and open.
- Click Save. | https://docs.pega.com/data-management-and-integration/86/creating-field-value | 2022-01-16T20:05:42 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
–
Adding multiple predictors to an adaptive model
Use the batch option to add multiple predictors that you want to use in your adaptive model. You can define any number of properties as predictors.
- In the navigation pane of Prediction Studio, click Models.
- Open the Adaptive Model instance that you want to edit.
- In the Predictors tab, click Fields.
- From the Add field drop-down list, click Add multiple fields.
- In the Add predictors dialog box, click a page to display the properties that it contains:
- To choose a primary page, click Current page. The primary page is always available, even if it does not contain any properties.
- To choose a single page that is listed under the current page, click Page.
- To choose a page that contains pages and classes, click Custom page. The custom page is embedded in a page.
- Select the properties that you want to add as predictors and click Submit.
- Confirm the changes by clicking Save. | https://docs.pega.com/decision-management/84/adding-multiple-predictors-adaptive-model | 2022-01-16T20:44:15 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
Example¶
In case you want to take a look at the actual code and see it in action we created a demo project that covers nearly all Talkable possibilities. To download Talkable Demo iOS project click here. Once downloaded please proceed to configuration step to set up your API Key and Talkable Site Slug.
If you have any questions, don’t hesitate to email us at [email protected]. Happy hacking! | https://docs.talkable.com/ios_sdk/example.html | 2022-01-16T19:55:49 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.talkable.com |
Microsoft Graph API cannot filter /users by companyName? For example, say Microsoft used the companyName attribute to distinguish between some of it's various Business Units (e.g. "Azure", "M365", "Windows", "XBox", "Surface" etc) and I wanted to return all users from "M365", I couldn't do that? I can do it when I ask for counts, but not when I ask for a collection of users. For example:
Getting a count works (as long as I provide the consistencylevel = eventual HTTP Request Header): ne 'guest' and accountEnabled eq true and companyName eq 'XBox' eq true and companyName eq ‘XBox’
{
"error": {
"code": "Request_UnsupportedQuery",
"message": "Unsupported or invalid query filter clause specified for property 'companyName' of resource 'User'.",
"innerError": {
"date": "2021-02-19T12:36:03",
"request-id": "356e002c-6fb3-4f36-b269-92348cfc1043",
"client-request-id": "60ae7f54-b3c4-ea55-1dba-bc6ac1a10b54"
}
} | https://docs.microsoft.com/en-us/answers/questions/280408/microsoft-graph-api-cannot-filter-users-by-company-2.html | 2021-04-11T02:34:05 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.microsoft.com |
Billing FAQ
- How can I get on an annual plan?
- How can I get on the usage based plan?
- How are the credits deducted?
- How can I check how much I will pay for user licenses at the end of the month?
- What if I am building open source?
- How do I use credits?
- How do I recharge my credits balance?
- Do credits expire?
- Can you send me an invoice?
- Can I get a refund?
- How do I cancel my paid plan?
- Do these prices include tax?
- Can I sign up for automatic renewals for usage based plan?
- Are add-ons limited to a certain number of users?
- Why my credits balance is negative?
Please see our Billing overview first.
How can I get on an annual plan? #
The annual based plans are available by contacting the Travis CI support team.
How can I get on the usage based plan? #
The usage based plan is available by contacting theTravis CI support team.
How are the credits deducted? #
The credits will be deducted from the credits pool after each build job is executed depending on the operation system used.
How can I check how much I will pay for user licenses at the end of the month? #
The unique users triggering builds within a billing period will constitute a number of actual user licenses used and will be charged at the end of the billing period, according to the rates in a selected plan.
By default Travis CI system provides the possibility to trigger a build to all members of your team on GitHub, Bitbucket, GitLab and Assembla who have writing rights on repositories. If the team member has not triggered the build during the billing period Travis CI will not charge you for that user.
To check how much active users you got during the last billing cycle please contact the support. Travis CI is working on the user management functionality where you will be able:
- To see how many users has rights to trigger the build
- To see how many was active/trigger the build during the last month
- Select the users who are able to trigger the build
What if I am building open source? #
Each of the Travis CI Plans contains an amount of special OSS credits per month assigned to run builds only on public repositories. To find out more about it please contact the Travis CI support team. In the email please include:
- Your account name and your VCS provider (like travis-ci.com/github/[your account name] )
- How many credits (build minutes) you’d like to request (should your run out of credits again you can repeat the process to request more or to discuss a renewable amount)
How do I use credits? #
You can use your credits to run builds on your public and private repositories.
You may have been assigned an amount of OSS credits to run builds on public repositories. When you run out of OSS credits but want to keep building on public repositories you can go to the Plan page and turn the Credits consumption for OSS switcher to
On. In this case, once the ‘OSS credits’ pool is depleted, the system starts deducting from the ‘paid credits’ pool. Builds for OSS repositories will be allowed to start, and deducted from the paid credits.
How do I recharge my credits balance? #
You can buy additional build credits anytime you need them by clicking on your profile icon in the right upper corner of the screen =>Settings, navigate to the Plan page and press the ‘Buy add-ons’ button. Please be advised that it is not possible to buy additional credits on Free Plan.
Do credits expire? #
No, the credits you purchased do not expire.
Can you send me an invoice? #
The invoice is sent automatically by the Travis CI system after the Plan purchase or subsequent user license charge is made.
Can I get a refund? #
Upon cancellation of your account or switching back to the Free Plan, you can request a refund under the following conditions:
- You haven’t used any paid credits
- Request made up to and including 14 days after the billing date: applicable for full refund
Contact our support team at [email protected] Specify the GitHub/Bitbucket/GitLab/Assembla handle of the account for which you’re requesting a refund, and send us a copy of your payment and/or invoice.
How do I cancel my paid plan? #
If you want to cancel your paid plan, click on your profile icon in the right upper corner of the screen =>Settings, navigate to the Plan page and press the ‘Change Plan’ button and choose the Free Plan. Travis CI Free plan will provide you with 10,000 build credits to try it out for public and private repositories builds and unlimited number of users with no charge. If you want your account to be deleted, please contact the Travis CI support.
Do these prices include tax? #
No, all prices do not include tax.
Can I sign up for automatic renewals for usage based plan? #
The per-seats licence invoice will be charged and sent automatically after each month you use the Travis CI service, based on the maximum number of unique users who triggered the build during the given month. Unfortunately, right now it is not possible to configure automatic renewals for build credits. You need to manually buy credits each time you are about to run out of them. We intend to make it more convenient in the near future. To help you track the build credit consumption Travis CI system will send the notification emails each time your credit balance is used up by 50, 75 and 100%.
Are add-ons limited to a certain number of users? #
You can buy additional add-ons any time you feel it is needed. You and your organization’s members can use the bought add-ons with no limitations.
Why my credits balance is negative? #
Most probably your last build costed more than you had available in your credit balance. You won’t be able to run any builds until your balance gets positive. Replenish your credits (the negative balance will be deducted upon arrival of new credits creating new balance - see our billing overview. | https://docs.travis-ci.com/user/billing-faq | 2021-04-11T00:57:09 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.travis-ci.com |
A newer version of this page is available. Switch to the current version.
ASPxClientDateEdit.DateChanged Event
Fires after the selected date has been changed within the date editor.
Declaration
DateChanged: ASPxClientEvent<ASPxClientProcessingModeEventHandler<ASPxClientDateEdit>>
Event Data
The DateChanged event's data class is ASPxClientProcessingModeEventArgs. The following properties provide information specific to this event:
Remarks
The DateChanged event allows you to respond to the editor's selected date being changed by an end-user on the client side.
See Also
Feedback | https://docs.devexpress.com/AspNet/js-ASPxClientDateEdit.DateChanged?v=18.2 | 2021-04-11T00:35:28 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.devexpress.com |
SchedulerStorage.TriggerAlerts() Method
Namespace: DevExpress.Xpf.Scheduler
Assembly: DevExpress.Xpf.Scheduler.v18.2.dll
Declaration TriggerAlerts method to invoke all alerts immediately, instead of waiting for a timer-generated trigger. By default, reminders are checked every 15000 milliseconds (this interval is specified by the SchedulerStorage.RemindersCheckInterval property value).
The method is useful to check all reminders and perform all the necessary actions on application start. | https://docs.devexpress.com/WPF/DevExpress.Xpf.Scheduler.SchedulerStorage.TriggerAlerts?v=18.2 | 2021-04-11T01:56:48 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.devexpress.com |
Searching data / Building a query / Operations reference / Mathematical group / Logarithm: base 2 (log2)
Logarithm: base 2 (log2)
Description
Returns the base-2 logarithm of the selected argument.
How does it work in the search window?
Select Create column in the search window toolbar, then select the Logarithm: base 2 get the information required for the examples below:
from my.upload.sample.data
select split(message, ";", 17) as posNumbers1
We want to get the base-2 logarithm of the numbers in our posNumbers1 column. To do it, we will create a new column using the Logarithm: base 2 operation.
First, we must transform the string values in the posNumbers1 column into data type integer. To do it, create a new column using the To Int operation. Call the new column integerValues
Now, create another column using the Logarithm: base 2: base 2 operation:
log2(number)
Example
You can copy the following LINQ script and try the above example on the
my.upload
.sample.data table.
from my.upload.sample.data select split(message, ";", 17) as posNumbers1 select int(posNumbers1) as integerValues select log2(integerValues) as log2 | https://docs.devo.com/confluence/ndt/searching-data/building-a-query/operations-reference/mathematical-group/logarithm-base-2-log2 | 2021-04-11T01:55:36 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.devo.com |
Before you begin, you’ll need:
- Sophos Intercept X Endpoint installed
- Access to the Sophos Central Cloud console
- Filebeat 7 installed
- Terminal access to the instance running Filebeat. It is recommended to run the Sophos API script from the same instance running your Filebeat.
Configure Sophos to collect the Central Cloud logs
Follow the official instructions provided by Sophos for collecting Sophos Central Cloud logs from all machines.
The procedure involves using the Sophos API. Make sure that the
config.ini used in the Sophos siem.py script is under
format = json (this is the default setting).>>"].
Replace
<<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Change
<<FILE_PATH>> to the output TXT file retrieved from the Sophos siem.py script.. You can search or filter for Sophos logs, under
type:sophos-ep.
If you still don’t see your logs, see log shipping troubleshooting.
Contact support to request custom parsing assistance
The logs will require customized parsing so they can be effectively mapped in Kibana. | https://docs.logz.io/shipping/security-sources/sophos.html | 2021-04-11T01:28:21 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.logz.io |
# Deployment Reference
Prerequisites
- A machine with docker installed, with a publicly accessible static IP address.
- A DNS service (such as GoDaddy, NameCheap, or DuckDNS) to give your host a DNS name
- Ether (roughly 3 eth) to activate and fund the relayer.
# Start a Machine
This document demonstrate using Google Compute Engine, but you can use any hosting service The reason we use GCP, is that it has a "free tier" not limited in time (for a single micro instance), and that it comes with "docker" pre-installed.
- Go to the Google Cloud Compute Engine UI (opens new window)
- Create a new instance.
- For "Machine type", select "e2-micro".
- In "Boot Disk" change the Operating system to "Container Optimized OS".
- Allow http and https traffic into the instance.
- Create the instance.
- Once you can get its public IP address, add an "A" record for that IP your DNS service.
- To easily SSH into the machine, add your ssh public key, to the "Settings/Metadata/SSH keys"
# Install GSN Relayer
Checkout the code in GSN git repository, and navigate to the
dockers/relaydcfolder
edit
.envfile and set the HOST value to the public DNS name of your host.
edit the
config-sample/gsn-relay-config.json:
- Edit the
ethereumNodeUrlto point to a valid RPC url of the network you want to use.
- Edit the
versionRegistryto point to the right entry for your network from the Deployed networks
copy the files to the host:
- The
.envfile must be placed at the home folder
- The
gsn-relay-config.jsonmust be placed inside a
configfolder
to bring up the relayer, run the command
./rdc HOSTNAME up -d
To view the log, run:
./rdc HOSTNAME logs [gsn1]
Note that initial startup takes about a minute (to create a private key and register an SSL certificate)
Check the relay is up:
curl
You should see JSON output containing:
{ .. "ready":false, ...}, meaning the relayer is up and running, but not registered yet.
To register and fund using the gsn relayer-register command
Wait for the Relayer to complete the registration. It should take 1-2 minutes.
Run curl again:
curl
You can see it says:
ready:true
Congratulations. You have a running relayer. You should now be able to see your relayer in the list of all relayers:
Note: in order to test your relayer, add its URL to the list of
preferedRelays for your client's RelayProvider.
Otherwise, your client is free to pick any active relay. | https://docs.opengsn.org/relay-server/deployment-reference.html | 2021-04-11T00:02:17 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.opengsn.org |
Release Date: 04/06/2020
Hardware detection added
Web user control detects when the browser cannot access either a microphone or a webcam.
In those cases, the user control fires the event "onError", and a descriptive error message is saved in the property "LastMessage".
"Remote connection lost" detection added in the Web user control
The web user control can detect when the remote client connection is lost (web browser refresh is executed, wifi is lost, airplane mode activated, etc). The remote client can be either web or a mobile client.
The event "onCallEnded" is fired, and the error message "Remote_Connection_Failed" is saved in the property "LastMessage". | https://docs.workwithplus.com/servlet/com.wiki.wiki?3162,VideoCallsPlus+3.2, | 2021-04-11T00:23:04 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.workwithplus.com |
Text: variable-length character string.
Integer: signed four-byte integer with a range between -2147483648 to +2147483647.
Object: can point to data of any data type. You can assign a variable, constant, or expression of any data type to an object.
Table Reference: this is an integer that references another table using the id #.
File: upload a file.
Image Metadata: this allows you to upload any image files. For example: .jpg.
Video Metadata: this allows you to upload any video files. For example: .mp4.
Attachment Metadata: this allows you to upload any files with the exception of .exe.
Bool: logical Boolean (true/false).
Decimal: data type ranges from 131072 digits before the decimal point to 16383 digits after the decimal point.
JSON: JavaScript Object Notation, the basic data types are: Number, String, Boolean, Array, Object, and null.
Password: this is a special case where the data is automatically encrypted using salt encryption.
Salt in cryptography, is random data that is used as an additional input to a one-way function that hashes data, a password or passphrase. Salts are used to safeguard passwords in storage.
Timestamp: this is the number of milliseconds that have elapsed since the Unix epoch.
Enum: a special data type that enables for a variable to be a set of predefined constants. The variable must be equal to one of the values that have been predefined for it.
Point (geo_point): a special type that represents a point on the map using latitude and longitude.
Point Collection (geo_multipoint): a special type that represents a collection of points on one map using latitude and longitude.
Path (geo_linestring): a collection of points that represent a line on a map using latitude and longitude of each point.
Path Collection (geo_multilinestring): a collection of lines on one map.
Polygon (geo_polygon): a collection of lines that form a multi-sided shape on a map.
Polygon Collection (geo_multipolygon): a collection of polygons on one map. | https://docs.xano.com/database/datatypes | 2021-04-11T01:58:43 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.xano.com |
Hello,
I have a question about Azure devices,
My company has some devices registered in the Azure portal but the device details are nothing being updated I mean after the device has registered, if the user changes the device name, it is not being updated in the Azure portal, the same thing happening with win version,
can someone tell me what I need to do to keep the azure device updated?
Thanks | https://docs.microsoft.com/en-us/answers/questions/2025/azure-device-update.html | 2021-04-11T00:46:36 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.microsoft.com |
Installing QIIME 2 using Amazon Web Services¶
1. Determine which AWS AMI to use¶
For a full list of QIIME 2 AWS AMIs, please check out the AWS AMIs link. Once you have determined which image you would like to use, please resume this guide.
2. Set up an AWS account¶
Point your browser to and log in (you will need to provide a credit card if you haven’t already created an account).
4. Launch an instance¶
When launching an instance, select “Community AMIs”, and search for the AMI you selected in Step 1 (above).
5. Configure¶
When prompted to set up a security group, make sure that port 22 is open. Next, when prompted to set up an SSH keypair, choose “Proceed without a keypair”. | https://docs.qiime2.org/2021.2/install/virtual/aws/ | 2021-04-11T00:20:32 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.qiime2.org |
[−][src]Crate unix_path
Unix path manipulation.
This crate provides two types,
PathBuf and
Path (akin to
String
and
str), for working with paths abstractly. These types are thin wrappers
around
UnixString and
UnixStr respectively, meaning that they work
directly on strings independently from the local platform's path syntax.
Paths can be parsed into
Components by iterating over the structure
returned by the
components method on
Path.
Components roughly
correspond to the substrings between path separators (
/). unix_path::Path; use unix_str::UnixStr; let path = Path::new("/tmp/foo/bar.txt"); let parent = path.parent(); assert_eq!(parent, Some(Path::new("/tmp/foo"))); let file_stem = path.file_stem(); assert_eq!(file_stem, Some(UnixStr::new("bar"))); let extension = path.extension(); assert_eq!(extension, Some(UnixStr::new("txt")));
To build or modify paths, use
PathBuf:
use unix_path::PathBuf; // This way works... let mut path = PathBuf::from("/"); path.push("feel"); path.push("the"); path.set_extension("force"); // ... but push is best used if you don't know everything up // front. If you do, this way is better: let path: PathBuf = ["/", "feel", "the.force"].iter().collect(); | https://docs.rs/unix_path/1.0.0/unix_path/ | 2021-04-11T01:25:33 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.rs |
For more than two years we've been continuously innovating and bringing something entirely new to the Swedish sneaker scene. With over 14600 success posts, more than 25000 shoes purchased and numerous new businesses started, we're happy with what we have achieved together with our members.
A typical month in AutoSnkr consists of 1000+ screenshots posted in the #success channel, which usually combines 2000-3000+ sneakers/items purchased by our members. You can also see our instagram story highlights showcasing massive amounts of members success. We also regularly post monthly collages of success screenshots. | https://docs.autosnkr.com/group-success | 2021-04-11T00:03:30 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.autosnkr.com |
Release Notes - Version 3.5¶
Key features and changes¶
TeamDrive Host Server Version 3.5 is the next major release following after version 3.0.013.
Note
Please note the the version numbering scheme for the Host usually remains at zero, unless a rebuild/republishing of a release based on the same code base has to be performed (e.g. to fix a build or packaging issue that has no effect on the functionality or feature set).
Version 3.5 contains the following features and notable differences to version 3.0.013. See Release Notes - Version 3.0.013 for a detailed description of the change history for that version.
Host Server Functionality¶
- Security enhancement: Files can now be published with an expiration date after which an auto task on the Host Server will automatically remove the published files again. Additionally, published files can now be protected by a password. This functionality requires support on the TeamDrive Client side, which is implemented in versions 4.1 of the TeamDrive Client. For entering the password in a html page, a few templates were added. The templates could be customized and will not overwritten when updating to a newer Host Server version.
- Security enhancement: A request for a published file no longer returns the actual file directly, except in the case where the request comes from tools like
wgetor
curl. Instead, the document returned is an HTML file containing JavaScript calls that load the actual file using a temporary URL. This solves a potential security problem in which URLs of published documents can be inadvertently disclosed to unintended recipients in the following scenario: A TeamDrive user publishes a document that contains URLs pointing to a third-party website (e.g. a PDF or office document). The user, or an authorized recipient of the published URL, clicks on a hyperlink embedded in the document. At that point, the referrer header discloses the document’s publish URL to the third-party website. Someone with access to that header, such as the webmaster of the third-party website, could then access the link to the published document. (HOSTSERVER-316)
- A new Client/Server protocol, supporting parallel polling of Spaces for increased throughput/performance, batched delete operations (e.g. emptying the Trash) and “soft” locking of files. These features require support on the TeamDrive Client side, which is scheduled to be implemented in future versions of the TeamDrive Client.
- Performance improvement: The Host Server now uses a database table instead of action files in the Space Volume’s file system for signalling actions like uploading or deleting files to the object store. As a result,
s3dno longer has to perform a full scan of all Space Volumes to look for new or changed files. (HOSTSERVER-284) Additionally, the MD5 digest of a file is also stored in this table, so
s3ddoes not need to perform a recalculation of the checksum before uploading the file to the object store. During an upgrade from a previous version, any remaining action tag files in the file system will be imported into the database. Afterwards, the server setting
ImportS3tagFilesshould be set to
False.
- The S3 daemon
s3dnow only performs a full scan of all Space Volumes once per day by default, looking for old files to be transferred to the object store. The age of these files is set via the settings variable
MaxFileAge. The maximum file age should be set long enough to ensure that no file that may still be in the process of being uploaded by a Client will be sent to the Object Store, otherwise the Client would have to restart the upload from scratch.
Administration Console¶
- Security improvement: Added support for managing multiple user/administrator accounts. There are 2 types of users: Superuser and Administrator. Only the Superuser may manage other users. The Administrator may view all users and only update his own user account. (HOSTSERVER-366)
- Security improvement: Disabled auto completion on the login form. (HOSTSERVER-379)
- Security improvement: The complexity of entered passwords is now indicated. (HOSTSERVER-374)
- Security improvement: it is now possible to enable two-factor authentication via email. If enabled, the user is required to enter a security code provided via email in addition to his username and password.
- Security improvement: On login, the user will get an error if he has another logged in session. To proceed, the user must check the checkbox titled: “Close my other login sessions”. (HOSTSERVER-376, HOSTSERVER-377)
- Security improvement: The following events are now logged at the “notice” level: login, logout, failed login attempts and changes to user accounts.
- Security improvement: the amount of search results (e.g. Spaces, Depots or users) is now limited to a maximum defined by the
MaxRecordsDisplayedsetting, which can only be changed by the Superuser.
- Administration: It is now possible to change a Depot’s status (e.g. enabled, disabled, deleted)
- Administration: Added support for viewing selected server log files and the Host Server API log. (HOSTSERVER-348, HOSTSERVER-243)
- Administration: It is now possible to track and display modifications made to Space Depots (e.g. via API calls coming from the Registration Server or via the Host Server Admin Console). (HOSTSERVER-388)
- Administration: When creating a new Space Volume via the Administration Console, the system now checks if the directory actually exists on the file system before creating the Volume. (HOSTSERVER-349)
- Usability: References like Depot Names, Volume names and owners in the Space list are now clickable, to improve the quick navigation between pages. (HOSTSERVER-390)
- Usability: Objects like Spaces or Depots that have been marked as deleted are now hidden in result lists by default. They can be made visible again by changing the setting
ShowDeletedObjectsfrom
falseto
true. (HOSTSERVER-442)
- Usability: Administration Console now better visualizes errors like missing Space Volumes.
- Usability: Units displayed for disk space or traffic usage now use the correct units (e.g. MiB, or GiB), to avoid confusion caused by conversions between different units. Space and traffic levels are now displayed in percent instead of absolute units.
Administration / Installation¶
- Administration: The Host Server’s log levels have been aligned with the ones used by the Registration Server and the Yvva Runtime Environment. Valid log levels are: 1 (Error), 2 (Warning), 3 (Notice), 4 (Trace), 5 (Debug). In production mode the default log level is 3 (Notice). Setting the log file name to
syslogwill now send log output to the local syslog service. You can add an optional “Log Identity after a colon in the log file name, for example:
syslog:my-log-id. The default Log Identity is name of the program, e.g.
s3dor
tshs.
- Administration: The central log file
/var/log/td-hostserver.logis the central log location for all Yvva-based components (e.g. the Host Server API, Administration Console or
td-hostserverbackground service); the log files used in previous versions (e.g.
/var/log/mod_yvva.log,
/var/log/p1_autotask.log,
/var/log/pbvm.log) will no longer be used.
- Administration: TSHS now supports the additional commands
disable-s3-host,
enable-s3-hostand
delete-s3-hostthat allow for disabling/removing the synchronization of objects to an S3-compatible object store. Calling
disable-s3-hostmarks a host entry as “disabled”. Calling
delete-s3-hostdeleteswill re-enable the synchronization of objects to the object store, including the upload of all objects that have been uploaded to TSHS while the object store was marked as disabled. If a disabled or deleted host is marked as current, then TSHS will generate an error on each write attempt.
- Administration: Added an auto task that can be enabled to send out notification emails if a Space Volume’s disk utilization reaches a configurable level.
- Administration: Added an auto task that removes published files that have reached their expiry time.
- Administration: Added an auto task that can be enabled to delete API log entries older than 30 days from the
hostapilogtable.
- Installation: TSHS now supports reading options from a configuration file. The default is
/etc/tshs.conf. The default options that were previously stored in the TSHS init script
/etc/init.d/tshshave now been moved to the configuration file instead. (HOSTSERVER-303)
- Installation: Optionally configure email support (required when using two-factor authentication). (HOSTSERVER-437)
- Installation: The initial Host Server setup process now asks for both a user name and password for the Superuser account. (HOSTSERVER-438)
- Installation: Host Server 3.5 now requires Yvva Runtime Environment version 1.2 or later. This version is included in the Host Server’s yum package repository and will be installed automatically.
- Installation: The distribution now contains the tool
mys3, which can be used to interact with an S3 compatible object store.
API¶
- Changes to a Space Depot performed by the API functions
addusertodepotand
deleteuserfromdepotare now added to the Depot’s change log.
- The MD5 checksum value calculated over API requests no longer needs to be passed in lowercase when submitting the request. (HOSTSERVER-426)
- For debugging purposes, erroneous API requests are now logged to the API requests table as well. (REGSERVER-465)
Change Log - Version 3.5¶
Documentation¶
- Fixed description of Background Tasks
- Added ssl configuration hint in case of upgrading a server to version 3.5
- Added description for the html templates for password protected published files
Host Server Functionality¶
- Usability: Added a default html template folder to avoid conflicts with customized html templates (HOSTSERVER-572)
- Administration: Fixed divide by zero error in case of depot size and traffic limit are zero (HOSTSERVER-570)
- Administration: German translation is disabled. Only english web interface is supported (HOSTSERVER-569)
- Administration: The new background task for API log cleanup will be created with status enabled instead of disabled. The usage could be controlled using the setting “APILogEntryTimeout” (HOSTSERVER-568)
- Usability: Added html template “url-invalid.html” for expired or invalid token in case of access a published file (HOSTSERVER-567)
- Security improvement: Limit access to allowed log files (HOSTSERVER-564)
- S3 daemon: Added bandwidth limitation for the S3 daemon (HOSTSERVER-563)
- Administration: Added filter (<, >, =) for Space-IDs and Depot-IDs (HOSTSERVER-562)
- Administration: Added setting “APILogEntryTimeout” to define a period in days for deleting api logs (HOSTSERVER-561)
- Administration: Fixed truncated “Add New Admin User”-Button (HOSTSERVER-560)
- Administration: Fixed access to ping.xml (HOSTSERVER-558)
- Administration: Fixed s3d.log file name for log file display (HOSTSERVER-557)
- S3 daemon: Fixed crash in case of multipart upload (HOSTSERVER-556)
- Administration: Fixed displaying info text for “TimeDiffTolerance” setting (HOSTSERVER-553) | https://docs.teamdrive.net/HostServer/3.5.1/html/TeamDrive-Host-Server-Installation-en/release-notes-3.5.html | 2021-04-11T01:02:51 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.teamdrive.net |
The Serial/VIN tab will give users the capability to enter and manage Serial/VIN data for a model. The Serial/VIN tab will be available in the model view, only when the feature is turned on.
A user can add a Serial/VIN values to a model by clicking on the Add button. When the button is clicked, the user should see an Edit Serial window appear. The window will contain three options for a serial value, a Single, an Open Range, and a Closed Range. A model can have a variety of Serial/VIN types as well as multiple entries.
When a user selects any of the Serial/VIN types, user will be prompted to enter a Serial Prefix. The Serial Prefix can contain a variation of alpha-numeric characters. The Serial Prefix will need to meet the minimum character count requirements for the prefix field if serial prefix count has been set. For Single types, the user can enter a number in the Serial Numeric Value field.
For Open Range serial types, a user can enter the Serial Prefix with an additional Serial Begin Number. The user will be able to use this type of Serial/VIN association in the case that a model does not yet have a Serial/VIN ending number.
For Closed Range serial types, a user can enter the Serial Prefix, the Serial Begins Number, and include the Serial End Number. The user will be able to use this type of Serial/VIN association in the case that a model does have an ending Serial/VIN number.
To turn on the feature and configure Serial/VIN settings, please see Catalog Settings under the Data Settings in the manual for further steps and information. The two diagrams below display the Serial/VIN tab in a model creating a Serial/VIN for that model:
| https://aridocs.com/docs/data-manager-rt/working-with-models/serial-vin/ | 2021-04-11T01:17:48 | CC-MAIN-2021-17 | 1618038060603.10 | [array(['http://aridocs.com/wp-content/uploads/2018/12/serial1.png', None],
dtype=object)
array(['http://aridocs.com/wp-content/uploads/2018/12/serial2.png', None],
dtype=object) ] | aridocs.com |
Recommended prior reading material: Course Activity Trends.
Taking a closer look in the Activity Trends graph, can shed light on how users learn overtime.
In the above graph we can focus on the active actions users made in a course, such as comments and replies (collaboration) and see how these actions increased the engagement - not only in the days that the discussion took place, but also in the following days (as users got back to the video, read the comments and replies, clicked on the time tags, added notes etc.).
At the User View we can use the Activity Trends to learn about a specific user - his activity throughout the course and his learning habits. We can even see how his learning advanced in the course over time.
The Activity Trends graph helps us identify learning patterns by acknowledging dates that had extremely high activity (before\ during\ right after a lecture? before an exam?) and when there was low activity (semester break?). In case we spot low activity when it was not expected, perhaps a comment on some videos, an assignment or a task can re inspire learners to become active.
Let's take this user's activity for example:
From this user's activity trends graph, we can see that about two weeks after the course began (at around Aug 15th ), he increased his level of participation - adding comments (green) and taking notes (orange) on a regular basis. At some point, during course break (Oct 1st - Oct 15th), Hen's activity decreased a little - but not completely. Towards the end of the semester (starting the 2nd half of November), his level of collaboration decreased, however - he did remain engaged until the end of semester (end of November).
Recommended prior reading material: How Engagement is measured.
A user's engagement indicates active learning, meaning - the user did not only watch the video, but actually had a meaningful interaction with it - either by creating content and\or by consuming it.
In the above example of User View (Gili Cohen's activity in "Demo" course):
The course includes 22 videos
Gili watched 4 of them – 123 times (for all 4 videos combined)
She finished 12% of the total course content of the 22 videos.
In fact, every time she played a video, she only watched small portions (Average Completion Rate), 10% on average.
Gili got 3 "Educator's Thumbs up" in the course for the 10 Comments and replies she wrote (30% is not bad).
She has 102 engagement score indicating a high meaningful activity in the course.
3 of Comments were replies to others indicates level of collaboration and knowledge sharing.
She liked 9 comments written by others.
Finally, she wrote 12 personal notes which can be characteristic of learning (summarising, bookmarking, etc).
Recommended prior reading material: Course Videos.
One of the greatest features of the tables in the dashboard is the fact that the columns can be sorted.
This way we can get a better understanding of “the top...” well.. everything!
Sorting the Course Videos table by the Collaboration column will place videos with most collaboration at the top. High level of collaboration might be an indicator that the video requires additional attention:
It might be extremely interesting and caused a vigorous discussion by the learners. In this case you might be interested in taking a part of the discussion as well.
There might be a part in which the users (all of them, or specific learners) are struggling and it may be helpful to offer your help, provide a different explanation, or share examples.
Sort by Engagement to see what videos were the ones that got the most “attention” from learners. Note that the most engaged video is not necessarily the one with the most collaboration, meaning: the content of the discussion is what increased the engagement (and not the number of comments) - might be interesting to take a deeper look in this video.
Sorting the table by the “Users” column will order the videos by the number of users who watched it. It may be interesting to see which videos were not viewed by many users - is it because this video was added at the end of the semester? It's worth investigating further, maybe this video has high drop rate (See Sort By Average Completion Rate below).
Videos with low Avg. Completion rate - means users watched only a little of the video every time they viewed it. It might be worthwhile to understand why - was the video hard to understand? was it not engaging enough? was it too long?
Recommended prior reading material: Course Users.
Sorting within the users table also means filtering. Use this feature get a better idea of how many users participated in specific type of activity.
Let’s take a demo course as an example:
In the above example we see there are 19 users in the course, 47% of them collaborated. If we scroll down to the users table and sort by “Collaboration”, we will see exactly how many users collaborated and who they are:
Presence: Sort by Views to see only users that played at least one video. If someone is not on the list - he has not watched any of the course videos yet.
Attention: Sort by Views, if a user has a radical number of views (much more or much less than others), it might indicate on struggling with the content. Diving into User View, can shed more light by looking at Completion metrics, especially The Average Completion Rate.
Learning: not only the users that collaborate are the ones that are learning - they are simply the ones that are “heard”. Note all the users that have high Engagement but low Collaboration and the ones that have written many personal Notes - they also had meaningful interactions, they just might be shy. Diving into User View, can shed more light by looking at Completion metrics, especially at the ratio of Average Completion rate vs Total Completion.
Sharing Knowledge:
The Thumbs up column indicates how many Thumbs Up (= Instructor’s “likes”) each learner received. This can be used as extra credit, or bonus points for the learners.
The Replies column can also serve as an indicator to learners who not only take an active part of the discussion, but are actually helping others (replying to other questions). This can also be used as extra credit, or bonus points for the learners.
Course analytics data can be exported, and dowloaded in CSV format. For more details please refer to Exporting Course Data:
Once exported, you will receive a zip that contains the following CSV’s:
Overview – the highlights of the course.
Users – Course Users activity (with additional fields such as: user email, votes, last login date).
Videos – Course Videos.
If you need any additional information or have any questions, please contact us at Annoto Support
Recommended prior reading material: What can we learn from Users table.
The exported Users table provides comprehensive analytics that can be used for grading, for example:
Participation - learners can be graded for the comments\ questions\ ideas\ answers & thoughts they are sharing:
Collaboration - for example: writing over 5 comments throughout the course grants extra credit
Replies - for example: writing over 3 meaningful replies throughout the course grants bonus points
Quality
Thumbs Up badges - for example: a learner that got more than 5 Educator's Thumbs Up gets extra credit
Course Completion - learners can be graded based Views, Watched video and Completion ratings (available in User View). | https://docs.annoto.net/guides/dashboard/best-practices | 2021-04-11T00:03:03 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.annoto.net |
Uses json.loads instead of eval() for JSON parsing, which could
allow users of the Blazar dashboard to trigger code execution on the
Horizon host as the user the Horizon service runs under.
json.loads
eval()
Add support for specifying the affinity policy to use with instance
reservations. This feature is available in Blazar starting with the Stein
release.
Except where otherwise noted, this document is licensed under
Creative Commons
Attribution 3.0 License. See all
OpenStack Legal Documents. | https://docs.openstack.org/releasenotes/blazar-dashboard/train.html | 2021-04-11T02:02:14 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.openstack.org |
Before You Start
Enable Registration on your Site!!
Make sure the checkbox anyone can register is checked, otherwise, the register option in the login of your page will be disabled. This is necessary for you invited users to use their invite codes.
This checkbox is located in Settings, on the left side of the screen.
| https://docs.themekraft.com/article/712-before-you-start | 2021-04-11T00:56:58 | CC-MAIN-2021-17 | 1618038060603.10 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55b4a8bae4b0b0593824fb02/images/5fdba8c87129911ba1b221b6/file-cQ9PjT9VhT.png',
None], dtype=object) ] | docs.themekraft.com |
6. Production envelopes¶
Production envelopes (aka phenotype phase planes) will show distinct phases of optimal growth with different use of two different substrates. For more information, see Edwards et al.
Cobrapy supports calculating these production envelopes and they can easily be plotted using your favorite plotting package. Here, we will make one for the “textbook” E. coli core model and demonstrate plotting using matplotlib.
In [1]:
import cobra.test from cobra.flux_analysis import production_envelope model = cobra.test.create_test_model("textbook")
We want to make a phenotype phase plane to evaluate uptakes of Glucose and Oxygen.
In [2]:
prod_env = production_envelope(model, ["EX_glc__D_e", "EX_o2_e"])
In [3]:
prod_env.head()
Out[3]:
If we specify the carbon source, we can also get the carbon and mass yield. For example, temporarily setting the objective to produce acetate instead we could get production envelope as follows and pandas to quickly plot the results.
In [4]:
prod_env = production_envelope( model, ["EX_o2_e"], objective="EX_ac_e", c_source="EX_glc__D_e")
In [5]:
prod_env.head()
Out[5]:
In [6]:
%matplotlib inline
In [7]:
prod_env[prod_env.direction == 'maximum'].plot( kind='line', x='EX_o2_e', y='carbon_yield')
Out[7]:
<matplotlib.axes._subplots.AxesSubplot at 0x10fc37630>
Previous versions of cobrapy included more tailored plots for phase planes which have now been dropped in order to improve maintainability and enhance the focus of cobrapy. Plotting for cobra models is intended for another package. | https://cobrapy.readthedocs.io/en/0.9.0/phenotype_phase_plane.html | 2021-04-11T01:24:42 | CC-MAIN-2021-17 | 1618038060603.10 | [array(['_images/phenotype_phase_plane_10_1.png',
'_images/phenotype_phase_plane_10_1.png'], dtype=object)] | cobrapy.readthedocs.io |
Docker object labels
Estimated reading time: 3 minutes
Labels are a mechanism for applying metadata to Docker objects, including:
- Images
- Containers
- Local daemons
- Volumes
- Networks
- Swarm nodes
- Swarm services
You can use labels to organize your images, record licensing information, annotate relationships between containers, volumes, and networks, or in any way that makes sense for your business or application.
Label keys and values
A label is a key-value pair, stored as a string. You can specify multiple labels for an object, but each key-value pair must be unique within an object. If the same key is given multiple values, the most-recently-written value overwrites all previous values.
Key format recommendations
A label key is the left-hand side of the key-value pair. Keys are alphanumeric
strings which may contain periods (
.) and hyphens (
-). Most Docker users use
images created by other organizations, and the following guidelines help to
prevent inadvertent duplication of labels across objects, especially if you plan
to use labels as a mechanism for automation..
These guidelines are not currently enforced and additional guidelines may apply to specific use cases.
Value guidelines
Label values can contain any data type that can be represented as a string,
including (but not limited to) JSON, XML, CSV, or YAML. The only requirement is
that the value be serialized to a string first, using a mechanism specific to
the type of structure. For instance, to serialize JSON into a string, you might
use the
JSON.stringify() JavaScript method.
Since Docker does not deserialize the value, you cannot treat a JSON or XML document as a nested structure when querying or filtering by label value unless you build this functionality into third-party tooling.
Manage labels on objects
Each type of object with support for labels has mechanisms for adding and managing them and using them as they relate to that type of object. These links provide a good place to start learning about how you can use labels in your Docker deployments.
Labels on images, containers, local daemons, volumes, and networks are static for the lifetime of the object. To change these labels you must recreate the object. Labels on swarm nodes and services can be updated dynamically.
- Images and containers
- Local Docker daemons
- Volumes
- Networks
- Swarm nodes
- Swarm services | https://docs-stage.docker.com/config/labels-custom-metadata/ | 2021-04-11T00:09:20 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs-stage.docker.com |
Use the migration function to migrate data from another system into Address Manager, or to add large amounts of new data. After structuring the data in an XML file, you import the files to Address Manager and queue them for migration.
To perform a migration:
- is displayed in the Uploaded Files section.
- Under Uploaded Files, click Queue beside a migration file. The file is added to the Queued Files section and its data is imported into Address Manager. During migration, you can stop the migration process and remove files from the Queued Files list:
- A Stop button appears beside the migration file in the Queued Files section while its data is being migrated. If necessary, click Stop to stop the migration.
- A Remove button appears beside the migration file in the Queued Files section while the file is awaiting migration. If necessary, click Remove to delete the file from the Queued Files list.
- As data is migrating, click the Refresh button to view an updated Queued Files list. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Migrating-Data/8.3.2 | 2021-04-11T00:26:47 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.bluecatnetworks.com |
Intersection logic treats all search items as equal to each other, regardless of the number of understood or specified intersections. You can indicate a precedence for a particular search item which falls outside the intersection quantity setting.
A common example is where you are interested chiefly in one subject, but you want to see occurrences of that subject in proximity to one or more of several specified choices. This would be an "or" search in conjunction with one item marked for precedence. You definitely want A, along with either B, or C, or D.
Use the plus sign (
+) to mark search items for mandatory
inclusion. Use
@0 to signify no intersections are required of
the unmarked permuted items. The number of intersections required as
specified by `
@#' will apply to those permuted items remaining.
Example:
WHERE BODY LIKE '+license plumbing alcohol taxes @0'
This search requires (
+) the occurrence of "license", which
must be found in the same sentence with either "plumbing",
"alcohol", or "taxes".
The 0 intersection designation applies only to the unmarked permuted
sets. Since "license" is weighted with a plus (
+), the
"
@0" designation applies to the other search items only.
This query finds the following hits:
+license (and) @0 alcohol
Every person licensed to sell liquor, wine or beer or mixed
beverages in the city under the ALCOHOLIC Beverage Code
shall pay to the city a LICENSE fee equal to the maximum
allowed as provided for in the Alcoholic Beverage Code.
+license (and) @0 plumbing
Before any person, firm or corporation shall engage in the
PLUMBING business within the city, he shall be qualified as
set forth herein, and a LICENSE shall be obtained from the
State Board of Plumbing Examiners as required.
+license (and) @0 taxes
The city may assess, levy and collect any and all character
of TAXES for general and special purposes on all subjects or
objects, including occupation taxes, LICENSE taxes and
excise taxes.
More than one search item may be marked with a plus (
+) for
inclusion, and any valid intersection quantity (
@#) may be used
to refer to the other unmarked items. Any search item, including
phrases and special expressions, may be weighted for precedence in
this fashion. | https://docs.thunderstone.com/site/texisman/weighting_items_for_precedence.html | 2021-04-11T01:09:13 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.thunderstone.com |
DAG’s spread across multi-domains?
Q: Can you spread an Exchange DAG (Database Availability Group) between two domains?
A: No.
Now the story: Say you have a single forest named: Contoso.com. In that forest, you have two child domains: East.Contoso.com and West.Contoso.com. You also have Exchange servers deployed in both East and West domains, but none in the root domain. Is it possible to host some Exchange DAG nodes, in the same DAG, deployed in the East.Contoso.com child domain and others in the West.Contoso.com domain? No.
Exchange does fully support AD site resilience and highly available options, but all within the same domain. Accordingly, it’s best to extend a single DAG across two AD sites, where the nodes are in the same domain. The Exchange Supportability matrix is key to knowing which OS’s are supported and which ones are not, for the different Exchange Server versions.
Yet, there is further discussion about why. It’s not an Exchange issue, but a Windows Operating System (OS) problem. Not until Windows Server 2016 could you even have a cluster across domains: Therefore, unless you are on Windows OS 2016 or higher, no cross domain cluster is allowed.
Consequently, your next thought is, well, if someone is running Exchange Server 2016 on Windows Server 2016, then they can just add in an Exchange Server 2019 node to an Exchange 2016 DAG? No. And not yet, and most likely not ever. The Exchange Product Group is aware of the request, but as of this writing, the latest information is, that option is not going to be an upgrade path available. Time will tell, as the future is not yet written. | https://docs.microsoft.com/en-us/archive/blogs/mconeill/dags-spread-across-multi-domains | 2021-04-11T01:03:03 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.microsoft.com |
❶ and gets the list of email attachments. This transformer is then applied to the pop3 inbound endpoint defined ❷. Then we define a list list-message-splitter-router ❸ which will iterate through all of the email attachments. Next we define a file outbound endpoint which will write the attachment to the './received' directory with a datestamp as the file name ❹. A simple groovy expression ❺ gets the inputStream of the attachment to write the file.
The IMAPS connector has tls client and server keystore information ❶. The built-in transformer is declared ❷ and gets the list of email attachments. This transformer is then applied to the inbound endpoint ❸. Then we define a list list-message-splitter-router ❹ which iterates through all of the email attachments. Next we define a file outbound endpoint that writes the attachment to the './received' directory with a datestamp as the file name ❺. A simple groovy expression ❻ gets the inputStream of the attachment to write the file.. Note that for Mule Runtime versions prior to 3.8.x, you need to escape the % character using %25 (HTML code for %).
host: The name or IP address of the IMAP server, such as, localhost, or 127.0.0.1.
port: The port number of the IMAP server.
For example:
Secure version:
You can also define the endpoints using URI syntax:
This will log into the
bob mailbox on
localhost on port 65433 using password
password. You can also specify the endpoint settings using a URI, but the above syntax is easier to read.
For more information about transformers, see the [Transformer] section in the Email Transport Reference.
For more information about filters, see [Filters] section in the Email Transport Reference. | https://docs.mulesoft.com/mule-user-guide/v/3.5/imap-transport-reference | 2017-12-11T03:58:17 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.mulesoft.com |
In STYLY, you can upload the 3D models as long as they are in supported formats by following the same procedure. In this tutorial, I will upload an FBX file as an example.
Please press the “Assets” button on the STYLY editor.
Select “3D Model”.
Select “Upload”.
Press “Select” to select the file you want to upload.
The uploaded file name is displayed as follows.
Let’s name the file. Press Upload when finished.
The uploading process will start. “Upload complete. The process will start soon.” is displayed when it is done uploading the model.
If you want to upload the next model in succession, please press “Upload another model”.
| http://docs.styly.cc/uploading-original-assets/uploading-3d-model/ | 2017-12-11T03:53:24 | CC-MAIN-2017-51 | 1512948512121.15 | [array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object) ] | docs.styly.cc |
In order to let pyglet process operating system events such as mouse and keyboard events, applications need to enter an application event loop. The event loop continuously checks for new events, dispatches those events, and updates the contents of all open windows.
pyglet provides an application event loop that is tuned for performance and low power usage on Windows, Linux and Mac OS X. Most applications need only call:
pyglet.app.run()
to enter the event loop after creating their initial set of windows and
attaching event handlers. The run function does not return until all open
windows have been closed, or until
pyglet.app.exit() is called.
The pyglet application event loop dispatches window events (such as for mouse and keyboard input) as they occur and dispatches the on_draw event to each window after every iteration through the loop.
To have additional code run periodically or every iteration through the loop, schedule functions on the clock (see Scheduling functions for future execution). pyglet ensures that the loop iterates only as often as necessary to fulfil all scheduled functions and user input.:
pyglet.app.EventLoop().run()
Only one EventLoop can be running at a time; when the run method is called the module variable pyglet.app.event_loop is set to the running instance. Other pyglet modules such as pyglet.window depend on this.()
The:
poll=Trueto call any scheduled functions.
The return value of the.
Earlier. | http://pyglet.readthedocs.io/en/pyglet-1.2-maintenance/programming_guide/eventloop.html | 2018-01-16T11:15:47 | CC-MAIN-2018-05 | 1516084886416.17 | [] | pyglet.readthedocs.io |
Roles and Permissions
Users are any members of your site that have an account. What they can do with that account is up to you. You can give them access to login-only areas of the front-end, give them the ability to manage every aspect of the site in the Control Panel, and everything in between.
A User by itself has no permission to access or change any aspect of Statamic except a front-end login. Beyond that, it takes an explicit set of Permissions for that User to accomplish anything.
Access to Statamic’s Control Panel and the ability to access, create, edit, and delete content and settings is broken out across many different Permissions. Let’s cover some lingo before we move on further.
Super Users
You may have heard the term super user, super admin, or root user thrown around the internet. This is a way of saying someone that has unlimited permissions.
Whenever Statamic needs to ask the question “Can this user do this thing?”, if the user has super permissions the answer will always be “Yes”.
It is not a permission to be taken lightly, and is often only held by the developers of the site.
A user can be considered super if they belong to a role or user group with a
super permission, or if they have
super: true directly on their account.
Super users are the only ones that are able to configure the site through the Control Panel. They can create and delete content collections (entry collections, taxonomies, global sets, asset containers, etc); manage fieldsets, edit system settings, and more.
Roles
Assigning permissions one at a time per-user would be a tedious endeavor, much like cleaning up a bucket of Lego bricks dumped out on your kitchen floor with one hand.
For that very reason, Roles exist. A Role is simply a reusable group of Permissions that can be assigned to users and groups. For example, you could create an Editor role that had all the necessary permissions to create, edit, and delete content, but no access whatsoever to modify site settings or work with system tools.
A User can hold multiple Roles and it makes no difference whether there is a lot, a little, or no overlap between them. The Permissions are additive, which means if one Role gives you a Permission, no other Role can take it away. Keep that in mind as you design the best Roles for your workflow.
It is through the Roles interface that you even access the available Permissions. There is no limit to the number of Roles you can create. Feel free to get creative. We’ve heard rumors of a “Bacon Aficionado” Role out there in the wild, and if you’re reading this and can give us access, we formally request an invite. We’ll bring the shrimp.
Roles are created and managed in
Configure » Users, assuming you have necessary permissions to access it.
It so happens that when roles are created via the Control Panel (CP) they are identified under the hood in
/site/settings/users/roles.yaml by a system-created alphanumeric ID, such as:
d32e14fb-08c9-44c2-aaf8-21200852bafd: title: Admin permissions: - super
This works just fine, as such IDs are never seen by folks who interact with the site exclusively via the CP. However, if you are setting up your site by hand, you can opt to define roles yourself by editing
.../roles.yaml and give them more human-readable names. For instance,
admin: title: Admin permissions: - super
Both approaches work equally well, but the latter may be more convenient for hands-on programmers.
Groups
Just in case you have a complex publishing workflow or lots of cooks in the kitchen (metaphorically speaking of course - if you have lots of real cooks we’d like to try what you’re cooking and formally request invites to this event too), assigning multiple Permissions to every User can also be tedious, as are an over abundance of metaphors and subtexts.
Enter: User Groups. When a User registers on your site, or you create a user and invite them, you can isolate them to a specific group. Each group has its own assigned Roles, which makes the entire Role assigning process hands-off and automated.
Groups are created and managed in
Configure » Users, again assuming you have the necessary permissions to access it.
Conclusion
To explain this any further would likely be futile. Log into the Control Panel, head to the
Configure area, and start clicking. We’re pretty sure you can take it from here. | https://docs.statamic.com/permissions | 2018-01-16T11:37:32 | CC-MAIN-2018-05 | 1516084886416.17 | [array(['/assets/img/other/bacon.jpg', 'Bacon, ladies and gentlemen'],
dtype=object) ] | docs.statamic.com |
Python type of the ndarray is PyArray_Type. In C, every ndarray is a pointer to a PyArrayObject structure. The ob_type member of this structure contains a pointer to the PyArray_Type typeobject..
A pointer to the first element of the array. This pointer can (and normally should) be recast to the data type of the array.
An integer providing the number of dimensions for this array. When nd is 0, the array is sometimes called a rank-0 array. Such arrays have undefined dimensions and strides and cannot be accessed. NPY_MAXDIMS is the largest number of dimensions for any array.
An array of integers providing the shape in each dimension as
long as nd
1. The integer is always large enough
to hold a pointer on the platform, so the dimension size is
only limited by memory.
An array of integers providing for each dimension the number of bytes that must be skipped to get to the next element in that dimension. NPY_UPDATEIFCOPY flag set, then this array is a working copy of a “misbehaved” array. As soon as this array is deleted, the array pointed to by base will be updated with the contents of this array..
Flags indicating how the memory pointed to by data is to be interpreted. Possible flags are NPY_C_CONTIGUOUS, NPY_F_CONTIGUOUS, NPY_OWNDATA, NPY_ALIGNED, NPY_WRITEABLE, and NPY_UPDATEIFCOPY.
This member allows array objects to have weak references (using the weakref module).
The PyArrayDescr_Type is the built-in type of the data-type-descriptor objects used to describe how the bytes comprising the array are to be interpreted. There are 21 statically-defined PyArray_Descr objects for the built-in data-types. While these participate in reference counting, their reference count should never reach zero. There is also a dynamic table of user-defined PyArray_Descr objects that is also maintained. Once a data-type-descriptor object is “registered” it should never be deallocated either. The function PyArray_DescrFromType (...) can be used to retrieve a PyArray_Descr object from an enumerated type-number (either built-in or user- defined). NPY_USE_GETITEM and NPY_USE_SETITEM flags should be set in the hasobject flag..
A traditional character code indicating the data type.
A character indicating the byte-order: ‘>’ (big-endian), ‘<’ (little- endian), ‘=’ (native), ‘|’ (irrelevant, ignore). All builtin data- types have byteorder ‘=’.
A data-type bit-flag that determines if the data-type exhibits object- array like behavior. Each bit in this member is a flag which are named as:
Indicates that items of this data-type must be reference counted (using Py_INCREF and Py_DECREF ).
Indicates arrays of this data-type must be converted to a list before pickling.
Indicates the item is a pointer to some other data-type
Indicates memory for this data-type must be initialized (set to 0) on creation.
Indicates this data-type requires the Python C-API during access (so don’t give up the GIL if array access is going to be needed).
On array access use the f->getitem function pointer instead of the standard conversion to an array scalar. Must use if you don’t define an array scalar to go along with the data-type.
When creating a 0-d array from an array scalar use f->setitem instead of the standard copy from an array scalar. Must use if you don’t define an array scalar to go along with the data-type.
The bits that are inherited for the parent data-type if these bits are set in any field of the data-type. Currently ( NPY_NEEDS_INIT | NPY_LIST_PICKLE | NPY_ITEM_REFCOUNT | NPY_NEEDS_PYAPI ).
Bits set for the object data-type: ( NPY_LIST_PICKLE | NPY_USE_GETITEM | NPY_ITEM_IS_POINTER | NPY_REFCOUNT | NPY_NEEDS_INIT | NPY_NEEDS_PYAPI).
Return true if all the given flags are set for the data-type object.
Equivalent to PyDataType_FLAGCHK (dtype, NPY_ITEM_REFCOUNT).
A number that uniquely identifies the data type. For new data-types, this number is assigned when the data-type is registered.
For data types that are always the same size (such as long), this holds the size of the data type. For flexible data types where different arrays can have a different elementsize, this should be 0.
A number providing alignment information for this data type. Specifically, it shows how far from the start of a 2-element structure (whose first element is a char ), the compiler places an item of this type: offsetof(struct {char c; type v;},:
The data-type-descriptor object of the base-type.
The shape (always C-style contiguous) of the sub-array as a Python tuple.).
A pointer to a structure containing functions that the type needs to implement internal features. These functions are not the same thing as the universal functions (ufuncs) described later. Their signatures can vary arbitrarily..
A pointer to a function that returns a standard Python object from a single element of the array object arr pointed to by data. This function must be able to deal with “misbehaved “(misaligned and/or swapped) arrays correctly.
A pointer to a function that sets the Python object item into the array, arr, at the position pointed to by data . This function deals with “misbehaved” arrays. If successful, a zero is returned, otherwise, a negative one is returned (and a Python error set).
These members are both pointers to functions to copy data from src to dest and swap if indicated. The value of arr is only used for flexible ( NPY_STRING, NPY_UNICODE, and.
A pointer to a function that returns TRUE if the item of arr pointed to by data is nonzero. This function can deal with misbehaved arrays..
A pointer to a function that fills a contiguous buffer of the given length with a single scalar value whose address is given. The final argument is the array which is needed to get the itemsize for variable-length arrays.
An array of function pointers to particular sorting algorithms. A particular sorting algorithm is obtained using a key (so far PyArray_QUICKSORT, PyArray_HEAPSORT, and PyArray_MERGESORT are defined). These sorts are done in-place assuming contiguous and aligned data.
An array of function pointers to sorting algorithms for this data type. The same sorting algorithms as for sort are available. The indices producing the sort are returned in result (which must be initialized with indices 0 to length-1 inclusive).
Either NULL or a dictionary containing low-level casting functions for user- defined data-types. Each function is wrapped in a PyCObject * and keyed by the data-type number.
A function to determine how scalars of this type should be interpreted. The argument is NULL or a 0-dimensional array containing the data (if that is needed to determine the kind of scalar). The return value must be of type PyArray_SCALARKIND.
Either NULL or an array of PyArray_NSCALARKINDS pointers. These pointers should each be either NULL or a pointer to an array of integers (terminated by PyArray_NOTYPE) indicating data-types that a scalar of this data-type of the specified kind can be cast to safely (this usually means without losing precision).
Either NULL or an array of integers (terminated by PyArray_NOTYPE) indicating data-types that this data-type can be cast to safely (this usually means without losing precision).
Unused.

The ufunc object is implemented by creation of the PyUFunc_Type. It is a very simple type that implements only basic getattribute behavior and printing behavior, and has call behavior which allows these objects to act like functions. The corresponding C-structure, PyUFuncObject, holds all the information needed to call the underlying C-function loops; its members are described below.
required for all Python objects.
The number of input arguments.
The number of output arguments.
The total number of arguments (nin + nout). This must be less than NPY_MAXARGS.
Either PyUFunc_One, PyUFunc_Zero, or PyUFunc_None to indicate the identity for this operation. It is only used for a reduce-like call on an empty array.
The number of supported data types for the ufunc. This number specifies how many different 1-d loops (of the builtin data types) are available.
Obsolete and unused. However, it is set by the corresponding entry in the main ufunc creation routine: PyUFunc_FromFuncAndData (...).
A string name for the ufunc. This is used dynamically to build the __doc__ attribute of ufuncs.
An array of nargs*ntypes 8-bit type_numbers which contains the type signature for the ufunc for each of the supported (builtin) data types.
Documentation for the ufunc. Should not contain the function signature as this is generated dynamically when __doc__ is retrieved.
Any dynamically allocated memory. Currently, this is used for dynamic ufuncs created from a python function to store room for the types, data, and name members.
For ufuncs dynamically created from python functions, this member holds a reference to the underlying Python function.
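Putting these members together, a user-visible ufunc is normally built with PyUFunc_FromFuncAndData. The sketch below registers a single double-to-double loop; the loop body, the names and the commented-out call are illustrative, not taken from the text above:

    #include <Python.h>
    #include <numpy/arrayobject.h>
    #include <numpy/ufuncobject.h>

    /* 1-d loop with the standard ufunc signature: one double in, one double out. */
    static void
    double_square(char **args, npy_intp *dimensions, npy_intp *steps, void *data)
    {
        npy_intp i, n = dimensions[0];
        char *in = args[0], *out = args[1];
        for (i = 0; i < n; i++) {
            *(double *)out = (*(double *)in) * (*(double *)in);
            in += steps[0];
            out += steps[1];
        }
    }

    static PyUFuncGenericFunction square_funcs[] = {double_square};
    static void *square_data[] = {NULL};
    static char square_types[] = {NPY_DOUBLE, NPY_DOUBLE};  /* nargs entries per loop */

    /* In the module initialization, after import_array() and import_umath():
       PyObject *square = PyUFunc_FromFuncAndData(square_funcs, square_data,
                                                  square_types, 1, 1, 1,
                                                  PyUFunc_None, "square",
                                                  "square(x) -> x*x", 0);      */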
The next members belong to the PyArrayIterObject, the C-structure behind the array iterator type:

N-1, where N is the number of dimensions in the underlying array.
The current 1-d index into the array.
The total size of the underlying array.
An N-dimensional index into the array.
The size of the array minus 1 in each dimension.
The strides of the array. How many bytes needed to jump to the next element in each dimension.
How many bytes needed to jump from the end of a dimension back to its beginning. Note that backstrides[k] = strides[k] * dims_m1[k], but it is stored here as an optimization.
This array is used in computing an N-d index from a 1-d index. It contains needed products of the dimensions.
A pointer to the underlying ndarray this iterator was created to represent.
This member points to an element in the ndarray indicated by the index.
This flag is true if the underlying array is C-style contiguous. It is used to simplify calculations when possible.
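In practice the iterator is created and advanced through the C-API rather than by touching these members directly. A sketch that sums the elements of an array assumed to be of type NPY_DOUBLE (sum_doubles is an illustrative name):

    /* Sum all elements of a (possibly non-contiguous) NPY_DOUBLE array by
       walking it with a flat array iterator. */
    static double
    sum_doubles(PyArrayObject *arr)
    {
        double total = 0.0;
        PyObject *it = PyArray_IterNew((PyObject *)arr);   /* new reference */
        if (it == NULL) {
            return 0.0;
        }
        while (PyArray_ITER_NOTDONE(it)) {
            total += *(double *)PyArray_ITER_DATA(it);
            PyArray_ITER_NEXT(it);
        }
        Py_DECREF(it);
        return total;
    }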
This type provides an iterator that encapsulates the concept of broadcasting. It allows N arrays to be broadcast together so that the loop progresses in C-style contiguous fashion over the broadcasted array. The corresponding C-structure is the PyArrayMultiIterObject whose memory layout must begin any object, obj, passed in to the PyArray_Broadcast(obj) function. Broadcasting is performed by adjusting array iterators so that each iterator represents the broadcasted shape and size, but has its strides adjusted so that the correct element from the array is used.
Needed at the start of every Python object (holds reference count and type identification).
The number of arrays that need to be broadcast to the same shape.
The total broadcasted size.
The current (1-d) index into the broadcasted result.
The number of dimensions in the broadcasted result.
The shape of the broadcasted result (only nd slots are used).
An array of iterator objects that holds the iterators for the arrays to be broadcast together. On return, the iterators are adjusted for broadcasting.
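A sketch of walking two arrays in broadcast lockstep with this object (both inputs are assumed to be NPY_DOUBLE arrays; the running sum of products is only an illustration):

    /* Accumulate the sum of a[i]*b[i] over the broadcast shape of two
       NPY_DOUBLE arrays. */
    static double
    broadcast_dot(PyObject *a, PyObject *b)
    {
        double total = 0.0;
        PyArrayMultiIterObject *multi =
            (PyArrayMultiIterObject *)PyArray_MultiIterNew(2, a, b);
        if (multi == NULL) {
            return 0.0;
        }
        while (PyArray_MultiIter_NOTDONE(multi)) {
            double x = *(double *)PyArray_MultiIter_DATA(multi, 0);
            double y = *(double *)PyArray_MultiIter_DATA(multi, 1);
            total += x * y;
            PyArray_MultiIter_NEXT(multi);
        }
        Py_DECREF(multi);
        return total;
    }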
This is an iterator object that makes it easy to loop over an N-dimensional neighborhood.
The C-structure corresponding to an object of PyArrayNeighborhoodIter_Type is the PyArrayNeighborhoodIterObject.
A pointer to a list of (npy_intp) integers which usually represent array shape or array strides.
The length of the list of integers. It is assumed safe to access ptr[0] to ptr[len-1].
Necessary for all Python objects. Included here so that the PyArray_Chunk structure matches that of the buffer object (at least to the len member).
The Python object this chunk of memory comes from. Needed so that memory can be accounted for properly.
A pointer to the start of the single-segment chunk of memory.
The length of the segment in bytes.
Any data flags (e.g. NPY_WRITEABLE) that should be used to interpret the memory.
The integer 2 as a sanity check (this is the first member of the PyArrayInterface structure used for the array interface protocol).
The number of dimensions in the array.

A character indicating what kind of array is present according to the typestring convention: ‘t’ -> bitfield, ‘b’ -> boolean, ‘i’ -> integer, ‘u’ -> unsigned integer, ‘f’ -> floating point, ‘c’ -> complex floating point, ‘O’ -> object, ‘S’ -> string, ‘U’ -> unicode, ‘V’ -> void.
The number of bytes each item in the array requires.
An array containing the size of the array in each dimension.
An array containing the number of bytes to jump to get to the next element in each dimension.
A pointer to the first element of the array.
A Python object describing the data-type in more detail (same as the descr key in __array_interface__). This can be NULL if typekind and itemsize provide enough information. This field is also ignored unless ARR_HAS_DESCR flag is on in flags.
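A consumer that has obtained a pointer to such a PyArrayInterface structure (for example through an object's __array_struct__ attribute) typically starts with the sanity check; a minimal sketch:

    /* Print the type kind and shape advertised by a PyArrayInterface. */
    static int
    print_interface_shape(PyArrayInterface *inter)
    {
        int i;
        if (inter->two != 2) {
            return -1;                 /* sanity check failed */
        }
        printf("typekind=%c itemsize=%d shape=(", inter->typekind, (int)inter->itemsize);
        for (i = 0; i < inter->nd; i++) {
            printf("%ld%s", (long)inter->shape[i], i + 1 < inter->nd ? ", " : "");
        }
        printf(")\n");
        return 0;
    }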
Internally, the code uses some additional Python objects primarily for memory management. These types are not accessible directly from Python, and are not exposed to the C-API. They are included here only for completeness and assistance in understanding the code.
A simple linked-list of C-structures containing the information needed to define a 1-d loop for a ufunc for every defined signature of a user-defined data-type.
Advanced indexing is handled with this Python type. It is simply a loose wrapper around the C-structure containing the variables needed for advanced array indexing. The associated C-structure, PyArrayMapIterObject, is useful if you are trying to understand the advanced-index mapping code; it is not exposed to Python.
Bayeux is a protocol for transporting asynchronous messages (primarily over web protocols such as HTTP and WebSocket), with low latency between a web server and web clients. The following terms are used throughout the description of Bayeux communication:
A program that initiates the communication.
A web client (for example, an HTTP client, but also a WebSocket client) is a program that initiates TCP/IP connections for the purpose of sending web requests. A Bayeux client initiates the Bayeux message exchange and will typically execute within a web client, but there are likely to be Bayeux clients that execute within web servers as well.
Implementations may distinguish between Bayeux clients running within a web client and Bayeux clients running within the web server. Specifically, server-side Bayeux clients MAY be privileged clients with access to private information about other clients (see the section called “clientId”) and subscriptions.
For the HTTP protocol, an HTTP request message as defined by section 5 of RFC 2616.
For the HTTP protocol, an HTTP response message as defined by section 6 of RFC 2616.
A message is a JSON encoded object exchanged between client and server for the purpose of implementing the Bayeux protocol as defined by Section C.4, “Common Message Field Definitions”, Section C.5, “Meta Message Field Definitions” and Section C.6, “Event Message Field Definitions”.
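For illustration only (the field values below are invented examples rather than part of the definitions above), a handshake message carried as a JSON array of message objects might look like:

    [
      {
        "channel": "/meta/handshake",
        "version": "1.0",
        "supportedConnectionTypes": ["long-polling", "callback-polling"],
        "id": "1"
      }
    ]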
Application specific data that is sent over the Bayeux protocol.
The transport specific message format that wraps a standard Bayeux message.
A named destination and/or source of events. Events are published to channels and subscribers to channels receive published events.
A communication link that is established either permanently or transiently, for the purposes of messages exchange. A client is connected if a link is established with the server, over which asynchronous events can be received.
JavaScript Object Notation (JSON) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition (December 1999).
Suppose you're building an application where products with three properties id, title and price need to be created as a Product object. This class has to be in the directory src/Entity and the namespace is App\Entity.
At this point, our object class looks like:
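A sketch of what the class could look like at this stage; the exact import path of ResourceInterface is an assumption and may differ in your enhavo version:

    <?php
    // src/Entity/Product.php

    namespace App\Entity;

    // Assumption: check the ResourceInterface namespace shipped with your enhavo version.
    use Enhavo\Bundle\AppBundle\Model\ResourceInterface;

    class Product implements ResourceInterface
    {
        private $id;

        private $title;

        private $price;

        public function getId()
        {
            return $this->id;
        }

        public function getTitle()
        {
            return $this->title;
        }

        public function setTitle($title)
        {
            $this->title = $title;
        }

        public function getPrice()
        {
            return $this->price;
        }

        public function setPrice($price)
        {
            $this->price = $price;
        }
    }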
As you can see, this class also implements an interface called ResourceInterface. Why we do that will be explained later, but it has to be mentioned now because it is the reason why our Product class needs the id property. This id will be used as our primary key in our database table, and the getId() function is the only function we need to implement in order to satisfy this interface.
For the other two properties we also need class variables and, as is common, each of them has its own public getter and setter methods.
To define the database type of the variables, we use annotations. Our id as primary key has to be unique; the best datatype is an integer. The title of a product is usually a word, so we mark the title as a string. The price can be an integer, because we can save 1.78 USD as 178¢.
If we want to save an entity in a database table, we need a column for each class property we want to save. For our example the columns would be id, title and price. To create and map them we will once again use annotations; in this case they start with @ORM\...
Let’s take a look at our code:
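A sketch of the annotated properties (only the mapping-related parts are shown; the table name app_product and the ResourceInterface namespace are illustrative assumptions):

    <?php
    // src/Entity/Product.php

    namespace App\Entity;

    use Doctrine\ORM\Mapping as ORM;
    use Enhavo\Bundle\AppBundle\Model\ResourceInterface; // assumed namespace

    /**
     * @ORM\Table(name="app_product")
     */
    class Product implements ResourceInterface
    {
        /**
         * @ORM\Id
         * @ORM\GeneratedValue(strategy="AUTO")
         * @ORM\Column(type="integer")
         */
        private $id;

        /**
         * @ORM\Column(type="string")
         */
        private $title;

        /**
         * @ORM\Column(type="integer")
         */
        private $price;

        // ... getters and setters as before ...
    }

The @ORM\Entity(repositoryClass="...") annotation is added in a later step, once the repository class exists.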
We've already talked about the annotations for our attributes. We can also use PHP annotations for functions, as you can see in our example. For more information about annotations, take a look at this documentation, or, for the Doctrine annotations, have a look at the Doctrine annotation reference.
An optional but important annotation is @ORM\Table, which defines the table name for this entity.
A well-structured and well-named database is always a goal worth pursuing.
One step is mapping all properties of the entity to columns in the table.
We can do this with @ORM\Column(type="integer"). Other common datatypes are string, float, boolean, etc.
You can find a full list, and much more about basic mapping, in the Doctrine documentation.
Another option for the column is whether its value can be NULL or not. We define that with nullable=true/false (the default value is false).
The id needs some special annotations: @ORM\Id marks this property as the primary key of the table, and @ORM\GeneratedValue(strategy="AUTO") specifies which strategy is used for identifier generation for the property annotated as id.
A reference of all Doctrine annotations, with short explanations of their context and usage, is given in the Doctrine annotations reference.
Awesome! We've just created our first PHP class, which is called an Entity in Symfony. Our next step is to save our entity in our database with the powerful Doctrine ORM, which helps us manage our database and keep it synchronized with our project.
Before we mark our entity class with @ORM\Entity and define the repositoryClass (which we will need for more complex database queries, in order to isolate, reuse and test these queries), it is good practice to create this custom repository class for our entity. The common path for repository classes is src/Repository.
    <?php
    // src/Repository/ProductRepository

    namespace App\Repository;

    use Enhavo\Bundle\AppBundle\Repository\EntityRepository;

    class ProductRepository extends EntityRepository
    {
    }
An empty Repository is very unspectacular, but we will learn how useful they can be later.
After this we have a usable Product class with all the important information Doctrine needs to create the product table. However, we still have no table in our database; creating it is very easy now, just run:
$ php bin/console doctrine:schema:update --force
It seems to be nothing special, but this command does a lot! It checks how your database should look (based on the mapping information we've defined with the annotations in our Product class) and compares it with how the database actually looks. Only the differences will be executed as SQL statements to update the database.
An even better way to synchronize your database with the mapping information from your project is via migrations, which are as powerful as the schema:update command. In addition, changes to your database schema are safely and reliably tracked and reversible.
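For example (a sketch assuming the DoctrineMigrationsBundle is installed in your project), the migration-based workflow boils down to two commands:

    $ php bin/console doctrine:migrations:diff      # generate a migration class from the mapping changes
    $ php bin/console doctrine:migrations:migrate   # apply the pending migrations to the database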
Even though it is quite powerful, the doctrine:schema:update command should only be used during development.
Note
It should never be used in a production environment with important information in your database.
You can also create or update an entity with the command:
$ php bin/console make:entity
which will ask you everything you need to create or update an entity. You will find a good explanation in the Symfony Docs, but for the first time we recommend creating your classes without this command, to understand how they work.
Go to the Yarn download page, download the Yarn Windows installer .msi file and follow the instructions in the Setup Wizard.
Open the Windows Command Prompt and locate the Yarn folder, or use this path:
C:\Program Files (x86)\Yarn\bin.
After locating the folder, run this command:
yarn install.
Open the Windows Command Prompt and navigate to the metasfresh frontend folder; in this example it would be this path:
C:\work-metas\metasfresh\frontend.
Then open IntelliJ, go to Services and get ServerBoot and WebRestApiApplication up and running.
Then go back to the Command Prompt and run this command:
yarn install & yarn start
Open localhost:3000 in your browser; that will be your local WebUI.
Parasoft Virtualize simulates the behavior of systems that are still evolving, hard to access, or difficult to configure for development or testing.
Test environment access constraints have become a significant barrier to delivering quality software efficiently:
- Agile and parallel development teams inevitably reach deadlocks as different teams are simultaneously working on interconnected system components—and each team needs to access the others' components in order to complete its own development and testing tasks.
- Performance test engineers need to test vs. realistic transaction performance from dependent applications (3rd party services, mainframes, packaged apps, etc.), but it's often unfeasible to set up such conditions.
- End-to-end functional testing is stymied by the difficulty of accessing all of the necessary dependencies—with the configurations you need to test against—at a single, convenient, and lengthy enough time.
Parasoft Virtualize’s service virtualization provides access to the dependencies that are beyond your control, still evolving, or too complex to configure in a virtual test lab. For example, this might include third-party services (credit check, payment processing, etc.), mainframes, and SAP or other ERPs. With service virtualization, you don’t have to virtualize an entire system when you need to access only a fraction of its functionality.
A5 high-quality durable guide designed to provide the occupier with sufficient information regarding the Grade D mains-powered fire detection and fire alarm system installed within their home.
It is essential that the occupier of the premises, which might mean all occupiers in the case of a house of multiple occupation (HMO), understand the operation of the system installed within their dwelling, the action to be taken in the event of a fire alarm, how to avoid false alarms, procedures for testing the system and the need for routine maintenance.
It is of utmost importance that the householder is warned that children might not be woken by fire alarm tones, that children should never be left alone in a dwelling, that families should rehearse a fire escape plan, and that waking any sleeping children immediately and taking them to a safe place outside the property is a priority. The householder should also understand that the fire and rescue service should always be called without delay, no matter how small the fire.
It is normally the responsibility of the installer of the smoke and heat alarms to provide all of this information and more, but in rented properties where there is a change in occupation, it is likely to be the responsibility of the landlord.
Our Home Occupier Guide provides a cost-effective and practical way to satisfy the user-information clause within BS 5839-6 through the issue of a single booklet.
Specification
- A5 size as standard
- High quality durable front and back cover
- 12 pages
Delivery
- 3-4 working days
Product Data Sheet