content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
These plugins are usually not neccessary for a normal website, but we found them useful for our own projects.
Namespace:, prefix: mirror
tree node: <mirror:switch />
Attributes: None
Compatible child nodes: one or more <mirror:host />
mirror host node: <mirror:host />
Attributes:
Allowed child nodes: None
If path resolving hits the <mirror:switch /> tree node, an HTTP redirect (302 Found) to a semi-random mirror is issued. The mirror is picked based on the following algorithm:
Note that this algorithm does by no means make sure that the file served by the mirror has the expected content. This should only be used with trusted mirrors and/or the appropriate security means. | https://docs.zombofant.net/pyxwf/devel/plugins/advanced.html | 2019-01-16T10:56:05 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.zombofant.net |
Outputs of fMRIPrep¶
FMRIPrep generates three broad classes of outcomes:
- Visual QA (quality assessment) reports: one HTML per subject, that allows the user a thorough visual assessment of the quality of processing and ensures the transparency of fMRIPrep operation.
- Pre-processed imaging data which are derivatives of the original anatomical and functional images after various preparation procedures have been applied. For example, INU-corrected versions of the T1-weighted image (per subject), the brain mask, or BOLD images after head-motion correction, slice-timing correction and aligned into the same-subject’s T1w space or into MNI space.
- Additional data for subsequent analysis, for instance the transformations between different spaces or the estimated confounds.
fMRIPrep outputs conform to the BIDS Derivatives specification (see BIDS Derivatives RC1).
Visual Reports¶
FMRIPrep outputs summary reports, written to
<output dir>/fmriprep/sub-<subject_label>.html.
These reports provide a quick way to make visual inspection of the results easy.
Each report is self contained and thus can be easily shared with collaborators (for example via email).
View a sample report.
Preprocessed data (fMRIPrep derivatives)¶
Preprocessed, or derivative, data are written to
<output dir>/fmriprep/sub-<subject_label>/.
The BIDS Derivatives RC1 specification describes the naming and metadata conventions we follow.
Anatomical derivatives are placed in each subject’s
anat subfolder:
anat/sub-<subject_label>_[space-<space_label>_]desc-preproc_T1w.nii.gz
anat/sub-<subject_label>_[space-<space_label>_]desc-brain_mask.nii.gz
anat/sub-<subject_label>_[space-<space_label>_]dseg.nii.gz
anat/sub-<subject_label>_[space-<space_label>_]label-CSF_probseg.nii.gz
anat/sub-<subject_label>_[space-<space_label>_]label-GM_probseg.nii.gz
anat/sub-<subject_label>_[space-<space_label>_]label-WM_probseg.nii.gz
Template-normalized derivatives use the space label
MNI152NLin2009cAsym, while derivatives in
the original
T1w space omit the
space- keyword.
Additionally, the following transforms are saved:
anat/sub-<subject_label>_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5
anat/sub-<subject_label>_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5
If FreeSurfer reconstructions are used, the following surface files are generated:
anat/sub-<subject_label>_hemi-[LR]_smoothwm.surf.gii
anat/sub-<subject_label>_hemi-[LR]_pial.surf.gii
anat/sub-<subject_label>_hemi-[LR]_midthickness.surf.gii
anat/sub-<subject_label>_hemi-[LR]_inflated.surf.gii
And the affine translation between
T1w space and FreeSurfer’s reconstruction (
fsnative) is
stored in:
anat/sub-<subject_label>_from-T1w_to-fsnative_mode-image_xfm.txt
Functional derivatives are stored in the
func subfolder.
All derivatives contain
task-<task_label> (mandatory) and
run-<run_index> (optional), and
these will be indicated with
[specifiers].
func/sub-<subject_label>_[specifiers]_space-<space_label>_boldref.nii.gz
func/sub-<subject_label>_[specifiers]_space-<space_label>_desc-brain_mask.nii.gz
func/sub-<subject_label>_[specifiers]_space-<space_label>_desc-preproc_bold.nii.gz
Volumetric output spaces include
T1w and
MNI152NLin2009cAsym (default).
Confounds are saved as a TSV file:
func/sub-<subject_label>_[specifiers]_desc-confounds_regressors.nii.gz
If FreeSurfer reconstructions are used, the
(aparc+)aseg segmentations are aligned to the
subject’s T1w space and resampled to the BOLD grid, and the BOLD series are resampled to the
midthickness surface mesh:
func/sub-<subject_label>_[specifiers]_space-T1w_desc-aparcaseg_dseg.nii.gz
func/sub-<subject_label>_[specifiers]_space-T1w_desc-aseg_dseg.nii.gz
func/sub-<subject_label>_[specifiers]_space-<space_label>_hemi-[LR].func.gii
Surface output spaces include
fsnative (full density subject-specific mesh),
fsaverage and the down-sampled meshes
fsaverage6 (41k vertices) and
fsaverage5 (10k vertices, default).
If CIFTI outputs are requested, the BOLD series is also saved as
dtseries.nii CIFTI2 files:
func/sub-<subject_label>_[specifiers]_bold.dtseries.nii
Sub-cortical time series are volumetric (supported spaces:
MNI152NLin2009cAsym), while cortical
time series are sampled to surface (supported spaces:
fsaverage5,
fsaverage6)
Finally, if ICA-AROMA is used, the MELODIC mixing matrix and the components classified as noise are saved:
func/sub-<subject_label>_[specifiers]_AROMAnoiseICs.csv
func/sub-<subject_label>_[specifiers]_desc-MELODIC_mixing.tsv
FreeSurfer Derivatives¶
A FreeSurfer subjects directory is created in
<output dir>/freesurfer.
freesurfer/ fsaverage{,5,6}/ mri/ surf/ ... sub-<subject_label>/ mri/ surf/ ... ...
Copies of the
fsaverage subjects distributed with the running version of
FreeSurfer are copied into this subjects directory, if any functional data are
sampled to those subject spaces.
Confounds¶
See implementation on
init_bold_confs_wf.
For each BOLD run processed with fMRIPrep, a
<output_folder>/fmriprep/sub-<sub_id>/func/sub-<sub_id>_task-<task_id>_run-<run_id>_desc-confounds_regressors.tsv
file will be generated.
These are TSV tables, which look like the example below:
csf white_matter global_signal std_dvars dvars framewise_displacement t_comp_cor_00 t_comp_cor_01 t_comp_cor_02 t_comp_cor_03 t_comp_cor_04 t_comp_cor_05 a_comp_cor_00 a_comp_cor_01 a_comp_cor_02 a_comp_cor_03 a_comp_cor_04 a_comp_cor_05 non_steady_state_outlier00 trans_x trans_y trans_z rot_x rot_y rot_z aroma_motion_02 aroma_motion_04 682.75275 0.0 491.64752000000004 n/a n/a n/a 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 -0.00017029 -0.0 0.0 0.0 0.0 669.14166 0.0 489.4421 1.168398 17.575331 0.07211929999999998 -0.4506846719 0.1191909139 -0.0945884724 0.1542023065 -0.2302324641 0.0838194238 -0.032426848599999995 0.4284323184 -0.5809158299 0.1382414008 -0.1203486637 0.3783661265 0.0 0.0 0.0207752 0.0463124 -0.000270924 -0.0 0.0 -2.402958171 -0.7574011893 665.3969 0.0 488.03 1.085204 16.323903999999995 0.0348966 0.010819676200000001 0.0651895837 -0.09556632150000001 -0.033148835 -0.4768871111 0.20641088559999998 0.2818768463 0.4303863764 0.41323714850000004 -0.2115232212 -0.0037154909000000004 0.10636180070000001 0.0 0.0 0.0 0.0457372 0.0 -0.0 0.0 -1.341359143 0.1636017242 662.82715 0.0 487.37302 1.01591 15.281561 0.0333937 0.3328022893 -0.2220965269 -0.0912891436 0.2326688125 0.279138129 -0.111878887 0.16901660629999998 0.0550480212 0.1798747037 -0.25383302620000003 0.1646403629 0.3953613889 0.0 0.010164 -0.0103568 0.0424513 0.0 -0.0 0.00019174 -0.1554834655 0.6451987913
Each row of the file corresponds to one time point found in the
corresponding BOLD time-series
(stored in
<output_folder>/fmriprep/sub-<sub_id>/func/sub-<sub_id>_task-<task_id>_run-<run_id>_desc-preproc_bold.nii.gz).
Columns represent the different confounds:
csf and
white_matter are the average signal
inside the anatomically-derived CSF and WM
masks across time;
global_signal corresponds to the mean time series within the brain mask; two columns relate to
the derivative of RMS variance over voxels (or DVARS), and
both the original (
dvars) and standardized (
std_dvars) are provided;
framewise_displacement is a quantification of the estimated bulk-head motion;
trans_x,
trans_y,
trans_z,
rot_x,
rot_y,
rot_z are the 6 rigid-body
motion-correction parameters estimated by fMRIPrep;
if present,
non_steady_state_outlier_XX columns indicate non-steady state volumes with a single
1 value and
0 elsewhere (i.e., there is one
non_steady_state_outlier_XX column per
outlier/volume);
six noise components are calculated using CompCor,
according to both the anatomical (
a_comp_cor_XX) and temporal (
t_comp_cor_XX) variants;
and the motion-related components identified by
ICA-AROMA
(if enabled) are indicated with
aroma_motioon_XX.
All these confounds can be used to perform scrubbing and censoring of outliers, in the subsequent first-level analysis when building the design matrix, and in group level analysis.
Confounds and “carpet”-plot on the visual reports¶
Some of the estimated confounds, as well as a “carpet” visualization of the BOLD time-series (see [Power2016]). This plot is included for each run within the corresponding visual report. An example of these plots follows:
The figure shows on top several confounds estimated for the BOLD series: global signals (‘GlobalSignal’, ‘WM’, ‘GM’), standardized DVARS (‘stdDVARS’), and framewise-displacement (‘FramewiseDisplacement’). At the bottom, a ‘carpetplot’ summarizing the BOLD series. The colormap on the left-side of the carpetplot denotes signals located in cortical gray matter regions (blue), subcortical gray matter (orange), cerebellum (green) and the union of white-matter and CSF compartments (red).
References | https://fmriprep.readthedocs.io/en/latest/outputs.html | 2019-01-16T11:18:17 | CC-MAIN-2019-04 | 1547583657151.48 | [] | fmriprep.readthedocs.io |
Trait diesel::
serialize::[−][src] WriteTuple
pub trait WriteTuple<ST> { fn write_tuple<W: Write>(&self, out: &mut Output<W, Pg>) -> Result; }
Helper trait for writing tuples as named composite types
This trait is essentially
ToSql<Record<ST>> for tuples.
While we can provide a valid body of
to_sql,
PostgreSQL doesn't allow the use of bind parameters for unnamed composite types.
For this reason, we avoid implementing
ToSql directly.
This trait can be used by
ToSql impls of named composite types.
Example
#[derive(SqlType)] #[postgres(type_name = "my_type")] struct MyType; #[derive(Debug)] struct MyStruct<'a>(i32, &'a str); impl<'a> ToSql<MyType, Pg> for MyStruct<'a> { fn to_sql<W: Write>(&self, out: &mut Output<W, Pg>) -> serialize::Result { WriteTuple::<(Integer, Text)>::write_tuple( &(self.0, self.1), out, ) } }
Required Methods
fn write_tuple<W: Write>(&self, out: &mut Output<W, Pg>) -> Result
See trait documentation. | http://docs.diesel.rs/diesel/serialize/trait.WriteTuple.html | 2019-01-16T10:51:08 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.diesel.rs |
Before you can use Arnold in Maya you will need to download and install the MtoA plug-in and then configure Maya to use it.
Note that there is one plug-in for each supported version of Maya for each supported operating system:
- Windows 64 bit
- Linux 64 bit
- Mac OS X (64 bit 10.7 Lion or later)
There is also a troubleshooting page. | https://docs.arnoldrenderer.com/display/AFMUG/Installation | 2019-01-16T09:49:01 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.arnoldrenderer.com |
Activator 6.0.0 Administrator Guide User administration Use the Users and roles menu of the user interface to manage user access. Roles define the permissions users have for performing tasks. Roles can be defined with few or many permissions. Each user should be assigned at least one role. It is possible to assign multiple roles to a single user. You also use this area to change user passwords and manage global settings, such as session time-out intervals. The following topics provide information about user administration tasks you can perform in the Activator user interface: Single sign-on Example SSO IdP configuration Admin user Manage users Change password Manage roles Date and time preferences Global user settings Unlock a blocked user Change the interval time to close associated windows Related Links | https://docs.axway.com/bundle/Activator_600_AdministratorsGuide_allOS_en_HTML5/page/Content/User_admin/user_intro.htm | 2019-01-16T11:00:48 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.axway.com |
Securing Apache Druid using Kerberos
It is important to place the Apache Druid (incubating) endpoints behind a firewall and configure authentication.
You can configure Druid nodes to integrate a Kerberos-secured Hadoop cluster. Such an integration provides authentication between Druid and other HDP Services. If Kerberos is enabled on your cluster, to query Druid data sources that are imported from Hive, you must set up LLAP.
You can enable the authentication with Ambari 2.5.0 and later to secure the HTTP endpoints by including a SPNEGO-based Druid extension. After enabling authentication in this way, you can connect Druid Web UIs to the core Hadoop cluster while maintaining Kerberos protection. The main Web UIs to use with HDP are the Coordinator Console and the Overlord Console. | https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/adding-druid/content/druid_securing_druid_overview.html | 2019-01-16T11:05:13 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.hortonworks.com |
TPump, TPump TPump TPump, a load job can be regarded as such a process.
TPump follows the flow required by TMSM:
1 Obtain a system‑generated UOW ID from TMSM for a TPump TPump job containing two BEGIN EXPORT/END LOAD tasks and two IMPORT tasks sends event messages to TMSM in order for the job to be monitored:
At job start: "Connecting session(s)"
Task 1 Import 1: Loading begins
Task 1 Import 1: Checkpoint completes
Task 1 Import 1: Loading completes
Task 1 Import 2: Loading begins
Task 1 Import 2: Checkpoint completes
Task 1 Import 2: Loading completes
Task 2 Import 1: Loading begins
Task 2 Import 1: Checkpoint completes
Task 2 Import 1: Loading completes
Task 2 Import 2: Loading begins
Task 2 Import 2: Checkpoint completes
Task 2 Import 2: Loading completes
At job end: "Job terminating”
“Task n” refers to multiple BEGIN LOAD/END LOAD tasks, “Import n” refers to multiple IMPORTs. TPump sends number of rows inserted/updated/deleted while loading completes.
For detailed information on setting up the TMSM environment, refer to the Teradata Multi-System Manager Installation Guide (B035-3202). | https://docs.teradata.com/reader/YeE2bGoBx9ZGaBZpxlKF4A/gs_CPfALLQLr9MLAUFEEcw | 2019-01-16T09:43:08 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.teradata.com |
wsprintfA.
Syntax
int WINAPIV wsprintfA( LPSTR , LPCSTR , ... );
Parameters
arg1
Type: LPTSTR
The buffer that is to receive the formatted output. The maximum size of the buffer is 1,024 bytes.
arg2
Type: LPCTSTR
The format-control specifications. In addition to ordinary ASCII characters, a format specification for each argument appears in this string. For more information about the format specification, see the Remarks section.
arg3.
Remarks
The format-control string contains format specifications that determine the output format for the arguments following the lpFmt parameter. Format specifications, discussed below, always begin with a percent sign (%). If a percent sign is followed by a character that has no meaning as a format field, the character is not formatted (for example, %% produces a single percent-sign character).
The format-control string is read from left to right. When the first format specification (if any) is encountered, it causes the value of the first argument after the format-control string to be converted and copied to the output buffer according to the format specification. The second format specification causes the second argument to be converted and copied, and so on. If there are more arguments than format specifications, the extra arguments are ignored. If there are not enough arguments for all of the format specifications, the results are undefined.
A format specification has the following form:
%[-][#][0][width][.precision]type
Each field is a single character or a number signifying a particular format option. The type characters that appear after the last optional format field determine whether the associated argument is interpreted as a character, a string, or a number. The simplest format specification contains only the percent sign and a type character (for example, %s). The optional fields control other aspects of the formatting. Following are the optional and required fields and their meanings.
Requirements
See Also
Conceptual
Reference | https://docs.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-wsprintfa | 2019-01-16T09:52:43 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.microsoft.com |
To add a raster image to the map
If you are adding a WMS image, see Adding an Image from a WMS (Web Map Service). If you are adding an image whose format does not appear in the Data Connect window, see Using Other Raster Image Formats.
You can give the connection any name you like. This name appears in Map Explorer as the name of the feature source.
If this source contains only a single image, that image is selected automatically. If it contains multiple images, you can right-click any of them and select Select All or Select None.
Some image files contain placement information and are placed automatically in your map. For images that do not contain placement information, you are prompted for the location, scale, and insertion point.
You can move the raster layer below objects and features. | http://docs.autodesk.com/MAP/2010/ENU/AutoCAD%20Map%203D%202010%20User%20Documentation/HTML%20Help/files/WS1a9193826455f5ff13ee121107bdda7722-73fb-procedure.htm | 2019-01-16T09:42:33 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.autodesk.com |
.
Page: Welcome Page: Understanding up.time Page: Quick Topics Page: My Portal Page: Managing Your Infrastructure Page: Overseeing Your Infrastructure Page: Using Service Monitors Page: Monitoring VMware vSphere Page: User Management Page: Service Level Agreements Page: Alerts and Actions Page: Understanding Report Options Page: Understanding Graphing Page: Configuring and Managing up.time | http://docs.uptimesoftware.com/display/UT74/Administrator%27s+Guide | 2019-01-16T10:55:51 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.uptimesoftware.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::OpsWorks::Types::DeleteStackRequest
- Defined in:
- (unknown)
Overview
Note:
When passing DeleteStackRequest as input to an Aws::Client method, you can use a vanilla Hash:
{ stack_id: "String", # required }
Instance Attribute Summary collapse
- #stack_id ⇒ String
The stack ID.
Instance Attribute Details
#stack_id ⇒ String
The stack ID. | https://docs.aws.amazon.com/sdkforruby/api/Aws/OpsWorks/Types/DeleteStackRequest.html | 2018-06-18T00:09:43 | CC-MAIN-2018-26 | 1529267859904.56 | [] | docs.aws.amazon.com |
- Install Ops Manager >
- Installation Checklist
Installation Checklist¶
On this page
Overview¶
You must make the following decisions before you install Ops Manager. During the install procedures you will make choices based on your decisions here.
If you have not yet read the Introduction, please do so now. The introduction describes the Ops Manager components and common topologies.
The sequence for installing Ops Manager is to:
Plan your installation according to the questions on this page.
Provision servers that meet the Ops Manager System Requirements.
Warning
Failure to configure servers according to the Ops Manager System Requirements, including the requirement to read the MongoDB Production Notes, can lead to production failure.
Install the Application Database and optional Backup Database.
Note
To install a simple evaluation deployment on a single server, see Install a Simple Test Ops Manager Installation.
Topology Decisions¶
Do you require redundancy and/or high availability?¶
The topology you choose for your deployment affects the redundancy and availability of both your metadata and snapshots, and the availability of the Ops Manager Application.
Ops Manager stores application metadata and snapshots in the Ops Manager Application Database and Backup Database respectively. To provide data redundancy, run each database as a three-member replica set on multiple servers.
To provide high availability for write operations to the databases,
set up each replica set so that all three members hold data.
This way, if a member is unreachable the replica set can still write data.
Ops Manager uses
w:2 write concern,
which requires acknowledgement from the primary and one secondary for each
write operation.
To provide high availability for the Ops Manager Application, run at least two instances of the application and use a load balancer. A load balancer placed in front of the Ops Manager Application must not return cached content. For more information, see Configure a Highly Available Ops Manager Application.
The following tables describe the pros and cons for different topologies.
Test Install¶
This deployment runs on one server and has no data-redundancy. If you lose the server, you must start over from scratch.
Production Install with Redundant Metadata and Snapshots¶
This install runs on at least three servers and provides redundancy for your metadata and snapshots. The replica sets for the Ops Manager Application Database and the Backup Database are each made up of two data-bearing members and an arbiter.
Production Install with Highly Available Metadata and Snapshots¶
This install requires at least three servers. The replica sets for the Ops Manager Application Database and the Backup Database each comprise at least three data-bearing members. This requires more storage and memory than for the Production Install with Redundant Metadata and Snapshots.
Production Install with a Highly Available Ops Manager Application¶
This runs multiple Ops Manager Applications behind a load balancer and requires infrastructure outside of what Ops Manager offers. For details, see Configure a Highly Available Ops Manager Application.
Will you deploy managed MongoDB instances on servers that have no internet access?¶
If you use Automation and if the servers where you will deploy MongoDB do not have internet access, then you must configure Ops Manager to locally store and share the binaries used to deploy MongoDB so that the Automation agents can download them directly from Ops Manager.
You must configure local mode and store the binaries before you create the first managed MongoDB deployment from Ops Manager. For more information, see Configure Local Mode for Ops Manager Servers without Internet Access.
Will you use a proxy for the Ops Manager application’s outbound network connections?¶
If Ops Manager will use a proxy server to access external services, you must
configure the proxy settings in Ops Manager’s
conf-mms.properties
configuration file. If you have already started Ops Manager, you must restart
after configuring the proxy settings.
Security Decisions¶
Will you use authentication and/or SSL for the connections to the backing databases?¶
If you will use authentication or SSL for connections to the Ops Manager Application Database and Backup Database, you must configure those options on each database when deploying the database and then you must configure Ops Manager with the necessary certificate information for accessing the databases. For details, see Configure the Connections to the Backing MongoDB Instances
Will you use LDAP for user authenticate to Ops Manager?¶
If you want to use LDAP for user management, you can configure LDAP authentication before or after creating your first project. There are different prerequisites for implementing a new LDAP authentication scheme and converting existing authentication scheme to LDAP. To learn more about these differences, see Using LDAP from the Fresh Install vs. Converting to LDAP.
For details on LDAP authentication, see Configure Ops Manager Users for LDAP Authentication and Authorization.
Backup Decisions¶
Will the servers that run your Backup Daemons have internet access?¶
If the servers that run your Backup Daemons have no internet access, you must configure offline binary access for the Backup Daemon before running the Daemon. The install procedure includes the option to configure offline binary access.
Are certain backups required to be in certain data centers?¶
If you need to assign backups of particular MongoDB deployments to particular data centers, then each data center requires its own Ops Manager instance, Backup Daemon, and Backup Agent. The separate Ops Manager instances must share a single dedicated Ops Manager Application Database. The Backup Agent in each data center must use the URL for its local Ops Manager instance, which you can configure through either different hostnames or split-horizon DNS. For detailed requirements, see Assign Snapshot Stores to Specific Data Centers. | https://docs.opsmanager.mongodb.com/current/core/installation-checklist/ | 2018-06-18T00:11:59 | CC-MAIN-2018-26 | 1529267859904.56 | [] | docs.opsmanager.mongodb.com |
Install and configure¶
This section describes how to install and configure the Key Manager service, code-named barbican, on the controller node.
This section assumes that you already have a working OpenStack environment with at least the Identity Service (keystone) installed.
For simplicity, this configuration stores secrets on the local file system.
Note that installation and configuration vary by distribution.
- Install and configure for openSUSE and SUSE Linux Enterprise
- Install and configure for Red Hat Enterprise Linux and CentOS
- Install and configure for Ubuntu
- Configure Secret Store Back-end | https://docs.openstack.org/barbican/victoria/install/install.html | 2021-05-06T03:35:49 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.openstack.org |
User guide¶
Using type checker functions¶
Two functions are provided, potentially for use with the
assert statement:
These can be used to implement fine grained type checking for select functions.
If the function is called with incompatible types, or
check_return_type() is used
and the return value does not match the return type annotation, then a
TypeError is raised.
For example:
from typeguard import check_argument_types, check_return_type def some_function(a: int, b: float, c: str, *args: str) -> bool: assert check_argument_types() ... assert check_return_type(retval) return retval
When combined with the
assert statement, these checks are automatically removed from the code
by the compiler when Python is executed in optimized mode (by passing the
-O switch to the
interpreter, or by setting the
PYTHONOPTIMIZE environment variable to
1 (or higher).
Note
This method is not reliable when used in nested functions (i.e. functions defined inside
other functions). This is because this operating mode relies on finding the correct function
object using the garbage collector, and when a nested function is running, its function object
may no longer be around anymore, as it is only bound to the closure of the enclosing function.
For this reason, it is recommended to use
@typechecked instead for nested functions.
Using the decorator¶
The simplest way to type checking of both argument values and the return value for a single
function is to use the
@typechecked decorator:
from typeguard import typechecked @typechecked def some_function(a: int, b: float, c: str, *args: str) -> bool: ... return retval @typechecked class SomeClass: # All type annotated methods (including static and class methods and properties) # are type checked. # Does not apply to inner classes! def method(x: int) -> int: ...
The decorator works just like the two previously mentioned checker functions except that it has no issues with nested functions. The drawback, however, is that it adds one stack frame per wrapped function which may make debugging harder.
When a generator function is wrapped with
@typechecked, the yields, sends and the return value
are also type checked against the
Generator annotation. The same applies to the
yields and sends of an async generator (annotated with
AsyncGenerator).
Note
The decorator also respects the optimized mode setting so it does nothing when the interpreter is running in optimized mode.
Using the profiler hook¶
Deprecated since version 2.6: Use the import hook instead. The profiler hook will be removed in v3.0.
This type checking approach requires no code changes, but does come with a number of drawbacks. It relies on setting a profiler hook in the interpreter which gets called every time a new Python stack frame is entered or exited.
The easiest way to use this approach is to use a
TypeChecker as a context
manager:
from warnings import filterwarnings from typeguard import TypeChecker, TypeWarning # Display all TypeWarnings, not just the first one filterwarnings('always', category=TypeWarning) # Run your entire application inside this context block with TypeChecker(['mypackage', 'otherpackage']): mypackage.run_app()
Alternatively, manually start (and stop) the checker:
checker = TypeChecker(['mypackage', 'otherpackage']) checker.start() mypackage.start_app()
The profiler hook approach has the following drawbacks:
Return values of
Noneare not type checked, as they cannot be distinguished from exceptions being raised
The hook relies on finding the target function using the garbage collector which may make it miss some type violations, especially with nested functions
Generator yield types are checked, send types are not
Generator yields cannot be distinguished from returns
Async generators are not type checked at all
Hint
Some other things you can do with
TypeChecker:
Display all warnings from the start with
python -W always::typeguard.TypeWarning
Redirect them to logging using
logging.captureWarnings()
Record warnings in your pytest test suite and fail test(s) if you get any (see the pytest documentation about that)
Using the import hook¶
The import hook, when active, automatically decorates all type annotated functions with
@typechecked. This allows for a noninvasive method of run time type checking. This method does
not modify the source code on disk, but instead modifies its AST (Abstract Syntax Tree) when the
module is loaded.
Using the import hook is as straightforward as installing it before you import any modules you wish to be type checked. Give it the name of your top level package (or a list of package names):
from typeguard.importhook import install_import_hook install_import_hook('myapp') from myapp import some_module # import only AFTER installing the hook, or it won't take effect
If you wish, you can uninstall the import hook:
manager = install_import_hook('myapp') from myapp import some_module manager.uninstall()
or using the context manager approach:
with install_import_hook('myapp'): from myapp import some_module
You can also customize the logic used to select which modules to instrument:
from typeguard.importhook import TypeguardFinder, install_import_hook class CustomFinder(TypeguardFinder): def should_instrument(self, module_name: str): # disregard the module names list and instrument all loaded modules return True install_import_hook('', cls=CustomFinder)
To exclude specific functions or classes from run time type checking, use the
@typeguard_ignore decorator:
from typeguard import typeguard_ignore @typeguard_ignore def f(x: int) -> int: return str(x)
Unlike
no_type_check(), this decorator has no effect on static type checking.
Using the pytest plugin¶
Typeguard comes with a pytest plugin that installs the import hook (explained in the previous
section). To use it, run
pytest with the appropriate
--typeguard-packages option. For
example, if you wanted to instrument the
foo.bar and
xyz packages for type checking, you
can do the following:
pytest --typeguard-packages=foo.bar,xyz
There is currently no support for specifying a customized module finder.
Checking types directly¶
Typeguard can also be used as a beefed-up version of
isinstance() that also supports checking
against annotations in the
typing module:
from typeguard import check_type # Raises TypeError if there's a problem check_type('variablename', [1234], List[int])
Support for mock objects¶
Typeguard handles the
unittest.mock.Mock and
unittest.mock.MagicMock classes
specially, bypassing any type checks when encountering instances of these classes. | https://typeguard.readthedocs.io/en/latest/userguide.html | 2021-05-06T04:23:28 | CC-MAIN-2021-21 | 1620243988725.79 | [] | typeguard.readthedocs.io |
You can identify when CloudBees Feature Management SDK has loaded configuration from local
storage or network by adding the
onConfigurationFetched handler to
RoxOptions.
Examples
The following are examples of code in different SDKs that add the
onConfigurationFetched handler:
FetcherResult information
The
FetcherResult has the following information regarding the actual
fetch:
fetcherStatus- an enum that identifies which configuration was fetched (from the network, from local storage, an error occurred)
creationDate- Date of configuration creation
errorDetails- The description of the error if one exists
hasChanges- Does this configuration differ from the one it is replacing | https://docs.cloudbees.com/docs/cloudbees-feature-management/latest/reporting/configuration-fetched-handler | 2021-05-06T02:52:03 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.cloudbees.com |
Wrapper for the diagnostic_msgs::DiagnosticStatus message that makes it easier to update. More...
#include <DiagnosticStatusWrapper.h>
Wrapper for the diagnostic_msgs::DiagnosticStatus message that makes it easier to update.
This class handles common string formatting and vector handling issues for filling the diagnostic_msgs::DiagnosticStatus message. It is a subclass of diagnostic_msgs::DiagnosticStatus, so it can be passed directly to diagnostic publish calls.
Definition at line 66 of file DiagnosticStatusWrapper.h.
Add a key-value pair.
This method adds a key-value pair. Any type that has a << stream operator can be passed as the second argument. Formatting is done using a std::stringstream.
Definition at line 204 of file DiagnosticStatusWrapper.h.
For bool, diagnostic value is "True" or "False".
Definition at line 245 of file DiagnosticStatusWrapper.h.
Add a key-value pair using a format string.
This method adds a key-value pair. A format string is used to set the value. The current implementation limits the value to 1000 characters in length.
Definition at line 256 of file DiagnosticStatusWrapper.h.
Clear the key-value pairs.
The values vector containing the key-value pairs is cleared.
Definition at line 228 of file DiagnosticStatusWrapper.h.
clears the summary, setting the level to zero and the message to "".
Definition at line 179 of file DiagnosticStatusWrapper.h.
Merges a level and message with the existing ones.
It is sometimes useful to merge two DiagnosticStatus messages. In that case, the key value pairs can be unioned, but the level and summary message have to be merged more intelligently. This function does the merge in an intelligent manner, combining the summary in *this, with the one that is passed in.
The combined level is the greater of the two levels to be merged. If both levels are non-zero (not OK), the messages are combined with a semicolon separator. If only one level is zero, and the other is non-zero, the message for the zero level is discarded. If both are zero, the new message is ignored.
Definition at line 101 of file DiagnosticStatusWrapper.h.
Version of mergeSummary that merges in the summary from another DiagnosticStatus.
Definition at line 123 of file DiagnosticStatusWrapper.h.
Formatted version of mergeSummary.
This method is identical to mergeSummary, except that the message is an sprintf-style format string.
Definition at line 140 of file DiagnosticStatusWrapper.h.
Fills out the level and message fields of the DiagnosticStatus.
Definition at line 76 of file DiagnosticStatusWrapper.h.
copies the summary from a DiagnosticStatus message
Definition at line 189 of file DiagnosticStatusWrapper.h.
Formatted version of summary.
This method is identical to summary, except that the message is an sprintf-style format string.
Definition at line 163 of file DiagnosticStatusWrapper.h. | https://docs.ros.org/en/groovy/api/diagnostic_updater/html/classdiagnostic__updater_1_1DiagnosticStatusWrapper.html | 2021-05-06T03:23:02 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.ros.org |
Update configuration variable. The command line option takes precedence over the configuration variable. If neither is given, a
update procedures supported both from the command line as well as through the
submodule.<name>.update configuration. | https://docs.w3cub.com/git/git-submodule | 2021-05-06T03:59:53 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.w3cub.com |
The QDoubleSpinBox class provides a spin box widget that takes doubles. More... valueChanged() and textChanged() signals, the former providing a double and the latter a QString. The textChanged() signal step type.
The step type can be single step or adaptive decimal().
[signal.
Note: Signal valueChanged is overloaded in this class. To connect to this signal by using the function pointer syntax, Qt provides a convenient helper for obtaining the function pointer as shown in this example:
connect(doubleSpinBox, QOverload<double>::of(&QDoubleSpinBox::valueChanged), [=](double d){ /* ... */ });
[virtual]QDoubleSpinBox::~QDoubleSpinBox()
Destructor.
[override virtual]void QDoubleSpinBox::fixup(QString &input) const
Reimplements: QAbstractSpinBox::fixup(QString &input) const.
Convenience function to set the minimum and maximum values with a single function call.
Note: The maximum and minimum values will be rounded to match the decimals property.
setRange(minimum, maximum);
is equivalent to:
setMinimum(minimum); setMaximum(maximum);
See also minimum and maximum..
It also works for any decimal values, 0.041 is increased to 0.042 by stepping once.().
[override virtual]QValidator::State QDoubleSpinBox::validate(QString &text, int &pos) const
Reimplements: QAbstractSpinBox::validate(QString &input, int &pos) const.
The Qt Company Ltd
Licensed under the GNU Free Documentation License, Version 1.3. | https://docs.w3cub.com/qt~5.15/qdoublespinbox | 2021-05-06T03:38:23 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.w3cub.com |
Whether to return column offsets in the offsets array.
If false, the returned offsets array contains just the row offsets. If true, the returned offsets array contains all column offsets for each column in the rows (i.e., it has nrows*ncols entries). Individual rows will have empty columns added or extra columns merged into the last column if they do not have exactly ncols columns.
The data to parse.
The delimiter to use. Defaults to ','.
Maximum number of rows to parse.
If this is not given, parsing proceeds to the end of the data.
The row delimiter to use. Defaults to '\r\n'.
The starting index in the string for processing. Defaults to 0. This index should be the first character of a new row. This must be less than data.length.
The options for a parser. | https://jupyterlab.readthedocs.io/en/stable/api/interfaces/csvviewer.iparser.ioptions.html | 2021-05-06T03:39:50 | CC-MAIN-2021-21 | 1620243988725.79 | [] | jupyterlab.readthedocs.io |
The Amazon Sumerian Dashboard
The Dashboard is the first thing you see when you open the Amazon Sumerian app. This is where you manage your projects, scenes, asset packs, and templates.
Projects collect scenes and the templates and asset packs that you export from them. You can create draft projects outside of a project, but you must have a project to export templates and assets.
When you open a scene in the editor, it is locked to prevent other users from modifying it. The dashboard manages locks and lets you steal a lock if the other user leaves a scene open by accident. | https://docs.aws.amazon.com/sumerian/latest/userguide/sumerian-dashboard.html | 2018-08-14T14:56:22 | CC-MAIN-2018-34 | 1534221209040.29 | [array(['images/dashboard.png', 'The Sumerian dashboard.'], dtype=object)] | docs.aws.amazon.com |
Smart Indentation
The PHP editor supports the smart indentation feature. The editor's cursor is automatically indented on new lines to the desirable position. This is most noticeable after
{, within
if, after
; or
:.
Settings
The option is enabled by default. The option can be disabled in
Tools | Options | Text Editor | PHP | Tabs.
Smart Indent Features
- enter key will place your caret to the best position for new statement or for continuing unfinished statement. This also works within PHPDoc or multi-lined comments, and adds
*at the line beginning automatically.
{character outdents current line to line up with code block start.
}character outdents current line to match block start. If the code is syntactically valid, it also reformats the code block.
:after
caseor
default, it makes indentation of the line the same as previous cases within the switch. | https://docs.devsense.com/en/editor/smart-indent | 2018-08-14T13:54:59 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.devsense.com |
The Local Ambari Managed Cluster Configuration option is enabled on the Ambari Admin page if you are managing a cluster with Ambari. When enabled, you can choose this option and Ambari will automatically configure the view based on how the cluster is configured.
When you configure the view using the Local option, the Files View communicates with HDFS based on the fs.defaultFS property (for example: hdfs://namenode:8020). The View also determines whether NameNode HA is configured and adjusts accordingly. | https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-views/content/Cluster_Configuration_Local.html | 2018-08-14T13:44:33 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.hortonworks.com |
Understanding the Compatibility view list
Websites that are designed for older versions of Windows Internet Explorer don't always display as expected in the current version. We addressed this in Windows Internet Explorer 8 by adding the Compatibility View function that allows users to "revert" to a previous browser version.
Examples of incompatibility issues that are addressed by Compatibility View include incorrect browser or feature detection. Today many sites use browser detection instead of feature and behavior detection to give Internet Explorer (IE) markup that is not interoperable with modern web standards. This can result in major functionality breaks on sites when rendered in newer versions of IE.
Compatibility View allows content designed for older web browsers to render well in newer versions of Internet Explorer. The Compatibility View (CV) List automatically displays the content of websites in Compatibility View without further interaction.
Why is a site added to the CV List?
We only add a site to the Compatibility View List when the site:
- Is designed to run in an older version of Internet Explorer
- Doesn't run well in Edge mode.
- Doesn't declare an "X-UA-Compatible-" meta tag or header
Updating the CV List
The CV List is an XML file on "Microsoft.com". We can update the list monthly, which means we can accommodate site developer requests to remove updated sites. Users automatically get updates.
In order to continue to reduce the size of the CV List, we encourage site developers to update their sites and serve the same markup they use with other browsers. At a minimum, sites should take advantage of the "X-UA-Compatible-" meta tag to specify a legacy document mode until the site is updated.
Steps to remove your site from the IE CV List
- Verify that your site works well in in the latest web standards or Edge mode. Start by turning off compatibility mode in IE before navigating to your website so that compatibility issues will appear. Do this by clearing the Use Microsoft Compatibility Lists check box in the Compatibility View Settings dialog box (ALT key to show menus -> select Tools -> select Compatibility View settings).
- Look for any compatibility issue on your website. You can also use the F12 developer tools to resolve any compatibility issues.
- Please note, the ability to add sites to a user’s local CV list has been removed from Microsoft Edge. You can stop using the CV list entirely by going to about:flags in the address bar and unchecking Use Microsoft compatibility lists
- The next step is to see if your website is on the CV list. Check the Internet Explorer CV List(s) to see if your site is on one of these lists:
- The Edge Compatibility View List
- The IE11 on Windows 10 Compatibility View List
- The Edge on Phone Compatibility View List
- The IE9 Compatibility View List
- The IE10 Compatibility View List
- The IE11 Compatibility View List
- The IE10 on Windows Phone 8.0 Compatibility View List
- The IE11 on Windows Phone 8.1 Compatibility View List
- The IE11 on Windows Phone 8.1 GDR1 Compatibility View List
- The XboxOne Compatibility View List
These are the lists Internet Explorer uses to determine whether to render the website in compatibility mode.
Microsoft Edge
Microsoft Edge has removed legacy Internet Explorer technologies like ActiveX controls, Toolbars, BHOs, Vbscript, etc. If your site relies on these controls, you should:
- Investigate moving away from these controls as much as possible. For video, please see the Microsoft Edge Blog “Moving to HTML5 Premium Media”
- Communicate to your users how they can continue to use your website. For instance, Our site is currently testing the new Microsoft Edge browser. Until our tests are complete, it may not function correctly. Please continue to use Internet Explorer until we are done with our testing and provide a notice of compatibility
- Lastly, email [email protected] with the following information and ask that your site be removed from either the IE or Microsoft Edge CV List when your updates are live on the web:
- Owner Name
- Corporate Title
- Telephone Number
- Company Name
- Street Address
- Website Address
- Target platform (Desktop, Phone, Xbox)
- CV List(s) from which the website is to be removed. Microsoft will review the provided information and remove your site from the Compatibility View List at the next scheduled list update.
Related Topics
- Understanding the Compatibility View List (Internet Explorer 8)
- Windows Internet Explorer 9 faster, more capable Compatibility View List
- IE's Compatibility Features for Site Developers
- Introducing Compatibility View
- Same Markup: Writing Cross-Browser Code
- Internet Explorer 9 Compatibility View List - Server Version
- Internet Explorer 9 Developer Tools: Network Tab
- How to use F12 Developer Tools to Debug your Webpages | https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/compatibility/gg622935(v=vs.85) | 2018-08-14T14:14:09 | CC-MAIN-2018-34 | 1534221209040.29 | [array(['images/gg622935.compatibilityviewsettingsdialog%28en-us%2cvs.85%29.png',
None], dtype=object) ] | docs.microsoft.com |
i2pd configuration
i2pd can be configured via either command-line arguments or config files.
Modifying
i2pd.conf is just a way of passing command line arguments to i2pd at boot;
for example, running i2pd with argument
--port=10123 and setting option
port = 10123 in config file will have
the same effect.
There are two separate config files:
i2pd.conf and
tunnels.conf.
i2pd.conf is the main configuration file, where
you configure all options.
tunnels.conf is the tunnel configuration file, where you configure I2P hidden services
and client tunnels. the
tunnels.conf options are documented here.
i2pd.conf accets INI-like syntax, such as the following :
For example:
i2pd.conf:
# comment log = true ipv6 = true # settings for specific module [httpproxy] port = 4444 # ^^ this will be --httproxy.port= in cmdline # another comment [sam] enabled = true
You are also encouraged to see the commented config with examples of all options in
contrib/i2pd.conf.
Options specified on the command line take precedence over those in the config file. If you are upgrading your very old router (< 2.3.0) see also this page.
Available options
Run
./i2pd --help to show builtin help message (default value of option will be shown in braces)
General options
Notes
datadir and
service options are only used as arguments for i2pd, these options have no effect when set in
i2pd.conf.
Windows-specific options
All options below still possible in cmdline, but better write it in config file:
HTTP webconsole
HTTP proxy
Socks proxy
SAM interface
BOB interface
I2CP interface
I2PControl interface
UPNP
Cryptography
Reseeding
Addressbook options
Limits
Trust options
Websocket server
Exploratory tunnels
Local addressbook
There is also a special addressbook config file in the working directory at
addressbook/local.csv.
It is used to map long I2P destinations to short, human readable domain names. The syntax is csv and
it can be modified manually if the user wishes. | https://i2pd.readthedocs.io/en/latest/user-guide/configuration/ | 2018-08-14T14:22:06 | CC-MAIN-2018-34 | 1534221209040.29 | [] | i2pd.readthedocs.io |
Hosting Types
The hosting type defines the website's behavior. Plesk supports three types of hosting: Website hosting, Forwarding, and No hosting.
Website Hosting
The Website hosting means that a website is physically located on the server.
For the website hosting type you can specify:
- Document root. The location of the directory where all files and subdirectories of the site will be kept. You can use the default directory
httpdocsor specify another directory.
- Preferred domain. Typically, any website is available on two URLs: with the www prefix (like in)and without it (like in
example.com). We recommend that you always redirect visitors to one of these URLs (typically to the non-www version). For example, after you set the Preferred domain to the non-www version (
example.com), site visitors will be redirected to this URL even if they specify their browsers.
Plesk uses the search engine friendly HTTP 301 code for such a redirection. This allows preserving search engine rankings of your site (preferred domain). If you turn off the redirection by choosing None, search engines will treat both URL versions (www and non-www) as URLs of different sites. As a result, rankings will be split between these URLs.
Forwarding
You can point one or more registered domain names to the same physical website, by using the domain name forwarding. This allows automatic redirection of visitors from the URL they specify in a browser to a site with a different URL. For example, visitors of the site can be redirected to. There are two types of forwarding in Plesk: the standard and frame forwarding.
Standard Forwarding
With the Standard forwarding, users who have been redirected to another URL can see the destination URL in the browser address bar.
Depending on how long you intend to use the redirection, you can select the type of redirection – Moved permanently (code 301) or Moved temporarily (code 302). These are HTTP response codes which Plesk sends to browsers to perform the redirection. From visitors’ point of view, the response code does not matter: in both cases they will be simply redirected to the destination URL. For search engines, the code defines how they should treat the redirected site and affects search engine rankings.
- Moved permanently (code 301).
Use this redirection type if you want to keep search engine rankings of your site after moving it permanently to another address.
For example, if
example1.comhas been moved permanently to the domain
example2.com, the rankings will not be split between
example1.comand
example2.com– search engine crawlers will treat them as a single website.
- Moved temporarily (code 302).
Use this redirection type when the destination domain is used temporarily, for example, when you are testing a new version of your site with real visitors while keeping the old version intact. If you set this redirection for a newly created destination domain, this domain will not be indexed by search engines.
Frame Forwarding
With the Frame forwarding, when visitors are redirected to another site, the address bar of their browsers continues to show the source URL. Thus, visitors remain unaware of the redirection. This is called frame forwarding as the index page of the source site contains a frame with the destination site.
Domains Without Web Hosting
You can switch off web service and use only email services under that domain (Websites & Domains > domain name > Hosting Settings > the Change link near Hosting Type > the No web hosting option). | https://docs.plesk.com/en-US/onyx/reseller-guide/website-management/websites-and-domains/hosting-settings/general-settings/hosting-types.72051/ | 2018-08-14T13:49:46 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.plesk.com |
Suppose you wanted a list of the top bloggers in the social analytics space. To begin with, you need to run a query to collect the top performing blog posts in the category. Next, group the dataset into author clusters. Your influencer campaign should target those bloggers who consistently rank for social signals, create a steady flow of content, and ideally spark reader engagement in comments.
- Choose web content data API and filter blog posts for top social signals
- Sort by participant count in threads &sort=participants_count
- Filter by author across news/blogs/discussions | https://docs.webhose.io/v1.0/docs/influencer-marketing | 2018-08-14T14:23:39 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.webhose.io |
Installing and Activating
The ZBrush to KeyShot Bridge plugin is already installed with ZBrush and doesn’t need any additional files to be added by you after your purchase. All that is necessary is to activate the plugin after purchasing a license for it.
The activation process is simple and will be triggered the first time you use the ZBrush to KeyShot Bridge.
- First, install and activate KeyShot.
- Click Render>External Render>KeyShot to set it as the default BPR renderer.
- Load a model and click Render>BPR RenderPass>BPR or use the Shift + R hotkey.
- A dialog box will open, notifying you that you need a license to run the ZBrush to KeyShot Bridge. Click the “Install my License” button. If you don’t own a license, you will need to purchase one at the Pixologic Store () or download a trial license. Internet links in the dialog box can take you directly to the appropriate pages for these choices.
- Upon clicking the “Install my License” button, a window will open so that you can browse your computer hard drive to locate and load the license file that you saved after your purchase. (If you haven’t download your license you will need to do so before you can proceed. Please refer to the email you received from the Pixologic store which includes all the information needed to download your license file.)
- When your license has been located, select it and click the “Open” button.
- A new dialog box will appear notifying you that the license has been successfully installed on your computer.
- The final step is to activate your license on this computer. To do this, click the “Activate My License” button.
- If your computer is connected to the internet, activation will be immediate. If you do not have an internet connection, see the paragraph immediately following these steps.
- Upon activation, KeyShot should now launch and your ZBrush model should appear in its document window. If you have a connection error in ZBrush after performing the activation, click the BPR button again. (You should not have to go through steps 4-9 again). an activation file. Download the file and save it to your USB stick. The ZBrush to KeyShot Bridge activation window will allow you to load that file from the USB stick and complete the activation. | http://docs.pixologic.com/user-guide/materials-lights-rendering/rendering/zbrush-to-keyshot/activation/ | 2018-08-14T13:29:02 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.pixologic.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the GetEventSelectors operation..
Namespace: Amazon.CloudTrail.Model
Assembly: AWSSDK.CloudTrail.dll
Version: 3.x.y.z
The GetEventSelect | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CloudTrail/TGetEventSelectorsRequest.html | 2018-08-14T14:33:55 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.aws.amazon.com |
clojureEstimated reading time: 7 minutes
Clojure is a dialect of Lisp that runs on the JVM.
GitHub repo:
Library reference
This content is imported from the official Docker Library docs, and is provided by the original uploader. You can view the Docker Store page for this image at
Supported.8.1,
boot(debian/boot/Dockerfile)
boot-2.8.1-alpine,
boot-alpine(alpine/boot/Dockerfile)
tools-deps-1.9.0.381,
tools-deps(debian/tools-deps/Dockerfile)
tools-deps-1.9.0.381-alpine,
tools-deps-alpine(alpine/tools-deps/Dockerfile))
clojure/ directory.
As for any pre-built image usage, it is the image user’s responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.library, sample, clojure | https://docs.docker.com/samples/library/clojure/ | 2018-08-14T14:08:03 | CC-MAIN-2018-34 | 1534221209040.29 | [array(['https://raw.githubusercontent.com/docker-library/docs/665526c3b12cedfd721234cedb61e8433f73b75a/clojure/logo.png',
'logo'], dtype=object) ] | docs.docker.com |
Condition Variables
Condition variables are synchronization primitives that enable threads to wait until a particular condition occurs. Condition variables are user-mode objects that cannot be shared across processes.
Condition variables enable threads to atomically release a lock and enter the sleeping state. They can be used with critical sections or slim reader/writer (SRW) locks. Condition variables support operations that "wake one" or "wake all" waiting threads. After a thread is woken, it re-acquires the lock it released when the thread entered the sleeping state.
Note that the caller must allocate a CONDITION_VARIABLE structure and initialize it by either calling InitializeConditionVariable (to initialize the structure dynamically) or assign the constant CONDITION_VARIABLE_INIT to the structure variable (to initialize the structure statically).
Windows Server 2003 and Windows XP: Condition variables are not supported.
The following are the condition variable functions.
The following pseudocode demonstrates the typical usage pattern of condition variables.
CRITICAL_SECTION CritSection; CONDITION_VARIABLE ConditionVar; void PerformOperationOnSharedData() { EnterCriticalSection(&CritSection); // Wait until the predicate is TRUE while( TestPredicate() == FALSE ) { SleepConditionVariableCS(&ConditionVar, &CritSection, INFINITE); } // The data can be changed safely because we own the critical // section and the predicate is TRUE ChangeSharedData(); LeaveCriticalSection(&CritSection); // If necessary, signal the condition variable by calling // WakeConditionVariable or WakeAllConditionVariable so other // threads can wake }
For example, in an implementation of a reader/writer lock, the
TestPredicate function would verify that the current lock request is compatible with the existing owners. If it is, acquire the lock; otherwise, sleep. For a more detailed example, see Using Condition Variables.
Condition variables are subject to spurious wakeups (those not associated with an explicit wake) and stolen wakeups (another thread manages to run before the woken thread). Therefore, you should recheck a predicate (typically in a while loop) after a sleep operation returns.
You can wake other threads using WakeConditionVariable or WakeAllConditionVariable either inside or outside the lock associated with the condition variable. It is usually better to release the lock before waking other threads to reduce the number of context switches.
It is often convenient to use more than one condition variable with the same lock. For example, an implementation of a reader/writer lock might use a single critical section but separate condition variables for readers and writers.
Related topics | https://docs.microsoft.com/en-us/windows/desktop/Sync/condition-variables | 2018-08-14T13:45:25 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.microsoft.com |
Alert Messages
Overview
inSync Master displays an alert message to indicate an exception situation or to indicate completion of an activity. For example, detection of antivirus activity is an exception, while backup success is a completion of an activity. inSync sends alert messages only to its subscribers. You can configure inSync to send specific alert messages to specific administrators and users.
inSync Master displays alerts for a certain duration, depending on the nature and severity of the alert. inSync Master displays alert messages, such as low storage space alert or misconfigured backup folder alert until you resolve the issue.
The ‘alert’ processing happens every 10 minutes and once a new alert is generated, the same is used for future emails until it is resolved. New alerts for the same condition are not generated every time when the same situation is encountered. The original alert which was generated on the first instance is updated with new information and sent. A total of two emails are sent as per the notification interval guidelines mentioned below in case the alert is not resolved before that time (+/- 10 minutes time difference may be present).
Note: Timestamps that appear on the alert messages follow the inSync Master's time zone.
About inSync alert messages
The following table lists various inSync alert messages.
View active alerts
To view he list of alerts that inSync displays
- On the inSync Master Management Console menu bar, click Reporting > Alerts.
The list of all alerts appear under the Active Alerts tab. On the Manage Alerts page, use filters to view a specific type of alert.
Update the subscriber list for an alert
To update the subscribers for an alert
-. | https://docs.druva.com/010_002_inSync_On-premise/010_5.4.1/040_Monitor/Alert_Messages | 2018-08-14T14:22:37 | CC-MAIN-2018-34 | 1534221209040.29 | [array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object) ] | docs.druva.com |
Troubleshooting Poor Query Performance: Cardinality Estimation
The query optimizer in SQL Server is cost-based. This means that it selects query plans that have the lowest estimated processing cost to execute. The query optimizer determines the cost of executing a query plan based on two main factors:
The total number of rows processed at each level of a query plan, referred to as the cardinality of the plan.
The cost model of the algorithm dictated by the operators used in the query.
The first factor, cardinality, is used as an input parameter of the second factor, the cost model. Therefore, improved cardinality leads to better estimated costs and, in turn, faster execution plans.
SQL Server estimates cardinalities primarily from histograms that are created when indexes or statistics are created, either manually or automatically. Sometimes, SQL Server also uses constraint information and logical rewrites of queries to determine cardinality.
In the following cases, SQL Server cannot accurately calculate cardinalities. This causes inaccurate cost calculations that may cause suboptimal query plans. Avoiding these constructs in queries may improve query performance. Sometimes, alternative query formulations or other measures are possible and these are pointed out.
Queries with predicates that use comparison operators between different columns of the same table.
Queries with predicates that use operators, and any one of the following are true:
There are no statistics on the columns involved on either side of the operators.
The distribution of values in the statistics is not uniform, but the query seeks a highly selective value set. This situation can be especially true if the operator is anything other than the equality (=) operator.
The predicate uses the not equal to (!=) comparison operator or the NOT logical operator.
Queries that use any of the SQL Server built-in functions or a scalar-valued, user-defined function whose argument is not a constant value.
Queries that involve joining columns through arithmetic or string concatenation operators.
Queries that compare variables whose values are not known when the query is compiled and optimized.
The following measures can be used to try to improve performance on these types of queries:
Build useful indexes or statistics on the columns that are involved in the query. For more information, see Designing Indexes and Using Statistics to Improve Query Performance.
Consider using computed columns and rewriting the query if the query uses comparison or arithmetic operators to compare or combine two or more columns. For example, the following query compares the values in two columns:
SELECT * FROM MyTable WHERE MyTable.Col1 > MyTable.Col2
Performance may be improved if you add a computed column Col3 to MyTable that calculates the difference between Col1 and Col2 (Col1 minus Col2). Then, rewrite the query:
SELECT * FROM MyTable WHERE Col3 > 0
Performance will probably improve more if you build an index on MyTable.Col3. | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms181034(v=sql.105) | 2018-08-14T13:56:05 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.microsoft.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
This is the response object from the DeleteVolume operation.
Namespace: Amazon.EC2.Model
Assembly: AWSSDK.EC2.dll
Version: 3.x.y.z
The DeleteVolumeResponse type exposes the following members
This example deletes an available volume with the volume ID of ``vol-049df61146c4d7901``. If the command succeeds, no output is returned.
var response = client.DeleteVolume(new DeleteVolumeRequest { VolumeId = "vol-049df61146c4d79 | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/EC2/TDeleteVolumeResponse.html | 2018-08-14T14:34:11 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.aws.amazon.com |
Authenticate access with personal access tokens for VSTS and TFS
VSTS | TFS 2018 | TFS 2017
Visual Studio Team Services (VSTS) and Team Foundation Server (TFS) use enterprise-grade authentication, backed by a Microsoft account or Azure Active Directory (Azure AD), to help protect and secure your data. Clients like Visual Studio and Eclipse (with the Team Explorer Everywhere plug-in) natively support Microsoft account and Azure AD authentication, so you can directly use those authentication methods to sign in.
For non-Microsoft tools that integrate into VSTS but do not support Microsoft account or Azure AD authentication interactions (for example, Git, NuGet, or Xcode), you need to set up personal access tokens (PATs). You set up PATs by using Git credential managers or by creating them manually. You can also use personal access tokens when there is no "pop- up UI," such as with command-line tools, integrating tools or tasks into build pipelines, or using REST APIs.
Personal access tokens essentially are alternate passwords that you create in a secure way by using your normal authentication. PATs can have expiration dates, limited scopes (for example, only certain REST APIs or command-line operations are valid), and specific VSTS organizations. You can put them in environment variables so that scripts don't hard code passwords. For more information, see Authentication overview and Scopes.
Create personal access tokens to authenticate access
https://{yourorganization}.visualstudio.com) or your Team Foundation Server web portal (
https://{server}:8080/tfs/).
From your home page, open your profile. Go to your security details.
TFS 2017
VSTS
Create a personal access token.
Name your token. Select a lifespan for your token.
If you're using VSTS, and you have more than one account, you can also select the VSTS account where you want to use the token.
Select the scopes that this token will authorize for your specific tasks.
For example, to create a token to enable a build and release agent to authenticate to VSTS.
VSTS
TFS 2017
Revoke access.
Using PATs
For examples of using PATs, see Git credential managers, REST APIs, NuGet on a Mac, and Reporting clients.
Frequently asked questions
Q: What is my Visual Studio Team Services URL?
A: https://{yourorganization}.visualstudio.com, for example.
Q: What notifications might I receive about my PAT?
A: Users receive two notifications during the lifetime of a PAT, one at creation and the other 7 days approaching the expiration.
Here's the notification at PAT creation:
Here's the notification that a PAT is nearing expiration:
Q: What do I do if I believe that someone other than me is creating access tokens on my organization?
A: If you get a notification that a PAT was created and you don't know what caused this, keep in mind that some actions can automatically create a PAT on your behalf. For example:
- Connecting to a VSTS Git repo through git.exe. This creates a token with a display name like "git: on MyMachine."
- Setting up an Azure App Service web app deployment. This creates a token with a display name like "Service Hooks :: Azure App Service :: Deploy web app."
- Setting up web load testing as part of a pipeline. This creates a token with a display name like "WebAppLoadTestCDIntToken."
If you still believe that a PAT was created in error, we suggest revoking the PAT. The next step is to investigate whether your password has been compromised. Changing your password is a good first step to defend against this attack vector. If you’re an Azure Active Directory user, talk with your administrator to check if your organization was used from an unknown source or location. | https://docs.microsoft.com/en-us/vsts/organizations/accounts/use-personal-access-tokens-to-authenticate?view=vsts | 2018-08-14T13:21:19 | CC-MAIN-2018-34 | 1534221209040.29 | [array(['_img/use-personal-access-tokens-to-authenticate/pat-creation.png?view=vsts',
'PAT creation notification'], dtype=object)
array(['_img/use-personal-access-tokens-to-authenticate/pat-expiration.png?view=vsts',
'PAT nearing expiration notification'], dtype=object) ] | docs.microsoft.com |
Active Directory Domain Services Overview
Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
A directory is a hierarchical structure that stores information about objects on the network. A directory service, such as Active Directory Domain Services ., see Directory data store.. For more information about Active Directory security, see Security overview.
Active Directory also includes:.
Understanding Active Directory
This section provides links to core Active Directory concepts:
- Active Directory Structure and Storage Technologies
- Domains Controller Roles
- Active Directory Schema
- Understanding Trusts
- Active Directory Replication Technologies
- Active Directory Search and Publication Technologies
- Interoperating with DNS and Group Policy
- Understanding Schema
For a detailed list of Active Directory concepts, see Understanding Active Directory. | https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview | 2018-08-14T13:20:15 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.microsoft.com |
Legacy: The Legacy Surveys and Legacy Administration modules are available on instances upgraded from a previous release but not available for new instances. Customers using legacy survey or survey wizard should plan to migrate to the Survey Management application to create modern and high quality surveys for their users. The following legacy survey plugins are inactive by default, and are available upon request: Best Practice - Task Survey Management (ID: com.snc.bestpractice.task_survey) Survey Management (ID: com.glideapp.survey) Assessment Components (ID: com.snc.assessment) Survey Wizard (ID: com.glideapp.survey_wizard) questionsSurvey trigger conditionsSurvey distribution | https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/administer/survey-administration/concept/c_MigrateSurveys_1.html | 2018-08-14T13:51:43 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.servicenow.com |
f3 - Fight Flash Fraud¶
f3 is a simple tool that tests flash cards capacity and performance to see if they live up to claimed specifications.
F3 stands for Fight Flash Fraud, or Fight Fake Flash.
Table of Contents
Examples¶
Testing performance with f3read/f3write¶
Use these two programs in this order. f3write will write large files to your mounted disk and f3read will check if the flash disk contains exactly the written files:
$ ./f3write /media/michel/5EBD-5C80/ $ ./f3read /media/michel/5EBD-5C80/
Please replace “/media/michel/5EBD-5C80/” with the appropriate path. USB devices are mounted in “/Volumes” on Macs.
If you have installed f3read and f3write, you can remove the “./” that is shown before their names.
Quick capacity tests with f3probe¶
f3probe is the fastest drive test and suitable for large disks because it only writes what’s necessary to test the drive. It operates directly on the (unmounted) block device and needs to be run as a privileged user:
# ./f3probe --destructive --time-ops /dev/sdb
Warning
This will destroy any previously stored data on your disk!
Installation¶
Download and Compile¶
The files of the stable version of F3 are here. The following command uncompresses the files:
$ unzip f3-7.1.zip
Compile stable software on Linux or FreeBSD¶
To build:
make
If you want to install f3write and f3read, run the following command:
make install
Compile stable software on Windows/Cygwin¶
If you haven’t already, install the following Cygwin packages and their dependencies:
- gcc-core
- make
- libargp-devel
To build, you need special flags:
export LDFLAGS="$LDFLAGS -Wl,--stack,4000000 -largp" make
If you want to install f3write and f3read, run the following command:
make install
Compile stable software on Apple Mac¶
Using HomeBrew¶
If you have Homebrew already installed in your computer, the command below will install F3:
brew install f3
Compiling the latest development version from the source code¶
Most of the f3 source code builds fine using XCode, the only dependency missing is the GNU C library “argp”. You can build argp from scratch, or use the version provided by HomeBrew and MacPorts as “argp-standalone”
The following steps have been tested on OS X El Capitan 10.11.
Install Apple command line tools:
xcode-select --install
See for details.
Install Homebrew or MacPorts
HomeBrew:
/usr/bin/ruby -e "$(curl -fsSL)"
See for details.
MacPorts:
Install argp library:
brew install argp-standalone
See and for more information.
Or, for MacPorts:
port install argp-standalone
See for more information.
Build F3:
When using Homebrew, you can just run:
make
When using MacPorts, you will need to pass the location where MacPorts installed argp-standalone:
make ARGP=/opt/local
The extra applications for Linux¶
Install dependencies¶
f3probe and f3brew require version 1 of the library libudev, and f3fix requires version 0 of the library libparted to compile. On Ubuntu, you can install these libraries with the following command:
sudo apt-get install libudev1 libudev-dev libparted0-dev
Compile the extra applications¶
make extra
Note
- The extra applications are only compiled and tested on Linux platform.
- Please do not e-mail me saying that you want the extra applications to run on your platform; I already know that.
- If you want the extra applications to run on your platform, help to port them, or find someone that can port them for you. If you do port any of them, please send me a patch to help others.
- The extra applications are f3probe, f3brew, and f3fix.
If you want to install the extra applications, run the following command:
make install-extra
Other resources¶
Graphical User Interfaces¶
Thanks to our growing community of fraud fighters, we have a couple of graphical user interfaces (GUIs) available for F3:
F3 QT is a Linux GUI that uses
QT. F3 QT supports
f3write,
f3read, and
f3probe. Author:
Tianze.
F3 X is a OS X GUI that uses
Cocoa. F3 X supports
f3write and
f3read. Author: Guilherme
Rambo.
Please support these projects testing and giving feedback to their authors. This will make their code improve as it has improved mine.
Files¶
changelog - Change log for package maintainers f3read.1 - Man page for f3read and f3write In order to read this manual page, run `man ./f3read.1` To install the page, run `install --owner=root --group=root --mode=644 f3read.1 /usr/share/man/man1` LICENSE - License (GPLv3) Makefile - make(1) file README - This file *.h and *.c - C code of F3
Bash scripts¶
Although the simple scripts listed in this section are ready for use, they are really meant to help you to write your own scripts. So you can personalize F3 to your specific needs:
f3write.h2w - Script to create files exactly like H2testw. Use example: `f3write.h2w /media/michel/5EBD-5C80/` log-f3wr - Script that runs f3write and f3read, and records their output into a log file. Use example: `log-f3wr log-filename /media/michel/5EBD-5C80/`
Please notice that all scripts and use examples above assume that f3write, f3read, and the scripts are in the same folder. | https://fight-flash-fraud.readthedocs.io/en/latest/introduction.html | 2018-08-14T14:16:12 | CC-MAIN-2018-34 | 1534221209040.29 | [] | fight-flash-fraud.readthedocs.io |
The ZModeler Brush: Actions and Targets
The ZModeler brush is a whole modeling universe by itself. It contains a vast array of functions that can be applied to multiple Targets, resulting in hundreds of combinations of modeling possibilities within ZBrush.
To make the process easier to understand and not create a restricted set of tools, the ZModeler functions are split into two different elements: the Action and the Target.
- The Action is the function itself, such as Extrude Move, Bridge or Split.
- The Target is the element to which the Action will be applied. This can be individual points, edges or polygon as well as smart compound selections such as borders, PolyGroups, edge loops and more.
Taking the Poly Move Action as an example: The Action contains Targets such as Move Poly, Move All Mesh, Move Brush Radius, Move Curved Island, Move Flat Island, Move Island, Move PolyGroup All, Move PolyGroup Island, etc. (More may also be added in the future.) The same Move Action when applied to points or edges is associated with different Targets, offering even more tools for your modeling process.
It is up to you to define the tool you need for your modeling at the moment by combining the desired Action with the best Target.
To display the list of Actions and Targets, you must have the ZModeler brush selected and hover over a point, edge, or polygon of a Polymesh 3D model. By right-clicking or pressing the space bar, the ZModeler context pop-up menu will appear, displaying Actions with Targets beneath them.
The displayed Actions and Target will depend upon exactly what the cursor is hovering over, with different items being show for points, edges or polygons. If you are looking for a function that you can’t find, it may be because you were not hovering over the correct part of the mesh before opening the ZModeler pop-up menu. | http://docs.pixologic.com/user-guide/3d-modeling/modeling-basics/creating-meshes/zmodeler/actions-and-targets/ | 2018-08-14T13:26:30 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.pixologic.com |
Anonymous chat servers
Connect to anonymous IRC server
You can connect to IRC servers in I2P by using Socks proxy. By default, it listens at
127.0.0.1:4447
(look at configuration docs for details).
Configure your IRC client to use this Socks proxy and connect to I2P servers just like to any other servers.
Alternatively, you may want to create client I2P tunnel to specific server. This way, i2pd will "bind" IRC server port on your computer and you will be able to connect to server without modifying any IRC client settings.
To connect to IRC server at irc.ilita.i2p:6667, add this to ~/.i2pd/tunnels.conf:
[IRC2] type = client address = 127.0.0.1 port = 6669 destination = irc.ilita.i2p destinationport = 6667 #keys = irc-client-key.dat
Restart i2pd, then connect to irc://127.0.0.1:6669 with your IRC client.
Running anonymous IRC server
1) Run your IRC server software and find out which host:port it uses (for example, 127.0.0.1:5555).
For small private IRC servers you can use miniircd, for large public networks UnreadIRCd.
2) Configure i2pd to create IRC server tunnel.
Simplest case, if your server does not support WebIRC, add this to ~/.i2pd/tunnels.conf:
[anon-chatserver] type = irc host = 127.0.0.1 port = 5555 keys = chatserver-key.dat
And that is it.
Alternatively, if your IRC server supports WebIRC, for example, UnreadIRCd, put this into UnrealIRCd config:
webirc { mask 127.0.0.1; password your_password; };
Also change line:
modes-on-connect "+ixw";
to
modes-on-connect "+iw";
And this in ~/.i2pd/tunnels.conf:
[anon-chatserver] type = irc host = 127.0.0.1 port = 5555 keys = chatserver-key.dat webircpassword = your_password
3) Restart i2pd.
4) Find b32 destination of your anonymous IRC server.
Go to webconsole -> I2P tunnels page. Look for Sever tunnels and you will see address like \<long random string>.b32.i2p next to anon-chatserver.
Clients will use this address to connect to your server anonymously. | https://i2pd.readthedocs.io/en/latest/tutorials/irc/ | 2018-08-14T14:21:29 | CC-MAIN-2018-34 | 1534221209040.29 | [] | i2pd.readthedocs.io |
If… we don’t have an i-doc sourcebook – is this a problem?
Reginald Blackledge believes it is.
For the last 8 years, Reg has been trying to solve this problem. You may remember the website WiredMovie… well that was one of Reg’s attempts and prototypes. Reg has now launched FilmKode.com: a new platform to connect people in our industry. I invite you to check it out and to help him populate it with information relevant to you, so that it becomes a useful tool for us all.
The following is an interview I did with Reg because I wanted to understand the genesis of his project and the purpose of his new FilmKode website.
Sandra Gaudenzi: I am curious about the story behind WiredMovie & FilmKode: when and why did you created them? What was your vision when you started it?
Reginald Blackledge: The original inspiration for the WiredMovie (WM) concept, and my newest endeavor, FilmKode.com, goes back a few years to my master’s thesis titled, “Post- Documentary.” As part of that thesis, I was curating international news and projects related to interactive-documentary filmmaking, and sharing that information via social media.
At the same time, I was planning to enter an Ed.D. or Ph.D. program at Teacher’s College at Columbia, or Universitat Oberta de Catalunya in Barcelonia, and my thesis project was to serve as a vehicle for informing my academic endeavors, as well as the foundation for my doctoral proposal: “Post-Documentary: Emergent nonfiction media practices in the digital world.”
The concept for WM came about after graduate school, and my return to Southern California to build a design agency. Because Post-Documentary’s Twitter followers had grown to over 700 in a short period of time, I was wrestling with whether or not to keep the account active, since it was time consuming. But, because of the encouraging follower feedback, I decided to maintain the account, and began exploring ways to best serve its followers, and the emerging i-doc filmmaking community as a whole.
“WM was centered on four problems I’d identified within emergent filmmaking (e.g. i-docs): 1. public awareness, 2. audience engagement, 3. distribution, and 4. monetization.”
SG: Lesson learned from WiredMovie?
MRB: When starting any type of business venture, rule number one is to solve someone’s problem, and with that in mind, the initial charter for WM was centered on four problems I’d identified within emergent filmmaking (e.g. i-docs): 1. public awareness, 2. audience engagement, 3. distribution, and 4. monetization.
WM went through 4 iterations, each built on a slightly different platform with a unique feature set, based on user feedback, and evolving business models. Each of the iterations directly correlated to the core problems defined in the original charter, as outlined earlier.
After collecting 6 months of data, the site had nearly 16,000 organic page views with an average bounce rate just below 35%. And, 44% of sessions were from returning visitors; the average session duration was about 7 minutes. Said differently, the metrics suggested that without any advertising, people were visiting the site (90 per day on average), the majority of visitors were finding the site relevant, and nearly half were returning regularly. So, with these and other metrics, I felt that there was ample interest to continue moving forward with the project.
Keeping aligned with the original goals, WM version 2.0 was enhanced to include several new features intended to target both the general public, and the production community. Although, after 3 additional months, the data for version 2.0 suggested that while people were still visiting, it was clearly moving in the wrong direction with a higher bounce rate, and lower user engagement.
For WM version 3.0, the site was overhauled, this time built on a social content sharing platform, and included a name change from WiredMOV to WiredMovie. The final two iterations of the site incorporated social networking, social news sharing (similar to reddit or Digg) along with several gamification aspects. Additionally, “WiredMovie PRO” was introduced to test out demand for i-doc promotion, distribution and monetization services.
WM version 4.0, the final iteration, was live through the end 2015, and outperformed all previous versions for overall visitors, however, there was a lack of traction for social networking, and social sharing, despite my attempts to encourage participation.
SG: Why moving to FilmKode.com? What is the aim of the project?
RB: First, I’d like to point out, just so your readers understand, WM, and now FilmKode.com, are “passion projects”, and I dedicate time to them in-between working on client websites, or digital marketing activities at my design firm, IntraActif. And, these enterprises are often diverse and very personal; for example, another of my projects is SoilSurfer.com, chartered with encouraging urban agriculture and sustainability, by connecting landowners with urban farmers.
Because there’s only so much time I can dedicate to ancillary projects, I follow the axiom to “fail fast.” In the startup realm, this means that if something isn’t going to work, it’s crucial to move onto a new idea or direction as quickly as possible.
“I returned to square one equipped with a new perspective”
So why the pivot (i.e. a dramatic change in direction) to FilmKode.com? In retrospect, I think WM was trying to solve too many problems, and ultimately, problems emergent filmmakers weren’t willing [or didn’t need] to pay someone to solve.
So, I returned to square one equipped with a new perspective, but this time, rather than trying to solve multiple problems, the aim would be to identify and focus on a specific problem. During the ideation process, I came up with several potential ideas, but one stood out among the others: something very basic, yet affecting everyone working with i-docs, virtual reality, and emergent [interactive] filmmaking production.
What was it? The lack of a production “sourcebook”.
Why is this a problem? Because, there’s not an easy way to keep up with the rapid expansion of tools, technologies, services, and new organizations within this budding field, and a production sourcebook could solve that.
Reflecting on my time at university, every film student had a production sourcebook (sometimes called a “production guide” or “guidebook”), and this annually updated index was a directory of companies, studios, tools, and services for filmmakers. Eventually, these physical sourcebooks morphed into websites, and are still used by filmmakers today, but there isn’t one serving the needs of the i-docs and emergent [interactive] filmmaking community.
“There’s not an easy way to keep up with the rapid expansion of tools, technologies, services, and new organizations within this budding field, and a production sourcebook could solve that.”
As I started researching the idea for a specialized online sourcebook, I discovered that nearly every i-doc related website–non-profit and for-profit a like–had some form of “resources” or “tools” page, but there wasn’t a central hub for this information. So, from there, I socialized the idea a bit more, and with the additional feedback, began work on FilmKode.com.
SG: What is FilmKode.com bringing to us, academic and practitioners and how should we use it?
RB: FilmKode.com was created to help digital storytellers navigate the expanding list of companies, institutions, products, and services within emergent filmmaking. As an online sourcebook, FilmKode.com is intended to connect filmmakers with tool makers, service providers, and related organizations.
The benefit is twofold: 1) Filmmakers have an easy way to find resources, and 2) vendors and organizations have an easy way to reach filmmakers, and promote their offerings. And, as an added benefit for digital storytellers, FilmKode Canal, lets artists and filmmakers submit and showcase their work for free, helping with SEO and audience engagement.
FilmKode.com members can add free or enhanced listings, and each listing comes with the ability to add classifieds, events, digital downloads, and a metrics dashboard with built-in analytics; this way, anyone who adds a listing can see how well it’s performing. These “listings” are seen by anyone who visits FilmKode.com, and convenient for providing feedback and reviews, not unlike Yelp.
Below, I’ve outlined two example scenarios for how 1) a company, and 2) how a creative might use FilmKode.com:
Example 1: “ACME” (A business)
In the first example, lets say that “ACME” is a company that makes tools for digital storytellers. When they add a listing to FilmKode.com, they can do the following:
1. Add a listing for “ACME”
2. Add a company summary and meta data
3. Add downloadable brochure(s) for their products
4. Add contact information, and link to their website
5. Add a an upcoming workshop event to the events calendar
6. Add products, product images, price and links to an external store
7. Add links to social media channels
8. Receive contact requests from potential customers
9 Receive, and respond to customer ratings and comments
10. Review an analytics dashboard to see impressions, clicks, views, etc.
Example 2: Creatives (A Filmmaker)
In the second example, lets say that “Kelly” is a filmmaker embarking on a new project. Kelly can go to FilmKode.com and do the following:
1. Search for companies, tools, services and other resources
2. Contact “ACME” and ask questions
3. Visit a vendor’s website
4. Download product specs
5. Purchase products directly from a vendor listing
6. Add her interactive documentary to FilmKode Canal (Free)
7. Add production credits, images and/or PR items
8. Add a link to her i-doc’s website or App
9. Add contact information, and social media links
10. Add her i-doc’s premiere to the events calendar
11. Write a review about a tool she liked
12. Read a review about a tool she’s thinking about purchasing
13. Bookmark a tool to her favorites list
14. Add a listing for her own studio, or service
15. For her own studio or service, do anything that “ACME” did in Example 1
SG: Your vision: where would you want FilmKode.com to be in three years time?
RB: Well, given the rapid nature of emergent filmmaking, technology, and the Internet in general, it would be difficult to define an outcome so far out. But, if I had to look beyond the initial proof of concept goals I’ve set for FilmKode.com, then I’d want it to be a community marketplace where filmmakers, and tool makers [and other organizations] come and make transactions (both non-profit, and for-profit), and ultimately kindle economic growth within this unique production community.
Anyone interested in finding tools and resources, or adding their own tools and resources, as a business or creative, can join FilmKode free at:.
SG: Actually Reg, would you tell us some more about you? Your background, your hopes & dreams…
RB: Until age 17, I spent the better part of childhood on a wheat farm in rural Nebraska, where my father worked an agricultural pilot, and wheat farmer. My interest in filmmaking arose at an early age after my sister and I appeared, by chance, as extras in a television series being shot at a local airshow. For me, seeing the big 35mm motion picture cameras in action was awe inspiring, and after the experience, I started reading more about cinematography.
After high school, I applied to the University of California, Los Angeles (UCLA), and University of Southern California (USC) because of their exceptional film programs. And, I was accepted into USC, but the costly tuition prevented me from attending. Instead, I studied at Arizona State University, where I also worked as a media tech, film splicer, and projectionist to help pay tuition.
While at ASU, I wrote and directed several student films, including my final thesis film, a documentary titled, “The Last Resort”, about Castle Hot Springs, an abandoned circa- 1920’s luxury resort, concealed in the Arizona desert. The resort was a getaway for celebrity A-listers, and political figures in its heyday.
“I was inspired to encourage the use of storytelling within our game degree program”
After graduating, I spent time working as an Event Videographer, along with a few other production gigs, but wasn’t able to find steady work. One of my sisters had invited me to move up to the Northwest, and offered a place to stay while looking for work; eventually, I walked into a temp agency in Seattle, and was hired as a computer analyst. My first real job was with Wizards of the Coast, a game startup, now owned by Hasbro. Because I’d been using non-linear editing systems at school, to edit student films, I knew my way around an OS pretty well.
Working at a game company was a lot more fun than I expected, given the creative and lively culture. In the next few years, I made my way to Microsoft, and eventually, an opportunity lead me to California, where I joined an Internet startup. The timing was awful given the dot-com bust, but fortunately for me, Electronic Arts and Sony were hiring.
For over 10 years, I managed centralized-production services groups, within product development across Asia, Europe and the US, and watched as the industry expanded into a juggernaut. At one point, I was managing a global organization of over 1000 people, and spending much of my time traveling. In 2008, I stepped away from the industry to design the inaugural interactive (game) degree program for The Los Angles Film School.
“”i-docs” seemed to be a near- perfect union of story, interactivity, and filmmaking;”
One of the great things about working at a film school was being able to fraternize with the faculty, many whom remain active in the industry, and I was inspired to encourage the use of storytelling within our game degree program. Like many great films, games and interactivity have the potential to bring awareness to important social, ecological and humanitarian issues, and not solely for the purpose of entertainment.
Around this time, I discovered interactive-documentaries while researching “serious- games” and realized that these so called “webdocs” or “i-docs” seemed to be a near- perfect union of story, interactivity, and filmmaking; perhaps, and an ideal vehicle for public awareness and social action. Shortly thereafter, I started work on a master’s degree at The New School in New York City, with the hope of someday creating an academic program that combined story, interactivity, and filmmaking.
Thank you Reg for your time, and good luck with FilmKode!
Any comments on how to improve FilmKode, or on what you would like to add to it, please comment below. | http://i-docs.org/2016/04/25/would-you-use-an-i-doc-sourcebook/ | 2018-08-14T14:20:38 | CC-MAIN-2018-34 | 1534221209040.29 | [] | i-docs.org |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Module: Aws::Organizations
- Defined in:
- (unknown)
Overview
This module provides a client for making API requests to AWS Organizations.
Aws::Organizations::Client
The Client class provides one-to-one mapping for each API operation.
organizations = Aws::Organizations::Client.new(region: 'us-east-1') organizations.operation_names #=> [:accept_handshake, :attach_policy, :cancel_handshake, :create_account, ...]
Each API operation method accepts a hash of request parameters and returns a response object.
resp = organizations.accept_handshake(params)
See Client for more information.
Aws::Organizations::Errors
Errors returned from AWS Organizations are defined in the Errors module and extend Errors::ServiceError.
begin # do stuff rescue Aws::Organizations::Errors::ServiceError # rescues all errors returned by AWS Organizations end
See Errors for more information.
Defined Under Namespace
Modules: Errors, Types Classes: Client | https://docs.aws.amazon.com/sdkforruby/api/Aws/Organizations.html | 2018-08-14T14:43:43 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.aws.amazon.com |
Getting Started with Basqet
Basqet is a payment gateway that makes it easy for your business to accept crypto payments from customers, without ever having to worry about chargebacks, and credit card frauds.
What can you do with basqet ?
- Create payment pages for customers
- Accept crypto deposits in 5+ cryptocurrencies.
- Get paid easily from your customers.
- Send crypto invoices to your customers.
Explore demo
Updated 7 months ago
Did this page help you? | https://docs.basqet.com/docs/getting-started | 2022-09-25T07:53:30 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.basqet.com |
BlockNeighborNotify
Link to blockneighbornotify
The BlockNeighborNotify Event is fired when a physics update occurs on a block. This event acts as a way for mods to detect physics updates, in the same way a BUD switch does. This event is only called on the server.
Event Class
Link to event-class
You will need to cast the event in the function header as this class:
crafttweaker.event.BlockNeighborNotify | https://docs.blamejared.com/1.12/en/Vanilla/Events/Events/BlockNeighborNotify | 2022-09-25T07:39:47 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.blamejared.com |
Call Models and Flows
Legend
All parties shown in a call scenario, except where stated explicitly, are considered internal and are monitored by T-Server. If one or more external parties participated in the call, the following apply:
- T-Server will not distribute any events to the external (nonmonitored) party.
- T-Server may not have any information about the nonmonitored party, so its reference may not be specified.
The following diagram illustrates a basic call model.
Activities like conference and transfer can be performed to an existing multi-party call (a conference call). When so, Party A is considered a "complex party" and the following apply:
- Events assigned to Party A, as shown in call scenarios, are sent to every party of "complex party."
- Reference to Party A in AttributeOtherDN are not present.
This is also represented in the following tables.
Since T-Library is a superset of functions, not every scenario described in this document is supported by every type of switch. For more details, see the T-Server Deployment Guide that applies to your T-Server/switch pair.
When more than one event is presented in one table cell, the order in which the events are distributed may vary.
Attributes ThirdPartyDN and ThirdPartyDN Role specify DNs to which two-step operations are initiated and completed.
A call is considered to be queued until either EventDiverted or EventAbandoned regarding the queue is generated.
Note the following comments in the call models:
*OPT—Optional.
*DIAL—May be a dialed number or is not present if T-Server has no information about the other party.
List of Call Models
- Basic Call Models
- Simple Call Model
- Connection-Establishing Phase (Internal/Inbound Call)
- Connection-Establishing Phase (Internal/Inbound Call to ACD)
- Connection-Establishing Phase (Internal/Inbound Call Queued to Multiple ACDs)
- Connection-Establishing Phase (Internal/Inbound Call with Call Parking)
- Connection-Establishing Phase (Internal/Inbound Call with Routing—RouteQueue Case)
- Connection-Establishing Phase (Internal/Inbound Call with Routing)
- Connection-Establishing Phase (Internal/Inbound Call with Routing Outbound)
- Connection-Establishing Phase (Outbound Call)
- Connection-Establishing Phase While On Hold (Internal/Outbound Call)
- Releasing Calls
- Holding, Transferring, and Conferencing
- Hold/Retrieve Function, Consulted Party Answers
- Hold/Retrieve Function, Consulted Party Does Not Answer
- Single-Step Transfer
- Single-Step Transfer (Outbound)
- Mute Transfer
- Two-Step Transfer: Complete After Consulted Party Answers
- Two-Step Transfer: Complete Before Consulted Party Answers (Blind)
- Two-Step Transfer: to ACD
- Two-Step Transfer: to a Routing Point
- Trunk Optimization: Trunk Anti-Tromboning
- Single-Step Conference
- Conference
- Blind Conference (Complete Before Consulted Party Answers)
- Conference with Two Incoming Calls Using TMergeCalls
- Special case: Multi-site ISCC Transfers and Conferences
- Handling User Data
- Special Cases
- Outbound Call to a Busy Destination
- Rejected Call
- Internal Call to Destination with DND Activated
- Call Forwarding (on No Answer)
- Alternate-Call Service
- Reconnect-Call Service
- Redirect-Call Service
- Internal/Inbound Call with Bridged Appearance
- Outbound Call from Bridged Appearance
- Hold/Retrieve for Bridged Appearance
- Internal/Inbound Call Answerable by Several Agents (Party B Answers)
- Call Treatment with Routing
- Predictive Dialing
- Monitoring Calls
- Working With Queues
- Network T-Server Attended Transfer Call Flows
- Standard Network Call Initiation
- Consultation Leg Initiation, Specific Destination
- Failed Consultation: Specific Target
- Consultation Leg Initiation, URS Selected Destination
- Failed Consultation: URS Selected Destination
- Transfer/Conference Completion: Explicit
- Transfer Completion: Implicit
- Conference Completion
- Alternate Call Service
- Alternate Call Service with Transfer Completion
- Explicit Reconnect
- Implicit Reconnection (by SCP)
- Implicit Reconnection (by Network T-Server)
- Caller Abandonment
- Network Single-Step Transfer
- Premature Disconnection, One Variation
- Premature Disconnection, a Second Variation
- Transactional Error
- Shared Call Appearance | https://docs.genesys.com/Documentation/System/latest/GenEM/CallModelsandFlows | 2022-09-25T08:02:41 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.genesys.com |
The OpenShift Container Platform HAProxy router scales to optimize performance.
The OpenShift Container Platform Ingress Controller, or router, is the Ingress point for all external traffic destined for OpenShift Container Platform.
For more information on Ingress sharding, see Configuring Ingress Controller sharding by using route labels and Configuring Ingress Controller sharding by using namespace labels.
OpenShift Container Platform. | https://docs.openshift.com/container-platform/4.5/scalability_and_performance/routing-optimization.html | 2022-09-25T07:14:45 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.openshift.com |
Sends .
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
publish [--topic-arn <value>] [--target-arn <value>] [--phone-number <value>] --message <value> [--subject <value>] [--message-structure <value>] [--message-attributes .
Warning
The Message parameter is always a string. If you set MessageStructure to json , you must string-encode the Message parameter.
--message-attributes (map)
Message attributes for Publish action.
Shorthand Syntax:
KeyName1=DataType=string,StringValue=string,BinaryValue=blob,KeyName2=DataType=string,StringValue=string,BinaryValue=blob
JSON Syntax:
{"string": { "DataType": "string", "StringValue": "string", "BinaryValue": publishes a message to an SNS topic named my-topic:
aws sns publish --topic-arn "arn:aws:sns:us-west-2:0123456789012:my-topic" --message
message.txt is a text file containing the message to publish:
Hello World Second Line
Putting the message in a text file allows you to include line breaks. | https://docs.aws.amazon.com/cli/latest/reference/sns/publish.html | 2019-11-11T22:23:57 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.aws.amazon.com |
Metrics & Monitoring
In addition to Logging & Auditing, meshStack supports metric monitoring of production
deployments using Prometheus. The following components and services therefore expose
/prometheus URL endpoints for metrics scraping.
- meshfed-api
- meshfed-replicator-aws
ConfigurationConfiguration
Prometheus endpoints are secured via HTTP Basic Auth. To enable Prometheus monitoring, configure a user for Prometheus
in the
application.yml of the service.
auth: basic: realm: <COMPONENT> users: - username: <USERNAME> password: <PASSWORD> authorities: - ACTUATOR
Realm is the RFC-1945 part of the authorization challenge and can be an arbitrary value for the Prometheus page. The entered username and password must be used set in your Prometheus scraper configuration. | https://docs.meshcloud.io/docs/meshstack.monitoring.html | 2019-11-11T22:23:39 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.meshcloud.io |
Backup and Restore Operations for Reporting Services
This article provides an overview of all data files used in a Reporting Services installation and describes when and how you should back up the files. Developing a backup and restore plan for the report server database files is the most important part of a recovery strategy. However, a more complete recovery strategy would include backups of the encryption keys, custom assemblies or extensions, configuration files, and source files for reports.
Applies to: Reporting Services Native Mode | Reporting Services SharePoint Mode
Note
Reporting Services integration with SharePoint is no longer available after SQL Server 2016. articles:
Backing Up the Report Server Databases
Because a report server is a stateless server, all application data is stored in the reportserver and reportservertempdb databases that run on a SQL Server Database Engine instance. You can back up the reportserver and reportservertempdb databases using one of the supported methods for backing up SQL Server databases. Here are some recommendations specific to the report server databases:
Use the full recovery model to back up the reportserver database.
Use the simple recovery model to back up the reportservertempdb database.
You can use different backup schedules for each database. The only reason to back up Back Up and Restore of SQL Server Databases.
Important
If your report server is in SharePoint mode, there are additional databases to be concerned with, including SharePoint configuration databases and the Reporting Services alerting database. In SharePoint mode, three databases are created for each Reporting Services service application. The reportserver, reportservertempdb, and dataalerting databases. For more information, see Backup and Restore Reporting Services SharePoint Service Applications
Backing Up the Encryption Keys
You should back up the encryption keys when you configure a Reporting Services installation for the first time. You should also back up the keys any time you change the identity of the service accounts or rename the computer. For more information, see Back Up and Restore Reporting Services Encryption Keys.
For SharePoint mode report servers, see the "Key Management" section of Manage a Reporting Services SharePoint Service Application.
Backing Up the Configuration Files
Reporting Services uses configuration files to store application settings. You should back up the files when you first configure the server and after you deploy any custom extensions. Files to back up include:
Rsreportserver.config
Rssvrpolicy.config
Rsmgrpolicy.config
Reportingservicesservice.exe.config
Web.config for the Report Server ASP.NET application
Machine.config for ASP.NET
Backing Up Data Files
Back up the files that you create and maintain in Report Designer. These include report definition (.rdl) files, shared data source (.rds) files, data view (.dv) files, data source (.ds) files, report server project (.rptproj) files, and report solution (.sln) files.
Remember to back up any script files (.rss) that you created for administration or deployment tasks.
Verify that you have a backup copy of any custom extensions and custom assemblies you are using.
Next steps
Report Server Database
Reporting Services Configuration Files
rskeymgmt Utility
Copy Databases with Backup and Restore
Administer a Report Server Database
Configure and Manage Encryption Keys
More questions? Try asking the Reporting Services forum | https://docs.microsoft.com/en-us/sql/reporting-services/install-windows/backup-and-restore-operations-for-reporting-services?redirectedfrom=MSDN&view=sql-server-ver15 | 2019-11-11T22:27:24 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.microsoft.com |
Create gMSAs for Windows containers
Windows-based networks commonly use Active Directory (AD) to facilitate authentication and authorization between users, computers, and other network resources. Enterprise application developers often design their apps to be AD-integrated and run on domain-joined servers to take advantage of Integrated Windows Authentication, which makes it easy for users and other services to automatically and transparently sign in to the application with their identities.
Although Windows containers cannot be domain joined, they can still use Active Directory domain identities to support various authentication scenarios.
To achieve this, you can configure a Windows container to run with a group Managed Service Account (gMSA), which is a special type of service account introduced in Windows Server 2012 designed to allow multiple computers to share an identity without needing to know its password.
When you run a container with a gMSA, the container host retrieves the gMSA password from an Active Directory domain controller and gives it to the container instance. The container will use the gMSA credentials whenever its computer account (SYSTEM) needs to access network resources.
This article explains how to start using Active Directory group managed service accounts with Windows containers.
Prerequisites
To run a Windows container with a group managed service account, you will need the following:
- An Active Directory domain with at least one domain controller running Windows Server 2012 or later. There are no forest or domain functional level requirements to use gMSAs, but the gMSA passwords can only be distributed by domain controllers running Windows Server 2012 or later. For more information, see Active Directory requirements for gMSAs.
- Permission to create a gMSA account. To create a gMSA account, you'll need to be a Domain Administrator or use an account that has been delegated the Create msDS-GroupManagedServiceAccount objects permission.
- Access to the internet to download the CredentialSpec PowerShell module. If you're working in a disconnected environment, you can save the module on a computer with internet access and copy it to your development machine or container host, as shown in the sketch after this list.
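For that last prerequisite, the following is a minimal sketch of pulling the module from the PowerShell Gallery with the standard PowerShellGet cmdlets. The module name comes from the prerequisite above; the destination path is only an example, so adjust it for your environment.

# On a machine with internet access, install the module directly
Install-Module -Name CredentialSpec

# Or, for a disconnected container host, save the module to a folder you can copy over later
Save-Module -Name CredentialSpec -Path C:\Temp\Modules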
One-time preparation of Active Directory
If you have not already created a gMSA in your domain, you'll need to generate the Key Distribution Service (KDS) root key. The KDS is responsible for creating, rotating, and releasing the gMSA password to authorized hosts. When a container host needs to use the gMSA to run a container, it will contact the KDS to retrieve the current password.
To check if the KDS root key has already been created, run the following PowerShell cmdlet as a domain administrator on a domain controller or domain member with the AD PowerShell tools installed:
Get-KdsRootKey
If the command returns a key ID, you're all set and can skip ahead to the create a group managed service account section. Otherwise, continue on to create the KDS root key.
In a production environment or test environment with multiple domain controllers, run the following cmdlet in PowerShell as a Domain Administrator to create the KDS root key.
# For production environments
Add-KdsRootKey -EffectiveImmediately
Although the command implies the key will be effective immediately, you will need to wait 10 hours before the KDS root key is replicated and available for use on all domain controllers.
If you only have one domain controller in your domain, you can expedite the process by setting the key to be effective 10 hours ago.
Important
Don't use this technique in a production environment.
# For single-DC test environments ONLY
Add-KdsRootKey -EffectiveTime (Get-Date).AddHours(-10)
Create a group Managed Service Account
Every container that uses Integrated Windows Authentication needs at least one gMSA. The primary gMSA is used whenever apps running as a System or a Network Service access resources on the network. The name of the gMSA will become the container's name on the network, regardless of the hostname assigned to the container. Containers can also be configured with additional gMSAs, in case you want to run a service or application in the container as a different identity from the container computer account.
When you create a gMSA, you also create a shared identity that can be used simultaneously across many different machines. Access to the gMSA password is protected by an Active Directory Access Control List. We recommend creating a security group for each gMSA account and adding the relevant container hosts to the security group to limit access to the password.
Finally, since containers don't automatically register any Service Principal Names (SPN), you will need to manually create at least a host SPN for your gMSA account.
Typically, the host or http SPN is registered using the same name as the gMSA account, but you may need to use a different service name if clients access the containerized application from behind a load balancer or a DNS name that's different from the gMSA name.
For example, if the gMSA account is named "WebApp01" but your users access the site at
mysite.contoso.com, you should register a
http/mysite.contoso.com SPN on the gMSA account.
Some applications may require additional SPNs for their unique protocols. For instance, SQL Server requires the
MSSQLSvc/hostname SPN.
The following table lists the required attributes for creating a gMSA.
Once you've decided on the name for your gMSA, run the following cmdlets in PowerShell to create the security group and gMSA.
Tip
You'll need to use an account that belongs to the Domain Admins security group or has been delegated the Create msDS-GroupManagedServiceAccount objects permission to run the following commands. The New-ADServiceAccount cmdlet is part of the AD PowerShell Tools from Remote Server Administration Tools.
# Replace 'WebApp01' and 'contoso.com' with your own gMSA and domain names, respectively

# Create the security group
New-ADGroup -Name "WebApp01 Authorized Hosts" -SamAccountName "WebApp01Hosts" -GroupScope DomainLocal

# Create the gMSA
New-ADServiceAccount -Name "WebApp01" -DnsHostName "WebApp01.contoso.com" -ServicePrincipalNames "host/WebApp01", "host/WebApp01.contoso.com" -PrincipalsAllowedToRetrieveManagedPassword "WebApp01Hosts"

# Add your container hosts to the security group
Add-ADGroupMember -Identity "WebApp01Hosts" -Members "ContainerHost01", "ContainerHost02", "ContainerHost03"
We recommend you create separate gMSA accounts for your dev, test, and production environments.
Prepare your container host
Each container host that will run a Windows container with a gMSA must be domain joined and have access to retrieve the gMSA password.
Join your computer to your Active Directory domain.
Ensure your host belongs to the security group controlling access to the gMSA password.
Restart the computer so it gets its new group membership.
Set up Docker Desktop for Windows 10 or Docker for Windows Server.
(Recommended) Verify the host can use the gMSA account by running Test-ADServiceAccount. If the command returns False, follow the troubleshooting instructions.
Test-ADServiceAccount WebApp01
Create a credential spec
A credential spec file is a JSON document that contains metadata about the gMSA account(s) you want a container to use. By keeping the identity configuration separate from the container image, you can change which gMSA the container uses by simply swapping the credential spec file, no code changes necessary.
The credential spec file is created using the CredentialSpec PowerShell module on a domain-joined container host. Once you've created the file, you can copy it to other container hosts or your container orchestrator. The credential spec file does not contain any secrets, such as the gMSA password, since the container host retrieves the gMSA on behalf of the container.
Docker expects to find the credential spec file under the CredentialSpecs directory in the Docker data directory. In a default installation, you'll find this folder at
C:\ProgramData\Docker\CredentialSpecs.
To create a credential spec file on your container host:
Install the RSAT AD PowerShell tools
- For Windows Server, run Install-WindowsFeature RSAT-AD-PowerShell.
- For Windows 10, version 1809 or later, run Add-WindowsCapability -Online -Name 'Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0'.
- For older versions of Windows 10, see.
Run the following cmdlet to install the latest version of the CredentialSpec PowerShell module:
Install-Module CredentialSpec
If you don't have internet access on your container host, run
Save-Module CredentialSpec on an internet-connected machine and copy the module folder to
C:\Program Files\WindowsPowerShell\Modules or another location in
$env:PSModulePath on the container host.
Run the following cmdlet to create the new credential spec file:
New-CredentialSpec -AccountName WebApp01
By default, the cmdlet will create a cred spec using the provided gMSA name as the computer account for the container. The file will be saved in the Docker CredentialSpecs directory using the gMSA domain and account name for the filename.
You can create a credential spec that includes additional gMSA accounts if you're running a service or process as a secondary gMSA in the container. To do that, use the
-AdditionalAccounts parameter:
New-CredentialSpec -AccountName WebApp01 -AdditionalAccounts LogAgentSvc, OtherSvc
For a full list of supported parameters, run
Get-Help New-CredentialSpec.
You can show a list of all credential specs and their full path with the following cmdlet:
Get-CredentialSpec
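Because the generated credential spec is plain JSON, it can also be sanity-checked outside of Docker. The short Python sketch below is only an illustration and is not part of the official tooling; the file name and the ActiveDirectoryConfig/GroupManagedServiceAccounts field names are assumptions based on a typical credential spec, so verify them against your own file.

import json
from pathlib import Path

# Hypothetical path - adjust to match the file that Get-CredentialSpec reports.
spec_path = Path(r"C:\ProgramData\Docker\CredentialSpecs\contoso_webapp01.json")

with spec_path.open() as f:
    spec = json.load(f)

# The field names below follow the usual credential spec layout; they are
# assumptions, not something this article guarantees.
accounts = spec.get("ActiveDirectoryConfig", {}).get("GroupManagedServiceAccounts", [])
for account in accounts:
    print(account.get("Name"), account.get("Scope"))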
Next steps
Now that you've set up your gMSA account, you can use it to:
If you run into any issues during setup, check our troubleshooting guide for possible solutions.
Additional resources
- To learn more about gMSAs, see the group Managed Service Accounts overview.
- For a video demonstration, watch our recorded demo from Ignite 2016.
Using Snapshots with Replication
Some replications, especially those that require a long time to finish, can fail because source files are modified during the replication process. You can prevent such failures by using Snapshots in conjunction with Replication. This use of snapshots is automatic with CDH versions 5.0 and higher. To take advantage of this, you must enable the relevant directories for snapshots (also called making the directory snapshottable).
When the replication job runs, it checks to see whether the specified source directory is snapshottable. Before replicating any files, the replication job creates point-in-time snapshots of these directories and uses them as the source for file copies. This ensures that the replicated data is consistent with the source data as of the start of the replication job. The replication job deletes these snapshots after the replication is complete.
New Features
- Developer Role incorporated into new site level permission roles.
- In addition to the existing Site Admin role, users can now be designated as Users or Developers. Developers have permissions that allow uploading gears to the site, which previously required Site Admin access.
- Project Download enhancements.
- Users can now choose to download analyses and metadata (in json files) when they select `Project Download`
- Gear configuration modal now supports drop-downs when applicable.
- Admins now have the option to clear permissions of disabled users
- Connectors support custom de-identification profiles in the same manner as the CLI
- CLI templated imports allow filename pattern matching
Key Fixes
- Improved warning message when attempting to disable a device from Device List
- Matlab SDK Zip README now includes 'javaclasspath' details
- Addressed issue preventing some html QA reports from loading in the viewer
- Fixed issue preventing project and subject changes from immediately being reflected
- Addressed issue where project name and group name were not being displayed in certain cases
- Restriction preventing gears with fixed inputs from auto-upgrading has been removed
- Search indexing speed improvements | https://docs.flywheel.io/hc/en-us/articles/360026756013-EM-8-7-x-Release-Notes | 2019-11-11T22:09:18 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.flywheel.io |
Student Transfers – ALOP Identified Serving School
Due to the ISBE requirements for student enrollment and attendance reporting, students attending an ALOP Identified Serving school must have their PowerSchool enrollment reflect where they are attending. This process applies to transfer from home school to ALOP identified serving school and back to home school. This process may also include Court Placed student enrollments.
- The home school (APSS/Guidance Director) and the off campus facilitator initiating the change in a student’s enrollment notifies the Assistant to Off campus facilitator/Data/Registrar that the student will be changing enrollment and the effective dates.
- Assistant to Off Campus facilitator – Exit student enrollment using state exit code 17 (Change in serving school or FTE). You must enter an exit comment of “Exited to attend ‘Name of serving school’ ” or “Exited ‘Name of serving school’ to attend home school”. Exit/enroll dates must be 1 day apart.
- Assistant to Off Campus facilitator – Enter a Web Help ticket to IT PowerSchool admin for state reporting for the student exit in ISBE SIS system.
- Indicate that this is an Off Campus transfer along with details of the transfer.
- Assistant to Off Campus facilitator will re-enroll the student when all placement procedures are complete using state entry code 04- (Transfer in from within District.) Use entry comment ‘Attending name of serving school;non claim code ALOP. Enter enrollment type non claimable for attendance reporting.
- If a student has returned to the home school from the ALOP identified serving school, follow the above steps to exit and re-enroll in the home school. Non claim code ALOP is NOT entered in the transfer comment. Change enrollment type to Claimable.
- Assistant to Off Campus facilitator will send out a final email to data specialist/registrar when student enrollment is complete.
- Data specialist – enter the student’s off campus schedule. | https://docs.glenbard.org/index.php/uncategorized/student-transfers-alop-identified-serving-school/ | 2019-11-11T22:53:10 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.glenbard.org |
This procedure requires LogicalDOC to be able to access the Internet. If you are running an old version of the software or if LogicalDOC cannot access the Internet, please follow the manual patch guide.
1
List the available patches
2
Download the patch file
3
Run the patch
Now the patch is automatically executed by an external program, and during this process LogicalDOC may become unavailable (the time depends on several factors, but expect at least a couple of minutes).
4 | https://docs.logicaldoc.com/en/patch/patch-procedure | 2019-11-11T22:41:25 | CC-MAIN-2019-47 | 1573496664439.7 | [array(['/images/stories/en/patch-list.gif', None], dtype=object)
array(['/images/stories/en/patch-download.gif', None], dtype=object)] | docs.logicaldoc.com |
Event ID 77 — Print Migration Import Status
This is preliminary documentation and subject to change.
Event Details
Resolve
Install a printer port and restore the print queue settings
To install a printer port manually and set the printer settings:
- Open the Administrative Tools folder, and then double-click Print Management.
- Click Print Servers, click the print server you want to use, click Printers, right-click the printer for which the printer port was not imported, and then click Properties.
- Click the Ports tab, click Add Port, select the appropriate type of printer port, click New Port, and then follow the instructions on the screen.
- After installing the printer port, use the Properties dialog box to specify the appropriate printer settings. | https://docs.microsoft.com/en-us/previous-versions/orphan-topics/ws.10/dd337161(v=ws.10)?redirectedfrom=MSDN | 2019-11-11T23:07:34 | CC-MAIN-2019-47 | 1573496664439.7 | [array(['images/dd492040.yellow%28ws.10%29.jpg', 'yellow yellow'],
dtype=object) ] | docs.microsoft.com |
- 3.7. Publishing events from aggregate roots
- 3.8. Spring Data extensions
- 4. Query by Example
- 5. Auditing
- Appendix
Preface
1. Project metadata
Version control -
Bugtracker -
Release repository -
Milestone repository -
Snapshot repository -
Reference documentation
3.6. Custom implementations for Spring Data repositories
Often it is necessary to provide a custom implementation for a few repository methods. Spring Data repositories easily allow you to provide custom repository code and integrate it with generic CRUD abstraction and query method functionality.
3.7. Publishing events from aggregate roots
Entities managed by repositories are aggregate roots.
In a Domain-Driven Design application, these aggregate roots usually publish domain events.
Spring Data provides an annotation
@DomainEvents you can use on a method of your aggregate root to make that publication as easy as possible.
class AnAggregateRoot { @DomainEvents (1) Collection<Object> domainEvents() { // … return events you want to get published here } @AfterDomainEventsPublication (2) void callbackMethod() { // … potentially clean up domain events list } }
The methods will be called every time one of a Spring Data repository’s
save(…) methods is called.
3.8. Spring Data extensions
This section documents a set of Spring Data extensions that enable Spring Data usage in a variety of contexts. Currently most of the integration is targeted towards Spring MVC.
4. Query by Example
By default, all values set on the probe are expected to match. If you want to get results matching any of the predicates defined implicitly, use ExampleMatcher.matchingAny().
5. Auditing
Philosophy
Parsers are innately complicated and confusing. They're difficult to understand, difficult to write, and difficult to use. Even experts on the subject can become baffled by the nuances of these complicated state-machines.
Lark's mission is to make the process of writing them as simple and abstract as possible, by following these design principles:
Design Principles
Readability matters
Keep the grammar clean and simple
Don't force the user to decide on things that the parser can figure out on its own
Usability is more important than performance
Performance is still very important
Follow the Zen Of Python, whenever possible and applicable
In accordance with these principles, I arrived at the following design choices:
Design Choices
1. Separation of code and grammar
Grammars are the de-facto reference for your language, and for the structure of your parse-tree. For any non-trivial language, the conflation of code and grammar always turns out convoluted and difficult to read.
The grammars in Lark are EBNF-inspired, so they are especially easy to read & work with.
2. Always build a parse-tree (unless told not to)
Trees are always simpler to work with than state-machines.
Trees allow you to see the "state-machine" visually
Trees allow your computation to be aware of previous and future states
Trees allow you to process the parse in steps, instead of forcing you to do it all at once.
And anyway, every parse-tree can be replayed as a state-machine, so there is no loss of information.
See this answer in more detail here.
To improve performance, you can skip building the tree for LALR(1), by providing Lark with a transformer (see the JSON example).
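As a rough illustration of that trade-off, here is a small, self-contained sketch that is not from the original text: it parses a toy comma-separated number language with the Earley default, then switches to LALR(1) with a transformer so no intermediate tree is kept. The grammar and transformer are invented for the example.

from lark import Lark, Transformer

grammar = r"""
    start: NUMBER ("," NUMBER)*
    %import common.NUMBER
    %import common.WS
    %ignore WS
"""

class SumNumbers(Transformer):
    def start(self, items):
        # items are the matched NUMBER tokens; return their sum as the parse result
        return sum(float(tok) for tok in items)

# Earley (the default): accepts any context-free grammar and builds a parse-tree.
earley_parser = Lark(grammar)
tree = earley_parser.parse("1, 2, 3")
print(tree.pretty())

# LALR(1) with a transformer: faster, and skips building the tree entirely.
lalr_parser = Lark(grammar, parser="lalr", transformer=SumNumbers())
total = lalr_parser.parse("1, 2, 3")
print(total)  # 6.0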
3. Earley is the default
The Earley algorithm can accept any context-free grammar you throw at it (i.e. any grammar you can write in EBNF, it can parse). That makes it extremely friendly to beginners, who are not aware of the strange and arbitrary restrictions that LALR(1) places on its grammars.
As the users grow to understand the structure of their grammar, the scope of their target language, and their performance requirements, they may choose to switch over to LALR(1) to gain a huge performance boost, possibly at the cost of some language features.
In short, "Premature optimization is the root of all evil."
Other design features
Automatically resolve terminal collisions whenever possible
Automatically keep track of line & column numbers | https://lark-parser.readthedocs.io/en/latest/philosophy/ | 2019-11-11T22:23:35 | CC-MAIN-2019-47 | 1573496664439.7 | [] | lark-parser.readthedocs.io |
AWS Lambda Limits
AWS Lambda limits the amount of compute and storage resources that you can use to run and store functions. The following limits apply per-region and can be increased. To request an increase, use the Support Center console.
For details on concurrency and how Lambda scales your function concurrency in response to traffic, see AWS Lambda Function Scaling.
The following limits apply to function configuration, deployments, and execution. They cannot be changed.
Limits for other services, such as AWS Identity and Access Management, Amazon CloudFront (Lambda@Edge), and Amazon Virtual Private Cloud, can impact your Lambda functions. For more information, see AWS Service Limits and Using AWS Lambda with Other Services. | https://docs.amazonaws.cn/en_us/lambda/latest/dg/limits.html | 2019-11-11T22:50:33 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.amazonaws.cn |
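If you want to see where your account currently stands against these limits, one option (not part of the original page) is the Lambda API itself. The boto3 sketch below assumes AWS credentials and a default region are already configured in your environment.

import boto3

lambda_client = boto3.client("lambda")

# Returns account-wide limits and current usage (concurrency, code storage).
settings = lambda_client.get_account_settings()

limits = settings["AccountLimit"]
usage = settings["AccountUsage"]
print("Concurrent executions limit:", limits["ConcurrentExecutions"])
print("Unreserved concurrency:", limits["UnreservedConcurrentExecutions"])
print("Total code size used (bytes):", usage["TotalCodeSize"])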
Data Shaping
This section contains the following examples:
- How to: Add Custom Totals
- How to: Calculate Multiple Custom Totals with Custom Summary Type
- How to: Calculate Running Totals
- How to: Apply a Filter
- How to: Implement the Group Filter
- How to: Change the Prefilter's filter criteria in code
- How to: Display Underlying Records
- How to: Group Date-Time Values
- How to: Implement Custom Group Intervals
- How to: Implement Custom Summary
- How to: Sort Data by a Data Field
- How to: Sort Data by Individual Columns (Rows)
- How to: Sort Data by Individual Columns (Rows) in OLAP Mode
- How to: Sort Data in Server Mode Using Custom Sorting Algorithm
- How to: Sort Data by OLAP Member Properties
- How to: Prevent End-Users From Changing Filter Conditions
- How to: Replace Default Filter Items with Custom Ones
- How to: Add and Remove Items From Filter Drop-Down Lists
- How to: Sort Filter Drop-Down Items in a Custom Manner
- How to: Locate a Column (Row) Header By Its Column's (Row's) Summary Values
Hosting HTML pages from Domino
Summary
This is a simple example of how to host an HTML page on Domino. There are a number of ways to host web applications of course, but this example shows how you can integrate a simple HTTP server using python with Domino. You could also add JavaScript and other pages to your project. The example in this note just shows how you would start the server to support your page(s).
Files
You’ll need to create two files in your project (in addition to your
files required for your page such as
index.html etc)
app.sh
#!/usr/bin/env bash
python ./app.py
app.py
import SimpleHTTPServer
import SocketServer

PORT = 8888
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", PORT), Handler)
print "serving at port", PORT
httpd.serve_forever()
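The example above targets Python 2. If your Domino environment runs Python 3 (an assumption about your setup, not something the original files require), an equivalent app.py would use the http.server module instead:

# Python 3 variant of app.py (only needed if your environment runs Python 3)
import http.server
import socketserver

PORT = 8888
Handler = http.server.SimpleHTTPRequestHandler

with socketserver.TCPServer(("", PORT), Handler) as httpd:
    print("serving at port", PORT)
    httpd.serve_forever()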
Publishing
To publish your app, go to the left-hand sidebar in your browser and click Publish.
The Publish page should now be visible in your browser. Click the App tab. Your screen should now look something like this.
Click Publish to publish your app.
Getting started with App Publishing. | https://docs.dominodatalab.com/en/3.6/reference/publish/apps/advanced/Hosting_HTML_pages_from_Domino.html | 2019-11-11T21:51:46 | CC-MAIN-2019-47 | 1573496664439.7 | [array(['../../../../_images/publish_button.png', 'image0'], dtype=object)
array(['../../../../_images/mceclip0.png', 'image1'], dtype=object)] | docs.dominodatalab.com |
Set-SPServiceHostConfig
Syntax
Set-SPServiceHostConfig [-Identity] <SPIisWebServiceSettings> -SslCertificateThumbprint <String> [-AssignmentCollection <SPAssignmentCollection>] [-Confirm] [-HttpPort <Int32>] [-HttpsPort <Int32>] [-NetTcpPort <Int32>] [-NoWait] [-SslCertificateStoreName <String>] [-WhatIf] [<CommonParameters>]
Set-SPServiceHostConfig [-Identity] <SPIisWebServiceSettings> [-AssignmentCollection <SPAssignmentCollection>] [-Confirm] [-HttpPort <Int32>] [-HttpsPort <Int32>] [-ImportSslCertificate <X509Certificate2>] [-NetTcpPort <Int32>] [-NoWait] [-WhatIf] [<CommonParameters>]

The Set-SPServiceHostConfig cmdlet configures one or more common settings for all Web services.
For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation at SharePoint Server Cmdlets.
Examples
------------------EXAMPLE-----------------------
Set-SPServiceHostConfig -Port 12345
This example sets the HTTP port for the Web services.

Parameters

-HttpPort: Specifies the new port for the Web service.
-HttpsPort: Specifies the new secure port for the Web service.
-Identity: Specifies the identity of the Web service application to configure.
-ImportSslCertificate: Specifies the SSL Certificate to use for secure protocols.
-NetTcpPort: Sets the TCP port for the Web service. For more information, see TechNet.
-SslCertificateThumbprint: Specifies the thumbprint of the SSL certificate to retrieve for secure protocols.
-WhatIf: Displays a message that describes the effect of the command instead of executing the command.
For more information, type the following command:
get-help about_commonparameters
Generating API Token
- Log in to Zoho CRM and go to the Setup Section
2. Click on APIs, and in the CRM API section, click the Settings icon and click the Authentication Token Generation link.
3. Now enter the Application Name as "Automate.io" and Generate the token.
4. The API token will be shown in a new window. Select the token code and paste it in Automate.io
Need more help? Refer to the Zoho documentation
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/28644645/334e17aed143f88c56fa4e3b/api-auth-token-generation.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/28644898/e754d4116098eda9aa7df44c/api-browser-mode.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/28649457/75d00ab53b07085e49d8c187/zoho+token.png',
None], dtype=object) ] | docs.automate.io |
For this tutorial, you create a simplified single VNet with three subnets for easy deployment.
In this tutorial, you learn how to:
- Set up a test network environment
- Deploy a firewall
- Create a default route
- Configure an application rule to allow access to
- Configure a network rule to allow access to external DNS servers
- Test the firewall
If you prefer, you can complete this tutorial using Azure PowerShell.
If you don't have an Azure subscription, create a free account before you begin.
Set up the network
First, create a resource group to contain the resources needed to deploy the firewall. Then create a VNet, subnets, and test servers. For the virtual network Name, type Test-FW-VN.
- For Address space, type 10.0.0.0/16.
- For Subscription, select your subscription.
- For Resource group, select Test-FW-RG. For the first subnet, use the name AzureFirewallSubnet with the address range 10.0.1.0/26.
- Accept the other default settings, and then select Create.
Create additional subnets
Next, create subnets for the jump server, and a subnet for the workload servers.
- On the Azure portal menu, select Resource groups or search for and select Resource groups from any page. Then select Test-FW-RG.
- Select the Test-FW-VN virtual network.
- Select Subnets > +Subnet.
- For Name, type Workload-SN.
- For Address range, type 10.0.2.0/24.
- Select OK.
Create another subnet named Jump-SN, address range 10.0.3.0/24.
Create virtual machines
Now create the jump and workload virtual machines, and place them in the appropriate subnets.
On the Azure portal menu or from the Home page, select Create a resource.
Select Compute and then select Windows Server 2016 Datacenter in the Featured list.
Enter these values for the virtual machine:
Under Inbound port rules, for Public inbound ports, select Allow selected ports.
For Select inbound ports, select RDP (3389).
Accept the other defaults and select Next: Disks.
Accept the disk defaults and select Next: Networking.
Make sure that Test-FW-VN is selected for the virtual network and the subnet is Jump-SN.
For Public IP, accept the default new public ip address name (Srv-Jump-ip).
Accept the other defaults and select Next: Management.
Select Off to disable boot diagnostics. Accept the other defaults and select Review + create.
Review the settings on the summary page, and then select Create.
Use the information in the following table to configure another virtual machine named Srv-Work. The rest of the configuration is the same as the Srv-Jump virtual machine.
Deploy the firewall:
Select Review + create.
Review the summary, and then select Create to create the firewall.
This will take a few minutes to deploy.
After deployment completes, go to the Test-FW-RG resource group, and select the Test-FW01 firewall.
Note the private IP address. You'll use it later when you create the default route.
For Route name, type fw-dg.
For Address prefix, type 0.0.0.0/0.
For Next hop type, select Virtual appliance.
Azure Firewall is actually a managed service, but virtual appliance works in this situation.
For Next hop address, type the private IP address for the firewall that you noted previously. For the network rule Name, type Allow-DNS.
For Protocol, select UDP.
For Source Addresses, type 10.0.2.0/24.
For Destination address, type 209.244.0.3,209.244.0.4
These are public DNS servers operated by CenturyLink.
For Destination Ports, type 53.
From the Azure portal, review the network settings for the Srv-Work virtual machine and note the private IP address.
Connect a remote desktop to Srv-Jump virtual machine, and sign in. From there, open a remote desktop connection to the Srv-Work private IP address.
Open Internet Explorer and browse to.
Select OK > Close on the Internet Explorer security alerts.
You should see the Google home page.
Browse to.
You should be blocked by the firewall.
So now you've verified that the firewall rules are working:
- You can browse to the one allowed FQDN, but not to any others.
- You can resolve DNS names using the configured external DNS server.
Clean up resources
You can keep your firewall resources for the next tutorial, or if no longer needed, delete the Test-FW-RG resource group to delete all firewall-related resources.
Next steps
'Tutorial network infrastructure'], dtype=object) ] | docs.microsoft.com |
The thing is our membership content is protected through using Vimeo. Unfortunately, there is no closed captioning like there is on youtube.
However, you can look at technical analysis charts.
That will give you a better idea through pictures.
That way, you can go through and read the captions.
Right there you do have some sort of captions that can help you.
If you go to our membership content and go to swing charts, and you watch the video, they do not have any closed captioning.
There is just no option for that. | https://docs.tradersfly.com/do-your-videos-for-members-have-closed-captioning/ | 2019-11-11T23:22:44 | CC-MAIN-2019-47 | 1573496664439.7 | [array(['https://docs.tradersfly.com/wp-content/uploads/2019/05/do-your-videos-for-members-have-closed-captioning-31-1024x576.jpg',
None], dtype=object)
array(['https://docs.tradersfly.com/wp-content/uploads/2019/05/do-your-videos-for-members-have-closed-captioning-51-1024x576.jpg',
None], dtype=object)
array(['https://docs.tradersfly.com/wp-content/uploads/2019/05/do-your-videos-for-members-have-closed-captioning-60-1024x576.jpg',
None], dtype=object) ] | docs.tradersfly.com |
Private Tasks: With the latest update, team members won't be able to access each other's jobs unless they're assigned to them.
Task Route:
Now you can see the route taken to the job, helping your team improve arrival times.
Real-time Tracking:
You’ll notice some visual and performance improvements on real-time user tracking. Simply click on a user to track on TEAM list. You can also track users by typing their name on MAP at Dooing iPhone or Android app
Location History:
You can now see where your users were between specific times, go back in time, and see the paths they've taken. Simply click on a user under the TEAM section and click Location History.
This feature can only be used by Managers and Team Leaders.
More Languages:
We know that localisation is important for your team’s success. You can now use Dooing in 4 languages; English, Spanish, Turkish and Russian. More language options on the way.
This update is available on Dooing web app and soon will be available on both iPhone and Android apps as well.
Just in case you missed the previous announcement:
Fleet Intelligence
Now Dooing customers can connect their fleet to the grid with Mojio, gaining unparalleled insights into what’s happening under the hood and behind the wheel. Narrow down on actual cost per job with fuel consumption data, prevent breakdowns with diagnostics, and track entire fleets in real time.
'TASK-ROUTE'], dtype=object)
array(['http://docs.dooing.com/wp-content/uploads/2015/09/TASK-DETAIL-600x310.png',
'TASK-DETAIL'], dtype=object)
array(['http://docs.dooing.com/wp-content/uploads/2015/09/Track-Users-Realtime.png',
'Track-Users-Realtime'], dtype=object)
array(['http://docs.dooing.com/wp-content/uploads/2015/09/Track-Users-Location-History.png',
'Track-Users-Location-History'], dtype=object)
array(['http://docs.dooing.com/wp-content/uploads/2015/09/LANGUAGES.png',
'LANGUAGES'], dtype=object) ] | docs.dooing.com |
If you believe Envision is missing data, you can place EDC into debug mode. When in debug mode, all data received from the PLC is stored for later review. Normally, data is destroyed once it is processed and sent to the Envision Application Server (EAS).
Step-by-step guide
- Remote desktop to EDC.
- Open file C:\_Envision\EnvisionDataProcess\config.xml using your favorite text editor.
- Change value in logTags to 1.
- Save file and exit.
- On EAS, Launch Mongochef and connect to EDC Database.
It will show the EnvisionDL database; debug mode creates 2 collections there: zz_tagsd_debug and zz_tagsd_diagnostics.
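If you prefer to check from a script instead of MongoChef, a small pymongo sketch like the one below can confirm that debug documents are accumulating. The connection string is an assumption; point it at your own EDC host. The database and collection names come from the step above.

from pymongo import MongoClient

# Hypothetical connection details - replace EDC-HOSTNAME with your EDC server name.
client = MongoClient("mongodb://EDC-HOSTNAME:27017/")
db = client["EnvisionDL"]

for name in ("zz_tagsd_debug", "zz_tagsd_diagnostics"):
    if name in db.list_collection_names():
        print(name, "documents:", db[name].estimated_document_count())
    else:
        print(name, "not found yet - confirm debug mode is enabled")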
1 Comment
Ivan Nausley
Page content was written by Kusmady Susanto. | https://docs.beet.com/display/EKB/How+to+enable+debug+mode+on+EDC | 2019-11-11T23:02:56 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.beet.com |
Step 5: Verifying AD RMS Functionality
Applies To: Windows Server 2008, Windows Server 2008 R2
The AD RMS client is included in the default installation of Windows Vista and Windows Server 2008. Previous versions of the client are available for download for some earlier versions of the Windows operating systems. For more information, see the Windows Server 2003 Rights Management Services page in the Microsoft Windows Server TechCenter ().
Before you can publish or consume rights-protected content on Windows Vista, you must add the AD RMS cluster URL, the ADFS-RESOURCE URL, and the ADFS-ACCOUNT URL to the Internet Explorer Local Intranet security zone of the ADRMS-CLNT2 computer. This is required to ensure that your credentials are automatically passed from Microsoft Office Word to the AD RMS Web services.
To add AD RMS cluster URL to the Internet Explorer Local Intranet security zone
Log on to ADRMS-CLNT2 as Terence Philip (TREYRESEARCH\tphilip).
Click Start, click Control Panel, click Network and Internet, and then click Internet Options.
Click the Security tab, and then click Local Intranet.
Click Sites, and then click Advanced.
In the Add this website to the zone box, do the following:
Type, and then click Add.
Type, and then click Add.
Type, and then click Add.
To verify the functionality of the AD RMS deployment, you log on as Nicole Holliday, create a Microsoft Word 2007 document, and then restrict permissions on it so that Terrence Philip is able to read the document but is unable to change, print, or copy it. You then log on as Terence Philip, verifying that Terence Philip can read the document but do nothing else with it.
To restrict permissions on a Microsoft Word document
Log on to ADRMS-CLNT as Nicole Holliday (CPANDL\nhollida).
Click Start, point to All Programs, point to Microsoft Office, and then click Microsoft Office Word 2007.
Type Only Terence Philip can read this document, but cannot change, print, or copy it. Click Microsoft Office Button, point to Prepare, point to Restrict Permission, and then click Restricted Access.
Click the Restrict permission to this document check box.
In the Read text box, type [email protected], and then click OK to close the Permission dialog box.
Click the Microsoft Office Button, click Save As, and then save the file as \\adrms-db\public\ADRMS-TST.docx
Log off as Nicole Holliday.
Finally, log on as Terence Philip on ADRMS-CLNT2 in the TREYRESEARCH.NET domain and attempt to open the document, ADRMS-TST.docx.
To view a protected document
Log on to ADRMS-CLNT2 as Terence Philip (TREYRESEARCH\tphilip).
Click Start, point to All Programs, point to Microsoft Office, and then click Microsoft Office Word 2007.
Click the Microsoft Office Button, click Open, and then type \\ADRMS-DB\PUBLIC\ADRMS-TST.docx. If you are prompted for credentials, use CPANDL\Administrator.
The following message appears: "Permission to this document is currently restricted. Microsoft Office must connect to\_wmcs/licensin to verify your credentials and download your permissions."
Click OK.
The following message appears: "Verifying your credentials for opening content with restricted permissions".
When the document opens, click Microsoft Office Button. Notice that the Print option is not available.
Click View Permission in the message bar. You should see that Terence Philip has been restricted to being able only to read the document.
Click OK to close the My Permissions dialog box, and then close Microsoft Word.
You have successfully deployed and demonstrated the functionality of using identity federation with AD RMS, using the simple scenario of applying restricted permissions to a Microsoft Word 2007 document. You can also use this deployment to explore some of the additional capabilities of AD RMS through additional configuration and testing. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754072(v=ws.10) | 2018-04-19T14:51:16 | CC-MAIN-2018-17 | 1524125936969.10 | [] | docs.microsoft.com |
Python
Python support matrix
Python versions
- cPython 2.6.x
- cPython 2.7.x
Python databases
- SQLAlchemy
- Django ORM
- PyMongo
- pycassa 1.7.1
Python RPC clients
- Apache Thrift 0.5, 0.6, 0.8
- httplib
- httplib2
- urllib
- urllib2
- urllib3
- requests
Other Python components
- Greenlet 0.3.1, 0.3.4
- gevent (via Greenlet)
- Eventlet (via Greenlet)
- Memcache
- pylibmc
Installing instrumentation
Our python module called oboe gets detailed insights out of the box, without any code modification. The oboe package also provides the oboeware module, which contains middleware components and instrumentation helpers for Django, Tornado, and other supported components.
When you install language level instrumentation, here are the things you should be thinking about: first, install the instrumentation. Then, go back to your dashboard and take a look at your traces. Are you getting the level of detail you need? If you're not getting a sufficient level of detail from out-of-box instrumented python, then you'll want to consider configuration options.
Python instrumentation is provided by the oboe package, which can either be downloaded manually from its home on PyPI or installed automatically with pip. If you haven’t used pip before you might need to first install python-pip, g++, and python-dev using your favorite package manager.
pip install oboe
Add django instrumentation
Add the following import to your django settings.py as well as your WSGI file, if you have one. If you’re deploying with uWSGI and are specifying the WSGI handler module as django.core.handlers.wsgi:WSGIHandler(), you can use our handler module instead, which is a drop-in replacement. Just specify oboeware.djangoware:OboeWSGIHandler() and you won’t need to modify your settings.py.
import oboeware.djangoware
Add greenlet instrumentation
Support for event-based services is provided through a modified greenlet module.
pip install --extra-index-url= greenlet==0.4.5-oboe1
Add tornado instrumentation
TraceView supports web apps written with the tornado web framework for python. The tornado instrumentation relies on the insertion of a handful of one-line calls to the oboeware library, and thus requires patching the tornado library. Patched tornado packages are available from our pypi repository, or you can download a git patch from files.tv.solarwinds.com and apply it yourself. To install our tornado packages using pip, specify the version and pypi address.
pip install --extra-index-url= tornado==2.3-oboe3
Add WSGI instrumentation
Many python apps are implemented atop WSGI, which provides a uniform interface through which web frameworks and middleware libraries interoperate. TraceView offers a WSGI middleware component called OboeMiddleware.
Set the tracing mode: If you don’t have an instrumented webserver in front of your app, you’ll also need to set the tracing mode.
# flask example
from flask import Flask
from oboeware import OboeMiddleware

# the flask app
app = Flask(__name__)

app.wsgi_app = OboeMiddleware(app.wsgi_app, {'tracing_mode': 'always'})

@app.route("/")
def hello():
    return "Hello World!"
Upgrading instrumentation
The python instrumentation package is managed by the pip command regardless of platform.
pip install oboe --upgrade
Configuring instrumentation
Common configuration tasks:
- Change the tracing mode
- Report backtraces
- Suppress reporting for sensitive data
- Disable an instrumentation module
- Disable debug logging
- Complete config options
Report backtraces
The instrumented libraries have the ability to collect backtraces; some of them collect backtraces by default, others don’t. See complete list of python configuration options. Many public API methods also take backtrace parameters.
oboe.config['inst']['memcache']['collect_backtraces'] = True
Suppress reporting for sensitive data
Python instrumentation can omit SQL query parameters from reports entirely:

oboe.config['sanitize_sql'] = True
Disable debug logging
By default, the oboe module emits debug output on stderr, which for most environments is mixed into the web application’s logs. You can check there for debug information if something doesn’t seem to be working as expected. This output can also be disabled via an environment variable. Running your application with OBOE_DISABLE_DEFAULT_LOGGER set in your environment disables the logging entirely.
Complete config options
The following is the complete list of python instrumentation configuration options.
oboe.config[‘tracing_mode’]
- config option
- oboe.config[‘tracing_mode’]
- description
- Specify when traces should be initiated for incoming requests. This should be set to ‘always’ when you don’t have an instrumented webserver in front of your app.
- possible values
- ‘always’
- (Default) Continue existing traces, otherwise attempt to start a new one.
- ‘through’
- Continue existing traces, but never start them. This mode assumes that a higher layer makes the decision about whether to trace.
- ‘never’
- Never continue existing traces or start new ones.
- history
- Introduced with the original python instrumentation.
- example
oboe.config['tracing_mode'] = 'always'
oboe.config[‘inst_enabled’]
- config option
- oboe.config[‘inst_enabled’]
- description
- To disable an individual instrumentation module, simply add the following lines before loading the middleware into your app.
- parameters
- [‘<module-name>’]
- A module name. Valid options: django_orm, httplib, memcache, pymongo, redis, sqlalchemy.
- history
- Introduced in version 0.4.2.
- example
oboe.config['inst_enabled']['memcache'] = False
oboe.config[‘sanitize_sql’]
- config option
- oboe.config[‘sanitize_sql’]
- description
- Do not collect or report sql query parameters to TraceView.
- history
- Introduced in version 1.4.0.
- example
oboe.config['sanitize_sql'] = True
oboe.config[‘inst’][‘django_orm’]
- config option
- oboe.config[‘inst’][‘django_orm’]
- description
- Configuration options for the django ORM instrumentation.
- parameters
- [‘collect_backtraces’]
- Default: True.
- history
- Introduced with [‘collect_backtraces’] in version 1.5.0.
- example
oboe.config['inst']['django_orm']['collect_backtraces'] = True
oboe.config[‘inst’][‘httplib’]
- config option
- oboe.config[‘inst’][‘httplib’]
- description
- Configuration options for httplib instrumentation.
- parameters
- [‘collect_backtraces’]
- Default: True.
- history
- Introduced with [‘collect_backtraces’] in version 1.5.0.
- example
oboe.config['inst']['httplib']['collect_backtraces'] = True
oboe.config[‘inst’][‘memcache’]
- config option
- oboe.config[‘inst’][‘memcache’]
- description
- Configuration options for memcache instrumentation.
- parameters
- [‘collect_backtraces’]
- Default: False.
- history
- Introduced with [‘collect_backtraces’] in version 1.5.0.
- example
oboe.config['inst']['memcache']['collect_backtraces'] = True
oboe.config[‘inst’][‘pymongo’]
- config option
- oboe.config[‘inst’][‘pymongo’]
- description
- Configuration options for pymongo instrumentation.
- parameters
- [‘collect_backtraces’]
- Default: True.
- history
- Introduced with [‘collect_backtraces’] in version 1.5.0.
- example
oboe.config['inst']['pymongo']['collect_backtraces'] = True
oboe.config[‘inst’][‘redis’]
- config option
- oboe.config[‘inst’][‘redis’]
- description
- Configuration options for redis instrumentation.
- parameters
- [‘collect_backtraces’]
- Default: False.
- history
- Introduced with [‘collect_backtraces’] in version 1.5.0.
- example
oboe.config['inst']['redis']['collect_backtraces'] = True
oboe.config[‘inst’][‘sqlalchemy’]
- config option
- oboe.config[‘inst’][‘sqlalchemy’]
- description
- Configuration options for sqlalchemy instrumentation.
- parameters
- [‘collect_backtraces’]
- Default: True.
- history
- Introduced with [‘collect_backtraces’] in version 1.5.0.
- example
oboe.config['inst']['sqlalchemy']['collect_backtraces'] = True
Customizing instrumentation
Customization involves adding hooks from our public API to your code so that you can to take advantage of additional filtering capabilities on the dashboard, change how requests are traced, or capture additional information during the trace.
Python custom layers
Python instrumentation creates some layers by default, e.g., 'django', 'wsgi', and 'pylibmc'. You may also want to define your own layer for a block of code that functions as a discrete service without being external to the process. Python instrumentation offers two facilities for manually creating layers: one is a decorator, for use when you want to represent a particular function as a layer; the other is an API method that's better used when the block of code isn't neatly contained.
Custom layer using the logging decorator
Follow the link to the API documentation for complete syntax and usage information.
@oboe.log_method('slow_thing')
def my_possibly_slow_method(...):
Custom layer using the logging API method
There’s a convention that must be followed when defining a new layer: the logging call which marks the entry into a new layer must be labeled ‘entry’; the logging call which marks the exit from the user-defined layer must be labeled ‘exit’. Follow the link to the API documentation for complete syntax and usage information.
def myLogicallySeparateTask(...):
    oboe.log('entry', 'serviceX', {'key': 'value'})
    ...  # do stuff
    oboe.log('exit', 'serviceX', {'key': 'value'})
Python profiling lets you measure specific functions or blocks of code within a layer. Python instrumentation offers two facilities for profiling: one is a decorator, for use when you want to profile a particular function; the other is a context manager that's better used when the block of code isn't neatly contained in a single function. Follow the link to the API documentation for complete syntax and usage information.
Function profiling using a decorator
@oboe.profile_function('_profile_name_', Store_return=False, Store_args=False)
def my_function(...):
Block profiling using a context manager
with oboe.profile_block('_profile-name_'):
    ...  # code block to profile

Arbitrary key/value pairs can also be attached to the current request by logging an 'info' event:

oboe.log('info', None, {'key': 'value'})

Python partition traffic

oboe.log('info', None, {'Partition': '_partition-name_'})

Python controllers filter

oboe.log('info', None, {'Controller': 'your_controller', 'Action': 'do_something'})
Report errors and exceptions
Many types of errors are already caught, e.g., web server errors, or any exceptions that reach the django or WSGI top layers. Those that aren't caught by instrumentation can be reported manually using either oboe.log_error or oboe.log_exception, which allow TraceView to classify the event as an error. Error events are marked in the trace view
and corresponding information is shown in the errors panel. Follow the link to the API documentation for complete syntax and usage information.
if call_method() is None:
    oboe.log_error('nothing was returned!')

try:
    call_method()
except Exception as e:
    oboe.log_exception()
RUM for python: real user monitoring requires adding the RUM javascript snippets to your page templates. These are provided in django as custom template tags. For other web frameworks, use the provided helper functions which automatically generate the required javascript.
- Place oboe.rum_header just after any <meta> tags in your <head> block. It should be the first non-meta tag inside <head>.
- Place oboe.rum_footer immediately before the closing </body> tag.
{% load oboe %} {# include our tag library then just reference them #}
...

<head>
    <meta ... >
    {% oboe_rum_header %}
    ...
</head>

<body>
    ...
    {% oboe_rum_footer %}
</body>
{# stash the tag strings in two variables for your templates, then reference them #}
header_code = oboe.rum_header()
footer_code = oboe.rum_footer()

<head>
    <meta ... >
    ...
</head>

<body>
    ...
</body>
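For a non-django framework, those helper functions can be passed into whatever template engine you use. The minimal Flask/Jinja2 sketch below is an illustration rather than part of the original docs; the inline template and route are assumptions, while oboe.rum_header() and oboe.rum_footer() are the helpers described above.

from flask import Flask, render_template_string
import oboe

app = Flask(__name__)

PAGE = """<html>
<head>
  <meta charset="utf-8">
  {{ rum_header|safe }}
</head>
<body>
  Hello from TraceView RUM
  {{ rum_footer|safe }}
</body>
</html>"""

@app.route("/")
def index():
    # oboe.rum_header()/rum_footer() return the javascript snippets as strings.
    return render_template_string(
        PAGE,
        rum_header=oboe.rum_header(),
        rum_footer=oboe.rum_footer(),
    )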
Source code for django.db.models.fields.files
import datetime import os import posixpath import warnings from django import forms from django.core import checks from django.core.files.base import File from django.core.files.images import ImageFile from django.core.files.storage import default_storage from django.db.models import signals from django.db.models.fields import Field from django.utils import six from django.utils.deprecation import RemovedInDjango20Warning from django.utils.encoding import force_str, force_text from django.utils.translation import ugettext_lazy as _[documentos]class FieldFile(File): def __init__(self, instance, field, name): super(FieldFile, self).__init__(None, name) self.instance = instance self.field = field self.storage = field.storage self._committed = True def __eq__(self, other): # Older code may be expecting FileField values to be simple strings. # By overriding the == operator, it can remain backwards compatibility. if hasattr(other, 'name'): return self.name == other.name return self.name == other def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return hash(self.name) # The standard File contains most of the necessary properties, but # FieldFiles can be instantiated without a name, so that needs to # be checked for here. def _require_file(self): if not self: raise ValueError("The '%s' attribute has no file associated with it." % self.field.name) def _get_file(self): self._require_file() if not hasattr(self, '_file') or self._file(object): """ The descriptor for the file attribute on the model instance. Returns a FieldFile when accessed so you can do stuff like:: >>> from myapp.models import MyModel >>> instance = MyModel.objects.get(pk=1) >>> instance.file.size Assigns a file object on assignment so you can do:: >>> with open('/path/to/hello.world', 'r') as f: ... instance.file = File(f) """ def __init__(self, field): self.field = field def __get__(self, instance, cls=None): if instance is None: return self # This is slightly complicated, so worth an explanation. # instance.file`needs to ultimately return some instance of `File`, # probably a subclass. Additionally, this returned object needs to have # the FieldFile API so that users can easily do things like # instance.file.path and have that delegated to the file storage engine. # Easy enough if we're strict about assignment in __set__, but if you # peek below you can see that we're not. So depending on the current # value of the field we have to dynamically construct some sort of # "thing" to return. # The instance dict contains whatever was originally assigned # in __set__. if self.field.name in instance.__dict__: file = instance.__dict__[self.field.name] else: instance.refresh_from_db(fields=[self.field.name]) file = getattr(instance, self.field.name) # If this value is a string (instance.file = "path/to/file") or None # then we simply wrap it with the appropriate attribute class according # to the file field. [This is FieldFile for FileFields and # ImageFieldFile for ImageFields; it's also conceivable that user # subclasses might also want to subclass the attribute class]. This # object understands how to convert a path to a file, and also how to # handle None. if isinstance(file, six.string_types) or file is None: attr = self.field.attr_class(instance, self.field, file) instance.__dict__[self.field.name] = attr # Other types of files may be assigned as well, but they need to have # the FieldFile interface added to them. 
Thus, we wrap any other type of # File inside a FieldFile (well, the field's attr_class, which is # usually FieldFile). elif isinstance(file, File) and not isinstance(file, FieldFile): file_copy = self.field.attr_class(instance, self.field, file.name) file_copy.file = file file_copy._committed = False instance.__dict__[self.field.name] = file_copy # Finally, because of the (some would say boneheaded) way pickle works, # the underlying FieldFile might not actually itself have an associated # file. So we need to reset the details of the FieldFile in those cases. elif isinstance(file, FieldFile) and not hasattr(file, 'field'): file.instance = instance file.field = self.field file.storage = self.field.storage # Make sure that the instance is correct. elif isinstance(file, FieldFile) and instance is not file.instance: file.instance = instance # That was fun, wasn't it? return instance.__dict__[self.field.name] def __set__(self, instance, value): instance.__dict__[self.field.name] = value[documentos] def open(self, mode='rb'): self._require_file() if hasattr(self, '_file') and self._file is not None: self.file.open(mode) else: self.file = self.storage.open(self.name, mode)# open() doesn't alter the file's contents, but it does reset the pointer open.alters_data = True # In addition to the standard File API, FieldFiles have extra methods # to further manipulate the underlying file, as well as update the # associated model instance.[documentos] def save(self, name, content, save=True): name = self.field.generate_filename(self.instance, name) self.name = self.storage.save(name, content, max_length=self.field.max_length) setattr(self.instance, self.field.name, self.name) self._committed = True # Save the object because it has changed, unless save is False if save: self.instance.save()save.alters_data = True[documentos] def delete(self, save=True): if not self: return # Only close the file if it's already open, which we know by the # presence of self._file if hasattr(self, '_file'): self.close() del self.file self.storage.delete(self.name) self.name = None setattr(self.instance, self.field.name, self.name) self._committed = False if save: self.instance.save()delete.alters_data = True @property def closed(self): file = getattr(self, '_file', None) return file is None or file.closed[documentos] def close(self): file = getattr(self, '_file', None) if file is not None: file.close()def __getstate__(self): # FieldFile needs access to its associated model field and an instance # it's attached to in order to work properly, but the only necessary # data to be pickled is the file's name itself. Everything else will # be restored later, by FileDescriptor below. return {'name': self.name, 'closed': False, '_committed': True, '_file': None}[documentos]class FileField(Field): # The class to wrap instance attributes in. Accessing the file object off # the instance will always return an instance of attr_class. attr_class = FieldFile # The descriptor to use for accessing the attribute off of the class. descriptor_class = FileDescriptor description = _("File") def __init__(self, verbose_name=None, name=None, upload_to='', storage=None, **kwargs): self._primary_key_set_explicitly = 'primary_key'_primary_key()) errors.extend(self._check_upload_to()) return errors def _check_primary_key(self): if self._primary_key_set_explicitly: return [ checks.Error( "'primary_key' is not a valid argument for a %s." 
% self.__class__.__name__, obj=self, id='fields.E201', ) ] else: return [] def _check_upload_to(self): if isinstance(self.upload_to, six.string_types) = super(FileField, self).deconstruct() if kwargs.get("max_length") == 100: del kwargs["max_length"] kwargs['upload_to'] = self.upload_to if self.storage is not default_storage: kwargs['storage'] = self.storage return name, path, args, kwargs def get_internal_type(self): return "FileField" def get_prep_value(self, value): ))) def generate_filename(self, instance, filename): """ Apply (if callable) or prepend (if a string) upload_to to the filename, then delegate further processing of the name to the storage backend. Until the storage layer, all file paths are expected to be Unix style (with forward slashes). """ if callable(self.upload_to): filename = self.upload_to(instance, filename) else: dirname = force_text(datetime.datetime.now().strftime(force_str(self.upload_to))) filename = posixpath.join(dirname, filename) return self.storage.generate_filename(filename) def save_form_data(self, instance, data): # Important: None means "no change", other false value means "clear" # This subtle distinction (rather than a more explicit marker) is # needed because we need to consume values that are also sane for a # regular (non Model-) Form to find in its cleaned_data dictionary. if data is not None: # This value will be converted to) class ImageFieldFile(ImageFile, FieldFile): def delete(self, save=True): # Clear the image dimensions cache if hasattr(self, '_dimensions_cache'): del self._dimensions_cache super(ImageFieldFile, self).delete(save)[documentos]class ImageField(FileField): attr_class = ImageFieldFile descriptor_class = ImageFileDescriptor description = _("Image") def __init__(self, verbose_name=None, name=None, width_field=None, height_field=None, **kwargs): self.width_field, self.height_field = width_field, height_field super(ImageField, self).__init__(verbose_name, name, **kwargs) def check(self, **kwargs): errors = super(ImageField, self).check(**kwargs) errors.extend(self._check_image_library_installed()) return errors def _check_image_library_installed(self): try: from PIL import Image # NOQA except ImportError: return [ checks.Error( 'Cannot use ImageField because Pillow is not installed.', hint=('Get Pillow at ' 'or run command "pip install Pillow".'), obj=self, id='fields.E210', ) ] else: return [] def deconstruct(self): name, path, args, kwargs = super, self).contribute_to_class(cls, name, **kwargs) # Attach update_dimension_fields so that dimension fields declared # after their corresponding image field don't stay cleared by # Model.__init__, see bug #11196. # Only run post-initialization dimension update on non-abstract models if not cls._meta.abstract: signals.post_init.connect(self.update_dimension_fields, sender=cls) def update_dimension_fields(self, instance, force=False, *args, **kwargs): """ Updates field's width and height fields, if defined. This method is hooked up to model's post_init signal to update dimensions after instantiating a model instance. However, dimensions won't be updated if the dimensions fields are already populated. This avoids unnecessary recalculation when loading an object from the database. Dimensions can be forced to update with force=True, which is how ImageFileDescriptor.__set__ calls this method. """ # Nothing to update if the field doesn't have dimension fields or if # the field is deferred. 
has_dimension_fields = self.width_field or self.height_field if not has_dimension_fields or self.attname not in instance.__dict__: return # getattr will call the ImageFileDescriptor's __get__ method, which # coerces the assigned value into an instance of self.attr_class # (ImageFieldFile in this case). file = getattr(instance, self.attname) # Nothing to update if we have no file and not being forced to update. if not file and not force: return dimension_fields_filled = not( (self.width_field and not getattr(instance, self.width_field)) or (self.height_field and not getattr(instance, self.height_field)) ) # When both dimension fields have values, we are most likely loading # data from the database or updating an image field that already had # an image stored. In the first case, we don't want to update the # dimension fields because we are already getting their values from the # database. In the second case, we do want to update the dimensions # fields and will skip this return because force will be True since we # were called from ImageFileDescriptor.__set__. if dimension_fields_filled and not force: return # file should be an instance of ImageFieldFile or should be None. if file: width = file.width height = file.height else: # No file, so clear dimensions fields. width = None height = None # Update the width and height fields. if self.width_field: setattr(instance, self.width_field, width) if self.height_field: setattr(instance, self.height_field, height) def formfield(self, **kwargs): defaults = {'form_class': forms.ImageField} defaults.update(kwargs) return super(ImageField, self).formfield(**defaults) | https://docs.djangoproject.com/pt-br/1.11/_modules/django/db/models/fields/files/ | 2018-01-16T11:07:45 | CC-MAIN-2018-05 | 1516084886416.17 | [] | docs.djangoproject.com |
Approve a request for a virtual machine As an EC2 approver, navigate to Service Desk > My Approvals and approve the virtual machine request. Before you beginRole required: cloud_adminComplete the provision task | https://docs.servicenow.com/bundle/istanbul-it-operations-management/page/product/amazon-ec2-cloud-provisioning/task/t_ApproveTheRequest.html | 2018-01-16T11:38:58 | CC-MAIN-2018-05 | 1516084886416.17 | [] | docs.servicenow.com |
Building Addons
An addon is how you can extend the core functionality of Statamic. Rather than digging in and messing with core files, we’ve created a system where developers can easily build new features that are compatible with everyone’s Statamic installations. Addons can then be easily shared or sold to others to let them extend their Statamic installation.
Regarding an Addon’s responsibility
Each Addon can be viewed as a new “feature”, or group of features, for your site. It can be something simple, such as a tag that returns the current time in Klingon, or something rather large, like an Ecommerce Shopping Cart. Although the complexity varies between the examples, each solves one clear problem.
If you’re in the need for a quick feature, you may consider creating a site helper. These are good place to throw a bunch of extensions without needing to go through creating a whole addon.
Assumed Knowledge
Statamic Addons involve writing PHP. While we have a lot of helper methods and API classes to take advantage of, you’ll still need to know how to write PHP, and be familiar with object oriented programming, PHP namespacing, and ideally - Laravel.
Addon Structure
An addon can contain a number of classes, each. It can be made up of one of them, or up to all of them. Each aspect accomplishes different missions.
- Tags create tags for use within templates.
- Modifiers are used in your templates to manipulate variables.
- Fieldtypes create new ways to enter data into the Control Panel.
- Event listeners can perform functionality when specific events are executed throughout the application.
- Commands are used for executing functionality through the terminal.
- Tasks allow you to schedule automated functionality at a predefined schedule.
- Service Providers let you bootstrap functionality into the application.
- API allows you to interact with other addons, and them with you.
- Controllers allow you to create pages in the control panel.
- Filters let you perform more complicated filtering of content on certain tags.
- Widgets can be added to your Control Panel’s dashboard.
By using combinations of these aspects in your addons, you can create some truly fascinating results. And remember, all of these aspects are simply PHP, so anything you can do with PHP is possible here.
Not to mention, Statamic is built upon Laravel, so you can use any of Laravel’s features if you’d like.
So, what to tackle first? Let’s get started.. | https://docs.statamic.com/addons | 2018-01-16T11:36:23 | CC-MAIN-2018-05 | 1516084886416.17 | [] | docs.statamic.com |
updates, if needed, 12, 2017, Version 1708 of Office 365 ProPlus was released to Semi-Annual Channel (Targeted). On January 9, 2018, Version 1708 is scheduled to be made available to Semi-Annual Channel and will be supported in Semi-Annual Channel until March 2019. In March 2018, a new version of Office 365 ProPlus with new features will be released to Semi-Annual Channel (Targeted). That version is scheduled to be made available to Semi-Annual Channel in July 2018, and will be supported in Semi-Annual Channel until September 2019. 2016 Monthly, Broad, and.
You can also provide users with Semi-Annual Channel (Targeted) by selecting them for the First Release program for Office 365. If you do this, those users can install Semi-Annual Channel (Targeted) directly from the Software page in the Office 365 portal..
Tip
New to Office 365?
Discover free video courses for Office 365 admins and IT pros, brought to you by LinkedIn Learning.
Related Topics
Version and build numbers of update channel releases
Change management for Office 365 clients
Office 365 client update channel releases | https://docs.microsoft.com/en-us/DeployOffice/overview-of-update-channels-for-office-365-proplus?ui=zh-TW&rs=zh-TW&ad=TW | 2018-01-16T11:55:15 | CC-MAIN-2018-05 | 1516084886416.17 | [array(['images/16fae804-8d79-43db-8902-2adbbc6e0c9e.png',
'The three primary Office 365 update channels, showing the relationship between the update channels and the release cadence'],
dtype=object) ] | docs.microsoft.com |
This document describes the basics of the BarNet SSL VPN Service.
The software required is downloaded during the installation.
Point your webbrowser to (note the HTTPS),
select My own computer and click on Connect. Then
download the file offered.
When you click on the icon of the downloaded software, it will
install the Citrix VNP Software.
Login with your BarNet username and password.
If it accepts your username and password, you will get a secure VPN
connection. You can access your computer in chambers via the normal
Windows shares or the desktop via the VNC viewer.
When done, select Disconnect from the icon in the task menu. | http://docs.barnet.com.au/documentation.php?topic=ssl-vpn | 2018-01-16T11:06:00 | CC-MAIN-2018-05 | 1516084886416.17 | [] | docs.barnet.com.au |
Subviews (tabs in other modules)
If you want tabs for your module to appear within other modules, do not edit those other modules. Instead, you should use our subview loader to get the tabs to appear automagically. No further configuration is necessary.
Naming Convention
In order to have your tabs/subviews appear within other modules, there is a specific naming convention to follow:
{OtherModule}_tab.[Modulaction].{BaseModule}.php where
- OtherModule - the module where the tab should appear;
- BaseModule - the name of your add on module;
- Moduleaction: is the filename where the tab should appear (without .php), e.g. view for view.php, day_view for day_view.php, etc.
In Practice
From our Todos Module example, there are a trio of files that give us Todos subviews within other modules.
- companies_tab.view.todolist.php - Includes a Todo List tab within a Company’s view page.
- contacts_tab.view.todolist.php - Includes a Todo List tab within a Contact’s view page.
- projects_tab.view.todolist.php - Includes a Todo List tab within a Project’s view page.
Notice: These tabs are cached whenever a User logs in, therefore after enabling a module that has subviews, any active users will have to log out and log back in to see the new tabs. | http://docs.web2project.net/docs/subviews.html | 2018-01-16T11:22:17 | CC-MAIN-2018-05 | 1516084886416.17 | [] | docs.web2project.net |
Database Replication and Clustering
The tpm update command is used when applying configuration changes or upgrading to a new version. The process is designed to be simple and maintain availability of all services. The actual process will be performed as described in Section-replicator-5.3.0-348.
An installation based on an INI file may include this line but there
will be an
/etc/tungsten/tungsten.ini file on each
node.
Upgrading to a new version
If a staging directory was used; see Section 10.3.6, “Upgrades from a Staging Directory”.
If an INI file was used; see .Section 10.4.3, “Upgrades with an INI File”
Applying configuration changes to the current version
If a staging directory was used; see Section 10.3.7, “Configuration Changes from a Staging Directory”.
If an INI file was used; see Section 10.4.4, “Configuration Changes with an INI file”. | http://docs.continuent.com/tungsten-replicator-5.3/cmdline-tools-tpm-commands-update.html | 2018-01-16T11:35:56 | CC-MAIN-2018-05 | 1516084886416.17 | [] | docs.continuent.com |
EXECUTIVE SUMMARY 1
CHAPTER ONE: OVERVIEW OF NUMBERING 6
A. Inefficient Use and Increasing Demand for New Numbers in California Is Causing Area Code Proliferation 6
B. 408 History and CPUC Decisions 7
1. Monthly Lottery Allocates Prefixes 8
C. CPUC Efforts to Resolve Area Code Proliferation 9
2. Improved Number Inventory Management 10
3. CPUC Efforts at Federal Level 11
4. Utilization Studies 14
CHAPTER TWO: 3.81 MILLION UNUSED NUMBERS IN THE 408 AREA CODE 15
A. The Scope of the Utilization Study 15
1. Prefix Distribution Statistics 15
2. Companies Reporting 16
3. Non-Reporting Companies 16
B. Numbers Available in the 408 Area Code 16
1. 3.81 Million Numbers Available in the 408 Area Code 16
C. Analysis of Available Numbers 20
1. Analysis of Wireline Carriers' Contamination Rates 20
2. Analysis of Wireless Carriers' Contamination Rates 22
3. Potential Block Contamination Abuses 24
4. Reclamation of Prefixes 25
D. Analysis of 3.97 Million Unavailable Numbers 26
1. 3.27 Million Assigned Numbers 27
2. Reserved Numbers Are a Potential Source of Additional Numbers 31
3. Restrictions on Administrative Numbers Could Yield More Numbers 33
4. Intermediate Numbers 34
6. The Need to Audit the Data 38
CHAPTER THREE: NUMBER POOLING AND OTHER NUMBER CONSERVATION MEASURES 39
1. More Accurate Forecasting Will Improve Number Pooling 40
C. Lack of Local Number Portability Stands as a Key Barrier to Pooling 41
D. Unassigned Number Porting 43
E. Consolidation of Rate Centers to Maximize Number Use 44
F. Sharing Prefixes May Yield More Efficient Number Use 46
CONCLUSION 48
APPENDICES 50 408 area code, from its creation in 1959 through the various splits to its present status, now covering Santa Clara County south of Sunnyvale and Cupertino, and small portions of Alameda, Santa Cruz, and Stanislaus Counties. Most of the 408 area code is contained within the San Jose Metropolitan Statistical Area (MSA). This report should be viewed in a broader context than the facts pertaining solely to the 408 area code. The report evaluates the status of number availability in the 408 area code, and discusses the various state and federal policies which govern number use in California and nationwide. In addition, the report analyzes number use by carrier category and identifies what measures the CPUC can employ in the 408 and other area codes to improve efficiency of number use in order to avoid prematurely opening new area codes. Data is self-reported by the companies; the CPUC staff has not audited any of the 408 utilization data submitted for this study and report.
The utilization study sheds new light on the numbering crisis in the 408 area code. The data reveals that despite increasing demand for numbers, the 408 area code is not fully utilized. The study found that, of the 7.78 million useable numbers in the 408 area code, approximately 3.81 million, or slightly less than half, presently are not in use. The data further establishes that the 408 area code possesses considerable room for growth, and thus, aggressive measures such as splits or overlays are not yet warranted in the 408 area code. The report further urges the CPUC to seek from the FCC authority to implement Unassigned Number Porting (UNP) as a means to more efficiently use numbers still available in the 408 408 area code continues TD's analysis covering number utilization levels in specific area codes.
The 408 area code contains approximately 7.78 million useable telephone numbers. These numbers are available to telecommunications companies, which are not available for public use, as they have been set aside for emergency purposes, for technical network support, or for other reasons.
The FCC has defined numbers in these five categories - assigned, administrative, reserved, intermediate, or aging - as unavailable, either because they are already in use or are designated for some present or future use. Of the 3.81 million available numbers, 1.45 million have been set aside by the CPUC to use in a lottery for companies seeking numbers, and for donation to the future 408 number pool.2 Companies possess the remaining 2.36 million numbers. Wireline carriers, such as Pacific Bell and many competitive local exchange carriers, hold roughly 1.67 million available numbers, while wireless carriers3 hold approximately 690,000 available numbers.
At the same time, the 408 study finds that under FCC rules, about 1.76 million numbers cannot be contributed to the 408 number lottery, nor can they be contributed to the future 408 number pool for reassignment to other companies. The FCC has determined that wireless carriers do not have to participate in number pools at this time.4 In addition, they need to use those blocks within six months. Thus, 1.76 million numbers in the 408 area code are available only to the companies holding those numbers, because they are held by wireless carriers, are in blocks that are more than 10% contaminated, or are in blocks 10% or less contaminated but kept for six-month inventory. The study further finds that, of the 3.81 million numbers not in use, about 2.62 million could be made available to companies through pooling if (a) the companies were required to donate blocks with higher contamination levels to the future pool, and (b) wireless carriers were required to participate in the 408 number pool. The first table below illustrates the current distribution of numbers. The second table shows the distribution that would occur if all the recommendations in this report were implemented.
Finally, the study notes that companies identify 3.97 408 Area Code Report recommendations are summarized in Appendix I.s that is inefficient in today's competitive marketplace.
Prior to 1997, one phone company6 provided local telephone service to all customers in a particular area, and new area codes were opened as the population grew. The number of California area codes rose steadily from 3. 408 HISTORY AND CPUC DECISIONS
The 408 area code is a classic example of area code proliferation in California. Originally, the 408 area code was part of the 415 area code, one of the first three area codes created in California in 1947. The 415 area code originally covered all of central California. The 408 area code was created in June 1959 when it was split from the 415 area code. The 408 area code was reduced in size in 1999 when San Benito County and most of Monterey County and Santa Cruz County were assigned a new area code, 831. The 408 area code currently includes Santa Clara County south of Sunnyvale and Cupertino, and small portions of Alameda, Santa Cruz, and Stanislaus Counties. Most of the 408 area code is contained within the San Jose Metropolitan Statistical Area (MSA).
Despite the knowledge that the 408 area code would be split into the 831 area code in 1999 to provide new numbers to the area, the North American Numbering Plan Administrator (NANPA) determined in 1997 that the 408 area code was running short of numbers. In response to the NANPA's determination that the CPUC must act to provide additional numbers for phone company use, the CPUC approved an area code overlay on November 19, 1998. new area code was scheduled to be overlaid on the 408 area code on January 1, 1999, with mandatory 1 + 10 digit dialing to begin on October 1, 1999. 408 area code. In that same decision, the CPUC required its Telecommunications Division (TD) staff to study number use to determine the quantity of available, unused numbers in the 408 area code. This report fulfills that requirement.8
1. Monthly Lottery Allocates Prefixes
For those area codes nearing number exhaust, the CPUC has instituted a lottery process to fairly allocate the remaining prefixes among phone companies when demand exceeds supply. The 408 lottery began in May 1997. Currently, the CPUC distributes three prefixes (two initial prefixes and one growth prefix9) in the monthly 408. Thirty-eight prefixes have been allocated in the 408 area code through this process between January 1, 2000 and December 31, 2000. most populous 100 Metropolitan Statistical Areas (MSAs) in the country. Thirteen of the top 100 MSAs are located in California; the 408 area code is in one of them with rationing in effect, all phone companies participate in the lottery.
The CPUC has been aggressively setting up number pools. In November, 2000, by an Assigned Commissioner's Ruling, the CPUC established a schedule for ten pooling trials for 2001. The ruling set a pooling trial for the 408 area code to begin in April 2001, later changed to May 12, 2001 by Assigned Commissioner's Ruling of February 9, 2001. All wireline companies with numbers in 408 will be required to donate 1,000-number blocks to the pooling administrator. Under the number pooling program, all LNP-capable carriers will receive numbers in blocks of 1,000 in the 408 area code on an as-needed basis. There is no rationing process in the pooling trial and the blocks received can be put into service almost immediately upon receipt. All wireless carriers will continue to receive numbers in blocks of 10,000 through the monthly lottery allocation process. provide to the NANPA a form demonstrating they will be out of numbers within three months.
· Companies must satisfy a minimum 75% fill rate requirement before being eligible to request a growth prefix in any area code in rationing and before being eligible to receive a thousand-block through a number pool. Companies must assign numbers in thousand-block sequence, assigning numbers 408, CPUC staff has observed that the demand for growth prefixes in each month's lottery has declined. Further evidence of the effectiveness of the CPUC's number conservation policies is the recent increase in the number of excess prefixes in the 408 area code being returned to the NANPA by companies. Sixteen prefixes have been returned to the NANPA in calendar year 2000, with the CPUC working with companies to reclaim excess prefixes held by companies. As of December 31, 2000, there were 130 prefixes available for assignment in the 408 area code. numbering) cited authority to create service-specific or technology-specific area codes. In the 408 area code, there are 20 wireless carriers holding 165 prefixes. If the CPUC were allowed to create a separate area code for those companies, the 165 prefixes in the 408 408 area code to undergo another area code change, the CPUC recognized the necessity of determining the number of telephone numbers that are in use and the number yet to be used. To that end, the CPUC instituted a utilization study of the 408 area code, and required companies to provide usage data to the CPUC as of April 30, 2000. The TD contracted with NeuStar to collect the data; NeuStar submitted the aggregated data in its entirety to TD on August 18, 2000. The definitions used in the utilization study are in Appendix A-1.
Of the 7.78 million numbers in the 408 area code, companies hold 6.33 million. The other 1.45 million numbers have yet to be assigned to companies. The CPUC's utilization study found that, of the 6.33 million numbers held by companies, 2.36 million remain unused in their inventories. Therefore, 3.81 million numbers in the 408 area code remain unused. A portion of these unused numbers can be made available for use by all companies, either through pooling or through the monthly lottery allocation process. In addition, companies have reported 3.97 million numbers as unavailable. A portion of these unavailable numbers can be used more efficiently if the recommendations contained in this report are implemented.
A. THE SCOPE OF THE UTILIZATION STUDY
2. Prefix Distribution Statistics
The CPUC asked 49 companies, holding 633 prefixes (6.33 million numbers) in the 408 area code, to report their utilization data, with a reporting cutoff date of April 30, 2000. Table 2-1 shows the distribution of these prefixes by type of carrier; incumbent local exchange carrier (ILEC), competitive local exchange carrier (CLEC),15 and wireless carrier.
3. Companies Reporting
Of the 49 companies in the 408 area code, 47 submitted utilization data. Although one company submitted data too late to be included in the summaries provided by NeuStar, TD has considered this late filer in its analysis. A list of the companies that have been allocated numbers in the 408 area code appears in Appendix A-2.
4. Non-Reporting Companies
The remaining two companies holding six prefixes in the 408 area code are no longer offering service in the 408 area code. Both CRL Network Services and GTE Communications Corporation wrote to NeuStar that they were returning all their prefixes in California. The NANPA has confirmed that CRL Network Services and GTE Communications Corporation have returned all six of the prefixes they held in the 408 area code.
C. NUMBERS AVAILABLE IN THE 408 AREA CODE
1. 3.81 Million Numbers Available in the 408 Area Code
The 408 area code has 3.81 million unused numbers. Of these unused numbers, TD found that companies held 2.36 million numbers in their inventories.16 These numbers held in inventory are currently not used for any purpose but held in anticipation of future need. The remaining 1.45 million unused numbers are not yet assigned to companies, and are available for allocation in the 408 monthly lottery. The summary of available numbers is shown in the table below.
Table 2-2
Summary of Available Numbers
Wireline Carriers 1,666,225
Wireless Carriers 578,192
Type 1 Carriers17 113,275
Total Available/Unused Numbers Held by Carriers 2,357,692
Numbers Available for the 408 Lottery 1,450,000
Total Available Numbers in the 408 Area Code 3,807,692
Not all of the 3.81 million unused numbers are immediately available to every company that wants numbers. Of the 3.81 million, a maximum of 2.05 million numbers18 are estimated to be available to all companies via the pooling trial or the lottery. The remaining 1.76 million numbers are only available to the companies that hold them. As shown in the table below, the CPUC could shift 0.57 million numbers from one category to the other by adopting the recommendations19 in this report. Of the 3.81 million unused numbers, those actions could result in making a maximum of 2.62 million numbers20 available to all companies, with the remaining 1.19 million numbers available to the companies that hold them.
Current technology requires a company to be LNP capable in order to donate numbers for another company to use. All wireline carriers in the 408 area code are required to be LNP capable.21 Wireline carriers hold 1.67 million unused numbers in the 408 area code. In order for the unused numbers to be retrieved from company inventories, the FCC requires these unused numbers to be retrieved from blocks which are 10% or less contaminated.22 Of wireline companies' 1.67 million unused numbers, 960,000 are contained in 973 thousand-blocks held by LNP-capable companies that are 10% or less contaminated. However, not all of these 960,000 numbers can be retrieved from companies' inventories because companies need to have enough numbers to meet anticipated future need.23 Both the CPUC and the FCC have determined that six months of inventory is a reasonable quantity to hold for future use. TD will not know how many of these 960,000 numbers will be available for pooling until companies submit their pooling block donations to the pooling administrator, which is scheduled to occur on May 4, 2001.24 In the meantime, a reasonable estimate of numbers likely to be donated to the 408 pool, based on the experience of the 310 pool, is 600,00025. The difference between the potential maximum 960,000 currently poolable numbers that wireline carriers hold and the 600,000 numbers estimated as likely to be donated to the pool consists of 360,000 numbers that companies are estimated to need for their six-month inventories.
The remaining 700,000 of the 1.7 million unused numbers cannot be retrieved, either because the numbers are in blocks more than 10% contaminated or because they are in non-LNP-capable blocks. However, companies can immediately use these numbers to provide service to their customers or meet other needs. Wireline carriers hold 650,000 numbers in blocks that are more than 10% contaminated.26 Wireline carriers hold another 40,000 unused numbers in blocks that are non LNP-capable. Special-use prefixes27 are generally not LNP capable, and constitute 10,000 of the 1.7 million unused numbers.
Wireless carriers hold 580,000 unused numbers in the 408 area code. Of these unused numbers, 290,000 are in blocks that are 10% or less contaminated, and 290,000 numbers are in blocks more than 10% contaminated. Until wireless carriers become LNP capable in November 2002, none of these numbers may be reallocated to other companies. In the interim, wireless carriers may assign these numbers to their own customers.
D. ANALYSIS OF AVAILABLE NUMBERS
1. Analysis of Wireline Carriers' Contamination Rates
The CPUC requires each company participating in the 408 number pool to donate blocks that are 10% or less contaminated, excluding those retained for the company's six-month inventory.28
TD analyzed the 408 utilization data to determine the availability of numbers within blocks of different contamination rates, to assess different contamination thresholds that could be employed in the number pool. The following table summarizes available numbers by contamination level by rate center for wireline carriers. all rate centers except Hollister, Salinas, and San Antonio have available numbers that companies could donate to the pool. The first two of these three rate centers are no longer located within the present boundaries of the 408 area code; they will continue to serve wireless customers who chose to retain their 408 area code number when the 831 area code split from 408, but will no longer be exerting demand for prefixes in the 408 area code. Thus, there should be no demand from wireline companies for 408 prefixes in these rate centers.
The last three columns of Table 2-4 capture available numbers in blocks that are more than 10% contaminated but no more than 25% contaminated. Under the current number pool rules, companies retain thousand-number blocks that are more than 10% contaminated. Increasing the contamination rate threshold for donations from 10% to 25% would potentially free up an additional 134,00029 numbers for use in the number pool. TD cautions that, although Table 2-4 shows potential results from increasing allowable contamination levels, further analysis and input from the industry may be necessary to determine accurately the quantity of additional blocks that could were increased from 10% to 25%, more unused numbers exist in most rate centers that potentially could Rates 289,000 available numbers in blocks that are 10% or less contaminated, as shown in the first two numeric columns of Table 2-5. Wireless carriers also have 66,000 available numbers in blocks with contamination levels greater than 10% but less than or equal to 25%, as indicated by the last three columns of Table 2-5. Of these 356,000 unused numbers held by wireless carriers, TD estimates that 160,000 (45%) are held by paging companies30. TD staff is investigating whether there are methods to make some of these 160,000 unused numbers available to other carriers despite the FCC's exemption of paging companies from becoming LNP-capable.
Because the FCC has granted wireless carriers an extension of time to implement LNP, no wireless carriers serving the 408 area code are capable of implementing LNP. Thus, wireless carriers cannot participate in number pooling at this time, resulting in 356,000 unused numbers in blocks between 0% and 25% contaminated in the 408 area code..
3. Potential Block Contamination Abuses
When blocks are slightly more than 10% contaminated, those blocks cannot be donated to the pool under current pooling rules. In the 408 area code, TD found four prefixes in which companies have contaminated three blocks in each of these prefixes above 10% but less than 15%. In another twelve prefixes, companies have contaminated two blocks in each prefix above 10% but less than 15%. These instances are a small proportion of the 6,300 blocks in use in the 408 area code, and do not necessarily indicate that companies have intentionally contaminated blocks to avoid having to donate them to the number pool. Viewing the utilization data suggests, however, that companies have not generally followed practices of sequential numbering and filling blocks substantially before using new blocks. The CPUC's rules on sequential numbering and fill rate practices promulgated in Decision 00-07-052 are designed to ensure that companies efficiently use their numbers in the future. Fill rates mitigate contamination by requiring companies to use contaminated blocks up to 75% before they can receive additional blocks or prefixes. rate decision was issued in July 2000. Therefore, TD does not expect companies to continue contaminating blocks unnecessarily..31
4. Reclamation of Prefixes
Decision 00-07-052 directed companies to return prefixes that are held unused for more than six months. As shown in Appendix B-1, wireline carriers and wireless carriers hold 707,000 unused numbers and 184,000 unused numbers, respectively, in 0% contaminated blocks. Of these unused numbers, 170,000 are in 17 whole prefixes that are completely uncontaminated, i.e., spare prefixes, while 721,000 numbers are in uncontaminated blocks that are scattered throughout many different prefixes. The following table shows the breakdown between wireless and wireline carriers.
Table 2-6
Breakdown of Numbers in 0% Contaminated Blocks
The 170,000 numbers in 17 spare prefix holders have activated prefixes within the allowed time frames, and directed the NANPA to abide by the state commission's determination to reclaim a prefix if the state commission is satisfied that the prefix holder has not activated the prefix within the time specified in the first NRO Order.32.
The either that the prefix(es) have been placed in service or returned, that the company was incorrectly included in the NANPA's delinquent list, or the reasons the prefix(es) have not been placed in service. The CPUC will review the reasons and make a determination as to whether the prefix(es) must be returned or reclaimed by the NANPA, or whether to grant an extension of time to the company to place the prefix(es) in service. Any delinquent company that fails to comply will be subject to penalties and sanctions.
E. ANALYSIS OF 3.97 MILLION UNAVAILABLE NUMBERS
In the following sections, the TD recommends a series of policies designed to require companies to use unavailable numbers more efficiently. These policies would potentially free more numbers for use in the pool, to be allocated through the monthly lottery, or to be used otherwise by companies.
Companies report that 3.97 million numbers in the 408 area code are either assigned to customers or are used by companies for reserved, administrative, intermediate and aging purposes;
· Intermediate numbers - Numbers that are made available for use by another telecommunications carrier or non-carrier entity for the purpose of providing telecommunications service to an end user or customer; and
· Aging - Numbers from recently disconnected service, which are not reassigned during a fixed interval.
In the following sections, the TD recommends a series of policies designed to require companies to use unavailable numbers more efficiently. These policies would potentially free more numbers for use in the pool, to be allocated through the prefix lottery, or to be used otherwise by companies.
1. 3.27 Million Assigned Numbers
In the 408 area code, there are 3.27 million assigned numbers, with 2.39 million assigned to customers by wireline carriers and 0.89 million assigned to customers by wireless carriers33. The percentages of assigned numbers to total numbers held by companies are shown in the table below.
Table 2-7
Assigned Numbers to Numbers Held by Companies (in millions)
Assigned Numbers Nos. Held by Companies Percentage
Wireline Carriers 2.39 4.68 51.0%
Wireless Carriers 0.89 1.65 53.7%. Only one company reported 8,000 assigned numbers in the non-working wireless category for the 408. 34 408 area code is included in one of the top 100 MSAs in the nation, all wireline carriers in 408 should be LNP-capable.35 The only companies that reported INP numbers were ILECs. They reported a total of 255 INP numbers in the 408 area code. Since all the reported INP numbers were from ILECs and none were from their competitors, it does not appear that INP exists in the 408 area code to facilitate competition for customers. Thus, TD questions why any INP numbers exist in this area code. Switching to LNP technology and eliminating INP will free up half of the 255 numbers that are currently dedicated to INP. and weather forecasts, high-volume call-in numbers, and emergency preparedness 36 four prefixes are dedicated for special uses; one each for directory assistance, high-volume calling, time, and emergency preparedness. Excluding high-volume calling, companies reported 30,000 unavailable numbers in three special-use prefixes. TD questions the necessity of assigning an entire prefix for each purpose.
Furthermore, having multiple special-use prefixes is an inefficient use of numbers in the 408 area code as well as in other area codes in California. For example, if the 555 prefix, currently reserved only for directory assistance,37 could be used to provide time and emergency preparedness service, then two more prefixes could be returned for reallocation in the 408.38 Previously, industry number assignment guidelines allowed companies to reserve a prefix for up to 18 months for customers' future use.39 The FCC's first NRO Order modified the number reservation period to 45 days. This 408 utilization study incorporated the FCC's 45-day requirement. The FCC later issued an extension until December 1, 2000 for companies to comply with the 45-day rule. 40 The extension allows companies time to upgrade their number tracking mechanisms, which tally the quantities of reserved numbers they hold. The FCC's second NRO Order changed the number reservation period to 180 days. This took effect on December 29, 2000.41 Companies reported a total of nearly 269,000 reserved numbers in the 408 utilization study. 42
Wireline carriers reported a total of about 259,000 reserved numbers in the 408 408 number pool, once established.43 Currently there are no limitations on the quantity or percentage of numbers a company can classify as reserved before requesting new numbers. Similarly, companies are not required to use their reserved numbers stock before they can request that new numbers be allocated to them. Comparing the data on the San Martin rate center and the Campbell rate center illustrates wide discrepancies between the quantity of reserved numbers companies hold. Wireline carriers reserved over 17 times as many numbers as a percentage of numbers held in the latter rate center.44 In another example, one company holds over 6,800 reserved numbers in one prefix in the Campbell rate center. Other companies in that same rate center hold as few as zero reserved numbers. If the CPUC orders efficient use practices specific to reserved numbers, more numbers could be made available for customer use.
Wireless carriers reported nearly 10,000 reserved numbers in the 408 area code. Wireless carriers also reported wide variances in reserved numbers. In the San Jose South rate center, two wireless carriers reported no reserved numbers. By contrast, one wireless carrier in that rate center reported nearly 2,700 reserved numbers in one prefix. Just as for wireline carriers, efficient number use practices specific to reserved numbers could immediately free up numbers within wireless carriers' 45,000 administrative numbers in the 408 area code. Wireline carriers hold about 31,000 of these numbers and wireless carriers hold about 14,000 of them.
The utilization study revealed that there is a potential for companies to over-assign administrative numbers within a particular thousand-block, prefix, or rate center in the 408 area code. The following examples demonstrate this potential for over-assignment. In the Gilroy rate center, a company is using over 291 numbers for administrative purposes in one prefix, while the average across all companies is 71 numbers per prefix. In addition, the Saratoga rate center uses almost seven times as many administrative numbers as a percentage of numbers assigned as San Jose South. 321,000 intermediate numbers in the 408 area code. Wireline carriers hold about 247,000 of those numbers and wireless carriers hold about 74,000. The quantity of intermediate numbers varied significantly among rate centers in the 408 companies' reports. In the 408 area code, over 70% neither by the wireline nor.
Improved Type 1 number management is particularly crucial because, unlike numbers held by most wireless carriers, Type 1 numbers are eligible for number pooling.48 Therefore, once unused Type 1 numbers are recovered by wireline carriers, these numbers could be made available for pooling. Despite the problems with reporting, TD has identified 11 blocks of Type 1 numbers in the 408 area code that may be eligible for donation to the pool.49 The Commission that the CPUC develop a system to ensure compliance.50 for business numbers.
In the 408 area code, there are approximately 221,000 numbers in the aging category, representing 5.43% of the total unavailable numbers. While most companies track aging telephone numbers by business and residential categories, Pacific Bell, the largest single holder of numbers in the 408 7.84% of the total unavailable wireless numbers, or about 84,000 numbers. Aging numbers represent 4.57% of the total unavailable wireline numbers, or about 138,000 numbers. This is consistent with the higher turnover or "churn" that occurs in the wireless industry. Table G-1, in utilization study was self-reported by companies. Given the area code crisis in California, the CPUC should audit this data for two reasons. First, verifying number usage data is important to ensure that the public resource of telephone numbers is efficiently managed. Second, audits will help verify whether companies are complying with CPUC and FCC rules for number usage.
Recommendation for Audit
· The CPUC should audit the data submitted by companies in this study and future area code number utilization studies.
F. INTRODUCTION
Many of the recommendations in Chapter Two resulted directly from the analysis of the utilization data and address actions that the CPUC should undertake to make additional numbers available either for pooling or for the regular monthly lottery. The recommendations contained in this chapter suggest additional conservation measures as required by Public Utilities Code Section 7935(a). The CPUC could adopt the following conservation measures in the 408 area code and statewide: LNP-related actions, Unassigned Number Porting, Rate Center Consolidation, and prefix sharing. When applied, these conservation measures would result in uniform policies which will cause companies to use numbers more efficiently across California and would minimize customer confusion..51 Prior to pooling, 128 prefixes would have been opened to satisfy the demand for numbers. Number pooling has avoided the need to open prefixes and extended the life of the 310 area code by at least 18 months. 52
The positive experience in 310 is mirrored in 415, 714 and 909. Only two prefixes have been opened, and the numbering needs of companies have been met.53 Pooling has saved 44 prefixes in these three area codes.
Pooling benefits not only the public but the companies as well by reducing the time necessary to acquire numbering resources. Without pooling, activating new numbers takes at least 66 days.54 With number pooling, new numbers can be activated truly a.55.56. 408 area code is critical to effective number conservation. As described in Chapter 1,.57 The 408 area code falls within one of the top 100 MSAs. This study reveals that all but three wireline carriers in the 408 area code are LNP capable. These companies hold 35,000 numbers that could be made available for number pooling, if they implemented LNP technology.58 This non-compliance could likely be explained by the existence of subsequent FCC documents contradicting the original LNP order. However, in the Second Report and Order adopted December 7, 2000, the FCC resolved the confusion with a statement in footnote 399, 165 prefixes in the 408 area code, of which 292.
Recommendations for LNP
· The CPUC should request that non-LNP-capable wireline carriers in the 408 area code become LNP capable.
I..
J..
Recommendations for Rate Center Consolidation
· The CPUC should undertake further investigation by ordering the telecommunications industry to develop a plan, within 180 days, for rate center consolidation. 408 in which number pooling has not been implemented. Sharing prefixes between companies appears worthy of further investigation by the CPUC as a mechanism to promote more efficient use of numbers.
Recommendations for Sharing of Prefixes
· The CPUC should further explore sharing of prefixes as a means to more efficiently utilize numbers in all area codes.
Analyzing the utilization data provided by carriers has provided useful information regarding number availability and usage practices in the 408 area code. It also has offered insights into developing better public policies to improve efficiency of number use.
We now know that, of the approximately 7.78 million usable numbers in the 408 area code, approximately 3.81 million, or roughly half, presently are not in use. Despite the increasing demand for numbers, the 408 area code is not fully utilized. The data indicates that there is considerable room for growth within the existing 408 area code, and it is premature to consider splitting or overlaying the 408 area code at this time.
The CPUC already has directed carriers to employ measures to use the numbering resources in 408 more efficiently. Recently adopted fill rates and sequential numbering rules will ensure that carriers use their existing resources more fully and receive additional numbers only on an as-needed basis. The proposed number pooling trial to begin in May 2001 will assure that all LNP-capable carriers are given numbers expeditiously and in usable blocks. Allocating numbers in thousand-block increments rather than in full prefixes of 10,000 numbers will ensure, 1.76 million numbers are not in use in 408 but cannot be reassigned to other carriers. Changes in contamination thresholds and requiring LNP capability for all carriers could make about 570,000 of these stranded numbers available for reassignment.
The CPUC should continue its collaborative process with the FCC and the telecommunications industry to implement Unassigned Number Porting, the development of non-geographic-specific area codes, and other measures that
*0 Test Numbers: Telephone numbers (TNs) assigned for inter-and intra-network testing purposes
*1 Other Administrative Numbers (include only Location Routing Number, Temporary Local Directory Number and Wireless E911 ESRD/ESRK) where
*2 Identical to a Local Routing Number (LRN): The ten-digit (NPA-XXX-XXXX) number assigned to a switch/point of interconnection (POI) used for routing in a permanent local number portability environment
*3 Temporary Local Directory Number (TLDN): A number dynamically assigned on a per call basis by the serving wireless service provider to a roaming subscriber for the purpose of incoming call setup
*4:
0.
1.
2.
APPENDIX D
Table D-1
Table D-2
APPENDIX E
Table E-1
Table E-2
APPENDIX F
Table F-1
Table F-2
APPENDIX G
APPENDIX H
TABLE H-1
NUMBER POOLING
APPENDIX H
NUMBER POOLING
SUMMARY OF RECOMMENDATIONS Audit 408 area code become LNP capable.
Recommendations for UNP., I.87-ll-033. The IRD phase took three years to complete. 67. 68 See Chapter 1 for the discussion of Decision 00-07-052. | http://docs.cpuc.ca.gov/Published/Report/5693.htm | 2015-06-30T03:26:04 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.cpuc.ca.gov |
Difference between revisions of "Extensions Module Manager Banners"
From Joomla! Documentation
Revision as of 17:37, 15 September 2012
Contents
How to Access
Description
This Module allows you to show active Banners from the Banner Component created in the Banner Manager screen. The Module Type name for this is "mod_banners".
Screenshot
File:Extensions-Module-Manager-Banners-Banners-basic-options-parameters.png
--Banners. | https://docs.joomla.org/index.php?title=Help32:Extensions_Module_Manager_Banners&diff=74660&oldid=74659 | 2015-06-30T04:53:00 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
Revision history of "JDocumentRendererMessage/11.1"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 17:28, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JDocumentRendererMessage/11.1 to API17:JDocumentRendererMessage without leaving a redirect (Robot: Moved page) | https://docs.joomla.org/index.php?title=JDocumentRendererMessage/11.1&action=history | 2015-06-30T04:59:35 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
Set an alarm using the Voice Control app
To open the Voice Control app, press and hold the
Mute
key on the right side of your
BlackBerry
device.
After the beep, say "Set an alarm" and the time that you want to set the alarm for. For example, "Set an alarm for 6:00" or "Set an alarm for one hour from now."
Parent topic:
Using the BlackBerry Voice Control app | http://docs.blackberry.com/en/smartphone_users/deliverables/61705/amc1349461949369.html | 2015-06-30T03:34:09 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.blackberry.com |
The Hortonworks Data Platform consists of three layers of components. A coordinated and tested set of these components is sometimes referred to as the Stack.
Core Hadoop 1: The basic components of Apache Hadoop version 1.x.
Hadoop Distributed File System (HDFS) : A special purpose file system designed to provide high-throughput access to data in a highly distributed environment.
MapReduce: A framework for performing high volume distributed data processing using the MapReduce programming paradigm.
Core Hadoop 2: The basic components of Apache Hadoop version 2.x.
Hadoop Distributed File System (HDFS) : A special purpose file system designed to provide high-throughput access to data in a highly distributed environment.
YARN: A resource negotiator for managing high volume distributed data processing. Previously part of the first version of MapReduce.
MapReduce 2 (MR2) : A set of client libraries for computation using the MapReduce programming paradigm and a History Server for logging job and task information. Previously part of the first version of MapReduce.. Included with Apache HCatalog.
Apache HCatalog: A metadata abstraction layer that insulates users and scripts from how and where data is physically stored. Now part of Apache Hive. Includes WebHCat, which. It is not supported in the context of Ambari at this time.. | http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.7/bk_using_Ambari_book/content/ambari-chap1-1.html | 2015-06-30T03:27:55 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.hortonworks.com |
- Docs
- DDoc Documentation
-
- Getting Started
-
- Required Plugins
Required Plugins
Estimated reading : 2 minute
These plugins are required/recommended for the DDoc – documentation WordPress theme.
- DDoc Core *: This plugin adds the core features to the DDoc – documentation WordPress theme. To access all the features of the DDoc DDoc – documentation.
- Void Contact Form 7 Widget: Use this plugin to create forms.
- bbPress *: This plugin adds forum functionality to WordPress.
- weDocs *: This plugin simplifies the documentation creation process.
Note: The plugins marked with an asterisk (*) are required. They must be installed along with DDoc. Additional plugins are recommended. All of these plugins are available under Appearance > Install Plugins.
Still Stuck?
We can help you. Create a Support Ticket | https://docs.droitthemes.com/docs/ddoc-documentation/getting-started/required-plugins/ | 2022-06-25T07:17:36 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['https://docs.droitthemes.com/wp-content/themes/ddoc/assets/images/Still_Stuck.png',
'Still_Stuck'], dtype=object) ] | docs.droitthemes.com |
_ID area information available. | https://docs.genesys.com/Documentation/RAA/latest/PDMMS/Table-AGT_ID_FCR_HOUR | 2022-06-25T08:01:48 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.genesys.com |
HBv3-series virtual machine overview
Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets
An HBv3-series server features 2 * 64-core EPYC 7V73X CPUs for a total of 128 physical "Zen3" cores with AMD 3D V-Cache. Simultaneous Multithreading (SMT) is disabled on HBv3. These 128 cores are divided into 16 sections (8 per socket), each section containing 8 processor cores with uniform access to a 96 MB L3 cache. Azure HBv3 servers also run the following AMD BIOS settings:
Nodes per Socket (NPS) = 2 L3 as NUMA = Disabled NUMA domains within VM OS = 4 C-states = Enabled
As a result, the server boots with 4 NUMA domains (2 per socket) each 32-cores in size. Each NUMA has direct access to 4 channels of physical DRAM operating at 3200 MT/s.
To provide room for the Azure hypervisor to operate without interfering with the VM, we reserve 8 physical cores per server.
VM topology
The following diagram shows the topology of the server. We reserve these 8 hypervisor host cores (yellow) symmetrically across both CPU sockets, taking the first 2 cores from specific Core Complex Dies (CCDs) on each NUMA domain, with the remaining cores for the HBv3-series VM (green).
Note that the CCD boundary is not equivalent to a NUMA boundary. On HBv3, a group of four consecutive (4) CCDs is configured as a NUMA domain, both at the host sever level and within a guest VM. Thus, all HBv3 VM sizes expose 4 NUMA domains that will appear to a OS and application as shown below, 4 uniform NUMA domains, each with different number of cores depending on the specific HBv3 VM size.
Each HBv3 VM size is similar in physical layout, features, and performance of a different CPU from the AMD EPYC 7003-series, as follows:
Note
The constrained cores VM sizes only reduce the number of physical cores exposed to the VM. All global shared assets (RAM, memory bandwidth, L3 cache, GMI and xGMI connectivity, InfiniBand, Azure Ethernet network, local SSD) stay constant. This allows a customer to pick a VM size best tailored to a given set of workload or software licensing needs.
The virtual NUMA mapping of each HBv3 VM size is mapped to the underlying physical NUMA topology. There is no potentially misleading abstraction of the hardware topology.
The exact topology for the various HBv3 VM size appears as follows using the output of lstopo:
lstopo-no-graphics --no-io --no-legend --of txt
Click to view lstopo output for Standard_HB120rs_v3
Click to view lstopo output for Standard_HB120rs-96_v3
Click to view lstopo output for Standard_HB120rs-64_v3
Click to view lstopo output for Standard_HB120rs-32_v3
Click to view lstopo output for Standard_HB120rs-16_v3
InfiniBand networking
HBv3 VMs also feature Nvidia Mellanox HDR InfiniBand network adapters (ConnectX-6) operating at up to 200 Gigabits/sec. The NIC is passed through to the VM via SRIOV, enabling network traffic to bypass the hypervisor. As a result, customers load standard Mellanox OFED drivers on HBv3 VMs as they would a bare metal environment.
HBv3 VMs support Adaptive Routing, the Dynamic Connected Transport (DCT, in additional to standard RC and UD transports), and hardware-based offload of MPI collectives to the onboard processor of the ConnectX-6 adapter. These features enhance application performance, scalability, and consistency, and usage of them is strongly recommended.
Temporary storage
HBv3 VMs feature 3 physically local SSD devices. One device is preformatted to serve as a page file and will appear within your VM as a generic "SSD" device.
Two other, larger SSDs are provided as unformatted block NVMe devices via NVMeDirect. As the block NVMe device bypasses the hypervisor, it will have higher bandwidth, higher IOPS, and lower latency per IOP.
When paired in a striped array, the NVMe SSD provides up to 7 GB/s reads and 3 GB/s writes, and up to 186,000 IOPS (reads) and 201,000 IOPS (writes) for deep queue depths.
Hardware specifications
Software specifications
Note
Windows Server 2012 R2 is not supported on HBv3 and other VMs with more than 64 (virtual or physical) cores. See Supported Windows guest operating systems for Hyper-V on Windows Server for more details.
Next steps
- Read about the latest announcements, HPC workload examples, and performance results at the Azure Compute Tech Community Blogs.
- For a higher level architectural view of running HPC workloads, see High Performance Computing (HPC) on Azure.
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/hpc/hbv3-series-overview | 2022-06-25T08:25:52 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['media/architecture/hbv3/hbv3-topology-server.png',
'Topology of the HBv3-series server'], dtype=object)
array(['media/architecture/hbv3/hbv3-topology-vm.png',
'Topology of the HBv3-series VM'], dtype=object)
array(['media/architecture/hbv3/hbv3-120-lstopo.png',
'lstopo output for HBv3-120 VM'], dtype=object)
array(['media/architecture/hbv3/hbv3-96-lstopo.png',
'lstopo output for HBv3-96 VM'], dtype=object)
array(['media/architecture/hbv3/hbv3-64-lstopo.png',
'lstopo output for HBv3-64 VM'], dtype=object)
array(['media/architecture/hbv3/hbv3-32-lstopo.png',
'lstopo output for HBv3-32 VM'], dtype=object)
array(['media/architecture/hbv3/hbv3-16-lstopo.png',
'lstopo output for HBv3-16 VM'], dtype=object)] | docs.microsoft.com |
Initial Appliance Setup¶
Appliance Setup¶
After installation, log into the appliance at the URL presented upon completion. An initial setup wizard walks through the first account and user creations.
Enter Master Account name
Typically, the Master Account name is your Company name.
Create Master User
Username
Password * Must be at least 8 characters longs and contain one each of the following: Uppercase letter, lowercase letter, Number, Special Character
Enter Appliance Name & Appliance URL
The Appliance Name is used for white labeling and as a reference for multi-appliance installations.
The Appliance URL is the URL all provisioned instances will report back to. Example:. The Appliance URL can be changed later, and also set to different url per cloud integration.
Optionally Enable or Disable Backups, Monitoring, or Logs from this screen.
Note
You may adjust these settings from the Administration section.
Note
The Master Account name is the top-level admin account.
Note
The Master User is the system super user and will have full access privileges.
Upon completing of the initial appliance setup, you will be taken to the Admin -> Settings page, where you will add your License Key.
Login Methods¶
Master Tenant
Enter username or email. and password
Subtenant
To login, subtenants can either use the master tenant URL with
subtenant\username formatting:
- Example:
I have a username
subuserthat belongs to a tenant with the subdomain
subaccount. When logging in from the main login url, I would now need to enter in:
subaccount\subuser
Or use the tenant specific URL which can be found and configured under Administration > Tenants > Select Tenant > Identity Sources.
Important
In 3.4.0+ Subtenant users will no longer be able to login from the main login url without specifying their subdomain.
Configure Cloud-init Global Settings¶
When using cloud-init, cloudbase-init, VMware Tools customizations, or Nutanix Sysprep, Global Linux User and Windows Administrator credentials can be set using the settings in Administraiton - Provisioning. Its is recommended to define these settings after installation unless credentials are defined per Virtual Image for Provisioning.
Add a License Key¶
In order to provision anything in Morpheus , a Morpheus License Key must be applied.
If you do not already have a license key, one may be requested from or from your Morpheus representative.
In the Administration -> Settings section, select the LICENSE tab, paste your License Key and click UPDATE
When the license is accepted, your license details will populate in the Current License section.
If you receive an error message and your license is not accepted, please check it was copied in full and then contact your Morpheus representative. You can also verify the License Key and expiration at. | https://docs.morpheusdata.com/en/5.2.11/getting_started/appliance_setup/appliance_setup.html | 2022-06-25T07:14:27 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['../../_images/tenant_url.png', '../../_images/tenant_url.png'],
dtype=object)
array(['../../_images/license_key.png', '../../_images/license_key.png'],
dtype=object) ] | docs.morpheusdata.com |
@Target(value={METHOD,ANNOTATION_TYPE}) @Retention(value=RUNTIME) @Documented public @interface Bean,
depends-on, primary, or lazy. Rather, it should be used in conjunction with
@Scope,
@DependsOn,
@Primary,
and
@Lazy annotations to achieve those semantics. For example:
@Bean @Scope("prototype") public MyBean myBean() { // instantiate and configure MyBean obj return obj; }
@BeanMethods in
@ConfigurationClasses
Typically,
@Bean methods are declared within
@Configuration
classes. In this case, bean methods may reference other
@Bean methodsLite Mode
@Bean methods may also be declared within classes that are not
annotated with
@Configuration. For example, bean methods may be declared
in a
@Component class or even in a plain old class. In such cases,
a
@Bean method will get processed in a so-called 'lite' mode.
Bean methods in lite mode will be treated as plain factory
methods by the container (similar to
factory-method declarations
in XML), with scoping and lifecycle callbacks properly applied. The containing
class remains unmodified in this case, and there are no unusual constraints for
the containing class or the factory methods.
In contrast to the semantics for bean methods in
@Configuration classes,
'inter-bean references' are not supported in lite mode. Instead,
when one
@Bean-method invokes another
@Bean-method in lite
mode, the invocation is a standard Java method invocation; Spring does not intercept
the invocation via a CGLIB proxy. This is analogous to inter-
@Transactional
method calls where in proxy mode, Spring does not intercept the invocation —
Spring does so only in AspectJ an that the
DisposableBean() | https://docs.spring.io/spring-framework/docs/3.2.6.RELEASE/javadoc-api/org/springframework/context/annotation/Bean.html | 2022-06-25T07:20:45 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.spring.io |
Multiple Bragg reflection by a thick mosaic crystal. II. Simplified transport equation solved on a grid
Bornemann, Folkmar
Li, Yun Yvonna
Wuttke, Joachim
DOI:
Persistent URL:
Persistent URL:
Bornemann, Folkmar; Li, Yun Yvonna; Wuttke, Joachim, 2020: Multiple Bragg reflection by a thick mosaic crystal. II. Simplified transport equation solved on a grid. In: Acta Crystallographica Section A, Band 76, 3: 376 - 389, DOI: 10.1107/S2053273320002065.
The generalized Darwin–Hamilton equations [Wuttke (2014). Acta Cryst. A70, 429–440] describe multiple Bragg reflection from a thick, ideally imperfect crystal. These equations are simplified by making full use of energy conservation, and it is demonstrated that the conventional two-ray Darwin–Hamilton equations are obtained as a first-order approximation. Then an efficient numeric solution method is presented, based on a transfer matrix for discretized directional distribution functions and on spectral collocation in the depth coordinate. Example solutions illustrate the orientational spread of multiply reflected rays and the distortion of rocking curves, especially if the detector only covers a finite solid angle.
Statistik:View Statistics
Collection
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. | https://e-docs.geo-leo.de/handle/11858/8818 | 2022-06-25T07:24:57 | CC-MAIN-2022-27 | 1656103034877.9 | [] | e-docs.geo-leo.de |
Help Center
Local Navigation
Setting up the sample application in the BlackBerry JDE
Install the sample application
- Visit to download the sample application.
- Extract the mapfielddemo.zip file.
- On the taskbar, click Start > Programs > Research In Motion > BlackBerry JDE 4 MapFieldDemo.jdp file.
- Click Open.
Run the sample application
- In the workspace where you added the mapfielddemo project, right-click MapField.7.0 > Device Simulator.
- In the application list, click the Downloads folder.
- Click the Map Field Demo icon.
Next topic: Install the sample application
Previous topic: Run the sample application
Was this information helpful? Send us your comments. | http://docs.blackberry.com/it-it/developers/deliverables/12611/Setting_up_for_JDE_organizer_1009971_11.jsp | 2014-10-20T08:14:07 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.blackberry.com |
Welcome to the FEST Swing Module
FEST-Swing)
- Reliable GUI component lookup (by type, by name or custom search criteria)
- Support for all Swing components included in the JDK
- Compact and powerful API for creation and maintenance of functional GUI tests
-, the project's repository can be found at (groupId: fest, artifactId: fest-swing).
Examples
The following example shows a test verifying that an error message is displayed if the user forgets to enter her password when trying to log in an application: | http://docs.codehaus.org/pages/viewpage.action?pageId=117899755 | 2014-10-20T08:16:52 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.codehaus.org |
Here are some samples:
Actually, a more cross-platform way to make a beep sound is to call Microsoft.VisualBasic.Interaction.Beep() (add a reference to the Microsoft.VisualBasic.dll in the GAC or do "import Microsoft.VisualBasic from Microsoft.VisualBasic"). This is supported in both .NET and Mono.
Here is another example with the EntryPoint specified as a named parameter. Useful I guess if you want to use a different name for the dll call.
For further information, see: | http://docs.codehaus.org/pages/diffpages.action?pageId=14344&originalId=228165246 | 2014-10-20T08:44:10 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.codehaus.org |
ImageDraw Module¶
The ImageDraw module provide simple 2D graphics for Image objects. You can use this module to create new images, annotate or retouch existing images, and to generate graphics on the fly for web use.
For a more advanced drawing library for PIL, see the aggdraw module.
Example: Draw a gray cross over an image¶
from PIL")
Concepts¶
Coordinates¶
The graphics interface uses the same coordinate system as PIL itself, with (0, 0) in the upper left corner.
Colors¶
To specify colors, you can use numbers or tuples just as you would use with PIL.Image.Image.new() or PIL.Image.Image.putpixel(). For “1”, “L”, and “I” images, use integers. For “RGB” images, use a 3-tuple containing integer values. For “F” images, use integer or floating point values.
For palette images (mode “P”), use integers as color indexes. In 1.1.4 and later, you can also use RGB 3-tuples or color names (see below). The drawing layer will automatically assign color indexes, as long as you don’t draw with more than 256 colors.
Color Names¶
See Color Names for the color names supported by Pillow.
Fonts¶
PIL can use bitmap fonts or OpenType/TrueType fonts.
Bitmap fonts are stored in PIL’s own format, where each font typically consists of a two files, one named .pil and the other usually named .pbm. The former contains font metrics, the latter raster data.
To load a bitmap font, use the load functions in the ImageFont module.
To load a OpenType/TrueType font, use the truetype function in the ImageFont module. Note that this function depends on third-party libraries, and may not available in all PIL builds.
Example: Draw Partial Opacity Text¶
from PIL import Image, ImageDraw, ImageFont # get an image base = Image.open('Pillow/Tests/images/lena.png').convert('RGBA') # make a blank image for the text, initialized to transparent text color txt = Image.new('RGBA', base.size, (255,255,255,0)) # get a font fnt = ImageFont.truetype('Pillow/Tests/fonts/FreeMono.ttf', 40) # get a drawing context d = ImageDraw.Draw(txt) # draw text, half opacity d.text((10,10), "Hello", font=fnt, fill=(255,255,255,128)) # draw text, full opacity d.text((10,60), "World", font=fnt, fill=(255,255,255,255)) out = Image.alpha_composite(base, txt) out.show()
Functions¶
Methods¶
- PIL.ImageDraw.Draw.arc(xy, start, end, fill=None)¶
Draws an arc (a portion of a circle outline) between the start and end angles, inside the given bounding box.
- PIL.ImageDraw.Draw.bitmap(xy, bitmap, fill=None)¶
Draws a bitmap (mask) at the given position, using the current fill color for the non-zero portions. The bitmap should be a valid transparency mask (mode “1”) or matte (mode “L” or “RGBA”).
This is equivalent to doing image.paste(xy, color, bitmap).
To paste pixel data into an image, use the paste() method on the image itself.
- PIL.ImageDraw.Draw.chord(xy, start, end, fill=None, outline=None)¶
Same as arc(), but connects the end points with a straight line.
- PIL.ImageDraw.Draw.ellipse(xy, fill=None, outline=None)¶
Draws an ellipse inside the given bounding box.
- PIL.ImageDraw.Draw.line(xy, fill=None, width=0)¶
Draws a line between the coordinates in the xy list.
- PIL.ImageDraw.Draw.pieslice(xy, start, end, fill=None, outline=None)¶
Same as arc, but also draws straight lines between the end points and the center of the bounding box.
- PIL.ImageDraw.Draw.polygon(xy, fill=None, outline=None)¶
Draws a polygon.
The polygon outline consists of straight lines between the given coordinates, plus a straight line between the last and the first coordinate.
- PIL.ImageDraw.Draw.shape(shape, fill=None, outline=None)¶
Warning
This method is experimental.
Draw a shape.
- PIL.ImageDraw.Draw.text(xy, text, fill=None, font=None, anchor=None)¶
Draws the string at the given position.
Legacy API¶
The Draw class contains a constructor and a number of methods which are provided for backwards compatibility only. For this to work properly, you should either use options on the drawing primitives, or these methods. Do not mix the old and new calling conventions.
- PIL.ImageDraw.Draw.setink(ink)¶
Deprecated since version 1.1.5.
Sets the color to use for subsequent draw and fill operations.
- PIL.ImageDraw.Draw.setfill(fill)¶
Deprecated since version 1.1.5.
Sets the fill mode.
If the mode is 0, subsequently drawn shapes (like polygons and rectangles) are outlined. If the mode is 1, they are filled. | http://pillow.readthedocs.org/en/latest/reference/ImageDraw.html | 2014-10-20T08:05:17 | CC-MAIN-2014-42 | 1413507442288.9 | [] | pillow.readthedocs.org |
Concepts are a known entity in the world. They are fundamental to annotating your data, for defining the vocabulary that a model should output, for relating things to each other, for receiving your predictions, searching for these concepts and more.
A concept is something that describes an input, similar to a “tag” or “keyword.” The data in these concepts give the model something to “observe” about the key word, and learn from.
For example, a concept may be a "dog", a "cat", or a "tree". If you annotate some input data as having a "dog" or "cat" present, that provides the foundation for training a model on that data. A model could then be created with "dog" and "cat" in the list of concepts that it will learn to predict. After training, the model can predict the concepts "dog" and "cat" and you can search over your data for "dogs" and "cats" that the model identifies, or that have been annotated. | https://docs.clarifai.com/api-guide/concepts | 2021-01-15T17:04:32 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.clarifai.com |
Customize and export Start layout
Applies to
- Windows 10
Looking for consumer information? See Customize the Start menu
The easiest method for creating a customized Start layout to apply to other Windows 10 devices is to set up the Start screen on a test computer and then export the layout.
After you export the layout, decide whether you want to apply a full Start layout or a partial Start layout.
When a full Start layout is applied,.
Note
Partial Start layout is only supported on Windows 10, version 1511 and later.
You can deploy the resulting .xml file to devices using one of the following methods:
Customize the Start screen on your test computer
To prepare a Start layout for export, you simply customize the Start layout on a test computer.
To prepare a test computer
Set up a test computer on which to customize the Start layout. Your test computer should have the operating system that is installed on the users’ computers (Windows 10 Pro, Enterprise, or Education). Install all apps and services that the Start layout should display.
Create a new user account that you will use to customize the Start layout.
To customize Start
Sign in to your test computer with the user account that you created.
Customize the Start layout as you want users to see it by using the following techniques:
Pin apps to Start. From Start, type the name of the app. When the app appears in the search results, right-click the app, and then click Pin to Start.
To view all apps, click All apps in the bottom-left corner of Start. Right-click any app, and pin or unpin it from Start.
Unpin apps that you don’t want to display. To unpin an app, right-click the app, and then click Unpin from Start.
Drag tiles on Start to reorder or group apps.
Resize tiles. To resize tiles, right-click the tile and then click Resize.
Create your own app groups. Drag the apps to an empty area. To name a group, click above the group of tiles and then type the name in the Name group field that appears above the group.
Important
In Windows 10, version 1703, if the Start layout includes tiles for apps that are not installed on the device that the layout is later applied to, the tiles for those apps will be blank. The blank tiles will persist until the next time the user signs in, at which time the blank tiles are removed. Some system events may cause the blank tiles to be removed before the next sign-in.
In earlier versions of Windows 10, no tile would be pinned.
Export the Start layout
When you have the Start layout that you want your users to see, use the Export-StartLayout cmdlet in Windows PowerShell to export the Start layout to an .xml file. Start layout is located by default at C:\Users\username\AppData\Local\Microsoft\Windows\Shell\
Important
If you include secondary Microsoft Edge tiles (tiles that link to specific websites in Microsoft Edge), see Add custom images to Microsoft Edge secondary tiles for instructions.
To export the Start layout to an .xml file
While signed in with the same account that you used to customize Start, right-click Start, and select Windows PowerShell.. For example:
Export-StartLayout -UseDesktopApplicationID -Path layout.xml
In the previous command,
-pathis a required parameter that specifies the path and file name for the export file. You can specify a local path or a UNC path (for example, \\FileServer01\StartLayouts\StartLayoutMarketing.xml).
Use a file name of your choice—for example, StartLayoutMarketing.xml. Include the .xml file name extension. The Export-StartLayout cmdlet does not append the file name extension, and the policy settings require the extension.
Example of a layout file produced by
Export-StartLayout:
(Optional) Edit the .xml file to add a taskbar configuration or to modify the exported layout. When you make changes to the exported layout, be aware that the order of the elements in the .xml file is critical.
Important.
Note
All clients that the start layout applies to must have the apps and other shortcuts present on the local system in the same location as the source for the Start layout.
For scripts and application tile pins to work correctly, follow these rules:
Executable files and scripts should be listed in \Program Files or wherever the installer of the app places them.
Shortcuts that will pinned to Start should be placed in \ProgramData\Microsoft\Windows\Start Menu\Programs.
If you place executable files or scripts in the \ProgramData\Microsoft\Windows\Start Menu\Programs folder, they will not pin to Start.
Start on Windows 10 does not support subfolders. We only support one folder. For example, \ProgramData\Microsoft\Windows\Start Menu\Programs\Folder. If you go any deeper than one folder, Start will compress the contents of all the subfolder to the top level.
Three additional shortcuts are pinned to the start menu after the export. These are shortcuts to %ALLUSERSPROFILE%\Microsoft\Windows\Start Menu\Programs, %APPDATA%\Microsoft\Windows\Start Menu\Programs, and %APPDATA%\Microsoft\Windows\Start Menu\Programs\System Tools.
Configure a partial Start layout
A partial Start layout enables you to add one or more customized tile groups to users' Start screens or menus, while still allowing users to make changes to other parts of the Start layout. All groups that you add are locked, meaning users cannot change the contents of those tile groups, however users can change the location of those groups. Locked groups are identified with an icon, as shown in the following image..
To configure a partial Start screen layout
Customize the Start layout.
-
Open the layout .xml file. There is a
<DefaultLayoutOverride>element. Add
LayoutCustomizationRestrictionType="OnlySpecifiedGroups"to the DefaultLayoutOverride element as follows:
<DefaultLayoutOverride LayoutCustomizationRestrictionType="OnlySpecifiedGroups">
Save the file and apply using any of the deployment methods.
Related topics
- Manage Windows 10 Start and taskbar layout
- Configure Windows 10 taskbar
- Add image for secondary tiles
- Start layout XML for desktop editions of Windows 10 (reference)
- Customize Windows 10 Start and taskbar with Group Policy
- Customize Windows 10 Start and taskbar with provisioning packages
- Customize Windows 10 Start and taskbar with mobile device management (MDM)
- Changes to Start policies in Windows 10 | https://docs.microsoft.com/en-us/windows/configuration/customize-and-export-start-layout | 2021-01-15T18:52:50 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['images/start-pinned-app.png', 'locked tile group'], dtype=object)] | docs.microsoft.com |
Returns true if the first argument is greater than but not equal to the second argument. Equivalent to the > operator.
true
>
Since the function returns a Boolean value, it can be used as a function or a conditional.
NOTE: Within an expression, you might choose to use the corresponding operator, instead of this function. For more information, see Comparison Operators.
delete row: GREATERTHAN(Errors, 10)
Output: Deletes all rows in which the value in the Errors column is greater than 10.
Errors
derive type:single value:GREATERTHAN(value1, value2)
Names of the column, expressions, or literals to compare.
myColumn | https://docs.trifacta.com/plugins/viewsource/viewpagesrc.action?pageId=145280716 | 2021-01-15T16:49:12 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.trifacta.com |
Access your organization's VMware Tanzu Mission Control resources using the command-line interface.
Prerequisites
You must be a member of a VMware Cloud Services organization that has access to Tanzu Mission Control to log in with the CLI.
Before starting this procedure, make sure you have already logged in to the Tanzu Mission Control console.
Procedure
- Install the Tanzu Mission Control CLI (tmc).
- In the left navigation pane of the Tanzu Mission Control console, click Automation center.
- On the Automation Center page, click Download CLI, and then choose the environment where you want to use the CLI.
- After the download completes, move the
tmcexecutable to an accessible bin directory (for example, ~/bin or %UserProfile%\bin).
- Make sure the bin directory is accessible on your PATH.
- Make the file executable.
- For Linux or Mac OS:
chmod +x ~/bin/tmc
- For Windows:
move %UserProfile%\bin\tmc %UserProfile%\bin\tmc.exe attrib +x %UserProfile%\bin\tmc.exe
- Now change to a different directory and verify that the CLI is ready to use by checking the help.
tmc --helpIf you are running on Mac OS, you might encounter the following error:
"tmc" cannot be opened because the developer cannot be verified.If this happens, you need to create a security exception for the tmc executable. Locate the tmc app in the Finder, control-click the app, and then choose Open.
-.Save this token in a safe place in the event you need to use it again.
- Return to the Download CLI page in the Tanzu Mission Control console.
- On the Download CLI page, copy the Tanzu Mission Control endpoint.
- Open a command window and use the following command to log in.
tmc login
- When prompted enter your API token and the Tanzu Mission Control endpoint.Alternatively, you can provide the token and endpoint as command-line flags. For more information, enter the following command.
tmc login --help | https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzumc-using/GUID-7EEBDAEF-7868-49EC-8069-D278FD100FD9.html | 2021-01-15T17:58:45 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.vmware.com |
The module lets you adjust reports to your needs through using various filters and even your own PHP code. It also permits export, printing, and reports comparison, as well as display of results in tables, graphs or charts.
You can also schedule the reports creation and send them to the SFTP/FTP server, chosen staff members, or even any email address.
We will guide you step by step through the whole installation and configuration process.
Note: If you are still using any of the module's previous versions prior to v4.x, follow these instructions.
Extracted files in your WHMCS directory should look like this:
The file is located in 'modules/addons/reportgenerator/reportgenerator/' .
In addition, you will also neeed to set the 'your_whmcs/modules/addons/reportgenerator//vendor/microweber' directory as writable and 'your_whmcs/modules/addons/reportgenerator/vendor/microweber/screen/bin' as writable and executable.
Log in to your WHMCS admin area. Go to 'Setup' → 'Addon Modules'. Afterwards, find 'Report Generator' and press the 'Activate' button.
To do so, click on the 'Configure' button, tick 'Full Administrator' and press 'Save Changes'.
Note: If you encounter any problems while performing the following steps, please go to the Common Problems section and check the additional steps described in the step 2
To do so, move to 'your_whmcs/modules/addons/reportgenerator/cron/' and run the below command:
php cron.php install:phantomjs
Then, run in the terminal:
chmod +x your_whmcs/modules/addons/reportgenerator/vendor/microweber/screen/bin/phantomjs
php -q your_whmcs/modules/addons/reportgenerator/cron/cron.php queue
php -q your_whmcs/modules/addons/reportgenerator/cron/cron.php task:scheduledTasks
You can access the module at 'Addons' → 'Report Generator'.
The module also offers additional useful features like predefined reports, import of the reports, generating reports in PDF and CSV, ability to send them to admins and much more.
Predefined reports are divided into four categories plus custom reports if you have some.
Provide the beginning and ending dates, then press the 'Submit' button. A brand new report will be generated on the run.
We will now show you how to develop two different reports, each one will be slightly more complicated than the previous one.
The first steps are the same for each advancement level.
Start with the first widget.
2. Now select the advancement level of the widget creator:
Let's start with 'Beginner'.
3. Select the chart type that will be used to view data in this widget.
4. Add a short description.
You can see there a list of clients, due date of their unpaid invoices together with the amount to pay and a selected payment method.
Do not forget to save the changes when your report is ready.
Of course, if you find this simple report generation mode not sufficient for you, you can always use the expert mode!
Press 'Create Task' to begin.
Afterwards, specify the conditions: if the report shall be generated right now or how often, as well as the time period for which the report shall be generated etc. Optionally, you can also change the name of the task to a more friendly.
If so, enable this option and then fill out any server data to successfully connect and allow the task to be completed in timely manner.
Press 'Confirm' button when the task is correctly configured.
Each of the planned relations can be removed or edited at any time, use action buttons to do so.
You can find any details in point 8 of this instruction.
Delete single logs or use mass action button to delete them in bulks.
When you are editing a report you may choose from the beginning which creator type you are going to use.
modules\addons\reportgenerator\app\Config\settings.
In order to successfully migrate your reports from the 3.x.x module version to the latest one, please execute the cron task:
yourWHMCS/modules/addons/reportgenerator/cron php cron.php migration:reports
Important: Reports created with PHP code are not migrated, only the ones in SQL.
3. Extract the downloaded package and copy the 'phantomjs' file from the 'bin' folder to the following directory: 'yourWHMCS/modules/addons/reportgenerator/vendor/microweber/screen/bin/'
4. Set the permissions of the file to recursively writeable (777)
5. Go back to to the installation steps and check if you can now process them successfully | https://www.docs.modulesgarden.com/Report_Generator_For_WHMCS | 2021-01-15T17:50:48 | CC-MAIN-2021-04 | 1610703495936.3 | [] | www.docs.modulesgarden.com |
Last modified: May 13, 2020
Overview
RPM® (RPM Package Manager) refers to a software format, the software that the RPM package contains, and the package management system.
Your cPanel & WHM server includes several pre-installed RPMs and the ability to install more. However, this interface allows you to install an RPM for the operating system distribution, rather than the cPanel-provided RPMs.
To troubleshoot any RPM installation failures that may occur, follow the steps in our RPM Installation Failures documentation.
For more information about RPMs, visit the RPM website.
Notes:
- The interface requires several seconds to load due to the RPMs that reside on your server.
- This interface only allows you to install RPMs that currently reside in your system’s repositories. You cannot install RPMs from a third-party repository.
Install an RPM
To install an RPM, perform the following steps:
- From the Select package to install menu, select the RPM that you wish to install.
- Click Install. | https://docs.cpanel.net/whm/software/install-an-rpm/ | 2021-01-15T17:58:26 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.cpanel.net |
The Android SDK Guide Follow
The Meridian Android SDK has all the tools you’ll need to embed the Meridian Editor’s maps, turn-by-turn navigation, indoor location awareness, and notifications into your own custom Android app.
Once you’ve added maps, routes, placemarks, and campaigns to the Meridian Editor, you can use the Meridian SDK to integrate that content into your Meridian-powered Android app.
Go here for the Meridian Android SDK Reference documentation.
Go here to download the latest version of the Android SDK.
The Meridian Samples App
The Meridian Samples app is included with the Android SDK to demonstrate common Meridian features and workflows.
Add the SDK to Your Project
In order to simplify using the Meridian SDK library with your Android project, we’ve bundled our SDK code into a binary distribution of the Android Library Project (AAR) file.
The Meridian SDK supports devices with Android 6.0+ (Marshmallow) API Level 23 and higher.
Add the AAR Relative Path
Complete these steps to add the AAR file to your Android project.
Choose a location for
meridian-x.y.z.aar.
Edit your app’s
build.gradlefile to add the relative path to the
meridian-x-y-z.aarfile location inside
flatDir { }, where our app will look for dependencies. Into
repositories { }, insert:
repositories { jcenter() google() // Tell gradle to look in our parent directory for the meridian AAR package. flatDir { dirs '[relative file path to the AAR directory]' } }
Add Dependencies
To use the Meridian SDK classes in your project, add the implementation information for the dependencies to the
build.gradle file. These are the meridian-x.y.z.aar file and two required external dependencies.
To find the project’s current dependencies, look in
build.gradle in the MeridianSamples directory.
In the
build.gradle file inside
dependencies { }, insert
implementation for the meridian-x.y.z.aar file and the three required external dependencies:
dependencies { implementation 'com.arubanetworks.meridian:meridian:x.y.z@aar' // Google Support Libraries implementation 'androidx.legacy:legacy-support-v4:1.0.0' implementation 'androidx.appcompat:appcompat:1.0.2' // Required for GPS on newer Android devices. implementation 'com.google.android.gms:play-services-location:17.1.0' // The SDK also has three external dependencies that you'll need to include: implementation 'com.android.volley:volley:1.1.0' implementation 'com.squareup:otto:1.3.8'
implementation 'org.conscrypt:conscrypt-android:2.4.0' }
Dependencies for Meridian Analytics
If you’d like to use Keen analytics in your project, you’ll need to add its dependency.
If you’re using the Android SDK 5.9+, you’ll need to remove the Google dependency. If you’re using the Android SDK 5.8 or lower, you’ll need to keep the Google Analytics dependency, even though no Google Analytics data will be collected. .
In the
build.gradle file inside
dependencies { }, add the Keen dependency:
dependencies { // To enable analytics reporting for your SDK app, include Keen Analytics dependency (make sure to' annotationProcessor 'androidx.lifecycle:lifecycle-compiler:x.y.z'.
Dependencies for Asset Tracking
If you’d like to use Asset tracking in your project, you’ll need to add its dependencies.
In the
build.gradle file inside
dependencies { }, add the gRPC dependencies:
dependencies { // To enable asset tracking for your SDK app, include the following gRPC dependency (make sure to replace the `x.y.z` with the version used in the Samples app): implementation 'io.grpc:grpc-protobuf-lite:1.22.1'
implementation 'io.grpc:grpc-stub:1.22.1'
implementation 'io.grpc:grpc-okhttp:1.22.1'
Import Packages
When this is done, you’ll be able to import the
com.arubanetworks.meridian packages to your project and use Meridian classes in your source files.
The SDK’s
MeridianSamples/app project folder has an example of what the finished
build.gradle file should look like.
Add Permissions
To enable the Meridian SDK’s location-awareness features, add the following permissions to your project’s
AndroidManifest.xml file:
<!-- For using GPS and Location in general --> <uses-permission android: <!-- For using location derived from bluetooth beacons --> <uses-permission android: <uses-permission android:
Configure the SDK
Before you can use the features of the Meridian SDK, you’ll need to create an instance of
MeridianConfig. This will configure the SDK.
Meridian uses token-based authentication. In order for your Meridian-powered app to communicate with the Editor, you’ll need to specify a Meridian token when initializing the SDK.
Put the following line in the
onCreate method of your
Application class or main activity.
// Configure Meridian Meridian.configure(this, YOUR_EDITOR_TOKEN);
Using Editor Keys
Use instances of EditorKey to specify apps, maps, and placemarks that have been created in the Meridian Editor. Most of the Meridian SDK classes require a valid key during initialization.
//()); // Create a key representing a placemark. EditorKey placemarkKey = EditorKey.forPlacemark("5668600916475904_5709068098338816", mapKey);
When you create an EditorKey to represent a map, the app identifier you provide becomes the parent of the map key. Likewise, when you create a placemarkKey, the map you specify becomes the parent of that placemark.
EditorKey placemarkKey = EditorKey.forPlacemark("375634485771", someMapKey); EditorKey mapKey = placemarkKey.getParent(); Log.i(TAG, "My map ID: " + mapKey.getId()); EditorKey appKey = mapKey.getParent(); Log.i(TAG, "My app ID: " + appKey.getId());
Display Maps
You can use MapFragment to provide a self-contained interface for maps and directions. Simply initialize the fragment with a valid EditorKey:
//()); MapFragment mapFragment = new MapFragment.Builder().setMapKey(mapKey).build();
MapFragment hosts a MapView and implements the MapView.MapViewListener interface, handling the basic tasks delegated by MapView.
You can also use MapView directly in your own activity or fragment, optionally implementing methods defined by the MapView.MapViewListener interface so that your activity can respond to MapView events.
Call the setMapKey() method to choose the map to display.
NOTE: We strongly advise using MapFragment whenever possible. If you decide to use a MapView independently, you must always pass your activity’s pause/resume/destroy events to the MapView to ensure resources are managed properly.
// Using pause/resume/destroy events with a MapView in your activity private MapView mapView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); EditorKey appKey = new EditorKey("5809862863224832"); mapView = new MapView(this);
mapView.setAppKey(appKey); // If you want to handle MapView events mapView.setListener(this); // Set the map to load mapView.setMapKey(EditorKey.forMap("5668600916475904", appKey.getId())); // Assume myLayout is this activity's root view myLayout.addView(mapView); } @Override protected void onResume() { super.onResume(); // MapView needs to know when its host activity is resumed mapView.onResume(); } @Override protected void onPause() { super.onPause(); // MapView needs to know when its host activity is paused mapView.onPause(); } @Override protected void onDestroy() { super.onDestroy(); // MapView needs to know when its host activity is destroyed (SDK 4.7.0 and above) mapView.onDestroy(); }
Load Directions
MapFragment handles loading and presenting directions in response to user interaction. When you create the map fragment, include the
pendingDestination parameter to present directions to a placemark when direction-finding resumes.
If you’re using MapView directly in your activity, you’ll need to load the directions manually with the Directions class.
To do so, first ensure that your activity implements the Directions.DirectionsRequestListener interface so that it can handle the results of the directions request. Use a Directions.Builder instance to configure the details of the desired route. To do this, specify the start and end points for the route using instances of
DirectionsSource and
DirectionsDestination, which contain information about a point or placemark on a map.
Next, call
build() to create the
Directions object from the builder.
Finally, call
calculate() to start the directions process.
The
Directions object will call
onDirectionsComplete() after successful completion or
onDirectionsError() if something went wrong.
In the following code example, MeridianLocation is the source of the directions.
A new route is calculated when the user’s blue dot location is approximately 10 meters from the route. To customize the reroute distance, in
build.gradleset MERIDIAN_DEFAULT_ROUTE_SNAP to half the desired rerouting distance.
For example, if you want the rerouting to start at 20 meters, set the MERIDIAN_DEFAULT_ROUTE_SNAP value to 10.
You can also use
Meridian.getShared().setRouteSnapDistance()to set this value.
// In your Activity private static final EditorKey appKey = new EditorKey("YOUR_APP_KEY"); private void getDirections(MeridianLocation source, Placemark destination) { Directions.Builder directionsBuilder = new Directions.Builder() .setAppKey(appKey) .setSource(DirectionsSource.forMapPoint(source.getMap(), source.getPoint())) .setDestination(DirectionsDestination.forPlacemarkKey(destination.getKey())) .setTransportType(TransportType.WALKING) .setListener(this); Directions directions = directionsBuilder.build(); directions.calculate(); } @Override public void onDirectionsComplete(DirectionsResponse response) { if (response.getRoutes().size() 0) { // Normally only one route will be returned Route route = response.getRoutes().get(0); // If we have a MapView we can tell it to display this route myMapView.setRoute(route); } }
If a current location isn’t available, you may want to prompt the user to choose a starting location with SearchActivity.
In the example, SearchActivity and
startActivityForResult() will prompt the user to choose a starting placemark.
// In your Activity private Directions.Builder directionsBuilder; private static final EditorKey appKey = new EditorKey("5085468668573440"); public static final int PLACEMARK_PICKER_CODE = 42; private void getDirections(Placemark placemark) { directionsBuilder = new Directions.Builder() .setContext(this) .setAppKey(appKey) .setDestination(DirectionsDestination.forPlacemark(placemark)) .setTransportType(TransportType.WALKING) .setListener(this); // start a search Activity to get the starting point for the route startActivityForResult(SearchActivity.createIntent(this, appKey), PLACEMARK_PICKER_CODE); } @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { // Now that the user picked a placemark to start from, we can calculate the route if (requestCode == PLACEMARK_PICKER_CODE) { if (resultCode == Activity.RESULT_OK) { LocalSearchResult result = (LocalSearchResult) data.getSerializableExtra(SearchActivity.SEARCH_RESULT_KEY); Placemark source = result.getPlacemark(); directionsBuilder.setSource(DirectionsSource.forPlacemark(source)); directions = directionsBuilder.build(); directions.calculate(); } } }
Send More Variables to Campaign Custom Endpoints
When using Campaign custom endpoints, you may want to send additional data to the custom endpoint.
To do this, extend CampaignBroadcastReceiver to create a campaign receiver.
Then override the
getUserInfoForCampaign method to add custom Campaign values.
/// Extend CampaignBroadcastReceiver public class CampaignReceiver extends CampaignBroadcastReceiver { @Override protected void onReceive(Context context, Intent intent, String title, String message) { Intent notificationIntent = new Intent(context, MainActivity.class); notificationIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | Intent.FLAG_ACTIVITY_SINGLE_TOP); PendingIntent contentIntent = PendingIntent.getActivity(context, 0, notificationIntent, 0); NotificationCompat.Builder builder = new NotificationCompat.Builder(context); builder.setContentTitle(title); builder.setContentText(message); builder.setSmallIcon(R.drawable.ic_launcher); builder.setDefaults(Notification.DEFAULT_ALL); builder.setContentIntent(contentIntent); builder.setAutoCancel(true); NotificationManager nm = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE); nm.notify("com.arubanetworks.meridiansamples.CampaignReceiver".hashCode(), builder.build()); } /// Define custom values being sent to the custom Campaign endpoint. @Override protected Map
Get a User’s Location
A user’s most recent location can be retrieved with the LocationRequest class. This request is asynchronous.
private LocationRequest request = null; private static final EditorKey appKey = new EditorKey("5809862863224832"); @Override public void onResume() { super.onResume(); request = LocationRequest.requestCurrentLocation(getActivity(), appKey, new LocationRequest.LocationRequestListener() { @Override public void onResult(MeridianLocation location) { // Update UI with new Location } @Override public void onError(LocationRequest.ErrorType location) { // Notify user that there was an error retrieving the location } }); } @Override public void onPause() { if (request != null && request.isRunning()) { request.cancel(); request = null; } super.onPause(); }
Get Continuous Updates About a User’s Location
MeridianLocationManager determines the user’s location by gathering all available location data for your Meridian-powered app. Implement the MeridianLocationManager.LocationUpdateListener interface to get callbacks as new locations are gathered.
// In an Activity that implements the LocationUpdateListener interface private static final EditorKey appKey = new EditorKey("5809862863224832"); private MeridianLocationManager locationHelper; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); locationHelper = new MeridianLocationManager(this, appKey, this); // Build out the UI ... } @Override protected void onStart() { locationHelper.startListeningForLocation(); super.onStart(); } @Override protected void onStop() { locationHelper.stopListeningForLocation(); super.onStop(); } @Override public void onLocationUpdate(MeridianLocation location) { // Update the UI with the new location } @Override public void onLocationError(Throwable tr) { // Update the UI to inform the user that their location could not be determined. }
Location Sharing
Location sharing lets users share their location with each other. This feature is also available in our iOS and Android white-label apps.
Prerequisites
Location Sharing needs to be enabled by an admin in the Meridian Editor before you can use it in the SDK. If you’d like to enable Location Sharing, please email [email protected].
If you haven’t already done so, you’ll need to generate a valid token in the Editor. To generate one, in the sidebar menu click Permissions, and then click Application Tokens. Click Add Application + to generate a token for your app. You can name it anything.
When configuring the Meridian SDK, set the application token you generated in the Editor:
LocationSharing.init("kd84hf83ck93k39dyi3nxo3mf94");
For best results, call
init from your Application class.
LocationSharing
LocationSharing is the main access point to the Location Sharing API. Use the
User,
Friend,
Invite, and
Location classes to get the Location Sharing data.
Create Accounts
// Creates a location sharing user. All users require a first name. Last name is optional. User sampleUser = new User(); sampleUser.setFirstName("Sample User"); LocationSharing.shared().createUser(sampleUser, new LocationSharing.Callback
Start Location Sharing
LocationSharing.shared().startPostingLocationUpdates(applicationContext);
Stop Location Sharing
LocationSharing.shared().stopPostingLocationUpdates(applicationContext);
Check Location Sharing Status
LocationSharing.shared().isUploadingServiceRunning() // example: button that performs start and stop sharing location if (LocationSharing.shared().isUploadingServiceRunning()) { LocationSharing.shared().stopPostingLocationUpdates(applicationContext); } else { LocationSharing.shared().startPostingLocationUpdates(applicationContext); }
Start and Stop Location Sharing Listener
LocationSharing.shared().addListener(new LocationSharing.Listener() { @Override public void onPostingLocationUpdatesStarted() { // ... } @Override public void onFriendsUpdated(List
Create an Invite
Invitations expire after 7 days.
LocationSharing.shared().createInvite(new LocationSharing.Callback
Get the Current Friend List
LocationSharing.shared().getFriends(new LocationSharing.Callback
createInvite and getFriends
LocationSharing keeps a copy of the current user, for use by calls like
createInvite and
getFriends. The implementor is responsible for storing the current user and setting it to the
LocationSharing object.
When an account is created, the new user is automatically set to the current user.
Store the New User
After creating the new user, store it:
LocationSharing.shared().createUser(newUser, new LocationSharing.Callback
Set the Current User
If a user restarts the app, the current user needs to be set again. This is equivalent to a login operation.
LocationSharing.shared().setCurrentUser(OurOwnStorage.retrieveLocationSharingUser());
Using Local Search
You can use the LocalSearch classes to find points of interest near a user’s location. You’ll usually use this in conjunction with MeridianLocationManager so that you can provide a location and nearby placemarks.
For best performance, limit local search results to 20 or less.
// This Activity implements the LocalSearch.LocalSearchListener interface private void findNearbyCafe() { LocalSearch localSearch = new LocalSearch.Builder() .setQuery("Cafe") .setApp(appKey) .setLocation(locationManager.getLocation()) .setListener(this) .setLimit(20) .build(); localSearch.start(); } @Override public void onSearchComplete(LocalSearchResponse response) { for (LocalSearchResult result : response.getResults()) { Log.i(TAG, String.format("%s is %f seconds away", result.getPlacemark().name, result.time / 1000)); } } @Override public void onSearchError(Throwable tr) { Log.e(TAG, "Search error", tr); }. | https://docs.meridianapps.com/hc/en-us/articles/360039670134-The-Android-SDK-Guide | 2021-01-15T17:26:49 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.meridianapps.com |
Monitor Site Recovery
In this article, learn how to monitor Azure Site Recovery, using Site Recovery inbuilt monitoring. You can monitor:
- The health and status of machines replicated by Site Recovery
- Test failover status of machines.
- Issues and errors affecting configuration and replication.
- Infrastructure components such as on-premises servers.
Before you start
You might want to review common monitoring questions before you start.
Monitor in the dashboard
In the vault, click Overview. The Recovery Services dashboard consolidates all monitoring information for the vault in a single location. There are pages for both Site Recovery and the Azure Backup service, and you can switch between them.
From the dashboard, drill down into different areas.
.
In Replicated items, click View All to see all the servers in the vault.
Click the status details in each section to drill down.
In Infrastructure view, sort monitoring information by the type of machines you're replicating.
Monitor replicated items
In Replicated items, monitor the health of all machines in the vault that have replication enabled.
Monitor test failovers
In Failover test success, monitor the failover status for machines in the vault.
- We recommend that you run a test failover on replicated machines at least once every six months. It's a way to check that failover is working as expected, without disrupting your production environment.
- A test failover is considered successful only after the failover and post-failover cleanup have completed successfully.
Monitor configuration issues
In Configuration issues, monitor any issues that might impact your ability to fail over successfully.
- Configuration issues (except for software update availability), are detected by a periodic validator operation that runs every 12 hours by default. You can force the validator operation to run immediately by clicking the refresh icon next to the Configuration issues section heading.
- Click the links to get more details. For issues impacting specific machines, click needs attention in the Target configurations column. Details include remediation recommendations.
Monitor errors
In Error summary, monitor currently active error symptoms that might impact replication of servers in the vault, and monitor the number of impacted machines.
- Errors impacting on-premises infrastructure components are shown are the beginning of the section. For example, non-receipt of a heartbeat from the Azure Site Recovery Provider on the on-premises configuration server, or Hyper-V host.
- Next, replication error symptoms impacting replicated servers are shown.
- The table entries are sorted by decreasing order of the error severity, and then by decreasing count order of the impacted machines.
- The impacted server count is a useful way to understand whether a single underlying issue might impact multiple machines. For example, a network glitch could potentially impact all machines that replicate to Azure.
- Multiple replication errors can occur on a single server. In this case, each error symptom counts that server in the list of its impacted servers. After the issue is fixed, replication parameters improve, and the error is cleared from the machine.
Monitor the infrastructure.
In Infrastructure view, monitor the infrastructure components involved in replication, and connectivity health between servers and the Azure services.
A green line indicates that connection is healthy.
A red line with the overlaid error icon indicates the existence of one or more error symptoms that impact connectivity.
Hover the mouse pointer over the error icon to show the error and the number of impacted entities. Click the icon for a filtered list of impacted entities.
Tips for monitoring the infrastructure
Make sure that the on-premises infrastructure components (configuration server, process servers, VMM servers, Hyper-V hosts, VMware machines) are running the latest versions of the Site Recovery Provider and/or agents.
To use all the features in the infrastructure view, you should be running Update rollup 22 for these components.
To use the infrastructure view, select the appropriate replication scenario in your environment. You can drill down in the view for more details. The following table shows which scenarios are represented.
To see the infrastructure view for a single replicating machine, in the vault menu, click Replicated items, and select a server.
Monitor recovery plans
In Recovery plans, monitor the number of plans, create new plans, and modify existing ones.
Monitor jobs
In Jobs, monitor the status of Site Recovery operations.
- Most operations in Azure Site Recovery are executed asynchronously, with a tracking job being created and used to track progress of the operation.
- The job object has all the information you need to track the state and the progress of the operation.
Monitor jobs as follows:
In the dashboard > Jobs section, you can see a summary of jobs that have completed, are in progress, or waiting for input, in the last 24 hours. You can click on any state to get more information about the relevant jobs.
Click View all to see all jobs in the last 24 hours.
Note
You can also access job information from the vault menu > Site Recovery Jobs.
In the Site Recovery Jobs list, a list of jobs is displayed. On the top menu you can get error details for a specific jobs, filter the jobs list based on specific criteria, and export selected job details to Excel.
You can drill into a job by clicking it.
Monitor virtual machines
In Replicated items, get a list of replicated machines.
- You can view and filter information. On the action menu at the top, you can perform actions for a particular machine, including running a test failover, or viewing specific errors.
- Click Columns to show additional columns, For example to show RPO, target configuration issues, and replication errors.
- Click Filter to view information based on specific parameters such as replication health, or a particular replication policy.
- Right-click a machine to initiate operations such as test failover for it, or to view specific error details associated with it.
- Click a machine to drill into more details for it. Details include:
Replication information: Current status and health of the machine.
RPO (recovery point objective): Current RPO for the virtual machine and the time at which the RPO was last computed.
Recovery points: Latest available recovery points for the machine.
Failover readiness: Indicates whether a test failover was run for the machine, the agent version running on the machine (for machines running the Mobility service), and any configuration issues.
Errors: List of replication error symptoms currently observed on the machine, and possible causes/actions.
Events: A chronological list of recent events impacting the machine. Error details shows the currently observable error symptoms, while events is a historical record of issues that have impacted the machine.
Infrastructure view: Shows state of infrastructure for the scenario when machines are replicating to Azure.
You can subscribe to receive email notifications for these critical events:
- Critical state for replicated machine.
- No connectivity between the on-premises infrastructure components and Site Recovery service. Connectivity between Site Recovery and on-premises servers registered in a vault is detected using a heartbeat mechanism.
- Failover failures.
Subscribe as follows:
In the vault > Monitoring section, click Site Recovery Events.
Click Email notifications.
In Email notification, turn on notifications and specify who to send to. You can send to all subscription admins be sent notifications, and optionally specific email addresses.
Next steps | https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-monitor-and-troubleshoot | 2021-01-15T19:24:12 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['media/site-recovery-monitor-and-troubleshoot/site-recovery-virtual-machine-list-view.png',
'Site Recovery replicated items list view'], dtype=object) ] | docs.microsoft.com |
.
The Symbol Library is the place where users can create generic symbols to be used in several QGIS projects. It allows users to export and import symbols, groups symbols and add, edit and remove symbols. You can open it with the Settings ‣ Style Library or from the Style tab in the vector layer’s Properties.
Groups are categories of Symbols and smart groups are dynamic groups.
To create a group, right-click on an existing group or on the main Groups
directory in the left of the library. You can also select a group and click
on the
add item button.
To add a symbol into a group, you can either right click on a symbol then choose
Apply group and then the group name added before. There is a second
way to add several symbols into group: just select a group and click
and choose Group Symbols. All symbols display a checkbox
that allow you to add the symbol into the selected groups. When finished, you can
click on the same button, and choose Finish Grouping.
Create Smart Symbols is similar to creating group, but instead select Smart Groups. The dialog box allow user to choose the expression to select symbols in order to appear in the smart group (contains some tags, member of a group, have a string in its name, etc.)
With the Style manager from the [Symbol]
menu you
can manage your symbols. You can
add item,
edit item,
remove item and
share item. ‘Marker’ symbols, ‘Line’ symbols, ‘Fill’ patterns and
‘colour ramps’ can be used to create the symbols.
The symbols are then assigned to ‘All Symbols’, ‘Groups’ or ‘Smart groups’.
For each kind of symbols, you will find always the same dialog structure:
The symbol tree allow adding, removing or protect new simple symbol. You can move up or down the symbol layer.
More detailed settings can be made when clicking on the second level in the Symbol layers dialog. You can define Symbol layers that are combined afterwards. A symbol can consist of several Symbol layers. Settings will be shown later in this chapter.
Совет
Note that once you have set the size in the lower levels of the Symbol layers dialog, the size of the whole symbol can be changed with the Size menu in the first level again. The size of the lower levels changes accordingly, while the size ratio is maintained.
Marker symbols have several symbol layer types: ...
Line marker symbols have only two symbol layer types:
The default symbol layer type draws a simple line whereas the other display a marker point regularly on the line. You can choose different location vertex, interval or central point. Marker line can have offset along the line or offset line. Finally, rotation allows you to change the orientation of the symbol.
The following settings are possible:
Polygon marker symbols have also several symbol layer types:
The following settings are possible:
- Colors for the border and the fill.
- Fill style
- Border style
- Border width
- Offset X,Y
- Data defined properties ...
Using the color combo box, you can drag and drop color for one color button to another button, copy-paste color, pick color from somewhere, choose a color from the palette or from recent or standard color. The combo box allow you to fill in the feature with transparency. You can also just clic on the button to open the palettte dialog. Note that you can import color from some external software like GIMP.. Other possibility is to choose a ‘shapeburst
fill’ which is a buffered gradient fill, where a gradient is drawn from
the boundary of a polygon towards the polygon’s centre. Configurable
parameters include distance from the boundary to shade, use of color ramps or
simple two color gradients, optional blurring of the fill and offsets.
It is possible to only draw polygon borders inside the polygon. Using
‘Outline: Simple line’ select
Draw line
only inside polygon.: | https://docs.qgis.org/2.6/ru/docs/user_manual/working_with_vector/style_library.html | 2021-01-15T18:16:05 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.qgis.org |
>>
This: Define a location field using the city and state fields! | https://docs.splunk.com/Documentation/Splunk/6.3.0/Search/Usetheevalcommandandfunctions | 2021-01-15T17:51:12 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Rendering
This is the second step of the processing chain: The rendering part gets the data array prepared by FormDataCompiler and creates a result array containing HTML, CSS and JavaScript. This is then post-processed by a controller to feed it to the PageRenderer or to create an AJAX response.
The rendering is a tree: The controller initializes this by setting one container as the renderType entry point within the data array, then hands over the full data array to the NodeFactory, which looks up the class responsible for this renderType and calls render() on it. A container class creates only a fraction of the full result and delegates details to another container. The second one handles another detail and calls a third one. This continues until a single field should be rendered, at which point an element class is called that takes care of one element.
Each container creates some “outer” part of the result, calls some sub-container or element, merges the sub-result with its own content and returns the merged array up again. The data array is given to each sub class along the way, and containers can add further render-relevant data to it before giving it “down”. The data array is never given back “up” in changed form; inheritance of the data array is always top-down. Only HTML, CSS or JavaScript created by a sub class is returned “up” again, in a “result” array of a specified format.
class SomeContainer extends AbstractContainer
{
    public function render()
    {
        $result = $this->initializeResultArray();
        $data = $this->data;
        $data['renderType'] = 'subContainer';
        $childArray = $this->nodeFactory->create($data)->render();
        // Merge child CSS / JavaScript demands into the own result (html is merged manually below)
        $result = $this->mergeChildReturnIntoExistingResult($result, $childArray, false);
        $result['html'] = '<h1>A headline</h1>' . $childArray['html'];
        return $result;
    }
}
The above example lets the NodeFactory resolve and render the “subContainer” node and merges the child result with its own. The helper methods initializeResultArray() and mergeChildReturnIntoExistingResult() help with combining CSS and JavaScript.
An upper container does not directly create an instance of a sub node (element or container) and never calls it directly. Instead, a node that wants to call a sub node only refers to it by a name, sets this name into the data array as $data['renderType'] and then gives the data array to the NodeFactory, which determines an appropriate class name, instantiates and initializes the class, gives it the data array, and calls render() on it.
Note
The SingleFieldContainer and FlexFormElementContainer will probably vanish with core version 9.
Note
Data set by containers and given down to children will likely change in core version 9: All fields not registered in the main data array of FormDataCompiler and only added within containers will move into section renderData. Furthermore, it is planned to remove parameterArray and substitute it with something better. This will affect most elements and will probably break a lot of these elements.
Class Inheritance
All classes must implement NodeInterface to be routed through the NodeFactory. The AbstractNode implements some basic helpers for nodes; the two classes AbstractContainer and AbstractFormElement implement helpers for containers and elements, respectively.
The call concept is simple: A first container is called, which either calls a container below or a single element. A single element never calls a container again.
NodeFactory
The NodeFactory plays an important abstraction role within the render chain: Creation of child nodes is always routed through it, and the NodeFactory takes care of finding and validating the appropriate class that should be called for a specific renderType. This is supported by an API that allows registering new renderTypes and overriding existing renderTypes with custom implementations. This is true for all classes, including containers, elements, fieldInformation, fieldWizards and fieldControls. This means the child routing can be fully adapted and extended if needed. It is possible to transparently “kick out” a core container and substitute it with a custom implementation.
As an example, the TemplaVoila implementation needs to add additional render capabilities to the flex form rendering, for instance its own multi-language rendering of flex fields. It does that by overriding the default flex container with its own implementation:
// Default registration of "flex" in NodeFactory:
// 'flex' => \TYPO3\CMS\Backend\Form\Container\FlexFormEntryContainer::class,

// Register language aware flex form handling in FormEngine
$GLOBALS['TYPO3_CONF_VARS']['SYS']['formEngine']['nodeRegistry'][1443361297] = [
    'nodeName' => 'flex',
    'priority' => 40,
    'class' => \TYPO3\CMS\Compatibility6\Form\Container\FlexFormEntryContainer::class,
];
This re-routes the renderType “flex” to a custom class. If multiple registrations for a single renderType exist, the one with the highest priority wins.
Note
The NodeFactory uses $data['renderType']. This has been introduced with core version 7 in TCA, and a couple of TCA fields actively use this renderType. However, it is important to understand that renderType is only used within the FormEngine, and type is still a must-have setting for columns fields in TCA. Additionally, type cannot be overridden in columnsOverrides. Basically, type specifies how the DataHandler should put data into the database, while renderType specifies how a single field is rendered. This means there can be multiple different renderTypes for a single type, and it is possible to invent a new renderType to render a single field differently, but still let the DataHandler persist it the usual way.
Adding a new renderType in ext_localconf.php:
// Add new field type to NodeFactory
$GLOBALS['TYPO3_CONF_VARS']['SYS']['formEngine']['nodeRegistry'][1487112284] = [
    'nodeName' => 'selectTagCloud',
    'priority' => '70',
    'class' => \MyVendor\CoolTagCloud\Form\Element\SelectTagCloudElement::class,
];
And use it in TCA for a specific field, keeping the full database functionality in DataHandler together with the data preparation of FormDataCompiler, but just routing the rendering of that field to the new element:
$GLOBALS['TCA']['myTable']['columns']['myField'] = [
    'label' => 'Cool Tag cloud',
    'config' => [
        'type' => 'select',
        'renderType' => 'selectTagCloud',
        'foreign_table' => 'tx_cooltagcloud_availableTags',
    ],
];
The above examples are a static list of nodes that can be changed by settings in ext_localconf.php. If that is not enough, the NodeFactory can be extended with a resolver that is called dynamically for specific renderTypes. This resolver gets the full current data array at runtime and can either return NULL saying “not my job”, or return the name of a class that should handle this node.
Examples of this are the core-internal rich text editors: Both “ckeditor” and “rtehtmlarea” register a resolver class that is called for node name “text”. If the TCA config enables the editor, and if the user has enabled rich text editing in their user settings, then the resolvers return their own RichTextElement class names to render a given text field:
// Register FormEngine node type resolver hook to render RTE in FormEngine if enabled
$GLOBALS['TYPO3_CONF_VARS']['SYS']['formEngine']['nodeResolver'][1480314091] = [
    'nodeName' => 'text',
    'priority' => 50,
    'class' => \TYPO3\CMS\RteCKEditor\Form\Resolver\RichTextNodeResolver::class,
];
The trick here is that "ckeditor" registers its resolver with a higher priority (50) than "rtehtmlarea" (40), so the "ckeditor" resolver is called first and wins if both extensions are loaded and both return a valid class name.
Result Array
Each node, no matter if it is a container, an element, or a node expansion, must return an array with specific data keys it wants to add. It is the job of the parent node that calls the sub node to merge child node results into its own result. This typically happens by merging $childResult['html'] into an appropriate position of its own HTML, and then calling $this->mergeChildReturnIntoExistingResult() to add other child result data like stylesheetFiles into its own result.
Container and element nodes should use the helper method $this->initializeResultArray() to have a result array initialized that is understood by a parent node.
Only when extending existing elements via node expansion can the result array of a child be slightly different. For instance, a FieldControl must return an iconIdentifier result key. Using $this->initializeResultArray() is not always appropriate in these cases; the expected result depends on the specific expansion type. See below for more details on node expansion.
The result array for container and element nodes looks like this; $resultArray = $this->initializeResultArray() takes care of the basic keys:
[
    'html' => '',
    'additionalInlineLanguageLabelFiles' => [],
    'stylesheetFiles' => [],
    'requireJsModules' => [],
]
CSS and language labels (which can be used in JS) are added with their file names in the format EXT:extName/path/to/file. JavaScript is added only via RequireJS modules; the registration allows an init method to be called once the module is loaded by the browser.
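For illustration, a minimal sketch of how an element could fill these keys. The extension key, file paths and module name are made-up examples, and the module-name-to-callback array notation for requireJsModules is an assumption; a plain module name string, as used later in this chapter, works as well:

$resultArray = $this->initializeResultArray();
$resultArray['html'] = '<input type="text" class="form-control" />';

// CSS and inline language label files are referenced by file name
$resultArray['stylesheetFiles'][] = 'EXT:my_extension/Resources/Public/Css/MyElement.css';
$resultArray['additionalInlineLanguageLabelFiles'][] = 'EXT:my_extension/Resources/Private/Language/locallang_js.xlf';

// JavaScript is registered as a RequireJS module; the optional callback is executed once the module is loaded
$resultArray['requireJsModules'][] = [
    'TYPO3/CMS/MyExtension/MyElement' => 'function(MyElement) { MyElement.initialize(); }'
];

return $resultArray;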
Note
The result array handled by $this->mergeChildReturnIntoExistingResult() contains a couple more keys; those will vanish with further FormEngine refactoring steps. If using them, be prepared to adapt extensions later.
Note
Nodes must never add JavaScript, CSS or similar resources using the PageRenderer directly. This fails as soon as the container / element / wizard is called via AJAX, for instance within inline. Instead, those resources must be registered via the result array only, using stylesheetFiles and requireJsModules.
Node Expansion
The “node expansion” classes FieldControl, FieldInformation and FieldWizard are called by containers and elements and allow “enriching” containers and elements. Which enrichments are called can be configured via TCA. This API replaces the old “TCA wizards array” and has been introduced with core version 8.
- FieldInformation
- Additional information. In elements, their output is shown between the field label and the element itself. They cannot add functionality, but only simple and restricted HTML strings. No buttons, no images. An example usage could be an extension that auto-translates a field content and outputs a hint like “Hey, this field was auto-filled for you by an automatic translation wizard. Maybe you want to check the content”. A minimal sketch of such a node is shown after this list.
- FieldWizard
- Wizards shown below the element that “enrich” it with additional functionality. The localization wizard and the file upload wizard of type=group fields are examples of that.
- FieldControl
- “Buttons”, usually shown next to the element. For type=group, the “list” button and the “element browser” button are examples. A field control must return an icon identifier.
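The following is a minimal sketch of such a FieldInformation node. The class name and message are hypothetical; the node would still need to be registered in the nodeRegistry and enabled for a field via its TCA fieldInformation configuration:

use TYPO3\CMS\Backend\Form\AbstractNode;

class TranslationHintInformation extends AbstractNode
{
    public function render()
    {
        $resultArray = $this->initializeResultArray();
        // Only simple, restricted HTML is allowed here: no buttons, no images
        $resultArray['html'] = htmlspecialchars(
            'This field was auto-filled by an automatic translation wizard. Please check the content.'
        );
        return $resultArray;
    }
}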
Currently, all elements usually implement all three of these, except in cases where it does not make sense. This API allows adding functionality to single nodes without overriding the whole node. Containers and elements can come with default expansions (and usually do), and TCA configuration can be used to add custom ones. On the container side the implementation is still basic: only OuterWrapContainer and InlineControlContainer currently implement FieldInformation and FieldWizard. See the TCA reference ctrl section for more information on how to configure these for containers in TCA.
Example: The InputTextElement (standard input element) defines a couple of default wizards and embeds them in its main result HTML:
class InputTextElement extends AbstractFormElement
{
    protected $defaultFieldWizard = [
        'localizationStateSelector' => [
            'renderType' => 'localizationStateSelector',
        ],
        'otherLanguageContent' => [
            'renderType' => 'otherLanguageContent',
            'after' => [
                'localizationStateSelector'
            ],
        ],
        'defaultLanguageDifferences' => [
            'renderType' => 'defaultLanguageDifferences',
            'after' => [
                'otherLanguageContent',
            ],
        ],
    ];

    public function render()
    {
        $resultArray = $this->initializeResultArray();

        $fieldWizardResult = $this->renderFieldWizard();
        $fieldWizardHtml = $fieldWizardResult['html'];
        $resultArray = $this->mergeChildReturnIntoExistingResult($resultArray, $fieldWizardResult, false);

        $mainFieldHtml = [];
        $mainFieldHtml[] = '<div class="form-control-wrap">';
        $mainFieldHtml[] = '<div class="form-wizards-wrap">';
        $mainFieldHtml[] = '<div class="form-wizards-element">';
        // Main HTML of element done here ...
        $mainFieldHtml[] = '</div>';
        $mainFieldHtml[] = '<div class="form-wizards-items-bottom">';
        $mainFieldHtml[] = $fieldWizardHtml;
        $mainFieldHtml[] = '</div>';
        $mainFieldHtml[] = '</div>';
        $mainFieldHtml[] = '</div>';
        $resultArray['html'] = implode(LF, $mainFieldHtml);

        return $resultArray;
    }
}
This element defines three wizards to be called by default. The renderType concept is re-used: values like localizationStateSelector are registered within the NodeFactory and resolve to class names. They can be overridden and extended like all other nodes. The $defaultFieldWizard entries are merged with TCA settings by the helper method renderFieldWizard(), which uses the DependencyOrderingService again.
It is possible to:
- Override existing expansion nodes with own implementations from extensions; even the resolver mechanics can be used.
- Disable single wizards via TCA.
- Add own expansion nodes at any position relative to the other nodes by specifying “before” and “after” in TCA (see the TCA sketch below).
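A minimal TCA sketch for the last two points, using a made-up field and wizard name; the disabled flag and the after key are the relevant parts:

'columns' => [
    'myfield' => [
        'config' => [
            'type' => 'input',
            'fieldWizard' => [
                // Disable one of the element's default wizards
                'localizationStateSelector' => [
                    'disabled' => true,
                ],
                // Add an own wizard registered in the nodeRegistry and position it
                'myCustomWizard' => [
                    'renderType' => 'myCustomWizard',
                    'after' => ['otherLanguageContent'],
                ],
            ],
        ],
    ],
],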
Add fieldControl Example
To illustrate the principles discussed in this chapter, see the following example, which registers a fieldControl (button) next to a field in the pages table to trigger a data import via AJAX.
Add a new renderType in ext_localconf.php:
$GLOBALS['TYPO3_CONF_VARS']['SYS']['formEngine']['nodeRegistry'][1485351217] = [
    'nodeName' => 'importDataControl',
    'priority' => 30,
    'class' => \T3G\Something\FormEngine\FieldControl\ImportDataControl::class
];
Register the control in TCA/Overrides/pages.php:
'somefield' => [
    'label' => $langFile . ':pages.somefield',
    'config' => [
        'type' => 'input',
        'eval' => 'int, unique',
        'fieldControl' => [
            'importControl' => [
                'renderType' => 'importDataControl'
            ]
        ]
    ]
],
Add the PHP class for rendering the control in FormEngine/FieldControl/ImportDataControl.php:
declare(strict_types=1);
namespace T3G\Something\FormEngine\FieldControl;

use TYPO3\CMS\Backend\Form\AbstractNode;

class ImportDataControl extends AbstractNode
{
    public function render()
    {
        $result = [
            'iconIdentifier' => 'import-data',
            'title' => $GLOBALS['LANG']->sL('LLL:EXT:something/Resources/Private/Language/locallang_db.xlf:pages.importData'),
            'linkAttributes' => [
                'class' => 'importData ',
                'data-id' => $this->data['databaseRow']['somefield']
            ],
            'requireJsModules' => ['TYPO3/CMS/Something/ImportData'],
        ];
        return $result;
    }
}
Add the JavaScript for defining the behavior of the control in Resources/Public/JavaScript/ImportData.js:
/**
 * Module: TYPO3/CMS/Something/ImportData
 *
 * JavaScript to handle data import
 * @exports TYPO3/CMS/Something/ImportData
 */
define(['jquery'], function ($) {
    'use strict';

    /**
     * @exports TYPO3/CMS/Something/ImportData
     */
    var ImportData = {};

    /**
     * @param {int} id
     */
    ImportData.import = function (id) {
        $.ajax({
            type: 'POST',
            url: TYPO3.settings.ajaxUrls['something-import-data'],
            data: {
                'id': id
            }
        }).done(function (response) {
            if (response.success) {
                top.TYPO3.Notification.success('Import Done', response.output);
            } else {
                top.TYPO3.Notification.error('Import Error!');
            }
        });
    };

    /**
     * initializes events using deferred bound to document
     * so AJAX reloads are no problem
     */
    ImportData.initializeEvents = function () {
        $('.importData').on('click', function (evt) {
            evt.preventDefault();
            ImportData.import($(this).attr('data-id'));
        });
    };

    $(ImportData.initializeEvents);

    return ImportData;
});
Add an AJAX route for the request in Configuration/Backend/AjaxRoutes.php:
<?php
return [
    'something-import-data' => [
        'path' => '/something/import-data',
        'target' => \T3G\Something\Controller\Ajax\ImportDataController::class . '::importDataAction'
    ],
];
Add the AJAX controller class in Classes/Controller/Ajax/ImportDataController.php:
declare(strict_types=1);
namespace T3G\Something\Controller\Ajax;

use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;

class ImportDataController
{
    /**
     * @param ServerRequestInterface $request
     * @param ResponseInterface $response
     * @return ResponseInterface
     */
    public function importDataAction(ServerRequestInterface $request, ResponseInterface $response)
    {
        $queryParameters = $request->getParsedBody();
        $id = (int)$queryParameters['id'];

        if (empty($id)) {
            $response->getBody()->write(json_encode(['success' => false]));
            return $response;
        }

        $param = ' -id=' . $id;
        // trigger data import (simplified as example)
        $output = shell_exec('.' . DIRECTORY_SEPARATOR . 'import.sh' . $param);

        $response->getBody()->write(json_encode(['success' => true, 'output' => $output]));
        return $response;
    }
}