Commerce v1 Developer PDF Writer
PDF Writers are implemented as standalone services.
At the moment, writers for mPDF and PDFCrowd are available.
Other writers can be added.
Using a PDF Writer
When a PDF Writer module has been installed and configured, you can easily get access to it in other extensions with the
getPDFWriter() method.
try {
    $writer = $commerce->getPDFWriter();
} catch (\modmore\Commerce\PDF\Exception\PDFException $e) {
    $this->adapter->log(1, get_class($e) . ' generating PDF: ' . $e->getMessage());
}
Next, you'll want to set an output file path.
$fullPath = MODX_CORE_PATH . 'export/pdf_' . rand(0,9999) . '.pdf';
$writer->setOutputFile($fullPath);
Check if we have an HTML writer, and if so, set the input HTML:
if ($writer instanceof FromHtmlWriterInterface) {
    $html = <<<HTML
<h1>Hello world</h1>
HTML;
    $writer->setSourceHtml($html);
}
In the future, we'll likely add different writer types, so this check is important.
To generate the HTML from a Twig template, see Twig and Views.
After setting the input and the output, we can call the render method which will return the PDF binary string (which has also been written to your output file).
return $writer->render();
As any of these methods can throw a
PDFException which causes the rendering to fail, it's best to wrap the entire thing in a
try {} catch () {} block, like so:
try {
    $writer = $commerce->getPDFWriter();

    $fullPath = MODX_CORE_PATH . 'export/pdf_' . rand(0,9999) . '.pdf';
    $writer->setOutputFile($fullPath);

    if ($writer instanceof FromHtmlWriterInterface) {
        $html = <<<HTML
<h1>Hello world</h1>
HTML;
        $writer->setSourceHtml($html);
    }

    return $writer->render();
} catch (\modmore\Commerce\PDF\Exception\PDFException $e) {
    $this->adapter->log(1, get_class($e) . ' generating PDF: ' . $e->getMessage());
}
And at that point you should have a PDF! :-)
Building a PDF Writer
It's possible to create a new PDF writer that can generate PDFs, and make that available for Commerce and extensions to use.
Responsibility of a Writer
The
WriterInterface defines a very narrow scope of responsibility for a PDF Writer:
<?php

interface WriterInterface {
    /**
     * @param array $options
     * @return string
     * @throws modmore\Commerce\PDF\Exception\MissingSourceException
     * @throws modmore\Commerce\PDF\Exception\RenderException
     */
    public function render(array $options = []);

    /**
     * @param string $file
     * @return void
     * @throws modmore\Commerce\PDF\Exception\InvalidOutputException
     */
    public function setOutputFile($file);
}
Basically, a method to accept an output file path to write the PDF to, and a method that should write the PDF. Any problem with the render triggers a RenderException or MissingSourceException, and providing an invalid path throws an InvalidOutputException. All of these extend the PDFException, making it easy to intercept any error.
In addition, the
FromHtmlWriterInterface adds a method that makes it accept source HTML:
<?php

interface FromHtmlWriterInterface extends WriterInterface {
    /**
     * @param string $html
     * @return void
     */
    public function setSourceHtml($html);
}
By splitting up these two interfaces, we're prepared for future code-based PDF generation.
Example Writer
See the GitHub repositories for PDFCrowd and mPDF.
Registering a custom PDF Writer
To load a custom PDF writer, you need a functional module.
In the module, you'll want to add a listener to the
\Commerce::EVENT_GET_PDF_WRITER event. The callback receives a
PDFWriter event on which you can provide an instance of a writer to
$event->addWriter(). For example:
<?php

class MyModule extends BaseModule {
    // ...

    public function initialize(EventDispatcher $dispatcher) {
        $dispatcher->addListener(\Commerce::EVENT_GET_PDF_WRITER, array($this, 'getPDFWriter'));
    }

    public function getPDFWriter(PDFWriter $event) {
        $instance = new MyWriter();
        $event->addWriter($instance);
    }
}
For a more complete example, see the PDFCrowdWriter Module as well as the Bootstrapping a Module guide if you're new to building modules.
These instructions apply to Autodesk® AutoCAD® Map 3D 2016 and above, and describe how to load georeferenced Nearmap imagery using Web Map Service (WMS).
Rather than import a single image of a small area, WMS allows AutoCAD to request the imagery directly from the Nearmap server in a variety of map projections.
Please note that only certain versions of AutoCAD (Map 3D and Civil 3D) support this functionality.
In addition to this document, you can view our video on How to add a WMS connection in Autodesk.
From the Task Pane, click the Data icon.
Click Connect to Data.
In the Data Connect window, click Add WMS Connection.
In the Add a New Connection panel, at Connection Name, enter Nearmap.
At Server name or URL, enter the WMS URL including your API Key:
USA:
Australia & New Zealand:
Click Connect.
Do not enter anything in the User Name and Password fields. Click Login.
The Add Data to Map panel appears in the connection window. Expand the NearMap node and configure a layer:
Check a layer to select it.
Change the Image Format to jpeg. This step improves performance by using compression at a usually imperceptible cost in image quality.
Your layer will display. To configure a different coordinate reference system, click the CRS icon at the bottom of the map display window.
Search for an appropriate coordinate reference system, select it, and click Assign. (Please refer to Natively Supported Coordinate Systems.)
The imagery is reprojected into your chosen CRS.
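If you want to confirm that the WMS endpoint is reachable before configuring AutoCAD, requesting the service's capabilities document is a quick check. The sketch below is a minimal, hypothetical example using the third-party Python requests package; the endpoint URL is a placeholder and must be replaced with the WMS URL (including your API key) that Nearmap provides for your region.

import requests

# Placeholder endpoint; substitute the region-specific WMS URL with your API key.
WMS_URL = "https://wms.example-nearmap-endpoint.com/your-api-key"

params = {"service": "WMS", "request": "GetCapabilities", "version": "1.1.1"}
response = requests.get(WMS_URL, params=params, timeout=30)
response.raise_for_status()

# A working endpoint returns an XML capabilities document listing layers and CRS options.
print(response.headers.get("Content-Type"))
print(response.text[:300])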
Scaling Cluster Metrics
In tests performed with 210 and 990 OpenShift Container Platform nodes, where 10,500 pods and 11,000 pods were monitored respectively, the Cassandra database grew at a predictable rate, which can be used to project weekly storage requirements for your own pod counts.
One set of metrics pods (Cassandra/Hawkular/Heapster) is able to monitor at least 25,000 pods.
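Because growth rates are environment specific, a practical approach is to measure growth in your own cluster and extrapolate. The short sketch below is illustrative only: the per-pod growth rate is an assumed placeholder, not a value from the tests above, and should be replaced with a measured figure.

# Rough Cassandra storage estimate; replace the assumed rate with a measured one.
monitored_pods = 11000            # from the larger test scenario above
growth_mb_per_pod_per_day = 0.1   # assumed placeholder, measure this in your cluster
retention_days = 7                # retention window used for this estimate

weekly_growth_gb = monitored_pods * growth_mb_per_pod_per_day * retention_days / 1024
print(f"Estimated Cassandra growth over {retention_days} days: {weekly_growth_gb:.1f} GB")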
Cassandra nodes use persistent storage. Therefore, scaling up or down is not possible with replication controllers.
Scaling a Cassandra cluster requires modifying the
openshift_metrics_cassandra_replicas variable and re-running the
deployment.
By default, the Cassandra cluster is a single-node cluster.
To scale up the number of OpenShift Container Platform metrics hawkular pods to two replicas, run:
# oc scale -n openshift-infra --replicas=2 rc hawkular-metrics
Alternatively, update your inventory file and re-run the deployment.
To scale down:
If remotely accessing the container, run the following for the Cassandra node you want to remove:
$ oc exec -it <hawkular-cassandra-pod> nodetool decommission
If locally accessing the container, run the following instead:
$ oc rsh <hawkular-cassandra-pod> nodetool decommission
This command can take a while to run since it copies data across the cluster.
You can monitor the decommission progress with
nodetool netstats -H.
Once the previous command succeeds, scale down the
rc for the Cassandra instance to
0.
# oc scale -n openshift-infra --replicas=0 rc <hawkular-cassandra-rc>
This will remove the Cassandra pod.
Legal Notices
Toon Boom Animation Inc.
4200 Saint-Laurent, Suite 1020
Montreal, Quebec, Canada
H2W 2R2
Tel: +1 514 278 8666
Fax: +1 514 278 2666
toonboom.com
Disclaimer
The content of this documentation is the property of Toon Boom Animation Inc. and is copyrighted. Any reproduction in whole or in part is strictly prohibited. The content of this documentation is covered by a specific limited warranty and exclusions and limit of liability under the applicable License Agreement as supplemented by the special terms and conditions for Adobe® Flash® File Format (SWF). For details, refer to the License Agreement and to those special terms and conditions.
Trademarks
Toon Boom® is a registered trademark. Harmony™ and the Toon Boom logo are trademarks of Toon Boom Animation Inc. All other trademarks are the property of their respective owners.
Publication Date
09-09-2019
Copyright © 2019 Toon Boom Animation Inc., a Corus Entertainment Inc. company. All rights reserved.
Quickstart: Using CyVerse for a Shared Project¶
Prerequisites¶
Downloads, access, and services¶
In order to complete this tutorial you will need access to the following services/software
- Any project members who will be using CyVerse should take a look at the Data Store Guide and the Discovery Environment Guide. You may also want to review the Atmosphere Guide.
- Be sure that all project members register for CyVerse accounts at the CyVerse User Portal. See Creating a CyVerse Account.
Sharing data with project members¶
For projects that are part of a single lab, we recommend that the PI create a CyVerse account and share it with lab members. Specific sub-directories can be shared with specific lab members as desired.
For projects that are collaborations among multiple labs, one person should create a project folder to share with all collaborators. Collaborators must decide among themselves who has read, write, and own permission.
Tip
Anyone who has own permission on a folder can delete it or rename it!
The sharing functionality of the CyVerse Data Store can be used to share data among project members. This can be done through the Discovery Environment via the data sharing feature or on the command line using iCommands. Project members also can upload and download data using the desktop application Cyberduck, but Cyberduck cannot be used for setting sharing permissions.
According to the CyVerse Data Policy, all users receive a default allocation of 100GB. Shared data is counted as part of the allocation of whoever owns the folder that contains it. To request an increase to your allocation, should that become necessary, use the allocation increase form. We expect that users hosting shared directories will need to request larger data allocations.
If your project needs a shared folder for data that is going to be public during the active research phase of the project (e.g., you want to share transcriptomes or draft genomes as they are created, before publication), you can request a Community Released Data Folder. Community Released folders are intended for public data, not for shared projects that are kept private among collaborators.
Managing data in a shared project¶
We strongly recommend that a single person be in charge of data management. There should also be a single person (generally the PI) who has ownership of the project folders and who sets read and write permissions for others. This ensures continuity when people move on. The PI can give ownership to a data manager for setting permissions, but should maintain their own ownership as well.
The owner of a folder has the ability to delete or rename the folder and any of its contents. If project members are given write permission to the project folder, they will be able to create their own sub-folders which they will own. In this way, project members can control access to their own data.
Tip
Before beginning your project, make a plan for how to name files and organize folders. Agree on which metadata are needed for each type of file, and set up protocols for adding metadata when files are uploaded.
Publishing data from a shared project¶
When you are ready to publish the results of your project, you should also publish the data to an appropriate repository. For sequence data, that is one of the INDSC repositories, such as NCBI’s SRA. Other data types can be published to general scientific repositories or to the CyVerse Data Commons. See Publishing your data through the CyVerse Data Commons.
Group projects that are using a Community Released Data Folder to share data pre-publication are encouraged to transition to fully published data (with a DOI) when the data are stable. At that point, data can move into the Data Commons repository in its own folder, or it can remain within the shared project folder, but project members will lose edit access to the dataset. For more questions on this option, contact [email protected].
Sharing tools and analyses with project members¶
Projects can use CyVerse analysis platforms to develop and share analysis tools and workflows.
The Discovery Environment (DE) contains hundreds of applications that can be used by projects. Apps can be chained together to form workflows in the DE. It is now possible for CyVerse users to integrate their own applications or any open source application into the DE, using Docker containers. Projects may create private apps and workflows, to be shared only with project members, and then make those apps public when they are ready.
In the DE, you can create a team and share apps with your team.
Atmosphere can be used to set up a virtual machine (VM) with project software, which can then be used by all project members. The VM can later be imaged (made permanent) and published along with the project.
If your project includes a lot of computationally intensive analyses, you should consider requesting an XSEDE allocation (for the U.S. national super-computer infrastructure) and setting up HPC workflows using tools such as Pegasus.
Additional information, help¶
- Send feedback: [email protected]
Table of Contents
- How to Create a New Workspace
- How to Update or Delete a Workspace
- Navigating across Workspaces in Kong Manager
How to Create a New Workspace
Log in as the Super Admin. On the “Workspaces” page, click the “New Workspace” button at the top right to see the “Create Workspace” form.
Name the new Workspace.
⚠️ WARNING: Each Workspace name should be unique, regardless of letter case. For example, naming one Workspace “Payments” and another one “payments” will create two different workspaces that look identical.
⚠️ WARNING: Do not name Workspaces the same as these major routes in Kong Manager:
• Admins • APIs • Certificates • Consumers • Plugins • Portal • Routes • Services • SNIs • Upstreams • Vitals
Select a color or avatar to make each Workspace easier to distinguish, or accept the default color.
Click the “Create New Workspace” button. Upon creation, the application will navigate to the new Workspace’s “Dashboard” page.
On the left sidebar, click the “Admins” link in the “Security” section. If the sidebar is collapsed, hover over the security badge icon at the bottom and click the “Admins” link.
The “Admins” page displays a list of current Admins and Roles. Four default Roles specific to the new Workspace are already visible, and new Roles specific to the Workspace can be assigned from this page.
How to Update or Delete a Workspace
⚠️ IMPORTANT: In order to delete a Workspace, everything inside the Workspace must be deleted first. This includes default Roles on the “Admins” page.
Within the Workspace, navigate to the “Dashboard” page.
At the top right, click the “Settings” button.
Edit or delete the Workspace.
Navigating across Workspaces in Kong Manager
To navigate between Workspaces, from the “Overview” page, click on any Workspace displayed beneath the Vitals chart. The list of Workspaces may be rendered as cards or a table, depending on preference.
If a Role does not have permission to access entire endpoints, the user assigned to the Role will not be able to see the related navigation links.
For more information about Admins and Roles, see RBAC and Permissions. For information about how RBAC applies to specific Workspaces, see RBAC in Workspaces
A list view supports several types of data sources, including a database source, an XPath source, and an association source.
CloudSimple maintenance and updates
The Private Cloud environment is designed to have no single point of failure.
- ESXi clusters are configured with vSphere High Availability (HA). The clusters are sized to have at least one spare node for resiliency.
- Redundant primary storage is provided by vSAN, which requires at least three nodes to provide protection against a single failure. vSAN can be configured to provide higher resiliency for larger clusters.
- vCenter, PSC, and NSX Manager VMs are configured with RAID-10 storage to protect against storage failure. The VMs are protected against node/network failures by vSphere HA.
- ESXi hosts have redundant fans and NICs.
- TOR and spine switches are configured in HA pairs to provide resiliency.
CloudSimple continuously monitors the following VMs for uptime and availability, and provides availability SLAs:
- ESXi hosts
- vCenter
- PSC
- NSX Manager
CloudSimple also monitors the following continuously for failures:
- Hard disks
- Physical NIC ports
- Servers
- Fans
- Power
- Switches
- Switch ports
If a disk or node fails, a new node is automatically added to the affected VMware cluster to bring it back to health immediately.
CloudSimple backs up, maintains, and updates these VMware elements in the Private Clouds:
- ESXi
- vCenter Platform Services
- Controller
- vSAN
- NSX
Back up and restore
CloudSimple backup includes:
- Nightly incremental backups of vCenter, PSC, and DVS rules.
- vCenter native APIs to back up components at the application layer.
- Automatic backup prior to update or upgrade of the VMware management software.
- vCenter data encryption at the source before data is transferred over a TLS1.2 encrypted channel to Azure. The data is stored in an Azure blob where it's replicated across regions.
You can request a restore by opening a Support request.
Maintenance
CloudSimple does several types of planned maintenance.
Backend/internal maintenance
This maintenance typically involves reconfiguring physical assets or installing software patches. It doesn’t affect normal consumption of the assets being serviced. With redundant NICs going to each physical rack, normal network traffic and Private Cloud operations aren’t affected. You might notice a performance impact only if your organization expects to use the full redundant bandwidth during the maintenance interval.
CloudSimple portal maintenance
Some limited service downtime is required when the CloudSimple control plane or infrastructure is updated. Currently, maintenance intervals can be as frequent as once per month. The frequency is expected to decline over time. CloudSimple provides notification for portal maintenance and keeps the interval as short as possible. During a portal maintenance interval, the following services continue to function without any impact:
- VMware management plane and applications
- vCenter access
- All networking and storage
- All Azure traffic
VMware infrastructure maintenance
Occasionally it's necessary to make changes to the configuration of the VMware infrastructure. Currently, these intervals can occur every 1-2 months, but the frequency is expected to decline over time. This type of maintenance can usually be done without interrupting normal consumption of the CloudSimple services. During a VMware maintenance interval, the following services continue to function without any impact:
- VMware management plane and applications
- vCenter access
- All networking and storage
- All Azure traffic
Updates and Upgrades
CloudSimple is responsible for lifecycle management of VMware software (ESXi, vCenter, PSC, and NSX) in the Private Cloud.
Software updates include:
- Patches. Security patches or bug fixes released by VMware.
- Updates. Minor version change of a VMware stack component.
- Upgrades. Major version change of a VMware stack component.
CloudSimple tests a critical security patch as soon as it becomes available from VMware. Per SLA, CloudSimple rolls out the security patch to Private Cloud environments within a week.
CloudSimple provides quarterly maintenance updates to VMware software components. When a new major version of VMware software is available, CloudSimple works with customers to coordinate a suitable maintenance window for upgrade.
Next steps
Back up workload VMs using Veeam
Storage Spaces Direct hardware requirements
Applies to: Windows Server 2019, Windows Server 2016
This topic describes minimum hardware requirements for Storage Spaces Direct.
For production, Microsoft recommends purchasing a validated hardware/software solution from our partners, which include deployment tools and procedures. These solutions are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly. For Windows Server 2019 solutions, visit the Azure Stack HCI solutions website. For Windows Server 2016 solutions, learn more at Windows Server Software-Defined.
Tip
Want to evaluate Storage Spaces Direct but don't have hardware? Use Hyper-V or Azure virtual machines as described in Using Storage Spaces Direct in guest virtual machine clusters.
Base requirements
Systems, components, devices, and drivers must be Windows Server 2016 Certified per the Windows Server Catalog. In addition, we recommend that servers, drives, host bus adapters, and network adapters have the Software-Defined Data Center (SDDC) Standard and/or Software-Defined Data Center (SDDC) Premium additional qualifications (AQs). There are over 1,000 components with the SDDC AQs.
The fully configured cluster (servers, networking, and storage) must pass all cluster validation tests per the wizard in Failover Cluster Manager or with the
Test-Cluster cmdlet in PowerShell.
In addition, the following requirements apply:
Servers
- Minimum of 2 servers, maximum of 16 servers
- Recommended that all servers be the same manufacturer and model
CPU
- Intel Nehalem or later compatible processor; or
- AMD EPYC or later compatible processor
Memory
- Memory for Windows Server, VMs, and other apps or workloads; plus
- 4 GB of RAM per terabyte (TB) of cache drive capacity on each server, for Storage Spaces Direct metadata (a short worked example follows this list)
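As an illustration of that rule, here is a brief, hypothetical calculation; the drive count and size are made-up example values, not a recommended configuration.

# Example: a server with four 1.6 TB cache drives.
cache_drives_per_server = 4
cache_drive_size_tb = 1.6

cache_capacity_tb = cache_drives_per_server * cache_drive_size_tb
metadata_ram_gb = 4 * cache_capacity_tb   # 4 GB of RAM per TB of cache capacity

print(f"Cache capacity per server: {cache_capacity_tb:.1f} TB")
print(f"RAM needed for Storage Spaces Direct metadata: {metadata_ram_gb:.1f} GB")
# Add this on top of the memory required by Windows Server, VMs, and other workloads.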
Boot
- Any boot device supported by Windows Server, which now includes SATADOM
- RAID 1 mirror is not required, but is supported for boot
- Recommended: 200 GB minimum size
Networking
Storage Spaces Direct requires a reliable high bandwidth, low latency network connection between each node.
Minimum interconnect for small scale 2-3 node
- 10 Gbps network interface card (NIC), or faster
- Two or more network connections from each node recommended for redundancy and performance
Recommended interconnect for high performance, at scale, or deployments of 4+
- NICs that are remote-direct memory access (RDMA) capable, iWARP (recommended) or RoCE
- Two or more network connections from each node recommended for redundancy and performance
- 25 Gbps NIC or faster
Switched or switchless node interconnects
- Switched: Network switches must be properly configured to handle the bandwidth and networking type. If using RDMA that implements the RoCE protocol, network device and switch configuration is even more important.
- Switchless: Nodes can be interconnected using direct connections, avoiding using a switch. It is required that every node have a direct connection with every other node of the cluster.
Drives
Storage Spaces Direct works with direct-attached SATA, SAS, or NVMe drives that are physically attached to just one server each. For more help choosing drives, see the Choosing drives topic.
- SATA, SAS, and NVMe (M.2, U.2, and Add-In-Card) drives are all supported
- 512n, 512e, and 4K native drives are all supported
- Solid-state drives must provide power-loss protection
- Same number and types of drives in every server – see Drive symmetry considerations
- Cache devices must be 32 GB or larger
- When using persistent memory devices as cache devices, you must use NVMe or SSD capacity devices (you can't use HDDs)
- NVMe driver is the Microsoft-provided one included in Windows (stornvme.sys)
- Recommended: Number of capacity drives is a whole multiple of the number of cache drives
- Recommended: Cache drives should have high write endurance: at least 3 drive-writes-per-day (DWPD) or at least 4 terabytes written (TBW) per day – see Understanding drive writes per day (DWPD), terabytes written (TBW), and the minimum recommended for Storage Spaces Direct
Here's how drives can be connected for Storage Spaces Direct:
- Direct-attached SATA drives
- Direct-attached NVMe drives
- SAS host-bus adapter (HBA) with SAS drives
- SAS host-bus adapter (HBA) with SATA drives
- NOT SUPPORTED: RAID controller cards or SAN (Fibre Channel, iSCSI, FCoE) storage. Host-bus adapter (HBA) cards must implement simple pass-through mode.
Drives can be internal to the server, or in an external enclosure that is connected to just one server. SCSI Enclosure Services (SES) is required for slot mapping and identification. Each external enclosure must present a unique identifier (Unique ID).
- Drives internal to the server
- Drives in an external enclosure ("JBOD") connected to one server
- NOT SUPPORTED: Shared SAS enclosures connected to multiple servers or any form of multi-path IO (MPIO) where drives are accessible by multiple paths.
Minimum number of drives (excludes boot drive)
- If there are drives used as cache, there must be at least 2 per server
- There must be at least 4 capacity (non-cache) drives per server
Note
This table provides the minimum for hardware deployments. If you're deploying with virtual machines and virtualized storage, such as in Microsoft Azure, see Using Storage Spaces Direct in guest virtual machine clusters.
Maximum capacity
The span argument specifies the size of each time bucket, for example span='1hr' or '3d'. This parameter also supports 'auto'.
Answers
Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has about using the tstats command.
Landen99 - The fields that are available to the tstats command are any indexed field. If you have access to the TSIDX files, you can use the walklex command from the CLI on a TSIDX file. You would look for "*::*" and the fields are on the left hand side of the double colon.
Here is a link to more information about the walklex command.
We need a list of fields available to tstats, please.
Just in case anyone else got confused by the nodename option for an accelerated data model object (like I did)
You need to reference any fields in that object with the whole tree name within the datamodel.
So to count the values for field_name in the server.scheduler.scheduled_reports object, you would reference it like this:
tstats count(server.scheduler.scheduled_reports.field_name) FROM datamodel=internal_server where nodename=server.scheduler.scheduled_reports
Hope that helps someone else, as it wasn't obvious to me from the documentation.
Tamr is a data platform that enables enterprises to catalog their data sources, understand relationships between them, and curate a massive variety of information. It leverages human expertise to learn about the data, and uses machine learning to apply this business/institutional knowledge at scale. This ensures that data curation is guided by the people who know the most about the data, and reduces the associated effort by upwards of 90%.
Use this documentation to learn more about Tamr and how to use it.
Doug Gilmour has always been known as a clutch playoff performer. Well, maybe not always, but certainly throughout his Hall of Fame NHL career. Hear the story of how Gilmour first realized he could raise his game as the playoffs went on.
Plus, hear about how Aaron & Angela's cross-country trip has led to the Memorial Cup Memories series coming soon to Hockey Docs.
Wikka Documentation in SPANISH
This category contains links to documentation pages in Spanish about Wikka. When creating such pages, include CategoryES at the bottom of each page so that the page shows here.
(somebody who knows Spanish should translate the sentence above and the heading)
The following 17 page(s) belong to CategoryES
List of all categories
Dictionary<TKey, TValue>.Enumerator Structure
Microsoft Silverlight will reach end of support after October 2021. Learn more.
Enumerates the elements of a Dictionary<TKey, TValue>.
Namespace: System.Collections.Generic
Assembly: mscorlib (in mscorlib.dll)
Syntax
'Declaration
Public Structure Enumerator _
    Implements IEnumerator(Of KeyValuePair(Of TKey, TValue)), _
    IDisposable, IDictionaryEnumerator, IEnumerator
public struct Enumerator : IEnumerator<KeyValuePair<TKey, TValue>>, IDisposable, IDictionaryEnumerator, IEnumerator
The Dictionary<TKey, TValue>.Enumerator generic type exposes the following members.
Properties
Methods
Explicit Interface Implementations
Getting Started
last updated: 2016-06
Learn how to install Xamarin support for tvOS and quickly get started in tvOS development.
Getting Started with Xamarin.tvOS
Xamarin.tvOS allows us to create native tvOS apps using the same UI controls we would in Objective-C or Swift and Xcode, except with the flexibility and elegance of the modern C# language, the power of the .NET Base Class Library (BCL), and the first-class Xamarin Studio IDE.
This series introduces the basics of Xamarin.tvOS development. It will take us from installing tvOS support to starting, designing, coding and running our first app. Along the way, it teaches the skills and toolset that will be required to work on any Xamarin.tvOS app. Let's get started!
Installing tvOS Support
If you would like to get started developing apps for the new tvOS platform using Xamarin, you can install Xamarin.iOS (with support for tvOS 9.0 and watchOS 2.0) from the stable update channel.
Hello, tvOS Quick Start Guide
This article provides a quick start to developing apps for tvOS with Xamarin Studio by creating a simple Hello, tvOS app. It covers the basics of tvOS device provisioning, interface creation, coding for tvOS and testing on both the tvOS simulator and on real tvOS.
Installing OpenRCT2 On Windows¶
In order to play OpenRCT2, you need a copy of RollerCoaster Tycoon 2 to play. Whilst it is possible to play OpenRCT2 without music, sound and the official scenarios, tracks and objects... the base graphics are still required (known as g1.dat). OpenRCT2 will work with any version and region of RollerCoaster Tycoon 2 including the original CD game, the GOG edition and the Steam edition. If you have installed the game in the default location, OpenRCT2 should automatically detect it and read the required files directly. If however you have not installed the game in the default location, you will be required to locate them when you start OpenRCT2 for the first time. You must select the directory containing the sub directories; data, objdata, scenarios and tracks.
Located in: product-template.php
Version Introduced: 3.8
Parameters Accepted: $args ($product_id = defaults to the current product id if used within the loop, $type – percentage / amount, Variations)
Description: This template tag is used to display the “you save” price or percentage for products that are on special. You would use this tag within your product pages.
Note: Use this tag inside the product loop for best results. This function will not return a $ or % sign; it will only return the difference amount.
Code Example:
<?php echo '$' . wpsc_you_save('type=amount'); ?>
Release Notes
Configuration files
In the PushSDK.properties file, the following new properties are available:
- dao.type
- dtd.declaration.public
- dtd.declaration.enterprise
- public.ppg.address
- enterprise.ppg.address
- acknowledgement.push.lookup.retry.delay
The following properties are no longer supported:
- All the XML configuration files of the Spring beans are updated to support data storage in system memory (as an alternative to using a database).
You can push images from your CI/CD system to DigitalOcean Container Registry. For example, you can push a new image to the registry whenever a build with your commit is successful on your source control system.
To start using your CI/CD system with the registry, you first need to authenticate it to push images to the registry. Depending on your CI system, you can authenticate with doctl or with a DigitalOcean API token, both described below. You can then run the docker commands to push your image to the registry, or you can configure your CI system to specify what to build and push the image automatically.
Many CI systems support configuring authentication using a Docker
config.json file. You can fetch this JSON file for your container registry using one of the following methods:
doctl registry docker-config --read-write. If you do not provide the --read-write flag, you will receive read-only credentials, which are usually undesirable for CI.
For CI systems that support configuring registry authentication via username and password, use a DigitalOcean API token as both the username and the password. The API token must have read/write privileges to push to your registry.
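As an illustration of what that configuration looks like, the hedged sketch below builds a Docker config.json auths entry for registry.digitalocean.com with an API token used as both username and password. The token value is a placeholder; in practice you would fetch equivalent JSON with doctl registry docker-config or supply the token through your CI system's secret store.

import base64
import json
from pathlib import Path

API_TOKEN = "dop_v1_example_placeholder"      # placeholder, not a real token
REGISTRY_HOST = "registry.digitalocean.com"

# Docker stores credentials as base64("username:password"); both are the API token here.
auth = base64.b64encode(f"{API_TOKEN}:{API_TOKEN}".encode()).decode()
config = {"auths": {REGISTRY_HOST: {"auth": auth}}}

config_path = Path.home() / ".docker" / "config.json"
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))
print(f"Wrote credentials for {REGISTRY_HOST} to {config_path}")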
If you can run
doctl in your CI environment, run the following command to authenticate before pushing images:
doctl registry login --expiry-seconds <time>
This method is a good choice for CI systems such as GitHub Actions, where you can run arbitrary commands and push Docker images via the Docker command-line. For an example, see Enable Push-to-Deploy.
The Write File tool can convert output data to files. You can add the Write File tool to API testing and service virtualization components that produce output data, such as test assets, message responders, and other tools.
- Select a node for which you want to write output data.
- Choose Add Write File Tool from the actions drop-down menu.
- Configure the tool settings:
Target name: Specify a name for the target file or use wildcards. Acceptable wildcards include:
- Target directory: Specify where the tool saves the file it creates. Click in the filed and the auto-complete function will show available locations.
You can also enable the following options for the target directory:
- If the extension tool is not attached to the output of another tool, you can specify an input for your script. Choose a MIME type from the drop-down menu and specify the text you want the tool to operate on.
TOM Software Architecture¶
The goal of the TOM Toolkit is to make developing TOMs as easy as possible while providing the flexibility needed to tailor each TOM to its specific science case. The motivation for the TOM Toolkit is discussed on the about page.
The TOM Toolkit (referred to as “the toolkit”) provides a framework for developing web applications that serve the purposes described in the motivation. This means when users interact with a TOM they will be doing so via a web browser or through other internet-based protocols. Web-based technologies allow developers to create rich user interfaces, simplify distribution and choose from a huge variety of programming languages and frameworks.
Python has become the go-to language for many in science. Fortunately, Python also enjoys widespread popularity in web development communities. This provides a unique opportunity for “Pythonistas” to develop scientific codebases which integrate seamlessly with web-based technologies. One need look no further than the success of the Jupyter project to see evidence of this.
There has been a lot of development surrounding Python and the web in the last two decades. One framework in particular, Django has emerged as one of the more popular choices for web development. Django is well known for its maturity, ease of use and modularity.
Instead of reinventing the wheel, it often makes sense to build on the proven work of others. Thus, it was decided that the toolkit would build on top of the Django framework. This provides several advantages:
The toolkit does not need to re-implement generic functionality that Django already provides such as template rendering, routing, object relational mapping, or even higher-level functionality like user accounts and database migrations.
TOM developers get to take advantage of the massive amount of knowledge that already exists for Django projects. In fact, much of the extra functionality that a TOM developer might want to implement need not be dependent on the toolkit at all, but can instead be developed by referring to the excellent documentation Django provides.
There are thousands of Django packages already written that can be used in any TOM project. If a TOM developer wants to be able to generate dynamic plots, or allow their users to login with Google, or even turn their TOM into a Slack bot, chances are there is already a package available that might suit their needs.
We highly recommend that developers interested in utilizing the TOM Toolkit familiarize themselves with the basics of Django, especially if they want to customize the toolkit in any significant fashion. The majority of the guides found in the TOM toolkit documentation are simply Django concepts rewritten in a TOM context.
Extending and Customizing the TOM Toolkit¶
As mentioned before, Django is well known for its extensibility and modularity. The toolkit takes advantage of these strengths heavily. In many ways, the TOM Toolkit is a framework within a framework.
After a TOM developer follows the getting started guide they are left with a functioning but generic TOM. It is then up to the developer to implement the specific features that their science case requires. The toolkit tries to facilitate this as efficiently as possible and provides documentation in areas of customization from changing the HTML layout of a page to altering how observations are submitted and even creating a new alert broker.
Django, and by extension the toolkit, rely heavily on object oriented
programming, especially inheritance. Most customization in the TOM toolkit comes
from subclassing classes that provide generic functionality and overriding or
extending methods. An experienced Django developer would feel right at home. For example, the
ObservationRecordDetailView
in the
tom_observations module of the toolkit inherits from Django’s
DetailView.
This means TOM developers are able to take full advantage of the power of Django
while still benefiting from the basic functionality that the toolkit provides.
This is why we recommend TOM developers familiarize themselves with Django; most TOM Toolkit features are actually extended Django features.
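As a small illustration of that subclassing pattern (not the toolkit's actual code; the app, model, and template names below are hypothetical), a developer can override a single method of a Django class-based view and inherit everything else:

from django.views.generic import DetailView

from myapp.models import Target  # hypothetical app and model


class AnnotatedTargetDetailView(DetailView):
    """Extends a generic DetailView, overriding only what needs to change."""

    model = Target
    template_name = "myapp/target_detail.html"  # assumed template path

    def get_context_data(self, **kwargs):
        # Keep all of the parent class's behaviour, then add extra context.
        context = super().get_context_data(**kwargs)
        context["note"] = "Value computed specifically for this page"
        return context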
Plugin Architecture¶
Some areas of the TOM implement a plugin-based architecture to support multiple
implementations of similar functionality. An example would be the
tom_observations module, in which every supported observatory is implemented
as its own plugin. The
tom_catalogs and
tom_alerts work in the same way: the
module defines the interface and generic functionality and each implementation
fills in its own logic.
This structure makes it easy for developers to write their own plugins which can then be shared and installed by others or even contributed to the main codebase. The gemini.py module is an observation module plugin contributed by Bryan Miller to enable the triggering of observation requests on the Gemini telescope via the TOM Toolkit. Thanks Bryan!
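The general shape of such a plugin system can be sketched with an abstract base class; note that the class and method names below are invented for illustration and are not the actual interface defined in facility.py:

from abc import ABC, abstractmethod


class BaseFacilityPlugin(ABC):
    """Illustrative plugin interface: each observatory supplies its own subclass."""

    name = "Generic"

    @abstractmethod
    def submit_observation(self, payload: dict) -> str:
        """Send an observation request and return a facility-side identifier."""

    @abstractmethod
    def get_status(self, observation_id: str) -> str:
        """Return the current state of a previously submitted request."""


class ExampleObservatory(BaseFacilityPlugin):
    name = "ExampleObservatory"

    def submit_observation(self, payload: dict) -> str:
        # A real plugin would call the observatory's API here.
        return "example-0001"

    def get_status(self, observation_id: str) -> str:
        return "PENDING"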
Template Engine¶
The toolkit is able to take advantage of Django's excellent template engine. Part of the engine's power comes from the ability of templates to extend and override each other. This means a TOM developer can easily change the layout and style of any page without modifying the underlying framework's code directly. Entire pages may be replaced, or only "blocks" within a template.
Compare these screenshots of the standard target detail page and the Global Supernova Project’s target detail page, the latter taking heavy advantage of template inheritance.
Data Storage, Deployment and Tooling¶
The toolkit is implemented as a web application backed by a relational database, uses (mostly) server side rendering, and is deployed using wsgi.
The toolkit should support any relational database that Django supports, including MySQL, PostgreSQL, SQLite, and Oracle. There is nothing stopping a TOM developer from supplementing their TOM with additional databases, even NoSQL ones. By default SQLite is deployed because of its ease of use.
For non-database storage (data products, FITS files, etc.) the toolkit can be configured to use a variety of cloud-based storage services via django-storages. The documentation provides a guide for storing data on Amazon S3. By default, data is stored on disk.
Similarly, deployment works with a variety of servers, including uWsgi and Gunicorn. The documentation provides a guide to deploying to Heroku for those who want to get up and running quickly. Another option is to use Docker: the demo instance of the toolkit is deployed to a Kubernetes cluster and the Dockerfile is available on Github.
On the frontend, the toolkit utilizes the very popular Bootstrap 4 CSS framework for its layout and general look, making it easy to pick up for anyone with experience with CSS. JavaScript is introduced sparingly (astronomers love Python!) but is used in various situations to enhance the user experience and enable functionality such as interactive plotting and sky maps.
Django Reusable Apps¶
As previously mentioned, one of the reasons for Django's popularity is its modularity. Django has the concept of reusable apps, which are just Python packages that are specifically meant to be used inside a Django project. The majority of the toolkit's functionality is implemented in a series of Django apps. While most of the apps are required, some may be omitted entirely from a TOM if the functionality is not desired.
The following describes each app that ships with the toolkit and its purpose.
TOM Targets¶
The tom_targets app is central to the entire TOM Toolkit project. It provides the database definitions for the storage and retrieval of targets and target lists. It also provides the views (pages) for viewing, creating, modifying and visualizing these targets in several ways including the visibility and target distribution plots.
Nearly every app depends on the
tom_targets module in some way.
TOM Observations¶
The tom_observations app handles all the logic for submitting and querying observations of targets at observatories. It defines the database models for observation requests and provides some views for working with them. facility.py defines an interface that external facilities (observatories) can implement in order to integrate with the toolkit: gemini.py and lco.py are two examples, and we expect more in the future.
TOM Data Products¶
Straddling both the
tom_targets and
tom_observations packages is
tom_dataproducts.
This package contains the logic required for storing data related to targets and
observations within the toolkit. Some data products are fetched from on-line
archives (handled by an observatory’s observation module) but data can also be
uploaded manually by the toolkit’s users.
This module handles details such as where data should be stored (locally on disk or in the cloud) as well as displaying certain kinds of data. It also provides code hooks where TOM developers can run their own functions on the data in case specialized data processing, analytics or pipelining is required.
TOM Alerts¶
The tom_alerts app contains modules related to the functionality of ingesting targets from various external services. These services, usually called brokers, provide rapidly changing target lists that are of interest to time domain astronomers. The alerts.py module provides a generic interface that other modules can implement, giving them the ability to integrate these brokers with the toolkit. Currently, there are modules available for Lasair, MARS, SCOUT, and others, with more planned for the future.
TOM Catalogs¶
The tom_catalogs app contains functionality related to querying astronomical catalogs. These “harvester” modules enable the querying and translation of targets found in databases such as Simbad and JPL Horizons directly into targets within the toolkit. The harvester.py module provides the basic interface, and there are several modules already written for Simbad, NED, the MPC, JPL Horizons and the Transient Name Server.
TOM Setup and TOM Common¶
The tom_setup package is special in that its sole purpose is to help TOM developers bootstrap new TOMs. See the getting started guide for an example. The tom_common package contains logic and data that doesn’t fit anywhere else.
Database Layout¶
The following diagram is an Entity-relationship Diagram (ERD). It is meant to display the relationship between tables in a database. In this case, it may help illustrate how the data from each of the toolkit’s packages relate to each other. It is not exhaustive; many tables and rows have been omitted for brevity.
Models¶
Django models are the classes that map to the database tables in your Django application. The TOM Toolkit models and the rationale behind them are largely intuitive, but may require some explanation.
Target¶
The
Target model is relatively self-evident–it stores the data that describes the
targets in your TOM. By default, that includes things like name, type, coordinates, and
ephemerides.
TargetName¶
The
TargetName model stores extra names for a target, aka aliases. The corresponding target
is stored as a foreign key.
ObservationRecord¶
The
ObservationRecord model describes an individual observation request for a single target.
It stores the target as a foreign key, and can optionally store facility information and the
parameters submitted for the observation.
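A heavily simplified sketch of the relationships described so far might look like the following; the real toolkit models carry many more fields and options, so treat this only as a reading aid:

from django.db import models


class Target(models.Model):
    name = models.CharField(max_length=100)
    ra = models.FloatField(null=True, blank=True)   # right ascension
    dec = models.FloatField(null=True, blank=True)  # declination


class TargetName(models.Model):
    target = models.ForeignKey(Target, related_name="aliases", on_delete=models.CASCADE)
    name = models.CharField(max_length=100)


class ObservationRecord(models.Model):
    target = models.ForeignKey(Target, on_delete=models.CASCADE)
    facility = models.CharField(max_length=50, blank=True)
    parameters = models.TextField(blank=True)  # parameters submitted with the request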
DataProduct¶
The
DataProduct model can refer to a number of different things, but generally refers to a
single file that is associated with a
Target and optionally an
ObservationRecord. A
DataProduct has one of a number of tags, which at present include the following:
Photometry, a file containing photometric data
FITS, any FITS file not falling into the other categories
Spectroscopy, a file containing spectroscopic data
Image, a file containing image data, such as a JPEG or PNG
A
DataProduct type is file format-agnostic and refers to the data contained in the file,
rather than the format itself. The type is necessary for making decisions on which operations
can be executed using the data in a file.
ReducedDatum¶
A
ReducedDatum is a single point of data associated with a
Target and optionally a
DataProduct. The single data point is typically a single point of photometry or an individual
spectrum. The
ReducedDatum model has the following fields, in addition to its aforementioned
foreign key relationships:
data_type is maintained on both the ReducedDatum and the DataProduct for the case when data is brought in from another source, such as a broker.
source_name optionally refers to the original source of the data. The intent of this field was to track data ingested from brokers, but it could potentially be used for other purposes.
source_location optionally gives a hard location to the source; for a broker, it would be a link to the original alert.
timestamp is the time at which the datum was produced.
value is a TextField that can take any series of data. As implemented, photometry is stored as JSON with keys for magnitude and error, but the TextField provides flexibility for additional photometry values on the datum. Spectroscopy is also stored as JSON, with keys for magnitude and flux. A short sketch of such a value follows this list.
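For example, a single photometry datum's value might be serialized like this; the numbers are invented and the extra key simply shows that the free-form TextField can hold additional values:

import json

photometry_value = {
    "magnitude": 18.42,  # example measurement, not real data
    "error": 0.05,       # example uncertainty
    "filter": "r",       # extra keys are possible because value is free-form text
}

serialized = json.dumps(photometry_value)
print(serialized)  # this string is what would be stored in the value field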
Feedback and bug reporting¶
We hope the TOM Toolkit is helpful to you and your project. If you have any concerns about implementation details, or questions about your own needs, please don't hesitate to reach out. Issues and pull requests are also welcome on the project's GitHub page.
Date: Fri, 13 Mar 2020 17:08:08 +0100
From: Polytropon <[email protected]>
To: [email protected]
Cc: FreeBSD Mailing List <[email protected]>
Subject: Re: "directory not empty", "no such file or directory" errors on upgrade
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
References: <[email protected]> <[email protected]> <[email protected]>
On Fri, 13 Mar 2020 07:04:24 -0600, Gary Aitken wrote: > On 3/12/20 4:57 PM, Polytropon wrote: > > On Thu, 12 Mar 2020 13:03:27 -0600, > >> > >> The upgraded system has no > >> /usr/src/sys/pc98/ directory > >> > >> The files in > >> /usr/src/contrib/ofed/usr.lib > >> /usr/src/contrib/ofed/usr.bin > >> /usr/src/contrib/llvm/tools/lldb/source/Plugins/InstrumentationRuntime/ThreadSanitizer > >> /usr/src/contrib/llvm/tools/lldb/source/Plugins/InstrumentationRuntime/AddressSanitizer > >> > >> appear to be old versions left over from the original 11.2 install > >> > >> Hints on the proper way to fix this? > > > > When I read such messages, the first thing that comes to mind > > is filesystem inconsistency. Reboot into single-user mode and > > run a forced (!) fsck on all file systems, maybe repeat it if > > needed. > > fsck reported all clean. Okay - that would have been the most obvious problem. :-) > > You probably don't have > > > > > > > in your /etc/rc.conf which in my opinion should be the default > > setting (instead of YES). See "man 8 fsck" for further options > > that might be needed. > > Thanks for the reminder; I used to have it set but on this rebuilt- > after-a-crash system lost it. As I said, I don't understand why this isn't the default. A faster boot process into a potentially unstable filesystem environment does not justify delegating low-level filesystem check (and repair, and if needed, user interaction) into the background. > > Things like "directory not empty" can also be due to files that > > haven't been removed. This is possible if the schg (immutable) > > flag has been set for a file; use "chflags noschg <file>"; see > > "man 1 chflags" for details. > > > > First step: Always rule out the obvious. ;-) > > The remaining question is: > Are those directories supposed to be repopulated > with more recent versions? Or should they simply be removed? > If they need to be repopulated, how does one repopulate them? You can entirely remove /usr/src. Depending on your source retrieval method (freebsd-update, the "src" component, or a normal SVN checkout) will create any directories beneath /usr/src as they are needed. Probably a fresh new start for /usr/src would be a good idea. A way to identify offending files that stop you from removing /usr/src completely could be the following: # ls -Rlao /usr/src | grep "schg" However, it's rather untypical to have immutable files in the /usr/src subtree; they did at least sometimes appear (in the past) in /usr/obj. It could be possible that some other tool wrote to /usr/src and modified things it wasn't supposed to touch... -- Polytropon Magdeburg, Germany Happy FreeBSD user since 4.0 Andra moi ennepe, Mousa, ...
This element MUST be conveyed as the root element in any instance document based on this Schema expression.
ABIE Quotation. Details A document used to quote for the provision of goods and services. Quotation Purchase Order
A container for extensions foreign to the document.
BBIE Quotation. UBL Version Identifier. Identifier Identifies the earliest version of the UBL 2 schema for this document type that defines all of the elements that might be encountered in the current instance. 0..1 Quotation UBL Version Identifier Identifier Identifier. Type 2.0.5
BBIE Quotation. Customization Identifier. Identifier Identifies a user-defined customization of UBL for a specific use. 0..1 Quotation Customization Identifier Identifier Identifier. Type NES
BBIE Quotation. Profile Identifier. Identifier Identifies a user-defined profile of the subset of UBL being used. 0..1 Quotation Profile Identifier Identifier Identifier. Type BasicProcurementProcess
BBIE Quotation. Profile Execution Identifier. Identifier Identifies an instance of executing a profile, to associate all transactions in a collaboration. 0..1 Quotation Profile Execution Identifier Identifier Identifier. Type
BBIE Quotation. Identifier An identifier for this document, assigned by the sender. 0..1 Quotation Identifier Identifier Identifier. Type
BBIE Quotation. Copy_ Indicator. Indicator Indicates whether this document is a copy (true) or not (false). 0..1 Quotation Copy Indicator Indicator Indicator. Type
BBIE Quotation. UUID. Identifier A universally unique identifier for an instance of this document. 0..1 Quotation UUID Identifier Identifier. Type
BBIE Quotation. Issue Date. Date The date, assigned by the sender, on which this document was issued. 1 Quotation Issue Date Date Date. Type
BBIE Quotation. Issue Time. Time The time, assigned by the sender, at which this document was issued. 0..1 Quotation Issue Time Time Time. Type
BBIE Quotation. Note. Text Free-form text pertinent to this document, conveying information that is not contained explicitly in other structures. 0..n Quotation Note Text Text. Type
BBIE Quotation. Pricing_ Currency Code. Code A code signifying the currency used for all prices in the Quotation. 0..1 Quotation Pricing Currency Code Code Currency Currency_ Code. Type
BBIE Quotation. Line Count. Numeric The number of Quotation Lines in this document. 0..1 Quotation Line Count Numeric Numeric. Type
ASBIE Quotation. Validity_ Period. Period The period for which the Quotation is valid. 0..1 Quotation Validity Period Period Period
ASBIE Quotation. Request For Quotation_ Document Reference. Document Reference A reference to the Request for Quotation associated with this Quotation. 0..1 Quotation Request For Quotation Document Reference Document Reference Document Reference
ASBIE Quotation. Additional_ Document Reference. Document Reference A reference to an additional document associated with this document. 0..n Quotation Additional Document Reference Document Reference Document Reference
ASBIE Quotation. Contract A contract associated with this Quotation. 0..n Quotation Contract Contract Contract
ASBIE Quotation. Signature A signature applied to this document. 0..n Quotation Signature Signature Signature
ASBIE Quotation. Seller_ Supplier Party. Supplier Party The seller. 1 Quotation Seller Supplier Party Supplier Party Supplier Party
ASBIE Quotation. Buyer_ Customer Party. Customer Party Association to the Buyer. 0..1 Quotation Buyer Customer Party Customer Party Customer Party
ASBIE Quotation. Originator_ Customer Party. Customer Party The originator. 0..1 Quotation Originator Customer Party Customer Party Customer Party
ASBIE Quotation. Delivery A delivery associated with this document. 0..n Quotation Delivery Delivery Delivery
ASBIE Quotation. Delivery Terms A set of delivery terms associated with this document. 0..1 Quotation Delivery Terms Delivery Terms Delivery Terms
ASBIE Quotation. Payment Means Expected means of payment. 0..1 Quotation Payment Means Payment Means Payment Means
ASBIE Quotation. Transaction Conditions A specification of purchasing, sales, or payment conditions applying to Orders related to this Quotation. 0..1 Quotation Transaction Conditions Transaction Conditions Transaction Conditions
ASBIE Quotation. Allowance Charge A discount or charge that applies to a price component. 0..n Quotation Allowance Charge Allowance Charge Allowance Charge
ASBIE Quotation. Destination_ Country. Country The country of destination of potential orders (for customs purposes). 0..1 Quotation Destination Country Country Country
ASBIE Quotation. Tax Total The total amount of a specific type of tax. 0..n Quotation Tax Total Tax Total Tax Total
ASBIE Quotation. Quoted_ Monetary Total. Monetary Total The total amount of the Quotation. 1 Quotation Quoted Monetary Total Monetary Total Monetary Total
ASBIE Quotation. Quotation Line A line quoting a cost for one kind of item. 1..n Quotation Quotation Line Quotation Line Quotation Line
End-of-Life (EoL)
Integrate with Windows Server 2016 & 2012r2 Active Directory Federation Services (ADFS) via SAML 2.0 federation
Many organizations use SAML to authenticate users for web services. Prisma Cloud supports the SAML 2.0 federation protocol for access to the Prisma Cloud Console. When SAML support is enabled, users can log into Console with their federated credentials. This article provides detailed steps for federating your Prisma Cloud Console with your Active Directory Federation Service (ADFS) Identity Provider (IdP).
Prisma Cloud supports SAML 2.0 federation with Windows Server 2016 and Windows Server 2012r2 Active Directory Federation Services via the SAML protocol. The federation flow works as follows:
- Users browse to Prisma Cloud Console.
- Their browsers are redirected to the ADFS SAML 2.0 endpoint.
- Users authenticate either with Windows Integrated Authentication or Forms Based Authentication. Multi-factor authentication can be enforced at this step.
- An ADFS SAML token is returned to Prisma Cloud Console.
- Prisma Cloud Console validates the SAML token’s signature and associates the user to their Prisma Cloud account via user identity mapping or group membership.
Prisma Cloud Console is integrated with ADFS as a federated SAML Relying Party Trust.
The Relying Party trust workflows may differ slightly between Windows Server 2016 and Windows Server 2012r2 ADFS, but the concepts are the same.
Configure Active Directory Federation Services
This guide assumes you have already deployed Active Directory Federation Services, and Active Directory is the claims provider for the service.
- Log onto your Active Directory Federation Services server.
- Go toServer Manager > Tools > AD FS Managementto start the ADFS snap-in.
- Go toAD FS > Service > Certificatesand click on thePrimary Token-signingcertificate.
- Select the Details tab, and clickCopy to File….
- Save the certificate as a Base-64 encoded X.509 (.CER) file. You will upload this certificate into the Prisma Cloud console in a later step.
- Go toAD FS > Relying Party Trusts.
- ClickAdd Relying Party Trustfrom theActionsmenu.
- Step Welcome: selectClaims aware.
- Step Select Data Source: selectEnter data about the relying party manually.
- Step Specify Display Name: InDisplay Name, entertwistlock Console.
- Step Configure Certificate: leave blank.
- Step Configure URL: selectEnable support for the SAML 2.0 WebSSO protocol. Enter the URL for your Prisma Cloud Consolehttps://<FQDN_TWISTLOCK_CONSOLE>:8083/api/v1/authenticate/.
- Step Configure Identifiers: for example entertwistlockall lower case and clickAdd.
- Step Choose Access Control Policy: this is where you can enforce multi-factor authentication for Prisma Cloud Console access. For this example, selectPermit everyone.
- Step Ready to Add Trust: no changes, clickNext.
- Step Finish: selectConfigure claims issuance policy for this applicationthen clickClose.
- In the Edit Claim Issuance Policy for Prisma Cloud Console clickAdd Rule.
- Step Choose Rule Type: InClaim rule template, selectSend LDAP Attributes as Claims.
- Step Configure Claim Rule:
- SetClaim rule nametoPrisma Cloud Console
- SetAttribute StoretoActive Directory
- InMapping of LDAP attributes to outgoing claim types, set theLDAP AttributetoSAM-Account-NameandOutgoing claim typetoName ID.The user’s Active Directory attribute returned in the claim must match the Prisma Cloud user’s name. In this example we are using the samAccountName attribute.
- ClickFinish.
- Configure ADFS to either sign the SAML response (-SamlResponseSignature MessageOnly) or the SAML response and assertion (-SamlResponseSignature MessageAndAssertion) for the Prisma Cloud Console relying party trust. For example to configure the ADFS to only sign the response, start an administrative PowerShell session and run the following command:set-adfsrelyingpartytrust -TargetName "Prisma Cloud Console" -SamlResponseSignature MessageOnly
Active Directory group membership within SAML response
You can use Active Directory group membership to assign users to Prisma Cloud roles. When a user’s group membership is sent in the SAML response, Prisma Cloud attempts to associate the user’s group to a Prisma Cloud role. If there is no group association, Prisma Cloud matches the user to an identity based on the NameID to Prisma Cloud username mapping. The SAML group to Prisma Cloud role association does not require the creation of a Prisma Cloud user. Therefore simplify the identity management required for your implementation of Prisma Cloud.
- InRelying Party Trusts, select thePrisma Cloud Consoletrust.
- ClickEdit Claim Issuance Policyin the right handActionspane.
- ClickAdd Rule.
- Claim rule template:Send Claims Using a Custom Rule.
- ClickNext.
- Claim rule name:Prisma Cloud Groups.
- Paste the following claim rule into the Custom rule field:c:[Type == "", Issuer == "AD AUTHORITY"] => issue(store = "Active Directory", types = ("groups"), query = ";tokenGroups;{0}", param = c.Value);
Configure the Prisma Cloud Console
Configure the Prisma Cloud Console.
- Login to the Prisma Cloud Console as an administrator.
- Go toManage > Authentication > Identity Providers > SAML.
- SetIntegrate SAML users and groups with Prisma CloudtoEnabled.
- SetIdentity ProvidertoADFS.
- InIdentity provider single sign-on URL, enter your SAML Single Sign-On Service URL. For example.
- InIdentity provider issuer, enter your SAML Entity ID, which can be retrieved fromADFS > Service > Federation Service Properties : Federation Service Identifier.
- InAudience, enter the ADFS Relying Party identifiertwistlock
- InX.509 certificate, paste the ADFSToken Signing Certificate Base64into this field.
- ClickSave.
- Go toManage > Authentication > Users.
- ClickAdd user.
- Username: Active Directory samAccountName must match the value returned in SAML token’s Name ID attribute.When federating with ADFS Prisma Cloud usernames are case insensitive. All other federation IdPs are case sensitive.
- Auth method: set toSAML.
- ClickSave.
Active Directory group membership mapping to Prisma Cloud role
Associate a user’s Active Directory group membership to a Prisma Cloud role.
- Go toManage > Authentication > Groups.
- ClickAdd group.
- Group Name matches theActive Directory group name.
- Select theSAML groupradio button.
- Assign theRole.The SAML group to Prisma Cloud role association does not require the creation of a Prisma Cloud user.
- Test login into the Prisma Cloud Console via ADFS SAML federation.Leave your existing session logged onto the Prisma Cloud Console in case you encounter issues. Open a new incognito browser window and go to https://<CONSOLE>:8083.
Recommended For You
Recommended Videos
Recommended videos not found. | https://docs.paloaltonetworks.com/prisma/prisma-cloud/20-12/prisma-cloud-compute-edition-admin/authentication/saml_active_directory_federation_services.html | 2021-09-17T04:43:41 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.paloaltonetworks.com |
Published: 2nd September 2021
Version: 2.0.0
Severity: High
CVSS Score: 8.3 (CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:C/C:H/I:H/A:H)
AFFECTED PRODUCTS
API Microgateway Toolkit Linux : 3.2.0
API Microgateway Toolkit Mac : 3.2.0
API Microgateway Toolkit Windows : 3.2.0
OVERVIEW
An improper certification validation vulnerability leading to possible malicious code execution.
DESCRIPTION
Due to an improper certification validation a malicious actor could perform a person-in-the-middle attack by manipulating Ballerina dependencies when retrieving such from the Ballerina Central. This retrieval happens when building the microgateway using the 'micro-gw build' command.
IMPACT
In order for the attack to succeed, an malicious actor should be able to perform a person-in-the-middle attack on the TLS connection, and alter response traffic relevant to the communication in between the host executing 'micro-gw build' command and the Ballerina Central. By leveraging the vulnerability, a malicious actor may perform an Remote Code Execution by injecting a malicious code into the system.
SOLUTION
If the latest version of the affected WSO2 product is not mentioned under the affected product list, you may migrate to the latest version to receive security fixes. Otherwise you may apply the relevant fixes to the product based on the public fix:
Note: If you are a WSO2 customer with Support Subscription, please use WSO2 Updates in order to apply the fix.
CREDITS
WSO2 thanks, Simon Gerst for responsibly reporting the identified issue and working with us as we addressed it. | https://docs.wso2.com/display/Security/Security+Advisory+WSO2-2021-1347 | 2021-09-17T04:47:33 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.wso2.com |
Changelog¶
0.6.6¶
Two bugs have been fixed in
pfsspy.utils.carr_cea_wcs_header:
-
The reference pixel was previously one pixel too large in both longitude and latitude.
-
The longitude coordinate was previously erroneously translated by one degree.
Both of these are now fixed.
0.6.5¶
This release improves documentation and handling of HMI maps. In particular:
-
The HMI map downloading example has been updated to use the polar filled data product, which does not have any data missing at the poles.
-
pfsspy.utils.fix_hmi_meta()has been added to fix metadata issues in HMI maps. This modifies the metadata of a HMI map to make it FITS compliant, allowing it to be used with pfsspy.
0.6.2¶
This release includes several small fixes in response to a review of pfsspy for the Journal of Open Source Software. Thanks to Matthieu Ancellin and Simon Birrer for their helpful feedback!
A permanent code of conduct file has been added to the repository.
Information on how to contribute to pfsspy has been added to the docs.
The example showing the performance of different magnetic field tracers has been fixed.
The docs are now clearer about optional dependencies that can increase performance.
The GONG example data has been updated due to updated data on the remote GONG server.
0.6.1¶
Bug fixes¶
Fixed some messages in errors raised by functions in
pfsspy.utils.
0.6.0¶
New features¶
The
pfsspy.utilsmodule has been added, and contains various tools for loading and working with synoptic maps.
pfsspy.Outputhas a new
bunitproperty, which returns the
Unitof the input map.
Added
pfsspy.Output.get_bvec(), to sample the magnetic field solution at arbitrary coordinates.
Added the
pfsspy.fieldline.FieldLine.b_along_flineproperty, to sample the magnetic field along a traced field line.
Added a guide to the numerical methods used by pfsspy.
Breaking changes¶
The
.alproperty of
pfsspy.Outputis now private, as it is not intended for user access. If you really want to access it, use
._al(but this is now private API and there is no guarantee it will stay or return the same thing in the future).
A
ValueErroris now raised if any of the input data to
pfsspy.Inputis non-finite or NaN. Previously the PFSS computation would run fine, but the output would consist entirely of NaNs.
Behaviour changes¶
The monopole term is now ignored in the PFSS calculation. Previously a non-zero (but small) monopole term would cause floating point precision issues, leading to a very noisy result. Now the monopole term is explicitly removed from the calculation. If your input has a non-zero mean value, pfsspy will issue a warning about this.
The data downloaded by the examples is now automatically downloaded and cached with
sunpy.data.manager. This means the files used for running the examples will be downloaded and stored in your
sunpydata directory if they are required.
The observer coordinate information in GONG maps is now automatically set to the location of Earth at the time in the map header.
pfsspy.tracing.Tracerto transform from spherical to cartesian coordinates. | https://pfsspy.readthedocs.io/en/stable/changes.html | 2021-09-17T03:45:37 | CC-MAIN-2021-39 | 1631780054023.35 | [] | pfsspy.readthedocs.io |
Interactive Queries¶
Interactive Queries allow you to leverage the state of your application from outside your application. The Kafka Streams API enables your applications to be queryable. access the built-in types via the class
QueryableStoreTypes.
Kafka Streams currently has two built-in types:
- A key-value store
QueryableStoreTypes#keyValueStore(), see Querying local key-value stores.
- A window store
QueryableStoreTypes#windowStore(), see Querying local window stores., you can get access to “CountsKeyValueStore” and then query it via the ReadOnlyKeyValueStore API:
// Get the key-value store CountsKeyValueStore ReadOnlyKeyValueStore<String, Long> keyValueStore = streams.store("CountsKeyValueStore", QueryableStoreTypes.keyValueStore()); // Get value by key System.out.println("count for hello:" + keyValueStore.get("hello")); // Get the values for a range of keys available in this application instance KeyValueIterator<String, Long> range = keyValueStore.range("all", "streams"); while (range.hasNext()) { KeyValue<String, Long> next = range.next(); System.out.println("count for " + next.key + ": " + value); } //(Duration.ofMinutes(1))) .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>as("CountsWindowStore"));
After the application has started, you can get access to “CountsWindowStore” and then query it via the ReadOnlyWindowStore API:
// Get the window store named "CountsWindowStore" ReadOnlyWindowStore<String, Long> windowStore = streams.store("CountsWindowStore", QueryableStoreTypes.windowStore()); // Fetch values for the key "world" for all of the windows available in this application instance. // To get *all* available windows we fetch windows from the beginning of time until now. Instant timeFrom = Instant.ofEpochMilli(0); // beginning of time = oldest available Instant timeTo = Instant.now(); // now (in processing-time) WindowStoreIterator<Long> iterator = windowStore.fetch("world", timeFrom, timeTo); while (iterator.hasNext()) { KeyValue<Long, Long> next = iterator.next(); long windowTimestamp = next.key; System.out.println("Count of 'world' @ time " + windowTimestamp + " is " + next.value); } //.
The class/interface hierarchy for your custom store might look something like:
public class MyCustomStore<K,V> implements StateStore, MyWriteableCustomStore<K,V> { // implementation of the actual store } // Read-write interface for MyCustomStore public interface MyWriteableCustomStore<K,V> extends MyReadableCustomStore<K,V> { void write(K Key, V value); } // Read-only interface for MyCustomStore public interface MyReadableCustomStore<K,V> { V read(K key); } public class MyCustomStoreBuilder implements StoreBuilder<MyCustomStore<K,V>> { //:
public class MyCustomStoreType<K,V> implements QueryableStoreType<MyReadableCustomStore<K,V>> { // Only accept StateStores that are of type MyCustomStore public boolean accepts(final StateStore stateStore) { return stateStore instanceOf MyCustomStore; } public MyReadableCustomStore<K,V> create(final StateStoreProvider storeProvider, final String storeName) { return new MyCustomStoreTypeWrapper(storeProvider, storeName, this); } }
A wrapper class is required because the
StateStoreProvider
interface to get access to the underlying instances of your store.
StateStoreProvider#stores(String storeName, QueryableStoreType<T> queryableStoreType) returns a
List of state
stores with the given storeName and of the type as defined by
queryableStoreType.
Here is an example implementation of the wrapper follows (Java 8+):
// We strongly recommended implementing a read-only interface // to restrict usage of the store to safe read operations! public class MyCustomStoreTypeWrapper<K,V> implements MyReadableCustomStore<K,V> { private final QueryableStoreType<MyReadableCustomStore<K, V>> customStoreType; private final String storeName; private final StateStoreProvider provider; public CustomStoreTypeWrapper(final StateStoreProvider provider, final String storeName, final QueryableStoreType<MyReadableCustomStore<K, V>> customStoreType) { // ... assign fields ... } // Implement a safe read method @Override public V read(final K key) { // Get all the stores with storeName and of customStoreType final List<MyReadableCustomStore<K, V>> stores = provider.getStores(storeName, customStoreType); // Try and find the value for the given key final Optional<V> value = stores.stream().filter(store -> store.read(key) != null).findFirst(); // Return the value if it exists return value.orElse(null); } }. The only requirements are that the RPC layer is embedded within the Kafka Streams application and that it exposes an endpoint that other application instances and applications can connect to.
The KafkaMusicExample and WordCountInteractiveQueriesExample are end-to-end demo applications for Interactive Queries that showcase the implementation of an RPC layer through a REST API.
Exposing the RPC endpoints of your application¶
To enable a Kafka Streams application that supports the discovery of its state stores.
Properties props = new Properties(); // Set the unique RPC endpoint of this application instance through which it // can be interactively queried. In a real application, the value would most // probably not be hardcoded but derived dynamically. String rpcEndpoint = "host1:4460"; props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, rpcEndpoint); // ... further settings may follow here ... Streams, Grouped); streams.start(); // Then, create and start the actual RPC service for remote access to this // application instance's local state stores. // // This service should be started on the same host and port as defined above by // the property `StreamsConfig.APPLICATION_SERVER_CONFIG`. The example below is // fictitious, but we provide end-to-end demo applications (such as KafkaMusicExample) // that showcase how to implement such a service to get you started. MyRPCService rpcService = ...; rpcService.listenAt(rpcEndpoint);
Discovering and accessing application instances and their local state stores¶
The following methods return StreamsMetadata objects, which provide meta-information about application instances such as their RPC endpoint and locally available state stores.
KafkaStreams#allMetadata(): find all instances of this application
KafkaStreams#allMetadataForStore(String storeName): find those applications instances that manage local instances of the state store “storeName”
KafkaStreams#metadataForKey(String storeName, K key, Serializer<K> keySerializer): using the default stream partitioning strategy, find the one application instance that holds the data for the given key in the given state store
KafkaStreams#metadataForKey(String storeName, K key, StreamPartitioner<K, ?> partitioner): using
partitioner, find the one application instance that holds the data for the given key in the given state store
Attention
If
application.server is not configured for an application instance, then the above methods will not find any StreamsMetadata for it.
For example, we can now find the
StreamsMetadata for the state store named “word-count” that we defined in the
code example shown in the previous section:
KafkaStreams streams = ...; // Find all the locations of local instances of the state store named "word-count" Collection<StreamsMetadata> wordCountHosts = streams.allMetadataForStore("word-count"); // For illustrative purposes, we assume using an HTTP client to talk to remote app instances. HttpClient http = ...; // Get the word count for word (aka key) 'alice': Approach 1 // // We first find the one app instance that manages the count for 'alice' in its local state stores. StreamsMetadata metadata = streams.metadataForKey("word-count", "alice", Serdes.String().serializer()); // Then, we query only that single app instance for the latest count of 'alice'. // Note: The RPC URL shown below is fictitious and only serves to illustrate the idea. Ultimately, // the URL (or, in general, the method of communication) will depend on the RPC layer you opted to // implement. Again, we provide end-to-end demo applications (such as KafkaMusicExample) that showcase // how to implement such an RPC layer. Long result = http.getLong("http://" + metadata.host() + ":" + metadata.port() + "/word-count/alice"); // Get the word count for word (aka key) 'alice': Approach 2 // // Alternatively, we could also choose (say) a brute-force approach where we query every app instance // until we find the one that happens to know about 'alice'. Optional<Long> result = streams.allMetadataForStore("word-count") .stream() .map(streamsMetadata -> { // Construct the (fictituous) full endpoint URL to query the current remote application instance String url = "http://" + streamsMetadata.host() + ":" + streamsMetadata.port() + "/word-count/alice"; // Read and return the count for 'alice', if any. return http.getLong(url); }) .filter(s -> s != null) .findFirst();
At this point the full state of the application is interactively queryable:
- You can discover the running instances of the application and the state stores they manage locally.
- Through the RPC layer that was added to the application, you can communicate with these application instances over the network and query them for locally available state.
- The application instances are able to serve such queries because they can directly query their own local state stores and respond via the RPC layer.
- Collectively, this allows us to query the full state of the entire application.
To see an end-to-end application with Interactive Queries, review the demo applications.
Demo applications¶
Here are a couple of end-to-end demo applications to get you started:
- KafkaMusicExample: This application continuously computes the latest Top 5 music charts based on song play events collected in real-time in an Apache Kafka® topic. This charts data is maintained in a continuously updated state store that can be queried interactively via a REST API.
- WordCountInteractiveQueriesExample: This application continuously counts the occurrences of words based on text data that is consumed in real-time from a Kafka topic. The word counts are maintained in a continuously updated state store that can be queried interactively via a REST API. | https://docs.confluent.io/5.4.1/streams/developer-guide/interactive-queries.html | 2021-09-17T02:54:27 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['../../_images/streams-interactive-queries-api-02.png',
'../../_images/streams-interactive-queries-api-02.png'],
dtype=object) ] | docs.confluent.io |
New Relic's Java agent provides several options for custom instrumentation. One of those options is adding the Java agent API's
@Trace,
@TraceLambda or
@TraceByReturnType annotations to your application code. This document describes how to use annotations.
Important.
Make sure that
newrelic-api.jar appears in your classpath as it contains all these annotations.
@Trace
Annotating a method with
@Trace tells the Java agent that measurements should be taken for that method.
To add a method call as a custom trace add
@Trace annotations to your method.
import com.newrelic.api.agent.Trace;...@Tracepublic void run() {// background task}:
@Traceprotected.
Important.
@TraceLambda
If your transaction traces show large blocks of uninstrumented time and you want to include lambda expressions within the trace, you can use the
@TraceLambda annotation without parameters:
import com.newrelic.api.agent.TraceLambda;@TraceLambdaclass ClassContainingLambdaExpressions() {// work}
Lambda expressions become static methods of the containing class after compilation. By default, static methods within classes marked with the
@TraceLambda annotation matching the annotations pattern will be marked with the
@Trace annotation.
Properties for @TraceLambda
The
@TraceLambda annotation supports the following properties.
@TraceByReturnType
To include methods with a particular return type within the trace, you can use the
@TraceByReturnType annotation to mark a class passing the return types as a property. Methods in annotated classes that match one of the specified return types will be marked with the
@Trace annotation.
@TraceByReturnType(traceReturnTypes={Integer.class, String.class})class ClassContainingMethods() {// ...}
Properties for @TraceByReturnType
The
@TraceByReturnType annotation supports the following properties.
Performance considerations
When the Java agent is present in the JVM, it will inject code on the annotated methods. The performance hit is negligible in heavyweight operations, such as database or webservice calls, but is noticeable in methods that are called frequently, such as an accessor called thousands of times a second.
Caution
Do not instrument all of your methods, as this can lead to decreased performance and to a metric grouping issue.
More API functions
For more about the Java agent API and its functionality, see the Java agent API introduction.. | https://docs.newrelic.com/docs/agents/java-agent/api-guides/java-agent-api-instrument-using-annotation/?q= | 2021-09-17T05:11:21 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.newrelic.com |
Current release
Release date: 9 August 2021
Notable bug fixes and changes:
- Administrator accounts can now be used to access resources as well as the administration area.
Previous releases
Release date: 4 August 2021
Notable bug fixes and changes:
- [MDP-9271] - Unable to create permission set mapping rule at the organisation mapped org in certain situations
- [MDP-9599] - Entitlement value lengths could be set longer than maximum
- [MDP-9637] - Certain customers could not view the groups page
- [MDP-9647] - Could not impersonate into reports for an organisation that had no administrator
- [MDP-9654] - Some account account downloads were failing for some sub-organisations
- [MDP-9672] - Accounts created via Self-registration were incorrectly being recorded against the activity stream of the administrator
- [MDP-9677 / 9685] - Misuse monitoring updates to support future functionality
- [MDP-9678] - Creating a new organisation sometimes did not refresh the All organisations summary screen
- [MDP-9682] - The organisation button on a manually mapped connection had stopped working correctly
Release date: 22 July 2021
Notable bug fixes and changes:
- [MDP-9079] - IdP metadata now includes an error URL, and libraries can have the SIRTFI entity category set if their federation requires it ()
- [MDP-9680] - Some recently enabled admin accounts would have their password rejected on certain changes.
Release date: 29 June 2021
Notable bug fixes and changes:
- [MDP-9058] - You can now set the order that local connections are displayed to the user when you have more than one. See: Sorting connections
- [MDP-9419] - Sub organisations can no longer modify username prefixes
Release date: 24 May 2021
Notable bug fixes and changes:
- [MDP-9499] - Successful authentication events now appear in an account's activity stream
Release date: 5 May 2021
Notable bug fixes and changes:
- Updates to the interface including a new organisation switcher. See: How to act as a sub-organisation
- [MDP-8264] - Default status can now be applied to a permission set at creation time
- [MDP-9487] - The reporting API no longer returns a 500 error for certain queries
- [MDP-9495] - Advanced search was not showing all values in dropdown lists
Release date: 27 April 2021
Notable bug fixes and changes:
- [MDP-9455] - In certain circumstances, moving an account between sub-organisations might not be recorded in the activity stream
- [MDP-9461] - Bulk upload functionality had started rejecting non-UTF-8 characters
Release date: 3 March 2021
Notable bug fixes and changes:
- [MDP-9190] - Data downloads and bulk uploads now reference the organisation number instead of the admin username
Release date: 10 February 2021
Notable bug fixes and changes:
- [MDP-9117] - Access code generation added to the activity stream
- [MDP-9249] - Libraries in multiple federations could have problems using the redirector in certain circumstances
- [MDP-9264] - Permission set schemas incorrectly denying add/edit permission in certain circumstances
- [MDP-9270] - The old organisation name was indexed when accounts moved to new organisations
- [MDP-9273] - Resource catalogue display issues around some resources
- [MDP-9289] - Changes to password reset request limits
Release date: 2 February 2021
Notable bug fixes and changes:
- [MDP-9248] - Group affiliation was not maintained when moving accounts if the group was not present in the target organisation
- [MDP-9260] - Access accounts could not sign in
Release date: 18 January 2021
Notable bug fixes and changes:
- [MDP-9192] - Could impersonate from the ownership information hover on search results
- [MDP-9193] - Links in daily/weekly notification emails were not working
- [MDP-9194] - Account data download link was not working
- [MDP-9195] - Redirector and visibility icons were not appearing in the resource catalogue
Release date: 13 January 2021
Notable bug fixes and changes:
- Start of transition process for changes to administrator accounts
- [MDP-9105,6] - Attributes were not removed from the attribute release policy properly when disabled or changed to no longer be releasable in the schema editor
- [MDP-8918] - The organisation summary page was not listing organisations in alphabetical order
- [MDP-8992] - Creation of sub-organisations was not included in the activity stream
Release date: 21 July 2020
New functionality:
- [REP-446] - Resource access reports can now be obtained via API calls for easier integration into analysis tools. See: Fetching statistics reports via the API
Release date: 14 July 2020
Notable bug fixes and changes:
- [REP-491] - Very large reports could not be exported.
- [REP-515] - Resource reports are now supplied 'flat' for greater flexibility and compatibility. See: Saving reports
Release date: 28 May 2020
Notable bug fixes and changes:
- [MDP-8569] - Metadata now includes names of sub-organisations where they have distinct scope values.
Release date: 6 May 2020
Notable bug fixes and changes:
- [My-41] - Accessibility improvements for MyAthens.
Release date: 21 April 2020
Notable bug fixes and changes:
- [RED-170] - Could not paste links into the redirector link generator if using IE11
Release date: 4 April 2020
Notable bug fixes and changes:
- [MDP-8585] - Fixed inconsistency between "Join requests" number in summary page and the list "Accounts awaiting approval to join this organisation" function
- [MDP-8587] - The fast forward button on search results has had the jump increased
- [MDP-8609] - The search backend was having problems when very large numbers of custom attributes were being used
- [MDP-8653] - Could not search for local accounts by their email addresses
Release date: 20 March 2020
Notable bug fixes and changes:
- Update to the admin sign-in page for remote working - see: How do I work from home
Release date: 16 March 2020
Notable bug fixes and changes:
- [RED-138] - Updates to the redirector link generators
- [MDP-8287] - SAML based local auth connectors now support the forceAuthn option (SAML, ADFS, CAS)
- [MDP-8347] - Previously set filters on audit streams were not applied on initial view
Release date: 30 January 2020
Notable bug fixes and changes:
- [MDP-8542] - The OpenAthens account option alongside local connectors can now contain custom text. Set it under domain preferences
Release date: 13 January 2020
Notable bug fixes and changes:
- [MDP-8524] - Default settings on a new SAML local connection were incorrect
- [MDP-8526] - Validation on the bulk modify details function would incorrectly fail in certain circumstances
Release date: 1 November 2019
Notable bug fixes and changes:
- Administrators were unable to use the 'Log a query' link within the Administration area as an error was occurring on the support portal ()
Release date: 30 October 2019
Bug fixes and changes:
- A HTTP header of referrer-policy has been added to the Administration area (admin.openathens.net) with the enforcement of strict-origin
- A HTTP header of content-security-policy has been added to the Administration area (admin.openathens.net) in reporting only mode
- [MDP-8449] - Display name attribute and Unique user attribute on a SAML connection could not be left blank even when "Use Subject NameID" is selected
Release date: 26 September 2019
Bug fixes and changes:
- [MDP-8276] - A new non-interactive signin API option has been added (the old method will still work).
- [MDP-8284] - The uniqueID used for local account can now be case normalised to aid OALA migration.
- [REP-400] - Deleted reportable attributes could still appear in breakdown options on reports
Release date: 24 September 2019
Notable bug fixes:
- [REP-408] - Multi-dimension reports that included permission sets were producing errors
- [REP-409] - Accounts > Authentications reports would not load
Release date: 18 September 2019
Notable bug fixes:
- [REP-404] - Group name changes were not always reflected in reports
- [REP-407] - The top resources widget on the reports dashboard was not displaying data
Release date: 28 August 2019
Notable bug fixes:
- [RED-91] - the redirector didn't like how some links were encoded / decoded
Release date: 19 August 2019
Notable bug fixes:
- [RED-67] - the redirector could get a bit confused when there were multiple link formats for the same target
Release date: 31 July 2019
Notable bug fixes:
- [MDP-8278] - Security headers were being sent twice in MyAthens
Release date: 25 July 2019
Notable bug fixes:
- [MDP-8125] - certain characters could cause an error when clicking on an account that contained them in the audit stream
- [MDP-8152] - local accounts were not always cleaned up correctly when organisation mapping was being used
Release date: 18 July 2019
Notable Changes:
- [MDP-8163] - A scoped version of the targetedID attribute is now available in the Attribute release policies to help people who are migrating from other systems. To support use of this, the regular (unscoped) targetedID attribute that has always been released by default is now visible and selectable in policies.
Notable bug fixes:
- [MDP-8178] - Bulk downloads could time out when downloading very large numbers of accounts
- [MDP-8238] - OA federation metadata was not updated when enabling a scope on a sub-organisation
Release date: 12 June 2019
Notable bug fixes:
- [MDP-8176] - Local accounts where the unique ID was an empty string are now reported in audit as having their unique user identifier missing
- [MDP-8183] - Improved error handling when users were sent to sign in at a disabled domain
- [MDP-8198] - Users using a URL containing a organisation id of 0 were seeing an internal server error
Release date: 31st May 2019
Notable changes
- [MDP-4999] - Added support for HTTP-POST for SAML based local connectors
- [MDP-7248] - Add in support for Sirtifi ()
- [MDP-7426] - Changes in the schema editor were not always reflected in the UI until the cache cleared
- [MDP-8050] - The 'Account > Totals > By attribute' report now includes local accounts
- [MDP-8058] - Accessibility improvements on the authentication point
- [MDP-8112] - Some less-common characters were not supported in the Unique ID field for local accounts
- [MDP-8139] - ADFS connectors using organisation mapping could be downgraded to http from https in certain circumstances
- [MDP-8140] - Fixed a CSS issue around the forgotten password confirmation message
- [MDP-8150] - Fixed CSS issues when selecting connections at the authentication point
- [REP-244] - Could not transfer to reporting from the admin area in certain circumstances
- [REP-246] - Attributes that have the reportable option removed will no longer appear as breakdown options in reporting
- [REP-300] - Clean up duplicates created before the release of [REP-245] -
- [REP-319] - Non-interactive sign-in via the management API was not recorded in reports
- [REP-356] - Could not remove people from a scheduled report distribution list
Release date: 25 April 2019
Notable changes
- [REP-293] - Removed the unique average column from reports (was of no use unless the period matched the granularity).
Release date: 3 April 2019
Notable changes
- [CSP-3115] - Hide from discovery option moved from a domain preference to a connection preference
- [MDP-4245 / 7527 / 7637 / 7797] - Various improvements to auditing, including local account activity
- [REP-201] - Account totals reports now distinguish between organisations that do and don't have their own scopes
- [REP-280] - Account totals reports displayed zero for breakdowns of choice attributes if the name contained a space
Release date: 14 March 2019
Notable changes
- [REP-174] - Added a link from the reporting interface to the admin interface
- [REP-206 / MDP-7819] - Column headings could be unclear in certain circumstances
- [REP-214 / 236] - Interface shows a progress bar and no longer times out when generating very large reports
- [REP-245] - Local authentication usernames could appear twice in reports in certain circumstances
Release date: 7 February 2019
Notable changes
- [REP-21] - Replace unique total with average on the authentication report
- [REP-142] - Remove totals series from the Account totals by status report
- [REP-207] - Fixed error on accessing some scheduled reports from emailed links
- [MY-35] - Fixes to the MyAthens panel editor
Release date: 19 December 2018
Notable changes
- [MDP-7736, MDP-7812, MDP-7820] - Certain situations could see a hop from https to http and back again
- [MDP-7617] - transfers to the authentication point quoting an invalid entityID now have a more presentable error message
- [MPP-7845] - audit sourced values on the homepage link to the relevant audit stream once more
Release date: 15 November 2018
New features
Notable bug fixes
- [MDP-7548] - The expiry date quoted in the account expiring emails was in an unhelpful format
- [MDP-7616] - SAML requests that cannot be decoded now return a relevant "bad request" error instead of a "something went wrong" error
- [REP-159] - Report downloads were failing with error: "Cannot create a text value from null."
Release date: 9 October 2018
Notable changes and enhancements
- [REP-144] - The old statistics menu item now says 'Archive'
Notable bug fixes
- [MDP-7616] - A more helpful error is now returned when a non-decodable SAML request is received from a Publisher.
Release date: 6 September 2018
Notable bug fixes
- [MDP-7216] - Expiry date was reduced by 1 day on save when the expiry date was within daylight saving time of the admin-user's locale
Release date: 22 August 2018
Notable bug fixes
- [MDP-7434] - Users mapped to multiple organisations using a local authentication connector were encountering a 500 error
- [MDP-7490] - Accounts that had been deleted still received an account expiry email if they were within the recovery window
Release date: 14 August 2018
Notable bug fixes
- [MDP-7264] - "Contacting your OpenAthens administrator function" in the MyAthens help page was not working
Release date: 2 August 2018
New features
Notable changes and enhancements
- [MDP-6176] - The redirector now handles ejournals.ebsco.com smartlinks in the same way as dx.doi.org links
Notable bug fixes
- [MDP-3009] - Confirmation emails were not being sent on bulk upload success
- [MDP-3671] - Local accounts that passed an AthensDALegacyUID attribute that was empty encountered a 500 server error instead of a 400 bad request error
- [MDP-3872] - Calendar defaulted to 1906 after attempting to manually enter incorrect date format when creating a new personal account
- [MDP-7237] - The continue button during password reset didn't work on branded login pages
Release date: 17 July 2018
Notable bug fixes
- [CSP-2880] - Edits to MyAthens web content panels were not published
- [CSP-2881] - MyAthens was not setting
target = "_blank"when specified on web content panel links
Release date: 3 July 2018
New features:
Notable changes and enhancements
- [CSP-2778] - Allow administrators to bulk generate redirector links
- [CSP-2744 / 2767 / 2745] - Activity performed by the System, OpenAthens service desk and partners is now identified as such in the account activity stream. See: Auditing
- [CSP-2745] - Activity performed by a customer API keys is now displayed on the account activity stream - e.g. self-registration actions
Notable bug fixes
- [MDP-3181] - Unable to see full organisation name when moving accounts
- [MDP-6544] - Number of users displayed on group and permission set buttons included deleted accounts
- [MDP-6547] - Activation code was not always being replaced in emails for resetting password
- [MDP-7216] - Expiry date was reduced by 1 day when the administrator clicks save
- [MDP-7267] - Admin account activation emails had a link to the old federation manager
Release date: 6 Jun 2018
New features:
Notable changes and enhancements
- :
- Deleted accounts can now be restored - see: How to recover deleted accounts.
Notable bug fixes
-
Notable bug fixes
-
.. | https://docs.openathens.net/display/public/MD/Release+notes+-+Identity | 2021-09-17T04:40:33 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.openathens.net |
- Install Ops Manager >
- Install Ops Manager >
- Install Ops Manager with a
debPackage
Install Ops Manager with a
deb Package¶
On this page
Overview¶
This tutorial describes how to install Ops Manager using a
deb
package. If you are instead upgrading an existing deployment, please
see Upgrade Ops Manager.
Prerequisites¶
You must have administrative access on the hosts to which you install.
Before you install Ops Manager, you must:
Plan your configuration. See Installation Checklist.
Deploy hosts Ops Manager Application Database and optional Backup Database. The databases require dedicated MongoDB instances. Don’t use MongoDB installations that store other data. Ops Manager requires the Backup Database if you use the Backup feature.
The Ops Manager Application must authenticate to the backing databases as a MongoDB user with appropriate access.
See also
To learn more about connecting to your backing database with authentication, see
mongo.mongoUri.
Note
Ops Manager cannot deploy its own backing databases. You must deploy those databases manually.
Install and verify an Email Server. Ops Manager needs an email server to send alerts and recover user accounts. You may use an SMTP Server or an AWS SES server. To configure your Email Server, see
Install Ops Manager¶
To install Ops Manager:
Download the latest version of the Ops Manager package.¶
Open your preferred browser to visit the MongoDB Download Center on MongoDB.com.
If you start on MongoDB.com instead of following the link above, click Get MongoDB, then select Ops Manager from the Tools menu.
From the Platforms drop-down menu, click Ubuntu 18.04.
From the Packages drop-down menu, click DEB for x86_64 architecture.
Click Download.
The downloaded package is named
mongodb-mms-<version>.x86_64.deb, where
<version>is the version number.
Optional: Verify Ops Manager package integrity.¶
To verify the integrity of the Ops Manager download, see Verify Integrity of Ops Manager Packages.
Install the Ops Manager package on each server being used for Ops Manager.¶
Install the
.deb package by issuing the following command, where
<version> is the version of the
.deb package:
When installed, the base directory for the Ops Manager software is
/opt/mongodb/mms/. The
.deb package creates a new system user
mongodb-mms under which the server will run.:
To configure Ops Manager to use the Ops Manager Application Database over TLS, configure the following TLS settings.
Ops Manager also uses these settings for TLS connections to Backup Databases
To configure Ops Manager to use Kerberos to manage access to the Ops Manager Application Database, configure the following Kerberos settings:
Open the Ops Manager home page and register the first user.¶
Enter the following URL in a browser, where
<host>is the fully qualified domain name of the server:
Click the Sign Up Settings.
After you install the Ops Manager Application to your Ops Manager hosts, you must install MongoDB Agents on the hosts that will run your MongoDB deployments.
You can install the MongoDB Agent on hosts running existing MongoDB deployments or on hosts on which you will create new MongoDB deployments. Hosts that serve your MongoDB deployments must meet the minimum MongoDB production requirements. | https://docs.opsmanager.mongodb.com/current/tutorial/install-on-prem-with-deb-packages/ | 2021-09-17T03:26:11 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.opsmanager.mongodb.com |
Content:
1. CYCLES OF INNOVATION IN INFORMATION TECHNOLOGY
New infrastructure enables new kinds of services
Blockchain protocols produce open, global and cost-efficient services
Blockchain protocols are owned and operated by global communities
2. BLOCKCHAIN TECHNOLOGY
Blockchain technology enables global services
Applications are built on top of blockchain protocols
Examples of services provided by blockchain protocols
3. THE DEVELOPMENT OF BLOCKCHAIN TECHNOLOGY
Internet-native money (2008–2013)
Internet-native applications (2014–2019)
Internet-native economy (2020-)
The world’s most valuable companies are born during platform shifts. During platform shifts, new entrants have the opportunity to build services that use the new functionality a platform provides. Today’s web incumbents (GAFA) were the exception: they thrived across two platforms because the primary business model (gathering data and selling ad space) remained the same in both the Internet and Mobile eras. When a new computing platform changes the prevailing business model, new entrants have an opportunity to dethrone the incumbents. Blockchain technology changes the primary business model for online businesses by making data open.
Computing platforms are the infrastructure of information technology. A new computing platform introduces new features and opens up new distribution channels. The computing power of PCs enabled the creation of tools such as the spreadsheet and the word processor; the Internet as a global information network connected the PCs into one shared communication network; and the smartphone made PCs mobile and added new features such as location-based services and the camera.
Blockchain technology is a new computing platform. Blockchains enable a global community to create and maintain open and shared databases. Companies have traditionally been tasked with processing and storing data, and maintaining databases. Transferring this responsibility to a global community means that businesses can now be operated in a peer-to-peer network, without the involvement of a traditional company. Businesses that are operated by a peer-to-peer network offer better services than their incumbent competitors. The services provided by blockchain protocols are superior due to their openness (open source), cost-efficiency (global competition), and distribution (the Internet).
Blockchain protocols concentrate the supply and demand of a specific service into one place. The marketplaces maintained by existing companies are often siloed due to geographic and/or regulatory reasons. Blockchain protocols enable the creation of worldwide marketplaces on the Internet. The Internet created a global marketplace for information — blockchain technology does the same for value. The rules for value transfer are programmed straight into the blockchain protocol. The programmability of blockchain protocols means that they can provide any kind of digital service directly to a global audience.
A blockchain protocol as a global marketplace is the most cost-efficient way to provide a specific service. Concentrating the supply and demand of a specific service into a single place leads to most competitive prices. Due to the open and global nature of the Internet, competition among blockchain protocols means that the winning protocol is only able to extract the bare minimum fee required to keep the network secure. However, from an investor’s point of view, this is not an issue since blockchain protocols operate at unprecedented scale. Creating a payment network owned and used by 7+ billion people has not been possible before.
Blockchain protocols have powerful network effects. End user applications purchase a specific service from the blockchain protocol that provides it at the most cost-efficient price. This means that, in the end, there will be only a few winning blockchain protocols for each use case. Investing directly in blockchain protocols is financially a better alternative than investing in the end user applications built on top of them, because the protocol that wins its vertical is bound to aggregate traffic from all of the applications built on top of it. It is comparable to having been able to invest in the SMTP (email), HTTPS (encrypted web requests), and TCP/IP (packets) protocols in the 90s. Each incremental email, web request, and information transfer would have made these protocols more valuable.
Ownership in a blockchain protocol is acquired by purchasing tokens. A blockchain protocol can be thought of as a company registered on the Internet. This Internet-native company is maintained and operated by a global employee pool and provides its services in the most cost-efficient manner using the worldwide distribution of the Internet. An investor can take part in the value appreciation of these Internet-native companies by purchasing tokens (comparable to shares) in a blockchain protocol. The blockchain protocol that provides a specific service in the most cost-efficient manner will eventually become a part of the core infrastructure of the Internet.
A token gives its owner both economic and governance rights. Tokens can be programmed to carry both economic (profit participation) and governance (voting) rights. It is in the interest of tokenholders to grow the blockchain protocol into a global and cost-efficient infrastructure component on top of which anyone can build their own end user application. Similar to traditional shares in a company, tokens are created to encourage their owners to participate in and contribute to the development of a blockchain protocol.
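To make the idea concrete, the sketch below shows how these two rights could be encoded in a token. It is an illustrative toy in Python, not the design of any particular protocol: the class name, the pro-rata fee split and the token-weighted vote are all assumptions made for the example.

# Illustrative toy only: a token ledger with economic rights (pro-rata fee
# sharing) and governance rights (token-weighted voting). Names and rules
# are assumptions for this example, not any specific protocol's design.

class ToyProtocolToken:
    def __init__(self, initial_balances):
        self.balances = dict(initial_balances)   # token balance per holder

    def total_supply(self):
        return sum(self.balances.values())

    def distribute_fees(self, fee_pool):
        # economic right: share collected fees pro rata to holdings
        supply = self.total_supply()
        return {holder: fee_pool * bal / supply
                for holder, bal in self.balances.items()}

    def vote(self, ballots):
        # governance right: votes are weighted by token balance
        tally = {}
        for holder, choice in ballots.items():
            tally[choice] = tally.get(choice, 0) + self.balances.get(holder, 0)
        return max(tally, key=tally.get)

token = ToyProtocolToken({"alice": 600, "bob": 300, "carol": 100})
print(token.distribute_fees(50.0))   # {'alice': 30.0, 'bob': 15.0, 'carol': 5.0}
print(token.vote({"alice": "raise_fee", "bob": "keep_fee", "carol": "keep_fee"}))  # raise_fee

Real protocols express the same two rights in on-chain smart contracts with far more safeguards, but the underlying logic is the same: holding tokens entitles the owner to a share of the fees and a say in the rules.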
The more there is demand for a blockchain protocol’s service, the more valuable its tokens are. The goal of blockchain protocols is to provide a specific service as reliably and cost-efficiently as possible. The blockchain protocols that emerge as winners of their respective verticals become part of the core infrastructure of the Internet. The potential for value appreciation is significant because winning blockchain protocols should eventually mediate all value transfer on the Internet. Tokenholders are compensated by the value accrual of their tokens, which appreciate in value the more (non-extractive) fees are paid by those using the blockchain protocol’s service.
Internet protocols transfer information between computers, but cannot store it. Internet protocols are only capable of transferring information between computers. For example, the SMTP protocol (simple mail transfer protocol), which is used in email communication, enables near real-time, worldwide communication. The problem is that the SMTP protocol (and other Internet protocols) cannot store the messages sent through it. If the recipient of an email message does not record the message (content, delivery time, relation to prior messages, etc.), the message disappears into cyberspace. The user experience is very limited, even though the potential is significant.
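As a concrete illustration, the Python sketch below sends a message with the standard smtplib library; the server address and mailbox names are placeholders. Nothing in the protocol exchange itself retains the message, and that is exactly the gap third-party mail providers fill with their own databases.

# Sending a message over SMTP with Python's standard smtplib. The protocol
# only transfers the message; nothing in SMTP itself stores it. Host, port
# and the addresses below are placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg["Subject"] = "Hello"
msg.set_content("This text exists only in transit unless some server stores it.")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()            # encrypt the session
    server.send_message(msg)     # hand the message off; SMTP keeps no record

# Any history the user later sees (inbox, sent folder, threading) comes from
# databases operated by mail providers, not from the protocol itself.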
Companies process and store information sent via Internet protocols directly into their own, closed, and proprietary databases. Since the SMTP protocol is unable to store the messages sent via it, a third-party company (e.g. Google) has economic incentives to build a database to store those messages. Based on the data stored, the companies build their own end user applications. On the current Internet, the primary business model is to gather data. The more data a company is able to gather, the better it can develop applications liked by end users. Better applications attract more users and their data, which in turn further enhances the companies’ abilities to create better applications. This is called the data network effect. The end result is that a majority of the population uses only a handful of services, and all the data gathered by those services is stored in the closed and proprietary databases of a few large incumbent companies.
Blockchain technology provides economic incentives for global communities to maintain open databases (the maintenance of a database is a prerequisite for being able to provide a service). The data concerning different services can now be stored in open databases that are collectively owned and maintained by worldwide communities. A blockchain protocol consists of the protocol itself (the business logic of the service it provides) and the database that stores the data generated by the applications using it. Today, many industries have their databases siloed across thousands of different companies. A blockchain protocol can replace these proprietary databases with one universal and open database.
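The following toy example illustrates the open-database half of that description: an append-only log in which every block commits to the previous one by hash, so any community member holding a copy can check that the shared history has not been rewritten. It is a deliberately minimal Python sketch; real blockchain protocols add consensus, signatures and peer-to-peer networking on top of this idea.

# Toy append-only ledger: each block commits to the previous block by hash,
# making the shared history tamper-evident for anyone who holds a copy.

import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, records):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "records": records})

def verify_chain(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify_chain(chain))                  # True
chain[0]["records"][0]["amount"] = 500      # try to rewrite shared history
print(verify_chain(chain))                  # False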
The open services of blockchain protocols serve as neutral tools for application developers. Application developers can use the open services of blockchain protocols as tools when building their own end user applications, without the fear of a third-party company closing their access to the service (platform risk). There are many examples of platform risk occurring in tech: Facebook/Zynga, Google/Yelp, Twitter/third-party developers. Open services promote innovation in society: the current popularity of blockchain protocols’ services among application developers serves as a proxy for the general demand that exists for open services. A single application may use the services provided by one or many blockchain protocols. As the services are open, they can easily be integrated with one another. Since the services are open, it is also in the interest of the developers of a blockchain protocol to have the protocol’s service being used by as many end user applications as possible. Compare this with the status quo, where cooperation between a platform and applications built on top of it eventually turns into competition.
Blockchain protocols introduce economic incentives to open source software development. A blockchain protocol contains programmed economic incentives for developers to develop the protocol. A developer of a blockchain protocol may be financially rewarded for open source development through tokens — developing the protocol to increase the demand for its service makes the tokens more valuable.
The services provided by blockchain protocols are intended to serve as reliable infrastructure for end user applications. Application developers use the open services of blockchain protocols in their own end user applications. Applications can be thought of as the marketing engine for a specific blockchain protocol, as their job is to increase the demand for the blockchain protocol’s service and thus also increase its value. Since anyone can build their application on top of the open services of blockchain protocols, worldwide competition will lead to better applications for consumers.
The business logic of a service is programmed into the open source code of a blockchain protocol. Some important parameters for the service are often left open for the tokenholders to vote on during set intervals. Anyone can use the services provided by blockchain protocols by paying a fee and/or by depositing collateral. It is the responsibility of the tokenholders to maintain the usefulness (by voting on the parameters mentioned above) and reliability (by posting their tokens as collateral) of the blockchain protocol’s service.
In the beginning, Bitcoin attracts technology hobbyists, cryptographers, and economic liberalists. Bitcoin and the transaction efficiency enabled by blockchain technology raises the interest of many leading technology investors. Both entrepreneurs and investors see the lack of secure token trading and custody solutions as an attractive opportunity to build companies that establish better bridges to a parallel financial system. Andreessen Horowitz and Union Square Ventures invest in the token trading and custody platform Coinbase in 2013.
Many service providers are hobbyists who lack the capabilities to operate secure token trading and custody platforms. Many token exchanges and wallets are operated carelessly, which leads to many consumers losing their assets held on these exchanges and wallets. The most famous example is the Japanese token exchange Mt. Gox, from which approximately $450 million worth of bitcoin was stolen during the years 2011–2014. Even though there initially are many careless entrepreneurs in the space, the underlying blockchain technology still works without fault.
Bitcoin as the first blockchain protocol proves the functionality of the technology. For comparison, if the IT encryption of one company is breached due to flawed implementation, it does not automatically mean that IT encryption in general does not work. The security breaches experienced by some token exchanges do not correlate with the security of the underlying blockchain technology. Many of the security breaches experienced by token exchanges and wallets share common traits in that the companies are often operated without proper licenses, and/or in jurisdictions that lack proper financial surveillance authorities. In contrast, Coinbase, Bitstamp, Gemini, Bakkt, Fidelity Digital Assets and others, are examples of token trading and custody platforms that have embraced regulation and that have a solid track record of safely managing their customers’ token assets.
The Ethereum blockchain protocol expands the possible use cases for blockchain technology, attracting a lot of new developers into the space. The programmability of the Ethereum blockchain protocol is unlimited (in contrast with the intentionally very restricted programmability of Bitcoin). The Ethereum blockchain protocol enables the creation of blockchain-based services for all Internet applications. The diversity of the Ethereum blockchain protocol attracts many new developers to the space, and the resulting spike in developer activity entices more professional investors to enter the blockchain space.
Traditional technology, bank, and financial services companies make significant investments into blockchain technology. The entrance of more professional investors raises the bar for token trading and custody platforms. Institutional investors require secure and regulated ways to manage their token investments. Many financial services incumbents invest in companies that provide regulated token trading and custody services. The most notable example is Bakkt, a token exchange primarily aimed at institutional investors. Bakkt is owned by the Intercontinental Exchange (the parent company of the New York Stock Exchange) and launches in September 2019. Coinbase receives the status of a Qualified Custodian according to the Banking Laws of the state of New York (New York Banking Law § 100). Over 20 companies that provide token custody services have obtained the status of a Qualified Custodian. Many token trading platforms receive a similar status as well:
Anchorage. Anchorage is a token custody and trading platform registered with the financial supervisory authorities of South Dakota. The company has raised over $40 million in funding from many leading institutional investors such as Visa, Khosla Ventures, and Andreessen Horowitz.
Bakkt. Bakkt is a trading platform registered in the state of New York. The company started its operations in September 2019. Bakkt is owned by Intercontinental Exchange (ICE), which is the parent company of the New York Stock Exchange (NYSE). Bakkt is also developing a payment service with which consumers may use tokens for everyday use cases.
BitGo. BitGo is a token custody and trading platform registered with the financial supervisory authority of South Dakota. The company's investors include Goldman Sachs and Galaxy Digital, among others. BitGo's insurance covers damage from data breaches, internal thefts, and loss of private keys up to $100 million. BitGo currently supports over 100 different tokens.
Bitstamp. Bitstamp is a token trading platform founded in 2011. Bitstamp supports trading of Bitcoin, Ethereum, Bitcoin Cash, Ripple and Litecoin tokens. Bitstamp is registered with the financial supervisory authorities of Luxembourg. The Belgian investment company NXMH acquired Bitstamp in late 2018.
Circle. Circle is a token trading platform founded in 2013. Circle has raised over $135 million in funding, with Goldman Sachs being a majority owner of the company. Circle has launched its own price stable token called USDC (the value of USDC is pegged to the dollar), the trading platform Poloniex, and an OTC trading desk for tokens. Circle was the first trading platform to receive the NYDFS-approved BitLicense in 2015.
Coinbase. Coinbase is a token custody and trading platform founded in 2012. Coinbase manages over $20 billion in token assets for both consumers and institutional investors. Coinbase has raised over $500 million in funding from investors such as Andreessen Horowitz, BBVA Ventures, Bank of Tokyo, NYSE, and USAA.
ErisX. ErisX is a token trading platform aimed at institutional investors. The company is funded by TD Ameritrade. ErisX supports Bitcoin, Ethereum, Bitcoin Cash, and Litecoin tokens. The Commodity Futures Trading Commission (CFTC) gave ErisX a license in the summer of 2019 that enables the company to support token futures. The service is currently available for a limited customer group.
Fidelity Digital Assets. Fidelity Digital Assets is Fidelity's token custody and trading platform aimed at institutional investors. Fidelity Digital Assets launched at the beginning of 2019. The service currently supports Bitcoin and Ethereum tokens. Fidelity Digital Assets expanded to the UK in December 2019.
Gemini. Gemini is a token custody and trading platform that launched in 2015. Gemini supports Bitcoin, Ethereum, Bitcoin Cash, Litecoin, and Zcash tokens. The company also provides its own price stable token GUSD (the value of GUSD is pegged to the dollar) and launched its custody service in September 2019. Gemini Custody is also a NYDFS licensed qualified custodian.
Tagomi Systems. Tagomi Systems is a token custody and trading platform aimed at institutional investors. The company launched their operations at the end of 2018. Tagomi provides its customers with services for token lending and short selling. Tagomi has raised funding for $27 million from leading venture capital funds such as Pantera Capital and Founders Fund. Tagomi received an NYDFS license for its token trading service in March 2019.
Public and worldwide token offerings (ICO) raise the interest of regulators. Initially, the programmability of blockchain protocols is used to conduct worldwide crowdfunding campaigns. In the US, the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) take action against crowdfunding campaigns that are deemed as illegal securities offerings. In the EU, the European Securities and Markets Authority (ESMA) publishes its recommendation for the legal interpretation of ICOs. As anyone can build an end user application on top of the open services provided by blockchain protocols, it is likely that market forces will direct the majority of monetary traffic towards those applications that serve consumer interests in the best way possible. Even if the rules are now programmed straight into blockchain protocols, in the long-run the business logic of end user applications should follow globally adopted best practices.
New secure and easy-to-use fiat-token exchanges enable global participation in blockchain protocols. Already today, there exist quite a few service providers specialised in giving consumers the ability to convert their fiat currency into tokens. The integration of price stable tokens removes the inherent volatility risk of tokens, a risk currently rolled over to end users. The entrance of more regulated service providers like Bakkt and Fidelity furthers the interest among institutional investors to participate in blockchain protocols. The next major leap is to have traditional insurance companies enter the blockchain space.
Scaling solutions for blockchain protocols prove their viability. Today, blockchain protocols are able to process approximately 10–20 transactions per second. The scaling of this processing capacity is, and has for many years been, one of the primary areas of research in the blockchain space. Some of the already proposed scaling solutions should improve the transaction processing capacity of blockchain protocols approximately a hundredfold in the near future. The scaling debate is similar to the debate around Internet scalability in the 90s. The economic incentives available for those who solved the scalability dilemma were so significant that solving it turned out to be a non-issue. The economic incentives for solving blockchain protocol scalability are at least as great as they were during the early days of the Internet, if not greater.
Developers build easy-to-use end user applications for consumers. A person using email does not have to know how the underlying Internet protocols work. The reason today’s blockchain-based end user applications receive some critique is because they have not yet been able to abstract away all the complexity from end users. In the first phase towards an Internet-native economy, the focus lies on building the core infrastructure for the new Internet. Over time, the economic incentives for developers will shift towards building easy-to-use applications that make the UX of blockchain-based applications resemble that of traditional fiat applications. | https://docs.tokenterminal.com/our-thesis | 2021-09-17T05:00:18 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.tokenterminal.com |
Getting Started
Get started with the XRP Toolkit and managing your own crypto assets in three easy steps. You need some XRP, you need a secure wallet and you need to connect your wallet to the XRP Toolkit.
Step 1: Buy XRP
If you already hold some XRP, you may skip this step. If you need to buy some XRP, you can open an account with a cryptocurrency exchange like Binance.
No Investment Advice
The information provided on this website does not constitute investment advice, financial advice, trading advice, or any other sort of advice and you should not treat any of this website's content as such.
Step 2: Get Wallet
To manage XRP and other crypto assets yourself, you need a secure wallet capable of signing XRP transactions. The XRP Toolkit currently supports the following hardware wallets:
The XRP Toolkit also supports the Xumm mobile wallet. Ledger hardware wallets are the most secure option, while the Xumm app can be downloaded for free:
Step 3: Fund Wallet
Once you've set up one of the wallets from the previous step, you need to send your wallet 20 XRP or more, to meet the XRP Ledger's minimum reserve requirement. Fund your wallet from another wallet in your control or withdraw some XRP from your cryptocurrency exchange account. | https://docs.xrptoolkit.com/getting-started | 2021-09-17T03:05:51 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.xrptoolkit.com |
Analytics for Choreo Connect¶
Choreo Connect Analytics provides reports, dashboards, statistics, and graphs for the APIs deployed on Choreo Connect. WSO2 Choreo Connect can publish events to the Choreo platform in order to generate analytics. This page describes the feature and explains how it can be used to generate useful analytics and gain important insights into the APIs deployed on Choreo Connect. To learn more about the underlying analytics concepts, see the Concepts documentation.
Configuring Analytics for Choreo Connect¶
The following steps will describe how to configure Choreo Connect Analytics with Choreo portal.
Note
Before you begin, make sure to go through main configurations and Configurations for Analytics and familiar with the configurations.
STEP-1: Set up Analytics¶
To configure analytics with Choreo,
- Go to the Choreo Analytics portal and generate an on-prem-key.
Open the docker-compose.yaml file located in the CHOREO-CONNECT_HOME/docker-compose/choreo-connect or CHOREO-CONNECT_HOME/docker-compose/choreo-connect-with-apim directory, based on your choice of setup.
Info
Choreo Connect can be configured to publish Analytics to Choreo cloud in both standalone mode and control plane mode.
Find the environment variables section under the enforcer and change the variables below.
environment:
  ...
  analytics_authURL=
  analytics_authToken=<your-on-prem-key>
Now open the config.toml file located in the CHOREO-CONNECT_HOME/docker-compose/choreo-connect/conf directory, find the Analytics section, and enable analytics using the following configuration.
[analytics]
enabled = true

[analytics.adapter]
bufferFlushInterval = "1s"
bufferSizeBytes = 16384
gRPCRequestTimeout = "20s"

[analytics.enforcer]

[analytics.enforcer.configProperties]
authURL = "$env{analytics_authURL}"
authToken = "$env{analytics_authToken}"

[analytics.enforcer.LogReceiver]
port = 18090
maxMessageSize = 1000000000
maxHeaderLimit = 8192
#keep alive time of the external authz connection
keepAliveTime = 600

[analytics.enforcer.LogReceiver.threadPool]
coreSize = 10
maxSize = 100
#keep alive time of threads in seconds
keepAliveTime = 600
queueSize = 1000
STEP-2: Try it out¶
Let's generate some traffic to see the Analytics in Choreo cloud.
- Deploy your API - follow the deployment guide that matches your setup.
- Invoke the API a few times to generate some traffic (see the sketch after this list).
- Go to Choreo Insights to view statistics.
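A minimal way to generate that traffic is a small script. The sketch below is only an illustration: the API context path, port, and access token are placeholders, and it assumes the default self-signed certificate of the Choreo Connect router (hence verify=False). Substitute your own endpoint and credentials.

import requests

# Placeholder endpoint and token - replace with your deployed API's context/resource and a valid access token.
endpoint = "https://localhost:9095/pet/v1/pets"
token = "<your-access-token>"

for _ in range(10):
    # verify=False only because the default Choreo Connect setup ships a self-signed certificate.
    response = requests.get(endpoint,
                            headers={"Authorization": f"Bearer {token}"},
                            verify=False)
    print(response.status_code)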
Here are some of the graphs generated in Choreo cloud. | https://apim.docs.wso2.com/en/latest/deploy-and-publish/deploy-on-gateway/choreo-connect/configure-analytics/ | 2021-09-17T03:56:26 | CC-MAIN-2021-39 | 1631780054023.35 | [] | apim.docs.wso2.com |
Confirm Hosts prompts you to confirm that Ambari has located the correct hosts for your cluster and to check those hosts to make sure they have the correct directories, packages, and processes required. Choose Click here to see the warnings to see a list of what was checked and what caused the warning. The warnings page also provides access to a python script that can help you clear any issues you may encounter and let you run Rerun Checks.
When you are satisfied with the list of hosts, choose Next.
Next Step
More Information
Hortonworks Data Platform Apache Ambari Troubleshooting | https://docs.cloudera.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-installation/content/confirm_hosts.html | 2021-09-17T05:07:49 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.cloudera.com |
Everything you include in a multi-message campaign is called an action. For example, an action can be a message or a call to a webhook.
Add campaign action
- Click on the "Add Action" button in the Actions step.
- Select a channel to get more template options and start editing your action's content.
available actions
1. Action content
The Action step is where you compose the content of your action.
Add an action name
This name appears in Campaign results, Analytics, and CSV exports.
To rename the action, just click on the title.
You can also edit the action's name on the campaign graph. Hover over it and click the pen icon. Type the action's new name, then click away or hit Enter.
Tip: Giving your actions unique names will help you identify the events and analytics related to those actions later on.
Switch between Edit and Preview modes
The toggle button in the header provides an easy and quick switch between action Edit and Preview settings.
Localize content (optional)
To create localized versions of your action, click on the Localize button.
Change message template
To select another channel or a different message template, click on the Templates button in the header.
Select one of the links below for more details on editing a specific action type.
2. Sub-Audience and Sub-Delivery
After you edit the content of your action, you can decide when to send it using Sub-Delivery. You can also specify who should receive an action using Sub-Audience (optional). These settings will always fall within the Delivery and Audience settings of your overall campaign. See Sub-Delivery and Sub-Audience for more.
Use the stepper navigation to scroll between action's Content, Sub-Audience and Sub-Delivery.
Add more actions with branches and chains
You can create complex, multi-action campaigns by branching and chaining messages in the preview window on the right side of the screen. See Add multiple actions for more instructions.
| https://docs.leanplum.com/docs/actions | 2021-09-17T03:51:03 | CC-MAIN-2021-39 | 1631780054023.35 | [select-channel-template.png, message-step.png, rename-message.png, rename-action.png, action-composition-steps.png] | docs.leanplum.com |
Now, when the hitpoint is clicked in play mode, the state will switch to whatever state is linked in the Hit Point.
3D labels are just pre-customized Hit Points to target a specific model. They have all the properties of a Hit Point. To create a 3D label quickly:
Next: Object Interactions | https://docs.modest3d.com/Xplorer/Adding_Interactions/Hit_Points_and_Labels | 2021-09-17T04:50:32 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.modest3d.com |
The following figure illustrates Panorama in a centralized log collection deployment. In this example, the Panorama management server comprises two Panorama virtual appliances that are deployed in an active/passive high availability (HA) configuration. This configuration suits firewall management within a VMware virtual infrastructure in which Panorama processes up to 10,000 logs/second. (For details on deployment options, see Plan a Log Collection Deployment.) The firewalls send logs to the Panorama management server (to its virtual disk or Network File System [NFS] datastore). By default, the active and passive peers both receive logs, though you can Modify Log Forwarding and Buffering Defaults so that only the active peer does. By default, the Panorama virtual appliance uses approximately 11GB on its internal disk partition for log storage, though you can Expand Log Storage Capacity on the Panorama Virtual Appliance if necessary. | https://docs.paloaltonetworks.com/panorama/7-1/panorama-admin/manage-log-collection/deploy-panorama-virtual-appliances-with-local-log-collection.html | 2021-09-17T05:00:47 | CC-MAIN-2021-39 | 1631780054023.35 | [panorama-admin-206.gif] | docs.paloaltonetworks.com |
GET Campaign Breakdown
Returns performance metrics for a campaign broken down by country, partner source, campaign, audience or creative for a specified time period, time zone, or currency.
#Description
Call this API endpoint to receive campaign-level data broken down by an attribute specified in the query string for a specified time period, time zone, or currency. The attributes that may be called through the "groupby" parameter include:
- country;
- partner-source;
- campaign;
- audience; or
- creative.
#Request
Path
GET /reporting/accounts/{accountId}/campaigns/{campaignId}/breakdown?groupby=creative
Parameters
#Response
200 OK
{ "groupByValue": "string", "grossCost": 0, "netCost": 0, "impressions": 0, "uniqueSessions": 0, "referrals": 0, "acquisitionsByConversionDate": 0, "acquisitionsByReferralDate": 0, "campaigns": 0, "creatives": 0, "audiences": 0, "campaignCountries": 0, "creativeName": "string"} | https://docs.rokt.com/docs/developers/api-reference/reporting/get-campaign-breakdown | 2021-09-17T03:30:37 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.rokt.com |
UTM tracking
One crucial aspect of measuring your success with Rokt Calendar is tracking how your promotion strategies and paid campaigns are performing. By keeping an eye on your metrics, you can determine what strategies are the most powerful and cost effective, and dedicate energy and resources accordingly.
The best way to track performance is to append UTM parameters to all of your calendar URLs. These parameters enable you to attribute the number of subscribers by source, medium, and campaign. Once the URL has been appended, you can see the UTM source, medium, and campaign in Rokt Calendar Analytics.
#Paid media
If you are running paid ads, it is imperative to append source, medium, and campaign to your URLs so you can keep track of which campaigns are most effective. It is not a requirement to all use all of the UTM parameters; using a single parameter (e.g., source) still provides tracking information. Here are two examples of how URLs appended with UTM parameters might look:
Flipboard banner ad:
on.nflnetwork.com/preseason?utm_source=flipboard_banner_ad1&utm_medium=inapp&utm_campaign=fall2015
Paid Google Search:
on.nflnetwork.com/preseason?utm_source=Google_Search&utm_medium=search&utm_campaign=fall2015
#Digital integrations
All digital, mobile, and in-app integrations should be appended with source, medium, and campaign so you can keep track of how many subscribers are coming from your digital and mobile properties. It's important to differentiate between desktop and mobile/in-app buttons so you can tell where the majority of your traffic is coming from.
Here are a few examples of how URLs appended with UTM parameters might look:
Desktop button:
on.fox.com/scream-queens?utm_source=desktop_button&utm_medium=website&utm_campaign=fall2015
Mobile button:
on.fox.com/scream-queens?utm_source=mobile_button&utm_medium=website&utm_campaign=fall2015
In-App button:
on.fox.com/scream-queens?utm_source=InApp_button&utm_medium=website&utm_campaign=fall2015
#Social media
Append your social media links so you can tell how many subscribers are coming from Twitter and Facebook posts. You should also keep track of customers that are coming from other sources on social media like a profile bio link, auto-reply tweet, etc.
For example:
Twitter profile bio:
on.fox.com/empire?utm_source=Twitter_Profile&utm_medium=social&utm_campaign=fall2020
Twitter auto-reply:
on.fox.com/empire?utm_source=Twitter_Reply&utm_medium=social&utm_campaign=fall2020
Facebook post:
on.fox.com/empire?utm_source=Facebook_Post&utm_medium=social&utm_campaign=fall2020
#Adding UTM tracking to calendar URLs
To add tracking, just append a snippet of code to the end of your calendar URLs, listing the source, medium, and campaign you want to track.
Each would need to be defined as indicated below:
utm_source=source_name
utm_medium=medium_name
utm_campaign=campaign_name
Using all of the above, the full string to be appended to the URL should look like this:
?utm_source=source_name&utm_medium=medium_name&utm_campaign=campaign_name
An appended URL should look like this:
on.fox.com/scream-queens?utm_source=source_name&utm_medium=medium_name&utm_campaign=campaign_name
Adding the snippets of code after the question mark doesn't affect anything on the page—anything after the question mark just lets our system know that someone arrived through a certain source or overall marketing channel as part of a certain campaign.
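If you are generating many tagged links, it can help to build them programmatically instead of by hand. Here is a small sketch; the calendar URL and tag values are just examples reused from above.

from urllib.parse import urlencode

base_url = "https://on.fox.com/scream-queens"   # example calendar short link
utm_tags = {
    "utm_source": "mobile_button",
    "utm_medium": "website",
    "utm_campaign": "fall2015",
}

# Append the UTM parameters after a question mark, exactly as described above.
tagged_url = f"{base_url}?{urlencode(utm_tags)}"
print(tagged_url)
# https://on.fox.com/scream-queens?utm_source=mobile_button&utm_medium=website&utm_campaign=fall2015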
Using these tracking tools allows you to measure your performance in a real and tangible way, ensuring that you are getting the most out of your Rokt Calendar account. | https://docs.rokt.com/docs/user-guides/rokt-calendar/reporting/utm-tracking | 2021-09-17T03:29:03 | CC-MAIN-2021-39 | 1631780054023.35 | [UTM Analytics, UTM Tags NFL, UTM Tags Social Media] | docs.rokt.com |
Scylla Monitoring Stack¶
This document describes the setup of Scylla monitoring Stack, base on Scylla Prometheus API.
Scylla monitoring stack consists of three components, wrapped in Docker containers:
prometheus - collects and stores metrics
alertmanager - handles alerts
grafana - dashboard server provide HA out of the box.
Prerequisites¶
Docker Post Installation¶
Procedure¶
4. Connect to Scylla Manager by updating
prometheus/scylla_manager_servers.yml
If you are using Scylla Manager, you should set its ip.
For example
# List Scylla Manager end points
- targets:
  - 127.0.0.1:56090
Note that you do not need to add labels to the Scylla Manager targets.
Start and Stop¶
Start
./start-all.sh -d data_dir
Stop
./kill-all.sh
Setting Specific Versions¶
For example
./start-all.sh -v 3.0,master -M 1.3
will load the dashboards for Scylla versions 3.0 and master and the dashboard for Scylla Manager 1.3
View Grafana Monitor¶
Point your browser to
your-server-ip:3000
By default, Grafana authentication is disabled. To enable it and set a password for user admin use the
-a option | https://docs.scylladb.com/operating-scylla/monitoring/2.4/monitoring_stack/ | 2021-09-17T03:08:02 | CC-MAIN-2021-39 | 1631780054023.35 | [monitor2.png] | docs.scylladb.com |
Menus Menu Item Article Archived
Archived Article List
Used to show a list of Articles that have been Archived and can be searched by date. Archived articles are no longer published but are still stored on the site. Articles are Archived using the Article Manager screen. Note that Articles assigned to the "Uncategorized" Section will not show on the Archived Article List layout.
Parameters - Basic
This Menu Item Type (Archived Article List) allows you to set the sort order of Archived Articles, as shown in the screenshot below.
The Default order is most recent first. The Order option sorts Articles by the Order column in the Article Manager. | https://docs.joomla.org/Help17:Menus_Menu_Item_Article_Archived | 2017-01-16T15:00:14 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.joomla.org |
AutoCAD Civil 3D 2011 contains many new features and enhancements.
Refer to this section for workflow information for common tasks you might perform when working with AutoCAD Civil 3D.
In AutoCAD Civil 3D, objects are the basic building blocks that enable you to create design drawings.
AutoCAD Civil 3D has drawing, object, and command settings. All three levels of settings in AutoCAD Civil 3D are saved with the drawing, and they can be saved to a drawing template.
The AutoCAD Civil 3D user interface enhances the standard AutoCAD environment with additional tools for creating and managing civil design information.
AutoCAD Civil 3D objects are stored in drawings by default. | http://docs.autodesk.com/CIV3D/2011/ENU/filesMadRiverCUG/WS1a9193826455f5ff3075904d11bc05ed636-71d9.htm | 2017-01-16T15:06:19 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.autodesk.com |
public interface ImportTaskManager
A simplified version of the
TaskManager. This interface should only be used for data imports.
Implementations should *not* rely on any external dependencies since they will get refreshed during a data import
which can cause all sorts of issues during an import. Also depending on any external dependencies will double memory
consumption potentially during an import.
<V> TaskDescriptor<V> getTask()
Returns the TaskDescriptor of the current import task that's running.
Returns: the TaskDescriptor, or null if the manager has no such task. The descriptor returned is a snapshot of the task state when the method returns and will not reflect any future changes. null will be returned when no matching task can be found.
<V> TaskDescriptor<V> submitTask(@Nonnull Callable<V> callable, String taskName) throws RejectedExecutionException, AlreadyExecutingException
Submits a Callable task to the manager, which can then be started at the manager's discretion, but hopefully very soon. The TaskDescriptor returned is a snapshot of the task's state when the method returns and will not change to reflect the task's future state changes.
TaskDescriptorreturned is a snapshot of the task's state when the method returns and will not change to reflect the task's future state changes.
callable - the long running task
taskName - An i18nized string describing this task
RejectedExecutionException - if the task manager is being shut down and cannot accept new tasks.
AlreadyExecutingException - if another import task is already running in the task manager.
void shutdownNow()
void prepareCachedResourceBundleStrings(Locale locale)
locale-
void clearCachedResourceBundleStrings()
Map<String,String> getCachedResourceBundleStrings() | https://docs.atlassian.com/software/jira/docs/api/6.2-ClusteringEAP01/com/atlassian/jira/task/ImportTaskManager.html | 2021-10-16T11:42:39 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.atlassian.com |
Linux microPlatform¶
The Foundries.io Linux microPlatform is an extensible software and hardware platform that makes it easier to develop, secure, and maintain Internet-connected Linux-based embedded devices.
The Linux microPlatform is based on OpenEmbedded / Yocto Project.
- Supported Machines
- Repo Source Control Tool
- Understanding Development FIO Tags
- Development Container
- Building from Source
- Linux Kernel
- OpenEmbedded / Yocto Project Layers
- LmP Distros
- WIC Image Installer
- Persistent Log Support
- Network Debugging
- Updating the Linux microPlatform Core
- Extending the Linux microPlatform
- OSS Compliance with FoundriesFactory
- Creating Preloaded Images | https://docs.foundries.io/83/reference-manual/linux/linux.html | 2021-10-16T12:44:22 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.foundries.io |
CfgConnInfo
Description
CfgConnInfo contains information about a connection.
Attributes
- appServerDBID — The unique identifier of the Server this application shall connect to as a client when it is started.
- connProtocol — A pointer to the name of the connection control protocol. Available values: addp. Default: none.
- timoutLocal — The heartbeat polling interval measured in seconds, on the client side. See comments below.
- timoutRemote — The heartbeat polling interval measured in seconds, on the server side. See comments below.
- mode — The trace mode dedicated for this connection. Refer to CfgTraceMode below. Default value: CFGTMNoTraceMode.
- id — An identifier of the server's listening port. Should correspond to CfgPortInfo.id.
- transportParams — Connection protocol's transport parameters.
- appParams — Connection protocol's application parameters.
- proxyParams — Connection protocol's proxy parameters.
- description — Optional description of the connection.
Tip: If client and server exchange large processing instructions, that is, packets larger than 1 Mbyte, the values for timeoutLocal and timeoutRemote for this connection should not be set to less than 3 seconds. Otherwise, the connection library will be forced to disconnect the client.
This page was last edited on June 27, 2017, at 20:20. | https://docs.genesys.com/Documentation/PSDK/latest/ConfigLayerRef/CfgConnInfo | 2021-10-16T12:30:46 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.genesys.com |
Creating a subnet group
A cache subnet group is a collection of subnets that you may want to designate for your cache clusters in a VPC. When launching a cache cluster in a VPC, you need to select a cache subnet group. Then ElastiCache uses that cache subnet group to assign IP addresses within that subnet to each cache node in the cluster. To create a subnet group with the AWS CLI, use the create-cache-subnet-group command. For Linux, macOS, or Unix:
aws elasticache create-cache-subnet-group \
    --cache-subnet-group-name mysubnetgroup \
    --cache-subnet-group-description "Testing" \
    --subnet-ids subnet-53df9c3a
For Windows:
aws elasticache create-cache-subnet-group ^
    --cache-subnet-group-name mysubnetgroup ^
    --cache-subnet-group-description "Testing" ^
    --subnet-ids subnet-53df9c3a
This command should produce output similar to the following:
{ "CacheSubnetGroup": { "VpcId": "vpc-37c3cd17", "CacheSubnetGroupDescription": "Testing", "Subnets": [ { "SubnetIdentifier": "subnet-53df9c3a", "SubnetAvailabilityZone": { "Name": "us-west-2a" } } ], "CacheSubnetGroupName": "mysubnetgroup" } } ?Action=CreateCacheSubnetGroup &CacheSubnetGroupDescription=Testing &CacheSubnetGroupName=mysubnetgroup &SignatureMethod=HmacSHA256 &SignatureVersion=4 &SubnetIds.member.1=subnet-53df9c3a &Timestamp=20141201T220302Z &Version=2014-12-01 &X-Amz-Algorithm=&AWS;4-HMAC-SHA256 &X-Amz-Credential=<credential> &X-Amz-Date=20141201T220302Z &X-Amz-Expires=20141201T220302Z &X-Amz-Signature=<signature> &X-Amz-SignedHeaders=Host | https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/SubnetGroups.Creating.html | 2021-10-16T13:28:37 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.aws.amazon.com |
ClustrixDB is a shared-nothing clustered scalable database based on commodity hardware and parallel software. Parallelism throughout the system integrates the various nodes of the cluster into one very large (huge) database, from both programming and management perspectives. The same shared-nothing approach is used by very large time-series databases and some of the world's fastest super-computers.
No, ClustrixDB is available as licensed, downloadable software.
ClustrixDB is supported on RHEL or CentOS 7.4.
There are several things that affect scalability and performance:
This is very different from other systems, which routinely move large amounts of data to the node that is processing the query, then eliminate all the data that doesn't fit the query (typically lots of data). By only moving qualified data across the network to the requesting node, ClustrixDB avoids this overhead.
It doesn't matter. Clients can connect to any node in the cluster. The ClustrixDB parallel database software will route the queries to the appropriate nodes - the ones that have the relevant data. Clustrix recommends using an external load balancer.
Replication only scales reads. In a master-slave configuration, all writes are done to the master, then replicated to the various slaves. This causes two problems:
Essentially, ClustrixDB is doing horizontal federation. The key is making the federation invisible to applications and to administrators. In addition, ClustrixDB provides:
By making the federation invisible to applications, ClustrixDB eliminates the need for custom programming and administration for partitioning. This increases the customer's ability to query and update transactions across partitions, ultimately leading to greater functionality at lower cost.
All data in ClustrixDB is replicated on a per-table or per-index basis. Customers may prefer to maintain more replicas of base representations (data tables), and fewer replicas of indexes, since they are reconstructable.
The query planner is cluster-aware, and it knows which nodes of the cluster contain which indexed rows. Here's how it works:
Note: if the join is on columns that have no indexes, then table scans are required, but the scans can be done in parallel on multiple nodes, so the operation, while not optimal, is still accelerated.
See Xpand Installation Guide Bare OS Instructions.
The short answer is: just add nodes. Refer to these instructions for guidance in Expanding Your Cluster's Capacity - Flex Up.
The system is designed to continue operating through inevitable component failures, as follows:
The node is the fundamental redundant unit. Multiple nodes can fail without a system outage. In addition, all data paths and all data are redundant. Administrators can specify the desired level of redundancy (number of data replicas) and can specify priorities for the re-creation of additional replicas when storage or nodes fail.
No, it's a complete database, built from the ground up for high-performance, clustered OLTP. It is wire-compatible with MySQL, but is implemented without any MySQL code.
Yes. For complete details, please see Xpand Fast Backup and Restore. ClustrixDB also supports MySQL operations such as mysqldump. | https://docs.clustrix.com/plugins/viewsource/viewpagesrc.action?pageId=10912418 | 2021-10-16T12:26:23 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.clustrix.com |
This is a memo intended to contain documentation of the VPP LLDP implementation Everything that is not directly obvious should come here.
LLDP is a link layer protocol to advertise the capabilities and current status of the system.
There are 2 nodes handling LLDP
1.) input-node which processes incoming packets and updates the local database 2.) process-node which is responsible for sending out LLDP packets from VPP side
LLDP has a global configuration and a per-interface enable setting.
Global configuration is modified using the "set lldp" command
set lldp [system-name <string>] [tx-hold <value>] [tx-interval <value>]
system-name: the name of the VPP system sent to peers in the system-name TLV
tx-hold: multiplier for tx-interval when setting time-to-live (TTL) value in the LLDP packets (TTL = tx-hold * tx-interval + 1, if TTL > 65535, then TTL = 65535)
tx-interval: time interval between sending out LLDP packets
Per interface setting is done using the "set interface lldp" command
set interface lldp <interface> | if_index <idx> [port-desc <string>] [disable]
interface: the name of the interface for which to enable/disable LLDP if_index: sw interface index can be used if interface name is not used. port-desc: port description disable: LLDP feature can be enabled or disabled per interface.
Configure system-name as "VPP" and transmit interval to 10 seconds:
set lldp system-name VPP tx-interval 10
Enable LLDP on interface TenGigabitEthernet5/0/1 with port description
set interface lldp TenGigabitEthernet5/0/1 port-desc vtf:eth0
The list of LLDP-enabled interfaces which are up can be shown using "show lldp" command
Example:
DBGvpp# show lldp
Local interface          Peer chassis ID     Remote port ID   Last heard   Last sent   Status
GigabitEthernet2/0/1                                          never        27.0s ago   inactive
TenGigabitEthernet5/0/1  8c:60:4f:dd:ca:52   Eth1/3/3         20.1s ago    18.3s ago   active
All LLDP configuration data with all LLDP-enabled interfaces can be shown using "show lldp detail" command
Example:
DBGvpp# show lldp detail
LLDP configuration:
Configured system name: vpp
Configured tx-hold: 4
Configured tx-interval: 30
LLDP-enabled interface table:
Interface name: GigabitEthernet2/0/1
Interface/peer state: inactive(timeout)
Last known peer chassis ID:
Last known peer port ID:
Last packet sent: 12.4s ago
Last packet received: never

Interface name: GigabitEthernet2/0/2
Interface/peer state: interface down
Last packet sent: never

Interface name: TenGigabitEthernet5/0/1
Interface/peer state: active
Peer chassis ID: 8c:60:4f:dd:ca:52(MAC address)
Remote port ID: Eth1/3/3(Locally assigned)
Last packet sent: 3.6s ago
Last packet received: 5.5s ago
| https://docs.fd.io/vpp/21.06.0/da/d5d/lldp_doc.html | 2021-10-16T11:18:32 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.fd.io |
Date: Mon, 5 Aug 2013 03:25:39 -0500 From: David Noel <[email protected]> To: Matthew Seaman <[email protected]> Cc: [email protected] Subject: Re: Update /usr/src with subversion Message-ID: <CAHAXwYD2kNPGDurqGFvnQDNca6xFuRr2cn_6GJL1mC6WqhusuA@mail.gmail.com> In-Reply-To: <[email protected]> References: <CAHAXwYDuRuA3iiWfu3yB19ShKMPikLu+21bXkmZf4X2AjLaMBw@mail.gmail.com> <[email protected]>
Ok great, thanks Matthew. I tried a different search query and actually found a similar question on the forums: Your solution looks a bit cleaner than the one proposed there: "rm -r /usr/src/.svn, and then check out the new branch". I'll check out the man for svn switch. Thanks again, -David On 8/5/13, Matthew Seaman <[email protected]> wrote: > On 05/08/2013 09:00, David Noel wrote: >> Does anyone know how a workaround for having to rm -rf /usr/src every >> time the source URL changes? I'm updating from 8.3 to 8.4 with >> subversion and got a message along the lines of "Error: /usr/src/ >> contains files from a different URL". -David > > You need 'svn switch' -- so, if you've got some other branch checked > out, and you want to have 8.4-RELEASE instead, then it's something like: > > # svn switch ^/base/releng/8.4 > > This will speedily change your checked out tree with minimal network IO. > > You can also use 'svn switch --relocate' to change which svn servers you > have the tree checked out from or the protocol (svn://, https:// etc) > used. See the output of 'svn help switch' for details. > > Cheers, > > Matthew > > _______________________________________________ > [email protected] mailing list > > To unsubscribe, send any mail to > "[email protected]" >
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=112962+0+/usr/local/www/mailindex/archive/2013/freebsd-questions/20130811.freebsd-questions | 2021-10-16T11:24:28 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.freebsd.org |
Designs and templates
Learn how to improve your designs and templates by following these best-practice tutorials.
- Set up a Git bridge
This tutorial shows you how to to set up a Git file bridge asset, connect it to a repository, and configure a webhook to fetch updates automatically.
- Create a component template to embed an image listing
This tutorial shows you how to create a basic Component Template that lets users in Edit+ embed an image listing using a metadata field and a nested Asset Listing page.
- Create a multi-step custom form
This tutorial shows you how to create a Job Application online form using the Custom Form asset.
- Creating a robots.txt file
This tutorial shows you how to create a robots.txt to restrict access to your site by 'spider' programs.
- Create main menus using edge side includes (ESIs)
This tutorial shows you how to create main menus in Matrix that you can include in your design using an Edge Side Include (ESI) tag.
- Use Edge Side include (ESI) tags with Squiz Edge
This tutorial shows you how to use Matrix to take advantage of the Edge Side Includes (ESI) capabilities in Squiz Edge to display cached conditional content within your site.
- Create dynamic title tags using Server Side JavaScript (SSJS)
This tutorial shows you how to create page titles by combining Matrix keywords with custom Server Side JavaScript (SSJS) logic.
- Create a dynamic asset listing using Server Side JavaScript (SSJS) with metadata and paint layouts
This tutorial shows you how to create a "Related Asset" asset listing powered by a metadata schema and paint layout. | https://docs.squiz.net/matrix/version/latest/tutorials/designs-and-templates/index.html | 2021-10-16T12:20:35 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.squiz.net |
brainpy.inputs module¶
brainpy.inputs module aims to provide several commonly used helper functions to help construct input currents.
[1]:
import brainpy as bp
import numpy as np
import matplotlib.pyplot as plt
[2]:
def show(current, duration, title):
    ts = np.arange(0, duration, 0.1)
    plt.plot(ts, current)
    plt.title(title)
    plt.xlabel('Time [ms]')
    plt.ylabel('Current Value')
    plt.show()
brainpy.inputs.period_input¶
brainpy.inputs.period_input() is an updated version of the previous brainpy.inputs.constant_input() (see below).
brainpy.inputs.constant_input() (see below).
Sometimes, we need input currents with different values in different periods. For example, if you want to get an input in which 0-100 ms is zero, 100-400 ms is value 1., and 400-500 ms is zero, then, you can define:
[3]:
current, duration = bp.inputs.period_input(values=[0, 1., 0.], durations=[100, 300, 100], return_length=True)
[4]:
show(current, duration, '[(0, 100), (1, 300), (0, 100)]')
brainpy.inputs.constant_input¶
brainpy.inputs.constant_current() function helps you to format constant current in several periods.
For the input created above, we can define it again with
constant_current() by:
[5]:
current, duration = bp.inputs.constant_current([(0, 100), (1, 300), (0, 100)])
[6]:
show(current, duration, '[(0, 100), (1, 300), (0, 100)]')
Another example is this:
[7]:
current, duration = bp.inputs.constant_current([(-1, 10), (1, 3), (3, 30), (-0.5, 10)], 0.1)
[8]:
show(current, duration, '[(-1, 10), (1, 3), (3, 30), (-0.5, 10)]')
brainpy.inputs.spike_input¶
brainpy.inputs.spike_input() helps you to construct an input like a series of short-time spikes. It receives the following settings:
points: The spike time-points. Must be an iterable object. For example, list, tuple, or arrays.
lengths: The length of each point-current, mimicking the spike durations. It can be a scalar float to specify the unified duration. Or, it can be list/tuple/array of time lengths with the length same with
points.
sizes: The current sizes. It can be a scalar value. Or, it can be a list/tuple/array of spike current sizes with the length same with
points.
duration: The total current duration.
dt: The time step precision. The default is None (will be initialized as the default
dtstep).
For example, if you want to generate a spike train at 10 ms, 20 ms, 30 ms, 200 ms, 300 ms, and each spike lasts 1 ms and the spike current is 0.5, then you can use the following funtions:
[9]:
current = bp.inputs.spike_input(points=[10, 20, 30, 200, 300],
                                lengths=1.,  # can be a list to specify the spike length at each point
                                sizes=0.5,   # can be a list to specify the spike current size at each point
                                duration=400.)
[10]:
show(current, 400, 'Spike Input Example')
brainpy.inputs.ramp_input¶
brainpy.inputs.ramp_input() mimics a ramp or a step current to the input of the circuit. It receives the following settings:
c_start: The minimum (or maximum) current size.
c_end: The maximum (or minimum) current size.
duration: The total duration.
t_start: The ramped current start time-point.
t_end: The ramped current end time-point. Default is the None.
dt: The current precision.
We illustrate the usage of
brainpy.inputs.ramp_input() by two examples.
In the first example, we increase the current size from 0. to 1. between the start time (0 ms) and the end time (1000 ms).
[11]:
duration = 1000
current = bp.inputs.ramp_input(0, 1, duration)
show(current, duration, r'$c_{start}$=0, $c_{end}$=1, duration=%d, '
                        r'$t_{start}$=0, $t_{end}$=None' % (duration))
In the second example, we increase the current size from 0. to 1. from the 200 ms to 800 ms.
[12]:
duration, t_start, t_end = 1000, 200, 800
current = bp.inputs.ramp_input(0, 1, duration, t_start, t_end)
show(current, duration, r'$c_{start}$=0, $c_{end}$=1, duration=%d, '
                        r'$t_{start}$=%d, $t_{end}$=%d' % (duration, t_start, t_end))
| https://brainpy.readthedocs.io/en/v1.0.3/apis/inputs_module.html | 2021-10-16T11:47:12 | CC-MAIN-2021-43 | 1634323584567.81 | [output figures: apis_inputs_module_7_0.png, 11_0.png, 14_0.png, 19_0.png, 23_0.png, 25_0.png] | brainpy.readthedocs.io |
# Securing Transmission's RPC interface.
This guide demonstrates how Pomerium can secure a Transmission (opens new window) daemon. Pomerium is an identity-aware access proxy that can add single-sign-on / access control to any service.
# Transmission
Transmission (opens new window) is a powerful BitTorrent client that's highly customizable. It's often run remotely as a system daemon, and interacted with through a remote client using a Remote Procedure Call (opens new window) (RPC) interface.
The BitTorrent protocol is widely used in the distribution of large open-source softwares, like Linux distribution images and source code. Using Transmission as a system daemon, you can monitor and automatically download the latest versions to a local distribution server.
While there are software clients available to interact with the daemon over RPC, the easiest option is often to use the web interface built into the Transmission daemon package. Unfortunately, the service is only built to communicate over unencrypted HTTP, using basic HTTP authentication (opens new window). Using Pomerium, we can encrypt traffic from anywhere in the world to the local network hosting the Transmission service, and restrict access to authenticated users.
WARNING
Because RPC traffic to and from a Transmission daemon is unencrypted, we strongly suggest you only communicate from Pomerium to Transmission on a trusted private network. Note that some cloud hosting providers differentiate "private networking" (which is visible to all hosts in a data center) from "VLANS" which are only visible to your hosts. While you can configure a local proxy on your Transmission host to provide TLS encryption, that configuration is outside of the scope of this guide.
Running Pomerium and Transmission on the same host, using docker for example, negates this concern.
# Before You Begin
This guide assumes you've completed one of the quick start guides, and have a running instance of Pomerium configured. This guide also assumes that Pomerium and Transmission will both run on separate hosts (physical or virtual machines) on the same private network (LAN or VLAN), but the configuration could be easily adjusted to fit your setup.
In addition to a working instance of Pomerium, have ready the private IP addresses for the Pomerium and Transmission hosts. If you're running both on the same host, you can substitute localhost for both.
# Configuration
# Pomerium Config
Edit your config.yaml file to add the following policy. Note that <> denotes placeholder values that must be replaced if copying this config directly:
routes:
  - from: https://<transmission.mydomain.com> # Replace with the domain you want to use to access Transmission
    to: http://<private.ip.address>:9091 # Replace with the private network address of the Transmission host, or `localhost` if running on the same host.
    policy:
      - allow:
          or:
            - email:
                is: [email protected] # Replace with authorized user(s), or remove if using group permissions only.
            - groups:
                has: "<transmission-users>" # Replace with authorized user group(s), or remove if using user permissions only.
Remember to restart the Pomerium instance after saving your changes.
# Transmission Config
TIP
Don't forget to switch your terminal prompt to the Transmission host before continuing.
If you don't already have the Transmission daemon installed, install it through your distro's package manager. The commands to install and configure Transmission below assume a Debian-based Linux distribution, but can be adapted for any Linux distro:
sudo apt update && sudo apt install transmission-daemon
Because Transmission writes over its configuration file when running, stop the service before continuing:
sudo systemctl stop transmission-daemon.service
In your preferred text editor, open /etc/transmission-daemon/settings.json with sudo or as the root user. Look for the following key/value pairs, and edit appropriately.
Because we are using Pomerium to authenticate, disable HTTP auth:
"rpc-authentication-required": false,
Confirm that RPC is enabled:
"rpc-enabled": true,
Enable and configure the RPC Host whitelist. This ensures that the service will only work when accessed from the domain defined in Pomerium's config.yaml file (the policy.from key). This helps to mitigate DNS hijacking attack vectors:
"rpc-host-whitelist": "<transmission.mydomain.com>", "rpc-host-whitelist-enabled": true,
Enable and configure the RPC whitelist to only allow access from the Pomerium gateway. The value should be the private IP address of the Pomerium host, or localhost if running on the same host:
"rpc-whitelist": "<pomerium.host.address>", "rpc-whitelist-enabled": true,
After saving and closing settings.json, restart the service:
sudo systemctl start transmission-daemon.service
You should now be able to authenticate and access your Transmission daemon remotely in the web browser, with TLS encryption!
In addition to the lock symbol in your browser's address bar, you can go to
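To sanity-check the RPC path end to end, you can hit Transmission's RPC endpoint through the proxy. The sketch below is an illustration only: it assumes you copy an authenticated Pomerium session cookie (named _pomerium by default) out of your browser; an unauthenticated request should instead be redirected or rejected by Pomerium.

import requests

url = "https://transmission.mydomain.com/transmission/rpc"

# Copy the value of the _pomerium session cookie from your browser after logging in.
cookies = {"_pomerium": "<session-cookie-value>"}

response = requests.get(url, cookies=cookies)

# Transmission answers a first RPC request with 409 and returns a CSRF token header.
print(response.status_code)
print(response.headers.get("X-Transmission-Session-Id"))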
<transmission.mydomain.com>/.pomerium to view and confirm your session details. | https://master.docs.pomerium.io/guides/transmission | 2021-10-16T12:18:43 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['https://d33wubrfki0l68.cloudfront.net/93f5cc7c7fb132565a7854347db577033d6e472b/45024/assets/img/transmission-demo.ab42d478.png',
'The Transmission web interface, secured with Pomerium'],
dtype=object) ] | master.docs.pomerium.io |
Use Amplicare's Late Enrollment Calculator:
What is it?
The late enrollment penalty is an amount added to your Medicare Part D premium every month.
A patient may owe a late enrollment penalty if, at any time after the initial enrollment period is over, there is a period of 63 days or more in a row where he or she doesn't have Part D or other "creditable" prescription drug coverage.
The initial enrollment period lasts for 7 months, 3 months before and after their 65th birthday (and the month of their birthday). "Creditable" drug coverage can be from an employer or union, a state assistance program, VA benefits, etc. as long as it's expected to pay at least as much as Medicare's standard prescription drug coverage. If a patient gets Extra Help, they don't pay the late enrollment penalty.
How much is the Part D penalty?
The cost of the late enrollment penalty depends on how long the beneficiary went without creditable prescription drug coverage.
The late enrollment penalty is calculated by multiplying 1% of the "national base beneficiary premium" ($33.19 in 2019) times the number of full, uncovered months the patient didn't join a Medicare Prescription Drug Plan after their initial enrollment period (and went without other creditable prescription drug coverage). The final amount is rounded to the nearest dime and added to their monthly premium.
1% x $33.19 x number of full months without coverage
*The national base beneficiary premium may increase each year, so the penalty amount may also increase each year.
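As a worked example using the 2019 base premium above, here is the arithmetic for a patient who went 14 full months without creditable coverage (the month count is just an example):

base_premium = 33.19           # 2019 national base beneficiary premium
uncovered_months = 14          # example only

penalty = 0.01 * base_premium * uncovered_months       # 1% x $33.19 x 14 = $4.6466
penalty = round(penalty * 10) / 10                     # rounded to the nearest dime
print(f"${penalty:.2f} added to the monthly premium")  # $4.60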
Amplicare does not include your patient's penalty when calculating out-of-pocket costs per plan. However, you can calculate the patient's monthly penalty using our Late Enrollment Calculator above.
After your patient joins a Medicare drug plan, the plan will tell them if they owe a penalty, and what their premium will be. The beneficiary may have to pay this penalty for as long as they have a Medicare drug plan--usually the rest of their lives 😬! That's why we suggest you check your Newly Eligible patients report every month, so you can prevent your patient's from incurring this fee.
If your patient had to pay a Part D late enrollment penalty before they turned 65, the penalty will be waived once they reach 65.
What's Next?
Reach out to your Newly Eligible patients to make sure your patients enroll in a plan on time by reading this article here! | https://docs.amplicare.com/en/articles/434251-how-does-the-late-enrollment-penalty-affect-my-patients | 2021-10-16T12:41:25 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.amplicare.com |
Getting there
Note: If you are already connected to a site, you can open the Connection menu and click Site Properties.
Options on the Connection tab allow you to configure settings for maintaining connections and specifying timeout intervals.
Note: For SFTP connections, the only setting available from this tab is Use IPV6.
Use passive mode
When selected, the client sends a PASV command to communicate with the server in passive mode (sometimes called PASV mode). This initiates a separate data connection for directory listings and file transfers.
Use passive mode to minimize connection problems with firewalls, such as the Windows firewall enabled by default in some versions of Windows XP.
If passive mode has been turned off and a directory listing does not display or you get an error "425 Can't open data connection," you should enable this setting.
IPV6 connections use EPSV.
Send keep alive every <n> seconds
Most servers have an "idle time" value that specifies how long a user's FTP session can last when no activity is detected. When the user exceeds the time limit, the server connection is closed. This setting allows you to direct the FTP Client to send a NOOP command to the server at timed intervals to prevent the server from closing the connection due to inactivity.
TCP port
Use the TCP port box to specify a non-standard TCP service port number or socket for FTP. The default value 21 is the standard service port for FTP.
When a connection is opened, if Account is filled in, the client automatically sends the account name to the server as the last step in the login process. If you don't fill in the Account box and your server requires an account name, you must enter an ACCOUNT command at the FTP command line before you can see the files in that account.
Account
Use Account if your server requires an account name for file access. For case-sensitive servers, be sure to use the appropriate case.
Connect
Specifies the maximum number of seconds to continue trying to establish an FTP server connection. Entering 0 (zero) in this box prevents the FTP Client from ever timing out on a connection attempt.
Session
Specifies the maximum number of seconds to wait for data packets being transferred to or from the host. If nothing is received within the period specified, a timeout error displays and the transfer is aborted; in this case, try the operation again. If you receive repeated timeout errors, increase the timeout value. Entering 0 (zero) in this box prevents the FTP Client from ever timing out when waiting for a response.
Use IPV6
Specifies whether connections to the host use IPV6 (Internet Protocol version 6) or the older IPv4 protocol. By default, the client attempts to connect using IPv6, and uses IPv4 when IPv6 is not available. You may need to change this value to "Never" if you are having problems connecting to hosts on an IPv4 network from a client computer with IPv6 enabled. | https://docs.attachmate.com/Reflection/2008/R1/Guide/pt/user-html/ftp_site_properties_connection_cs.htm | 2021-10-16T12:53:25 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.attachmate.com |
User Permissions Are Not Disabled in the Destination Org When Deploying Profiles
User Permissions set to false are not retrieved by the Metadata API. This is the standard behavior of Salesforce’s Metadata API. If you do a retrieve of a profile with Workbench or ANT Migration Tool, you will get only the user permissions that are set to true on that profile.
When committing a profile in a user story, you will see that the user permissions set to true in the master branch are removed in the feature branch if they are set to false in the source org. This is because user permissions set to false are not retrieved by the Metadata API.
The user permission will be removed also from the promotion branch and the destination branch when deploying.
Permissions that don't exist in the xml file that is being deployed (promotion branch) are not modified in the org you are deploying to, and therefore, they will not be set to false in the destination org.
Solution 1
If you add the user permissions in the feature branch and the promotion branch as false, the permissions will be deployed with status disabled in the destination org.
Let’s take a look at the example scenario below:
- Disable Manage Users and Manage Internal Users in the source org.
- Commit the profile in a user story.
- The user permissions are removed in the feature branch as expected. (See screenshot above).
- Commit, promote and deploy the user story. The permissions are removed in the destination branch but they are not disabled in the destination org.
- Add the permissions in the feature branch in Git set to false.
- Create a new deployment from the promotion.
- You will see now the permissions added in the new promotion branch as false.
- Deploy the new deployment which is taking the new promotion branch with the permissions set to false.
- The permissions are deployed as false in the destination org.
Solution 2
You can use the Commit Full Profiles and Permission Sets feature to commit and deploy a profile including also the permissions that are set to false. Note that this will commit and deploy the whole profile with all the OLS, FLS, user permissions and any other relationships, even if you do not include any other component in the commit. | https://docs.copado.com/article/ztg8d3yr01-user-permissions-are-not-disabled-in-the-destination-org-when-deploying-profiles | 2021-10-16T12:33:55 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.copado.com |
The Invoke Named Lambda Function Action is used to execute an existing named Lambda function in the selected Context.
As part of a sequence of Actions, you may wish to inject some custom code to handle a special requirement not handled by GorillaStack.
In the Action configuration, you first select your Lambda function by name.
Next, you can specify execution settings including the:
You can use the Action by setting up a rule.
You’ll also want to guarantee that you have created a Lambda function in the specified context that you can target by name.
There are two tabs used to configure the Action:
Function Name
The name of the function that you wish to target with this Action..) | https://docs.gorillastack.com/docs/reference/actions/invoke_named_lambda_function/ | 2021-10-16T12:34:42 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.gorillastack.com |
Colossus Premium Template : How to install / configure
Colossus is a a column-based premium PDF template with the option to automatically display two, three or four-column layouts – no CSS-Ready Classes required! You can purchase from our PDF Template Shop. This guide will walk you through installing and configuring Colossus to its full potential.
InstallationInstallation
Please follow our installation guide, which provides instructions for setting up and configuring your premium template.
LimitationsLimitations
As a Universal PDF template, Colossus can be used with all Gravity Forms and will correctly display most official Gravity Forms fields. To ensure the layout stays consistent Page and Hidden fields are removed from the PDF.
ConfiguringConfiguring
All PDF templates have common settings that can be configured, such as font, security and PDF attachments, and we recommend reviewing the PDF setup guide to get a better understanding on all the available settings. All template-specific configuration is done from the Template tab and below you'll find detailed information about each option available in Colossus, what it does and how it alters the generated PDF.
Main Heading
: The main heading is included at the very start of your PDF, before any other content. Merge tags are supported.
: Leave the field blank to disable.
: This option replaces the "Show Form Title" setting. To replicate, use the
{form_title} merge tag.
Columns : Controls the number of columns to use in the PDF. Choose between two, three or four column layout. The default layout is 3 columns. : The more columns used the more condensed the PDF will be. If using List or Likert fields we recommend two columns.
Logo / Image : This image is left-aligned in the header of each page of the PDF. The height of the image will be no greater than 280px (25 millimetres or about 1 inch). : An image 500px wide will be a suitable resolution in most cases. To ensure your PDF generates quickly and the PDF file size stays small we recommend using an image under 1MB.
Show Section Break Title : You can enable or disable the Section Break fields in the PDF. Defaults to disabled. : When enabled, the Section Break fields will be displayed in a new row spanning the width of the PDF. : Added in version 2.0.
Show Section Break Description : You can enable or disable the Section Break field description in the PDF. Defaults to disabled. : The Show Section Break Title option must be enabled to display the Section Break field description. : Added in version 2.0.
Show HTML Fields : You can enable or disable HTML fields in the PDF. Defaults to disabled. : When enabled, the HTML fields will be displayed in a new row spanning the width of the PDF. : Added in version 2.0.
Additional SettingsAdditional Settings
Along with the options specific to Colossus, the following core settings are also supported:
CSS Ready ClassesCSS Ready Classes
While this template ignores the standard CSS Ready Classes, in version 2.0 the following CSS Ready Classes can be used to give you more control over the rows and columns:
row-break– When a field has this CSS class added, the PDF will always display that field in the first column of a row (creating a new row if necessary).
full-width– When a field has this CSS class added, the PDF will allways display that field in a new row that spans the full-width of the PDF (i.e columns get disabled for this field).
Recommended FontRecommended Font
Colossus comes bundled with Arimo, an open source Google web font (Apache License, Version 2.0). Arimo is a innovative, refreshing sans serif font that works great at 10pt with Colossus. Set the PDF font in the Appearance tab.
Viewing PDFViewing PDF
Once you've saved your new PDF you can view it from the Gravity Forms Entries List page. Just remember to fill out and submit your form if the entry list is empty. | https://docs.gravitypdf.com/v4/shop-pdf-template-colossus/ | 2021-10-16T12:15:18 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['https://resources.gravitypdf.com/uploads/2017/01/updating-four-columns.png',
'Colossus allows your fields to automatically be displayed in two, three and four column layouts.'],
dtype=object) ] | docs.gravitypdf.com |
One of Salt's strengths, the use of existing serialization systems for representing SLS data, can also backfire. YAML is a general purpose system and there are a number of things that would seem to make sense in an sls file that cause YAML issues. It is wise to be aware of these issues. While reports or running into them are generally rare they can still crop up at unexpected times..
When dictionaries are nested within other data structures (particularly lists),
the indentation logic sometimes changes. Examples of where this might happen
include
context and
default options from the
file.managed state:
}
Here is a more concrete example of how YAML actually handles these indentations, using the Python interpreter on the command line:
>>> import yaml >>> yaml.safe_load('''mystate: ... file.managed: ... - context: ... some: var''') {'mystate': {'file.managed': [{'context': {'some': 'var'}}]}} >>> yaml.safe_load('''mystate: ... file.managed: ... - context: ... some: var''') {'mystate': {'file.managed': [{'some': 'var', 'context': None}]}}
Note that in the second example,
some is added as another key in the same
dictionary, whereas in the first example, it's the start of a new dictionary.
That's the distinction.
context is a common example because it is a keyword
arg for many functions, and should contain a dictionary.
Similarly, when a multi-line string is nested within a list item (such as when
using the
contents argument for a
file.managed state), the indentation must be doubled. Take for
example the following state:
/tmp/foo.txt: file.managed: - contents: | foo bar baz
This is invalid YAML, and will result in a rather cryptic error when you try to run the state:
myminion: Data failed to compile: ---------- Rendering SLS 'base:test' failed: could not find expected ':'; line 5 --- /tmp/foo.txt: file.managed: - contents: | foo bar <====================== baz ---
The correct indentation would be as follows:
/tmp/foo.txt: file.managed: - contents: | foo bar baz
PyYAML will load these values as boolean
True or
False. Un-capitalized
versions will also be loaded as booleans (
true,
false,
yes,
no,
on, and
off). This can be especially problematic when constructing
Pillar data. Make sure that your Pillars which need to use the string versions
of these values are enclosed in quotes. Pillars will be parsed twice by salt,
so you'll need to wrap your values in multiple quotes, including double quotation
marks (
" ") and single quotation marks (
' '). Note that spaces are included
in the quotation type examples for clarity.
Multiple quoting examples looks like this:
- '"false"' - "'True'" - "'YES'" - '"No"'
Note
When using multiple quotes in this manner, they must be different. Using
"" ""
or
'' '' won't work in this case (spaces are included in examples for clarity).
The % symbol has a special meaning in YAML, it needs to be passed as a string literal:
cheese: ssh_auth.present: - user: tbortels - source: salt://ssh_keys/chease.pub - config: '%h/.ssh/authorized_keys'
PyYAML will load a time expression as the integer value of that, assuming
HH:MM. So for example,
12:00 is loaded by PyYAML as
720. An
excellent explanation for why can be found here.
To keep time expressions like this from being loaded as integers, always quote them.
Note
When using a jinja
load_yaml map, items must be quoted twice. For
example:
{% load_yaml as wsus_schedule %} FRI_10: time: '"23:00"' day: 6 - Every Friday SAT_10: time: '"06:00"' day: 7 - Every Saturday SAT_20: time: '"14:00"' day: 7 - Every Saturday SAT_30: time: '"22:00"' day: 7 - Every Saturday SUN_10: time: '"06:00"' day: 1 - Every Sunday {% endload %}
If I can find a way to make YAML accept "Double Short Decs" then I will, since I think that double short decs would be awesome. So what is a "Double Short Dec"? It is when you declare a multiple short decs in one ID. Here is a standard short dec, it works great:
vim: pkg.installed
The short dec means that there are no arguments to pass, so it is not required to add any arguments, and it can save space.
YAML though, gets upset when declaring multiple short decs, for the record...
THIS DOES NOT WORK:
vim: pkg.installed user.present
Similarly declaring a short dec in the same ID dec as a standard dec does not work either...
ALSO DOES NOT WORK:
fred: user.present ssh_auth.present: - name: AAAAB3NzaC... - user: fred - enc: ssh-dss - require: - user: fred
The correct way is to define them like this:
vim: pkg.installed: [] user.present: [] fred: user.present: [] ssh_auth.present: - name: AAAAB3NzaC... - user: fred - enc: ssh-dss - require: - user: fred
Alternatively, they can be defined the "old way", or with multiple "full decs":
vim: pkg: - installed user: - present fred: user: - present ssh_auth: - present - name: AAAAB3NzaC... - user: fred - enc: ssh-dss - require: - user: fred
According to YAML specification, only ASCII characters can be used.
Within double-quotes, special characters may be represented with C-style escape sequences starting with a backslash ( \ ).
Examples:
- micro: "\u00b5" - copyright: "\u00A9" - A: "\x41" - alpha: "\u0251" - Alef: "\u05d0"
List of usable Unicode characters will help you to identify correct numbers.
Python can also be used to discover the Unicode number for a character:
repr(u"Text with wrong characters i need to figure out")
This shell command can find wrong characters in your SLS files:
find . -name '*.sls' -exec grep --color='auto' -P -n '[^\x00-\x7F]' \{} \;
Alternatively you can toggle the yaml_utf8 setting in your master configuration file. This is still an experimental setting but it should manage the right encoding conversion in salt after yaml states compilations.
If a definition only includes numbers and underscores, it is parsed by YAML as an integer and all underscores are stripped. To ensure the object becomes a string, it should be surrounded by quotes. More information here.
Here's an example:
>>> import yaml >>> yaml.safe_load('2013_05_10') 20130510 >>> yaml.safe_load('"2013_05_10"') '2013_05_10'
datetimeconversion¶
If there is a value in a YAML file formatted
2014-01-20 14:23:23 or
similar, YAML will automatically convert this to a Python
datetime object.
These objects are not msgpack serializable, and so may break core salt
functionality. If values such as these are needed in a salt YAML file
(specifically a configuration file), they should be formatted with surrounding
strings to force YAML to serialize them as strings:
>>> import yaml >>> yaml.safe_load('2014-01-20 14:23:23') datetime.datetime(2014, 1, 20, 14, 23, 23) >>> yaml.safe_load('"2014-01-20 14:23:23"') '2014-01-20 14:23:23'
Additionally, numbers formatted like
XXXX-XX-XX will also be converted (or
YAML will attempt to convert them, and error out if it doesn't think the date
is a real one). Thus, for example, if a minion were to have an ID of
4017-16-20 the minion would not start because YAML would complain that the
date was out of range. The workaround is the same, surround the offending
string with quotes:
>>> import yaml >>> yaml.safe_load('4017-16-20') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 93, in safe_load return load(stream, SafeLoader) File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 71, in load return loader.get_single_data() File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 39, in get_single_data return self.construct_document(node) File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 43, in construct_document data = self.construct_object(node) File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 88, in construct_object data = constructor(self, node) File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 312, in construct_yaml_timestamp return datetime.date(year, month, day) ValueError: month must be in 1..12 >>> yaml.safe_load('"4017-16-20"') '4017-16-20' | https://docs.saltproject.io/en/latest/topics/troubleshooting/yaml_idiosyncrasies.html | 2021-10-16T11:34:27 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.saltproject.io |
Package scanner
Overview ▹
Overview ▾
Package scanner implements a scanner for Go source text. It takes a []byte as source which can then be tokenized through repeated calls to the Scan method.
Index ▹
Index ▾
Examples
Package files
func (*Scanner) Init ¶ (*Scanner) Scan ¶" | https://docs.studygolang.com/pkg/go/scanner/ | 2021-10-16T12:20:16 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.studygolang.com |
OS/CPU/compiler dependent CPU specific calls. More...
#include "stdafx.h"
#include "core/bitmath_func.hpp"
#include "safeguards.h"
Go to the source code of this file.
Check whether the current CPU has the given flag.
Definition at line 133 of file cpu.cpp.
References HasBit(), and ottd_cpuid().
Definitions for CPU detection:
Get the CPUID information from the CPU.
MSVC offers cpu information while gcc only implements in gcc 4.8 __builtin_cpu_supports and friends
Other platforms/architectures don't have CPUID, so zero the info and then most (if not all) of the features are set as if they do not exist.
Definition at line 127 of file cpu.cpp.
Referenced by HasCPUIDFlag(), and ottd_rdtsc().
Get the tick counter from the CPU (high precision timing).
Definition at line 78 of file cpu.cpp.
References ottd_cpuid(). | http://docs.openttd.org/cpu_8cpp.html | 2019-05-19T08:52:09 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.openttd.org |
Problem »
Service Request Form Template
Download This Template Service Request Form In order for a user to officially request something, be it a simple password change or a more difficult software installation, they need to…Read more »
What is Incident Management?
Download ITIL Templates ITIL Incident Management Incident Management in ITIL is the key process in Service Operation. Most Service Providers are evaluated and assessed by the speed they respond and…Read more »
Major Problem Catalogue Template
Download This Template ITIL Problem Management – Catalogue Template Those who don’t learn from their mistakes are destined to repeat them: All organizations aim at learning from their mistakes, and…Read more »
Incident Report Template | Major Incident Management
Download Incident Report Templates Incident Report in ITIL Incident Management Incident reporting is one of most important phase in Major Incident Management. Incident report is an authentic and authorized information…Read more » | http://itil-docs.com/page/6/ | 2019-05-19T09:16:47 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['http://itil-docs.com/wp-content/uploads/2018/02/ITIL-Problem-Record-Template-1.png',
'problem management record ,ITIL Problem Record Template,'],
dtype=object)
array(['http://itil-docs.com/wp-content/uploads/2018/02/Major-Problem-Report-Template-2.png',
None], dtype=object)
array(['http://itil-docs.com/wp-content/uploads/2018/02/ITILService-Request-Template-Featured-Image-825x441.png',
'ITILService Request Template'], dtype=object)
array(['http://itil-docs.com/wp-content/uploads/2018/02/Incident-Management.jpg',
'Incident Management'], dtype=object)
array(['http://itil-docs.com/wp-content/uploads/2018/02/ITIL-Problem-Catalogue-Template-2.png',
None], dtype=object)
array(['http://itil-docs.com/wp-content/uploads/2018/01/ITIL-Incident-Management-Report-Template-Excel-825x510.png',
'ITIL Incident Management Report'], dtype=object) ] | itil-docs.com |
Health Check¶
This page describes how to configure health check probes for a Mattermost server.
Before you begin, you should have a running Mattermost server. If you don’t, you can install Mattermost on various distributions or deploy a Kubernetes cluster with Minikube. Note that highly available Mattermost cluster is available in Enterprise Edition E20.
You can perform a health check with two methods:
/ping APIv4 Endpoint¶
In Mattermost version 3.10 and later, you can use the GET /system/ping APIv4 endpoint to check for system health.
A sample request is included below. The endpoint checks if the server is up and healthy based on the configuration setting
GoRoutineHealthThreshold.
- If
GoRoutineHealthThresholdand the number of goroutines on the server exceeds that threshold, the server is considered unhealthy.
- If
GoRoutineHealthThresholdis not set or the number of goroutines is below the threshold the server is considered healthy.
This endpoint can also be provided to schedulers like Kubernetes.
import "github.com/mattermost/mattermost-server/model" Client := model.NewAPIv4Client("") Client.Login("[email protected]", "Password1") // GetPing status, err := Client.GetPing()
Mattermost Probe¶
The Mattermost Probe constantly pings a Mattermost server using a variety of probes.
These probes can be configured to verify core features, including sending and receiving messages, joining channels, pinging a login page, and searching of users and channels.
The project is contributed by the Mattermost open source community. Suggestions and contributions for the project are welcome. | https://docs.mattermost.com/administration/health-check.html | 2019-05-19T09:27:59 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.mattermost.com |
Client-side Data Storage FAQ¶
Mobile Web Experience¶
- 1. What data is stored?
- Similar to a desktop web browser, data may be stored in the mobile web browser cache which resides on the storage system of the device operating system which is protected by security measures in the physical device and its operating system.
- 2. How is the data protected?
- Security for mobile web experience is similar to the security for a desktop web experience.
- 3. When is the data deleted?
- If you log out or your account is deactivated, the data in the browser cache may reside until the cache expires or the temporary file system store on the operating system is cleared, depending on your operating system.
Mobile App Experience¶
To speed up initial loading time, Mattermost mobile apps cache data locally on the device for v1.1 and later. Below are common questions on cached data:
1. What data is stored locally with the new mobile apps on a mobile device?
The data that can be found on the device depends solely on whether or not the user is logged in to the Mattermost server, and is independent of the state of the device’s connection or the state of the app. While logged in, anything that the user is normally allowed to see is eligible for storage on the device, which includes the following content:
- messages
- files and images that are attached to messages
- avatars, usernames, and email addresses of people in the currently open channel
In addition, metadata that the app uses for keeping track of its operations is also cached. The metadata includes user IDs, channel IDs, team IDs, and message IDs.
Currently, cache cannot be reset remotely on connected mobile devices.
- 2. What about push notifications?
- Push notification storage is managed by the operating system on the device. Mattermost can be configured to send limited amounts of information that does not include the message text or channel name, and it can also be configured to not send push notifications at all.
- 3. Where is the data stored and how is that data protected?
- The data is stored in the app’s local storage. It is protected by the security measures that a device normally provides to the apps that are installed on it.
- 4. How long is the data stored?
- Data is stored until the user logs out, or until it is purged during normal cache management. Deactivating a user account forces a logout and subsequent purging of data from the device.
- 5. Are messages pre-loaded?
- No. Messages are sent to the device on demand. They are not pre-loaded in anticipation of users scrolling up or switching channels.
- 6. What happens to messages that are deleted on the server after a user has seen them?
- The messages are deleted from the client.
- 7. What data is stored on a mobile device after an account is deactivated in the following cases:
All the data listed in Questions 1 and 2, but within 60 seconds after an account is deactivated on the server, all app data is deleted from the cache.
- The mobile device is connected with app running.
All the data listed in Questions 1 and 2, but within 60 seconds after the device reconnects, all app data is deleted from the cache.
- The mobile device is disconnected with app running.
All the data listed in Questions 1 and 2, but within 60 seconds after the app is started, all app data is deleted from the cache.
- The mobile device is connected with the app not running.
All the data listed in Questions 1 and 2, but within 60 seconds after the device reconnects and the app is started, all app data is deleted from the cache.
- The mobile device is disconnected and app is not running.
- 8. What data might be on the device after a user account is deactivated and all data is deleted from the cache?
- If file attachments are enabled on the server, users can download files that are attached to messages and store them on their local file system. After they are downloaded, the files are outside the control of the app and can remain on the device indefinitely. | https://docs.mattermost.com/deployment/client-side-data.html | 2019-05-19T08:20:45 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.mattermost.com |
Perform simple data validation checks when editing a record in a form
You can use the BeforeUpdate event of a form or a control to perform validation checks on data entered into a form or control. If the data in the form or control fails the validation check, you can set the BeforeUpdate event's Cancel argument to True to cancel the update.
The following example prevents the user from saving changes to the current record if the Unit Cost field does not contain a value.
Private Sub Form_BeforeUpdate(Cancel As Integer) ' Check for a blank value in the Unit Cost field. If IsNull(Me![Unit Cost]) Then ' Alert the user. MsgBox "You must supply a Unit Cost." ' Cancel the update. Cancel = True End If End Sub
Support and feedback
Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback. | https://docs.microsoft.com/en-us/office/vba/access/Concepts/Forms/perform-simple-data-validation-checks-when-editing-a-record-in-a-form | 2019-05-19T08:26:29 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.microsoft.com |
Interface Configuration¶
To assign a new interface:
- Navigate to Interfaces > (assign)
- Pick the new interface from the Available network ports list
- Click
Add
The newly assign interface will be shown in the list. The new interface will have a default name allocated by the firewall such as OPT1 or OPT2, with the number increasing based on its assignment order. The first two interfaces default to the names WAN and LAN but they can be renamed. These OPTx names appear under the Interfaces menu, such as Interfaces > OPT1. Selecting the menu option for the interface will open the configuration page for that interface.
The following options are available for all interface types.
Description¶
The name of the interface. This will change the name of the interface on the Interfaces menu, on the tabs under Firewall > Rules, under Services > DHCP, and elsewhere throughout the GUI. Interface names may only contain letters, numbers and the only special character that is allowed is an underscore (“_”). Using a custom name makes it easier to remember the purpose of an interface and to identify an interface for adding firewall rules or choosing other per-interface functionality.
IPv4 Configuration Type¶
Configures the IPv4 settings for the interface. Details for this option are in the next section, IPv4 WAN Types.
IPv6 Configuration Type¶
Configures the IPv6 settings for the interface. Details for this option are in IPv6 WAN Types.
MAC address¶
The MAC address of an interface can be changed (“spoofed”) to mimic a previous piece of equipment.
Warning
We recommend avoiding this practice. The old MAC would generally be cleared out by resetting the equipment to which this firewall connects, or by clearing the ARP table, or waiting for the old ARP entries to expire. It is a long-term solution to a temporary problem.
Spoofing the MAC address of the previous firewall can allow for a smooth transition from an old router to a new router, so that ARP caches on devices and upstream routers are not a concern. It can also be used to fool a piece of equipment into believing that it’s talking to the same device that it was talking to before, as in cases where a certain network router is using static ARP or otherwise filters based on MAC address. This is common on cable modems, where they may require the MAC address to be registered if it changes.
One downside to spoofing the MAC address is that unless the old piece of equipment is permanently retired, there is a risk of later having a MAC address conflict on the network, which can lead to connectivity problems. ARP cache problems tend to be very temporary, resolving automatically within minutes or by power cycling other equipment.
If the old MAC address must be restored, this option must be emptied out and then the firewall must be rebooted. Alternately, enter the original MAC address of the network card and save/apply, then empty the value again.
MTU (Maximum Transmission Unit)¶
The Maximum Transmission Unit (MTU) size field can typically be left blank, but can be changed when required. Some situations may call for a lower MTU to ensure packets are sized appropriately for an Internet connection. In most cases, the default assumed values for the WAN connection type will work properly. It can be increased for those using jumbo frames on their network.
On a typical Ethernet style network, the default value is 1500, but the actual value can vary depending on the interface configuration.
MSS (Maximum Segment Size)¶
Similar to the MTU field, the MSS field “clamps” the Maximum Segment Size (MSS) of TCP connections to the specified size in order to work around issues with Path MTU Discovery.
Speed and Duplex¶
The default value for link speed and duplex is to let the firewall decide what is best. That option typically defaults to Autoselect, which negotiates the best possible speed and duplex settings with the peer, typically a switch.
The speed and duplex setting on an interface must match the device to which it is connected. For example, when the firewall is set to Autoselect, the switch must also be configured for Autoselect. If the switch or other device has a specific speed and duplex forced, it must be matched by the firewall.
Block Private Networks¶
When Block private networks is active pfSense inserts a rule
automatically that prevents any RFC 1918 networks (
10.0.0.0/8,
172.16.0.0/12,
192.168.0.0/16) and loopback (
127.0.0.0/8) from
communicating on that interface. This option is usually only desirable on WAN
type interfaces to prevent the possibility of privately numbered traffic coming
in over a public interface.
Block bogon networks¶
When Block bogon networks is active pfSense will block traffic from a list of unallocated and reserved networks. This list is periodically updated by the firewall automatically.
Now that the IPv4 space has all been assigned, this list is quite small, containing mostly networks that have been reserved in some way by IANA. These subnets should never be in active use on a network, especially one facing the Internet, so it’s a good practice to enable this option on WAN type interfaces. For IPv6, the list is quite large, containing sizable chunks of the possible IPv6 space that has yet to be allocated. On systems with low amounts of RAM, this list may be too large, or the default value of Firewall Maximum Table Entries may be too small. That value may be adjusted under System > Advanced on the Firewall & NAT tab. | https://docs.netgate.com/pfsense/en/latest/book/interfaces/interface-configuration.html | 2019-05-19T09:19:10 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.netgate.com |
A query band contains name and value pairs that use Teradata Database predefined names or custom names to specify metadata, such as user location or application version.
- From the ruleset toolbar, click Filters, Throttles, or Workloads.
- Select an item or create one.
- Click the Classification tab.
- Select a query band criteria or add one.
- Select a query band name from the list or add one.
- Select a Previously Used Value or enter a New Value. You must select a name and a value.
- Select items from the list and use the Include and Exclude buttons to create your classification criteria.
- Click OK. | https://docs.teradata.com/reader/kO54CPhW~G2NnrnBBnb6~A/AQtazl0Bji4hfGXc2oNJgQ | 2019-05-19T08:46:22 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.teradata.com |
Core Team Handbook¶
This handbook is a guide to serving on the core team of the Mattermost open source project, which includes making contributions both as a volunteer contributor from the community and as a paid staff member of Mattermost, Inc.
Please consider this handbook - along with all our documentation - as a work-in-progress and constantly updating to best achieve our shared mission.
Development Process¶
- Development Process Overview
- Deprecation Policy
- Mattermost Software Requirements
- Mattermost Style Guide
- User Experience Guidelines
- Localization
- Levels of Feature Quality, Development, and Support Eligibility
- Product Manager FAQ
Release Process¶
Community Process¶
- Mattermost Community
- Bugs, Feature Ideas, Troubleshooting
- Help Wanted Tickets
- Community Playbook
Core Developer Handbook¶
- Support Handbook
- Engineer Onboarding Timeline & Expectations
- Week 1: Focus on environment setup and introductions
- Week 2: Focus on digesting information dump
- Weeks 3-4: Focus on solidifying role in the team
- Weeks 5-8 (Month 2): Work on your first project as dev owner
- Weeks 9-11 (Month 3): Work on more and/or larger projects as dev owner
- Week 12: Informal performance evaluation
- Weeks 13-16 (Month 4): Act on your performance evaluation and focus on community
- Weeks 17-20 (Month 5): Become an authority
- Weeks 21+ (Month 6+): Continue to grow as an engineer, be a leader in the community, and be an integral part of the Mattermost engineering org.
Documentation Style Guide¶
- Documentation Guidelines
- General Guidelines
- Document Structure
- Grammar, Spelling, and Mechanics
- ReStructuredText Markup
Joining the Team¶
- About Mattermost
- Working at Mattermost
- Security Policies
- Security Release Process
- Onboarding
- People Ops
- Staff Developers
Marketing¶
- Case Studies
- Guest Blog Posts for Mattermost Apps and Services
- Mattermost Community Content Guidelines
- Mattermost Asset Guidelines
- Creating and Editing Mattermost User’s Guide
- How to Update Integrations Directory
Partners¶
Credits: Our culture and process draws from the fantastic work of Wordpress, GitLab, Pixar and Intel in different ways. | https://docs.mattermost.com/guides/core.html | 2019-05-19T09:23:45 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.mattermost.com |
AWS Elastic Beanstalk Docker Setup¶
The following instructions use Docker to install Mattermost in Preview Mode for exploring product functionality. This configuration should not be used in production.
- From your AWS console select Elastic Beanstalk under the Compute section.
- Select Create New Application from the top right.
- Name your Elastic Beanstalk application and click Next,
- Select Create web server on the New Environment page.
- If asked, select Create an IAM role and instance profile, then click Next.
- On the Environment Type page,
- Set Predefined Configuration to Multi-Container Docker under the generic heading in the drop-down list.
- Set Environment Type to Single instance in the drop-down list.
- Click Next.
- For Application Source, select Upload your own and upload the Dockerrun.aws.json file from (select version you’d like to use), then click Next.
- Type an Environment Name and URL. Make sure the URL is available by clicking Check availability, then click Next.
- The options on the Additional Resources page may be left at default unless you wish to change them. Click Next.
- On the Configuration Details page,
- Select an Instance Type of t2.small or larger.
- The remaining options may be left at their default values unless you wish to change them. Click Next.
- Environment tags may be left blank. Click Next.
- You will be asked to review your information, then click Launch.
- It may take a few minutes for beanstalk to launch your environment. If the launch is successful, you will see a see a large green checkmark and the Health status should change to “Green”.
- Test your environment by clicking the domain link next to your application name at the top of the dashboard. Alternatively, enter the domain into your browser in the form
http://<your-ebs-application-url>.elasticbeanstalk.com. You can also map your own domain if you wish. If everything is working correctly, the domain should navigate you to the Mattermost signup page. Enjoy exploring Mattermost!
Configuration Settings¶
See Configuration Settings documentation to customize your deployment.
(Recommended) Enable Email¶
The default. | https://docs.mattermost.com/install/docker-ebs.html | 2019-05-19T08:47:08 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.mattermost.com |
New Relic Browser's Overview page summarizes the real-user browser performance of your app. Use the Overview page to:
- View trends in an app's browser-side performance
- Quickly troubleshoot page load timing issues
- Navigate to other Browser UI pages
View the Overview page
To view a summary of browser performance for an app:
- Go to rpm.newrelic.com/browser and select an app from the New Relic Browser index.
- From the app's Overview page, use standard New Relic page functions to drill down into detailed information.
The Overview page includes:
- Load time chart
- Apdex score chart
- Throughput chart
- For Browser Pro: Charts for JavaScript errors, AJAX response time, and browser session traces
Load time chart
The Load time chart is the main chart on the Overview page. It shows a breakdown of users' browser load time, with average values for the selected time period displayed in the upper right corner of the chart. This chart appears with more detail about the page load timing process on the Page views page.
- Load time chart segments
The Load time chart includes into these segments:
- Single page app (SPA) monitoring load time
If you enabled SPA monitoring, the Load time chart will display SPA data by default. The SPA chart breaks down load time into these segments:
- Initial page load: A traditional URL change stemming from a load or reload of a URL.
- Route change: A view change or update that doesn't require a URL reload.
- Custom: A custom event created using the Browser API.
Page load timing for SPA monitoring is handled differently than standard page load timing. To switch back to the standard page load timing chart: Select the chart title's dropdown, then select Page view load time.
- Load time chart functions
Load time charts also have the following functions:
Apdex chart
The Apdex chart displays the end-user Apdex for the selected time range and the average values for that period. For Browser apps also monitored by New Relic APM, the chart also includes app server Apdex data.
The Apdex chart does not use SPA data: it only uses standard page view timing.
Other functions of this chart include:
Throughput chart
The Throughput chart displays browser throughput as pages per minute (ppm). The value in the upper right of the chart is the average value for the selected time range.
If you have enabled SPA monitoring enabled and the Overview page shows the SPA load time chart, the Throughput chart will also use SPA data.
App server requests per minute (rpm) may show a different measurement than the browser page load timing's pages per minute (ppm).
Session traces, JavaScript errors, and AJAX
Access to these features depends on your subscription level.
Depending on your subscription level, the following tables and charts may also be on your Overview page:
- Recent session traces: Summarizes the Session traces page.
- JavaScript errors: Summarizes the JS errors page.
- AJAX response time: Summarizes the AJAX page. | https://docs.newrelic.co.jp/docs/browser/new-relic-browser/getting-started/browser-overview-page-website-performance-summary | 2019-05-19T09:07:09 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['https://docs.newrelic.co.jp/sites/default/files/thumbnails/image/browser-overview-page_0.png',
'screen-browser-overview.png Browser Overview page'], dtype=object) ] | docs.newrelic.co.jp |
Linux installation
This guide will detail how to install and setup WildBeast on Linux.
Prerequisites¶
- Linux system
- OS: Docker officially supports these distributions, but others may be used as well
- Root privileges (Sudo)
- Git
- Node.js version 8 or above
- A text editor. For command-line you may use Nano, Vim etc. while standalone editors like Visual Studio Code, Atom and Brackets are fine for systems with a desktop environment installed.
Installation¶
You will need to install Docker and Docker Compose to use WildBeast. Find the guide for your distribution here (Docker) and here (Compose). For other distributions, you may use your own resources.
Complete the appropriate installation procedure and verify Docker is functional before proceeding.
Setup¶
With that done, clone the WildBeast GitHub)
If you're running a custom ArangoDB instance and wish to use it, you can also edit ARANGO_USERNAME, ARANGO_PASSWORD, ARANGO_DATABASE and ARANGO_URI now. The same goes for the LAVA_NODES variable in case you're running a custom Lavalink instance.
When done, save the file as .env. Then run
sudo docker-compose up --no-start in the WildBeast directory. When the container creation is done, run
sudo docker ps -a and make sure that you have an output that resembles the following.
Warning
It is paramount you save the file as .env. Do not leave it as .env.example, name it .env.txt or anything similar. Docker will not recognise it in this case.
Known docker-compose issues
On certain systems or setups Docker may refuse to run commands properly without sudo and will throw cryptic errors as a result. Try running the command with sudo before consulting help and also check your system process control to see if Docker is running.
Initialising¶
To initialise WildBeast, run the following commands. Leave a second or two between each to account for startup times.
Note
If you configured a custom ArangoDB instance previously, omit the first command.
sudo docker start wildbeast_arango_1 sudo docker start wildbeast_install_1 sudo docker logs wildbeast_install_1
If your output resembles the following, you're good to go.
After this, you will no longer need to run wildbeast_install_1 unless you wish to repair the database - it's only around for database initialisation.
Configuration¶
Now it's time to do some additional configuration. The minimum defaults have been defined already through docker-compose.yml, but the bot will only have fairly limited functionality if left at this state. Open .env again.
Here is a list of environment variables we recommend you define or at least consider defining. Check the footnotes for brief instructions on how to get the API keys below.
Tip
There are more environment variables that can be defined as well. You can find the full reference in .env.example.
However, editing variables in the Internal configuration section is not recommended lest you know what you're doing. These variables exist for development and/or internal purposes and can have unintended side effects if tampered without a proper understanding of the software.
When you're done, save the file and close the editor.
Running the bot¶
Note
If you're running custom non-Docker instances for ArangoDB and Lavalink, and have configured WildBeast to use them, omit starting the first two containers.
Your WildBeast instance should now be good to go. Run the following commands in your terminal, waiting a second or two between each:
sudo docker start wildbeast_arango_1 # If you didn't start it or stopped it sudo docker start wildbeast_lavalink_1 sudo docker start wildbeast_wildbeast_1 sudo docker logs wildbeast_wildbeast_1
If your output resembles the following, your bot is all set.
Connect ECONNREFUSED <IP>:80
An error message saying FATAL: Error: connect ECONNREFUSED <your IP>:80 may happen when the wildbeast_wildbeast_1 container is started too quickly and the Lavalink server is not ready. Wait a few seconds, then run
sudo docker restart wildbeast_wildbeast_1 and check the logs again.
Custom commands
If you wish to use the custom command aspect of WildBeast, or intend to make tweaks to the source code later down the line, follow the Decoupling from Docker guide now. Due to how Docker works, data created in the ArangoDB container created here is non-transferable.
You can test the bot by running the ping command (With your prefix) in a text channel that the bot can see. If it answers "Pong!", then your bot is set up.
Making changes¶
If you feel up for some tinkering, you're free to make modifications to the source code. When you have made your changes and want to deploy them, simply restart the wildbeast_wildbeast_1 container with the command
sudo docker restart wildbeast_wildbeast_1. The changes you made will then be reflected in the public facing bot.
Note: You make changes to the source code at your own risk and responsibility. Support will not be provided for issues that stem from modifying the source code improperly. In other words, issues that are not our responsibility cannot be remedied by us either.
Closing words¶
If you have further questions or need help with something, we'd be happy to help. You can find a link to the official server below.
Enjoy your bot and have fun!
Go to, create an account and input your username and password here. ↩↩
Go to, register an application and input the client ID you get from that here. ↩
Go to, register an application and input the client ID (Not secret!) you get from that here. ↩
Set to 1 to enable this behaviour, or to 0 to disable it. ↩↩↩ | https://docs.thesharks.xyz/install_linux/ | 2019-05-19T09:15:13 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['../img/compose-containers.png', 'Container list'], dtype=object)
array(['../img/linux-init.png', 'Init'], dtype=object)
array(['../img/linux-expected-output.png', 'Expected ouput'], dtype=object)] | docs.thesharks.xyz |
Philosophy - what is dtool?¶
What problem is dtool solving?¶
Managing data as a collection of individual files is hard. Analysing that data will require that certain sets of files are present, understanding it requires suitable metadata, and copying or moving it while keeping its integrity is difficult.
dtool solves this problem by packaging a collection of files and accompanying metadata into a self contained and unified whole: a dataset.
Having metadata separate from the data, for example in an Excel spread sheet with links to the data files, it becomes difficult to reorganise the data without fear of breaking links between the data and the metadata. By encapsulating both the data files and associated metadata in a dataset one is free to move the dataset around at will. The high level organisation of datasets can therefore evolve over time as data management processes change.
dtool also solves an issue of trust. By including file hashes as metadata it is possible to verify the integrity of a dataset after it has been moved to a new location or when coming back to a dataset after a period of time.
It is possible to discover and access both metadata and data files in a dataset. It is therefore easy to create scripts and pipelines to process the items, or a subset of items, in a dataset.
What is a “dtool dataset”?¶
Briefly, a dtool dataset consists of:
- The files added to the dataset, known as the dataset “items”
- Metadata used to describe the dataset as a whole
- Metadata describing the items in the dataset
The exact details of how this data and metadata is stored depends on the
“backend” (the type of storage used). In other words a dataset is stored
differently on local file system disk to how it is stored in Amazon S3 object
store. However, the
dtool commands and the Python API for interacting with
datasets are the same for all backends.
What does a dtool dataset look like on local disk?¶
Below is the structure of a fictional dataset containing three items from an RNA sequencing experiment.
$ tree ~/my_dataset /Users/olssont/my_dataset ├── README.yml └── data ├── rna_seq_reads_1.fq.gz ├── rna_seq_reads_2.fq.gz └── rna_seq_reads_3.fq.gz
The
README.yml file is where metadata used to describe the whole dataset is
stored. The items of the dataset are stored in the directory named
data.
There is also hidden metadata, stored as plain text files, in a directory named
.dtool. This should not be edited directly by the user.
How does one create a dtool dataset?¶
This happens in stages:
- One creates a so called “proto dataset”
- One adds data and metadata to this proto dataset
- One converts the proto dataset into a dataset by “freezing” it
Once a proto dataset is “frozen” it is simply referred to as a dataset and it is no longer possible to modify the data in it. In other words it is not possible to add or remove items from a dataset or to alter any of the items in a dataset.
The process can be likened to creating an open box (the proto dataset), putting items (data) into it, sticking a label (metadata) on it, and closing the box (freezing the dataset).
Give me more details!¶
An in depth discussion of dtool can be found in the paper Lightweight data management with dtool. | https://dtool.readthedocs.io/en/latest/philosophy.html | 2019-05-19T09:35:28 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['_images/dataset_structure.png', '_images/dataset_structure.png'],
dtype=object)
array(['_images/package_data_and_metadata_into_beautiful_box.png',
'_images/package_data_and_metadata_into_beautiful_box.png'],
dtype=object) ] | dtool.readthedocs.io |
An Act to amend 16.307 (2), 16.307 (2m) and 20.505 (7) (h); and to create 16.307 (1m) and 20.505 (7) (fn) of the statutes; Relating to: housing navigator grants and making an appropriation. (FE)
Bill Text (PDF: )
Fiscal Estimates and Reports
AB121 ROCP for Committee on Housing and Real Estate (PDF: )
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2019 Senate Bill 120 - S - Utilities and Housing | http://docs.legis.wisconsin.gov/2019/proposals/ab121 | 2019-05-19T08:59:59 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.legis.wisconsin.gov |
> (assign) on the PPPs tab.
Multi-Link PPP (MLPPP)¶
Editing a PPP instance also allows Multi-Link PPP (MLPPP) to be configured for supported providers. > (assign) on the PPPs tab
- Click
to edit an existing entry or
to add a new entry
- Set the Link Type, which changes the remaining options on the page. The link types are explained throughout the remainder of this section.. Upon selecting the PPP link type, the Link Interface(s) list is populated with serial devices that can be used to communicate with a modem. Click on a specific entry to select it for use. After selecting the interface, optionally enter a Description for the PPP entry..
When configuring a 3G/4G network, the Service Provider options pre-fill other relevant fields on the page.
- Select a Country, such as United States, to activate the Provider drop-down with known cellular providers in that country
- Select a Provider from the list, such as T-Mobile, to activate the Plan drop-down.
- Select a Plan and the remaining fields will be filled with known values for that Provider and Plan
The Service Provider options can be configured manually if other values are needed, or when using a provider that is not listed:
PPPoE (Point-to-Point Protocol over Ethernet)¶
PPPoE is a popular method of authenticating and gaining access to an ISP network, most commonly found on DSL networks.
To configure a PPPoE link, start by setting Link Type to PPPoE and complete the remainder of the settings as follows:. | https://docs.netgate.com/pfsense/en/latest/book/interfaces/interfacetypes-ppps.html | 2019-05-19T08:26:59 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.netgate.com |
Commensurate supercells for rotated graphene layers¶
In the Builder, Click. Locate “graphite” and add it to the Stash by clicking the
icon in the lower right-hand corner.
In the Stash, select the newly added item and click “Copy” to make an identical copy of the graphite structure.
Select the copied structure and delete the uppermost layer of carbon atoms highlighted in the picture below.
Click on. Drag the 2-layer structure and drop it in the first slot, and the 1-layer structure in the second, as shown in the figure below.
Click the button “Select Surface Cells...”, and in the window that opens up click the button “Set Matching Parameters”.
In the Set Matching Parameters widget that opens up, set the range of rotation angles to be searched from 20° to 24°, in increment of 1°, and click “OK”.
The structure with the lowest number of atoms is normally selected by default in the 2D plot on the left-hand bottom side of the Select Surface Cells widget (the red spot in the figure below). In this case this is also the structure with the lowest strain (in fact, no strain at all, i.e. it’s a perfectly commensurate match). This is indeed the geometry you want, with the two lattices rotated by 21.79° one with respect to each other. Click “Apply” to select this cell.
In the 3D window in the Builder you will now see a preview of the structure. You can rotate it, and if you would prefer to change some parameters you can still do it (for instance, you can add more layers of graphite to the “substrate” by clicking the plus button on the left under the first slot). Otherwise press “Create” to finalize the geometry.
Additional rotated structures¶
For more ideas on commensurate structures, see e.g. Ref. [2]. All the structures reported in the paper can easily be built using the procedure shown above. However, as noted in the article, the number of atoms becomes very large in many cases. The simple cases shown in Figure 7 in Ref. [2] are quite manageable - note that in these structures there are only 2 layers rotated with respect to each other, not 3 as above. Both A-A and A-B stacking sequences can be used, so the easiest way is actually still to build the structure as above, and then remove either the middle or bottom layers, and adjust the layer separation.
Note that in these systems it will be necessary to increase the parameters nmax and mmax too in the Set Matching Parameters widget. For instance, to find the 5.1° supercell shown in Figure 7a in Ref. [2], you need to set them to 14. Notice that in this case the default suggestion in the Select Surface Cells widget may not be the structure with zero strain, if there are strained structures with a smaller number of atoms. So make sure to always select a structure with no strain (there may be a few, naturally take the one with the smallest number of atoms), by clicking the blue dots in the plot (the active choice is indicated by the red dot as shown above).
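For reference, the commensurate rotation angles used in these structures follow the standard relation for twisted bilayer graphene, with i = 1, 2, 3, ... an integer (quoted here for convenience; it is not part of the original tutorial):

\[ \cos\theta_i \;=\; \frac{3i^2 + 3i + \tfrac{1}{2}}{3i^2 + 3i + 1} \]

For i = 1 this gives θ ≈ 21.79°, the angle used above, while larger values of i produce progressively smaller commensurate angles such as the ≈ 5.1° cell of Figure 7a in Ref. [2] (i = 6).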
Here below you can find a few examples of the rotated bilayer graphene structures that can be created using QuantumATK and the procedure explained above. | https://docs.quantumwise.com/tutorials/rotated_graphene_layers/rotated_graphene_layers.html | 2019-05-19T09:23:29 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.quantumwise.com |
This topic describes how to create a Provisioning policy for Microsoft Dynamics AX (DAX) users and is divided into the following activities:
Before you can create a Provisioning policy for DAX users, the following prerequisites need to be met:
This directs you to the View page for the account store. View pages allow you to view information about the account store and also provide an Edit link for editing the settings.
In the General section of the form, do the following:
After completing the above, the General section of the form should look similar to the following image.
In our example, we have selected Approve All Provisions and Approve All Deprovisions, meaning that the provisioning and deprovisioning of all DAX user accounts must be approved before those accounts will be processed by RET Inbox.
The Advanced and Creation Path Resolver sections of the form should look like the following image.
This opens the View page for the policy. The remaining configuration needed before EmpowerID will provision the DAX accounts is demonstrated in the next section.
Calling Functions in LINQ to Entities Queries
The topics in this section describe how to call functions in LINQ to Entities queries.
The EntityFunctions and SqlFunctions classes provide access to canonical and database functions as part of the Entity Framework. For more information, see How to: Call Canonical Functions and How to: Call Database Functions.
The process for calling a custom function requires three basic steps:
Define a function in your conceptual model or declare a function in your storage model.
Add a method to your application and map it to the function in the model with an EdmFunctionAttribute.
Call the function in a LINQ to Entities query.
For more information, see the topics in this section.
In This Section
How to: Call Canonical Functions
How to: Call Database Functions
How to: Call Custom Database Functions
How to: Call Model-Defined Functions in Queries
How to: Call Model-Defined Functions as Object Methods
See Also
Concepts
Queries in LINQ to Entities
Canonical Functions
Other Resources
.edmx File Overview
How to: Define Custom Functions in the Conceptual Model | https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/dd456828(v=vs.100) | 2019-05-19T09:09:40 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.microsoft.com |
addMany
xPDOObject::addMany()
Adds an object or collection of objects related to this class.
Syntax
API Docs:
boolean addMany ( mixed &$obj, [string $alias = ''] )
Example
Remember that this operation is intended to be called only for objects whose relationships are defined as cardinality="many".
See Also
Contact ActiveState.
See the ActiveState website for general contact information.
Technical support
For installation-related support issues, check the ActivePerl Support forum and the ActivePerl Mailing List archives. If you do not find the answer to your problem, contact [email protected] for installation assistance.
Please submit bug reports and enhancement requests via the ActivePerl Bug Database.
Commercial Support
ActivePerl Enterprise is a corporate support solution for Perl. ActivePerl Enterprise provides the comprehensive support that an enterprise needs to deploy Perl in mission-critical applications. | http://docs.activestate.com/activeperl/5.26/contact/ | 2019-05-19T08:53:44 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.activestate.com |
An Act Relating to: housing grants to homeless individuals and families and making an appropriation. (FE)
Amendment Histories
Bill Text (PDF)
Fiscal Estimates and Reports
AB123 ROCP for Committee on Housing and Real Estate (PDF)
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2019 Senate Bill 119 - S - Utilities and Housing | http://docs.legis.wisconsin.gov/2019/proposals/ab123 | 2019-05-19T08:17:28 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.legis.wisconsin.gov |
MotechAccessVoter
public class MotechAccessVoter implements AccessDecisionVoter<Object>
A custom AccessDecisionVoter for voting on whether a specific user has access to a particular URL. For example, a security rule can specify that the users motech and admin have access to a particular URL. This loads the metadata source with attributes for ACCESS_motech and ACCESS_admin. When a user invokes that URL, an affirmative based voting system will check whether or not the user is motech or admin. If not, they are denied permission, otherwise they are granted access.
Methods
supports
public boolean supports(ConfigAttribute attribute)
vote
public int vote(Authentication authentication, Object object, Collection<ConfigAttribute> attributes)
Checks whether the given user has access to the given URL. If the authentication details are not an instance of MotechUserProfile or the ConfigAttributes are empty, it returns ACCESS_ABSTAIN. If the attribute is supported but the user is not allowed, it returns ACCESS_DENIED; otherwise it returns ACCESS_GRANTED.
Server Load Balancing
Two types of load balancing functionality are available in pfSense: Gateway and Server.
Server load balancing allows traffic to be distributed between multiple internal servers. It is most commonly used with web servers and SMTP servers though it can be used for any TCP service or for DNS.
While pfSense has replaced high end, high cost commercial load balancers including BigIP, Cisco LocalDirector, and more in serious production environments, pfSense is not nearly as powerful and flexible as enterprise-grade commercial load balancing solutions. It is not suitable for deployments that require extremely flexible monitoring and balancing configurations. For large or complex deployments, a more powerful solution is usually called for. However, the functionality available in pfSense suits countless sites very well for basic needs.
Full-featured load balancer packages are available for pfSense, such
as HAProxy and Varnish, but the built-in load balancer based on
relayd from OpenBSD does a great job for many deployments. Monitors in
relayd can check proper HTTP response codes, check specific URLs, do an ICMP
or TCP port check, even send a specific string and expect a specific response.
TCP services in the pfSense Load Balancer are handled in a redirect manner,
meaning they work like intelligent port forwards and not like a proxy. The
source address of the client is preserved when the connection is passed to
internal servers, and firewall rules must allow traffic to the actual internal
address of pool servers. When
relayd is configured to handle DNS, however,
it works like a proxy, accepting connections and creating new connections to
internal servers.
Servers in Load Balancing pools are always utilized in a round-robin manner. For more advanced balancing techniques such as source hashing, try a reverse proxy package such as HAProxy instead.
See also
For additional information, you may access the Hangouts Archive to view the January 2015 Hangout on Server Load Balancing and Failover. | http://docs.netgate.com/pfsense/en/latest/book/loadbalancing/index.html | 2019-05-19T08:36:20 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.netgate.com |
Jamf Self Service for iOS can be installed on devices with iOS 7 or later. The latest version of the Self Service app available in the App Store requires devices with iOS 10 or later. For more information on the Self Service levels of compatibility, see Installing Jamf Self Service on Mobile Devices.
Note: For manual installations, devices with iOS 11 or later must use Self Service 9.
Jamf Self Service for iOS Settings
The Jamf Self Service for iOS settings are configured in Self Service as follows:
Click iOS.
Click Edit.
Select “Self Service app” from the Install Automatically pop-up menu and configure the settings on the pane.
Note: The Self Service app requires mobile devices with iOS 7 or later. For earlier iOS versions, the web clip is installed by default.
Jamf Self Service for iOS
Find out how to manually distribute Jamf Self Service for iOS to mobile devices. | https://docs.jamf.com/10.10.0/jamf-pro/administrator-guide/Jamf_Self_Service_for_iOS.html | 2019-05-19T09:08:37 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['images/download/attachments/20219565/Self_Service_10.5_Black_iPad_Landscape_and_iPhone_X.png',
'images/download/attachments/20219565/Self_Service_10.5_Black_iPad_Landscape_and_iPhone_X.png'],
dtype=object) ] | docs.jamf.com |
Creating dashboards
Dashboard roles: Learn about the roles required to create, edit, and view dashboards.
Create a dashboard: Create a dashboard to show the most relevant indicators for specific users or groups.
Control access to a dashboard: You can control which users, groups, or user roles can access a dashboard.
Enable real-time updating for single score report widgets: Enable single score report widgets on a dashboard to update in real time. Real-time updates ensure that users viewing the dashboard always see the most up-to-date information.
Group dashboards: Organize dashboards into groups so users can easily find them. Dashboard groups determine how dashboards appear in the dashboard picker when you navigate to Dashboards. You can also add permissions to dashboard groups.
Delete a dashboard: Delete unused dashboards to keep your system free of unneeded objects. Deleted dashboards cannot be restored.
Breakdown dashboards: A breakdown dashboard is a dashboard that has had a breakdown added to it. Users can select a breakdown element to filter data in Performance Analytics widgets that have been added to the dashboard.
Interactive Filters: Interactive Filters allow you to filter report widgets directly from a homepage or Performance Analytics dashboard without modifying the reports.
control-direction
vpn interface dot1x control-direction—Configure how the 802.1x interface sends packets to and receives packets from unauthorized clients (on vEdge routers only).
vManage Feature Template
For vEdge routers only:
Configuration ► Templates ► VPN Interface Ethernet
Command Hierarchy
vpn vpn-id interface interface-name dot1x control-direction (in-and-out | in-only)
Options
- Send and Receive Packets
- in-and-out
Set the 802.1x interface to send packets to and receive packets from unauthorized clients. Bidirectionality is the default behavior.
- Send Packets Only
- in-only
Set the 802.1x interface to send packets to unauthorized clients, but not to receive them.
Operational Commands
clear dot1x client
show dot1x clients
show dot1x interfaces
show dot1x radius
show system statistics
Example
Configure an 802.1x interface to send packets to but not receive packets from unauthorized clients:
vEdge# show running-config vpn 0 interface ge0/7 vpn 0 interface ge0/7 dot1x control-direction in-only
Release Information
Command introduced in Viptela Software Release 16.3.
Additional Information
See the Configuring IEEE 802.1x and 802.11i Authentication article for your software release. | https://sdwan-docs.cisco.com/Product_Documentation/Command_Reference/Configuration_Commands/control-direction | 2019-05-19T09:05:17 | CC-MAIN-2019-22 | 1558232254731.5 | [] | sdwan-docs.cisco.com |
Ceph Deployment
With ceph-deploy, you can develop scripts to install Ceph packages on remote hosts, create a cluster, add monitors, gather (or forget) keys, add OSDs and
metadata servers, configure admin hosts, and tear down the clusters. | http://docs.ceph.com/docs/master/rados/deployment/ | 2019-05-19T09:41:37 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.ceph.com |
Amazon supplies a prebuilt Helion Stackato Community AMI on its EC2 platform.
Important
The HPE Helion Stackato image on the Amazon EC2 platform is provided on the basis of the Bring Your Own License model: it is subject to the Software License Terms and requires a software license key.
A cluster with multiple nodes must have a persistent internal IP address for the core Helion Stackato node. All the nodes in a cluster must be able to communicate with this internal MBUS IP address which must not change when the node reboots.
On the Amazon Web Services page, in the Networking section, click VPC.
Note
Ensure that the region that appears in the upper right-hand corner of the page is the same as the one where you plan to deploy your Helion Stackato instance.
On the VPC Dashboard, click Your VPCs.
On the right panel, click Create VPC.
On the Create VPC dialog box, enter a Name tag and a VPC CIDR block (in CIDR notation), and then click Yes, Create.
Your VPC, route table, and default security group are created.
Enter 0.0.0.0/0 into the Destination field, select the name of the gateway you have created earlier from the Target field, and click Save.
It is a good practice to set up the smallest possible profile for the public gateway of a cluster while allowing the functional components inside the cluster to communicate freely on various required ports. You can add this functionality by creating two partially-overlapping security groups.
Note
For more information on how Helion Stackato uses ports, see the Helion Stackato port requirements.
When you create a new VPC, the default security group is created. This internal security group allows traffic to pass between all of its members.
Enter 0.0.0.0/0 into the Source field.
Windows DEA nodes have the additional requirements of TCP and UDP access on port 3389 for initial configuration.
Important
After you configure your WinDEA node, make sure that you remove it from this security group. For more information, see the WinDEA documentation.
Enter 3389 into the Port Range field, and enter 0.0.0.0/0 into the Source field.
Enter Helion Stackato into the Filter field, press Enter, and click the HPE Helion Stackato row.
On the Step 2: Choose an Instance Type page, select the virtual machine size. The following minimum instance types are recommended.
Click Next: Configure Instance Details.
On the Step 4: Add Storage page, enter 30 for the Size (GiB) field of the Root partition.
From the Volume Type drop-down list, select General Purpose (SSD).
Click Next: Tag Instance.
Tip
You can create a more robust instance by moving the Helion Stackato droplets and data services to an EC2 EBS (Elastic Block Store) volume.
An elastic IP address allocates a static external address to your cluster and exposes this address on the border router where the cluster is hosted. The router associates this address with a corresponding dynamic address local to your cluster. This address is leased over DHCP, together with the address of the local DNS server which keeps track of private addresses allocated by the DHCP server. Thus, each node in your cluster is aware of the private address of the core node, while outside traffic is aware of the public address.
You can now ssh to the Elastic IP address of your instance using the stackato username and password.
To be able to access the web interface and applications that will be hosted on Helion Stackato, you must set the hostname on your public-facing node to a corresponding wildcard DNS record. You can use the xip.io or nip.io service to obtain wildcard DNS resolution for your Elastic IP address.
ssh to your instance. For example:
$ ssh [email protected]
You will receive the following warning:
WARNING: Your password is set to the default. Please update it.
Rename the hostname. For example:
$ kato node rename 203.0.113.0.xip.io
At the end of the process, the address of the API endpoint is displayed. For example:
Stackato Micro Cloud:
- endpoint:    api.203.0.113.0.xip.io
- mbusip:      127.0.0.1
- micro cloud: true
- eth0 IP:     198.0.2.0
You can now connect to the web console of your instance by entering the API endpoint into your browser.
Enter the address of the web console of your instance into a web browser. For example:
api.203.0.113.0.xip.io
When you first connect to the web console, you will receive a warning about an untrusted connection. Add an exception for the provided certificate and proceed.
Important
For production systems, add a signed certificate and a real DNS record to your domain. You can publish the public-facing address of your domain name either using DNS or dynamic DNS. For example, a static DNS zone file for stackato-test on example.com would have the following entries (note the . that terminates the A record):
stackato-test    IN A      <Elastic-IP>.
*.stackato-test  IN CNAME  stackato-test
For more information on DNS configuration, see DNS.
On the Set Up First Admin User page, enter the Username, Email Address, and Password for the first administrator, the first organization Name and Space Name.
Tip
The password you specify for this account will also become the password for the stackato system user, removing the warning displayed after connecting to the instance using ssh.
Review the Stackato Terms of use, click Yes, I agree, and click Set Up First Admin User.
You can add Helion Stackato instances to an existing VPC in a process similar to creating your core instance. For more information on configuring multi-node clusters, see Cluster Setup.
ssh to your core instance. For example:
$ ssh [email protected]
Set up the core node:
$ kato node setup core
Press y when prompted for an endpoint or enter a name for the endpoint.
Enter your password when prompted.
Helion Stackato disables all the roles that will be delegated to other nodes and configures itself to listen on the node's internal MBUS IP address. At the end of the process, the internal MBUS IP address and the assigned and available roles are displayed. For example:
Stackato Cluster:
- endpoint:    api.203.0.113.0.xip.io
- mbusip:      198.0.2.24
- micro cloud: false
Stackato Node [198.0.2.0]
- assigned roles : base,controller,primary,router
- available roles: base,mdns,primary,controller,router,dea,postgresql,mysql,rabbit,rabbit3,mongodb,redis,filesystem,harbor,memcached,load_balancer
Tip
Note the internal MBUS IP address. You will need it to configure your non-core nodes.
On the Amazon Web Services page, in the Compute section, click EC2.
On the EC2 Dashboard, in the Instances section, click Instances.
On the right panel, click the name of your instance and note its Private IPs listed on its Description tab at the bottom.
ssh to your core instance. For example:
$ ssh [email protected]
ssh to your non-core instance from the core instance. For example:
$ ssh [email protected]
Important
There is no other way to access the non-core instances. When you ssh into non-core instances, use the stackato username and password. You can later simplify setup and maintenance operations by configuring passwordless SSH authentication between the core and non-core nodes.
Create the required number of DEAs from the non-core node using the internal MBUS IP of the core node. For example:
$ kato node attach -e dea 198.0.2.0
Note
The -e option enables the specified role on the node and disables all other roles. While kato node attach commands run on various cluster nodes, the web console may display Node Degraded! error messages. However, after the commands finish, you can view the operational cluster nodes by navigating to the Helion Stackato web console and clicking Admin > Cluster, or by running the kato node list and kato status commands after you ssh into your core node.
Enter your password for the non-core node and core node when prompted.
data-services is a meta-tag that enables support for MySQL, PostgreSQL, MongoDB, RabbitMQ, Memcached, and the Filesystem service.
Create a data service node from the non-core node using the internal MBUS IP of the core node. For example:
$ kato node attach -e data-services 198.0.2.1
Enter your password for the non-core node and core node when prompted.
To configure name resolution using a hosts file instead of DNS, see Modifying /etc/hosts.
To spread web traffic between two or more Helion Stackato Router nodes, you can set up Helion Stackato clusters behind an EC2 Elastic Load Balancer.
The load balancer must be part of a security group that allows HTTP and HTTPS access. If there are no other gateways into the Helion
Stackato cluster, you must also allow access to an arbitrary external port between 1024 and 4999 that can be forwarded internally to
port 22 for administrative
ssh access.
Tip
When exposing ssh access, setting up passwordless SSH authentication is recommended.
For instructions on setting up certificates on router nodes, see Replacing the Default SSL Certificate and CA Certificate Chaining.
Helion Stackato stores its services data in the root filesystem. However, the following issues exist for a new EC2 instance:
- the default set of disks is limited
- the root volumes are limited in size
- the disk mounted on /mnt is ephemeral
- the size of the ephemeral disk varies by instance type
It is a good practice to check EC2 instances for disk use. In the following example, running the df -h command on a new, medium Helion Stackato instance shows that the root filesystem is already almost half-full:
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             7.9G  3.3G  4.3G  43% /
none                  3.7G  112K  3.7G   1% /dev
none                  3.8G     0  3.8G   0% /dev/shm
none                  3.8G   80K  3.8G   1% /var/run
none                  3.8G     0  3.8G   0% /var/lock
none                  3.8G     0  3.8G   0% /lib/init/rw
/dev/sdb              414G  199M  393G   1% /mnt
You can create a more robust instance by moving the Helion Stackato droplets and data services to an EC2 EBS (Elastic Block Store) volume.
On the Amazon Web Services page, in the Compute section, click EC2.
On the EC2 Dashboard, in the Elastic Block Store section, click Volumes.
On the right panel, click Create Volume.
On the Create Volume dialog box, select the Type and Size of the volume. Ensure that the Availability Zone matches the zone your instance is running in.
When the State of the volume becomes Available, click the name of your volume on the right panel and then click Actions > Attach Volume.
On the Attach Volume dialog box, enter the name of your Instance.
Important
To ensure that your instance is not already using the specified device name, you can use the mount or df command to view the devices already in use.
Click Attach.
ssh to your core instance. For example:
$ ssh [email protected]
Run the sudo fdisk -l command to identify your device. In the following example, it is /dev/xvdf:
Disk /dev/xvdf: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Note
It is unnecessary to partition the device before building a filesystem on it.
Run the sudo mkfs command to make a new filesystem on the device. For maximum compatibility, specify the filesystem type that matches your root partition. In the following example, it is ext3:
$ sudo mkfs -t ext3 /dev/xvdf
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
13107200 inodes, 52428800 blocks
2621440 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1600
Create a directory to serve as the mount point. For example:
$ sudo mkdir /mnt/ebs
Make the stackato user the owner of the directory and give the user read and write permissions. For example:
$ sudo chown stackato /mnt/ebs
$ sudo chmod +rw /mnt/ebs
Add the directory to /etc/fstab. In the following example, the directory is configured on the last line:
# file system          mount point  type  options                                  dump  fsck order
LABEL=cloudimg-rootfs   /            ext4  defaults                                 0     0
/dev/xvdb               /mnt         auto  defaults,nobootwait,comment=cloudconfig  0     2
/dev/xvdf               /mnt/ebs     auto  defaults                                 0     0
Tip
For detailed instructions on the fstab file, see its manpage.
Mount the EBS volume. For example:
$ sudo mount /dev/xvdf /mnt/ebs
Tip
For instructions on mounting volumes with quotas enabled, see Enabling Filesystem Quotas.
For more information, see Relocating Services, Droplets, and Containers.
You can create an RDS instance and use the Universal Service Broker to add the RDS instance as an external service to Helion Stackato.
To ensure that your RDS instance can correctly communicate with Helion Stackato, you must place your EC2 instance and RDS instance in different subnets within the same VPC.
Create two additional subnets on the VPC that you have created earlier.
Important
The two subnets must be in different availability zones.
On the Amazon Web Services page, in the Database section, click RDS.
On the RDS Dashboard, click Subnet Groups.
On the right panel, click Create DB Subnet Group.
On the Create DB Subnet Group page, enter a Name and a Description for the subnet group and select the VPC that you have created earlier.
Click add all the subnets to add all of the subnets from your VPC to your subnet group, and then click Create.
If you want to create an MSSQL-compatible RDS instance, you must enable contained database authentication by creating and configuring a parameter group that you will specify when creating your RDS instance.
Enter contained database authentication into the Filter field and press Enter.
Note
To allow this setting to take effect on an existing RDS instance, right-click the instance and then click Reboot.
In the following examples, the SQL Server Express RDS instance is created. You can install any number of additional RDS instances using a similar process.
To see a list of available drivers, run the usbc drivers command.
On the Amazon Web Services page, in the Database section, click RDS.
Note
Ensure that the region that appears in the upper right-hand corner of the page is the same as the one where you plan to deploy your Helion Stackato instance.
On the RDS Dashboard, click Instances.
On the right panel, click Launch DB Instance.
On the Select Engine panel, click the tab of a database engine (for example, Microsoft SQL Server), and then click Select next to one of its flavors (for example, SQL Server Express).
On the Specify DB Details panel, in the Instance Specifications section select a License Model, the DB Engine Version, the DB Instance Class, the Storage Type, and enter the amount of Allocated Storage.
In the Settings section, enter a unique DB Instance Identifier, the Master Username, and the Master Password.
Tip
Note these settings. You will need them to install your RDS instance as an external service using the Universal Service Broker.
On the Configure Advanced Settings panel, in the Network and Security section, select the following parameters:
Click Launch DB Instance.
Once the RDS instance has been created, expose it as a service to system users with the
usbc utility. See the Universal Service Broker documentation for instructions. | http://docs.stackato.com/admin/server/ec2.html | 2019-05-19T09:19:40 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.stackato.com |
HTTP 500: Internal Server Error
The HTTP 500 Internal Server Error is both rare and often difficult to reproduce. It generally indicates a problem with a server somewhere. It may be Elasticsearch, but it could also be a node in the load balancer or proxy. A process restarting is typically the root cause, which means it will often resolve itself within a few seconds.
The easiest solution is to simply catch and retry HTTP 500's. If you've seen this several times in a short period of time, please send us an email and we will investigate. | https://docs.bonsai.io/article/79-http-500-internal-server-error | 2019-05-19T08:53:38 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.bonsai.io |
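As a rough illustration, a retry wrapper along these lines is usually enough. The sketch below is in TypeScript using fetch; the attempt count, backoff delay, and the example cluster URL are arbitrary placeholders, not values recommended by Bonsai.

// Retry a request a few times when the cluster answers with HTTP 500.
async function requestWithRetry(url: string, init?: RequestInit, attempts = 3): Promise<Response> {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    const response = await fetch(url, init);
    // Return on success, on any non-500 error, or once retries are exhausted.
    if (response.status !== 500 || attempt === attempts) {
      return response;
    }
    // Brief, growing pause; 500s caused by a restarting process clear quickly.
    await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
  }
  throw new Error("unreachable");
}

// Example (placeholder URL): search an index with automatic retry on HTTP 500.
// const res = await requestWithRetry("https://key:[email protected]/my-index/_search");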
WAM050 – A Section Maintenance Application
To design and develop your WAM applications, just like any other new application, you need to begin by focusing on:
Avoid creating very large WAMs with too many WebRoutines. Each time a WebRoutine is invoked from the browser, the WAM loads and then unloads. Many small WAMs are a much better design and will be easier to maintain.
1. Create a new WAM.
Name: iiiSecMaint
Description: Section Maintenance
Layout weblet: iiilay01
Consider the first two screen captures shown in the Objectives section that show the Begin WebRoutine in operation. This WebRoutine initially displays a list of sections that is empty and an input field for a department code. An AutoComplete weblet will replace the Department Code input field. A Select push button invokes the Begin WebRoutine that builds a list of sections for the department and displays this list.
Consider what working lists will be needed to handle this web page and what logic will be necessary?
a. Define a work field DEPT_IN based on DEPTMENT, this will be the department code input field.
b. Define working list DEPTS, to support the AutoComplete weblet. The list contains the field DEPTMENT only.
c. Define working list SECTLIST, to support the list of sections containing STDSELECT, SECTION, SECDESC, SECADDR1. Note: All fields apart from STDSELECT should have an *output attribute.
d. Map STDRENTRY globally as a hidden field.
2. Define a WebRoutine named Begin, based on the following pseudo code
3. Define a response WebRoutine AutoComplete to build the list DEPTS to support the AutoComplete weblet
Your completed code should now look like the following:
Function Options(*DIRECT)
Begin_Com Role(*EXTENDS #PRIM_WAM) Layoutweblet('iiilay01')
Define Field(#dept_in) Reffld(#deptment)
Def_List Name(#depts) Fields(#deptment) Type(*Working)
Def_List Name(#sectlist) Fields(#STDSELECT (#SECTION *out) (#SECDESC *out) (#SECADDR1 *out)) Type(*Working)
Web_Map For(*both) Fields((#stdrentry *hidden))
WebRoutine Name(Begin) Desc('Select a Department')
Web_Map For(*both) Fields(#dept_in #sectlist)
Case (#stdrentry)
When (= S)
Clr_List Named(#sectlist)
Select Fields(#sectlist) From_File(sectab) With_Key(#dept_in)
Add_Entry To_List(#sectlist)
Endselect
Endcase
Endroutine
WebRoutine Name(AutoComplete) Response(*JSON)
Web_Map For(*input) Fields(#dept_in)
Web_Map For(*output) Fields((#depts *json))
#dept_in := #dept_in.substring( 1, 1 ).upperCase
Clr_List Named(#depts)
Select Fields(#deptment) From_File(deptab) With_Key(#dept_in) Generic(*yes)
Add_Entry To_List(#depts)
Endselect
Endroutine
End_Com
4. Compile your WAM and open the Begin WebRoutine in the Design view.
Your web page should look like the following:
5. Drag and drop a jQuery UI AutoComplete weblet onto the Department Code field. Select the Details tab and set up the AutoComplete properties:
6. If the Select column (field STDSELECT) is shown as an input field (i.e. it does not have a clickable image weblet field visualization defined in the Repository), drop a Clickable Image weblet into the field in the first column.
7. With the clickable image selected, select the Details tab and set up its properties:
Note:
8. Select a clickable image weblet. Set the relative_image_path by clicking in the Value column, using the Ellipsis button to select the /normal/16 folder, and then selecting any suitable image. See the example following:
9. Select the column heading "Std *WEBEVENT template field" and delete it.
10. If the field SECTION has a dropdown field visualization defined in the Repository, it will not be displayed in the list – since it is an *output field. If necessary, select the field SECTION in the list and use the context menu to select the option Replace with an Output Field.
11. Select anywhere in the table containing "Department Code" and the AutoComplete weblet. Use the context menu, and select the option Table Items / Add columns… to add one column to the right. Add a push button into this new column and remove the * placeholder characters from this cell.
Set up the push button properties:
Note: Whenever possible, set a weblet property value by selection from its dropdown list. Using a list of values provided by the editor, when available, will help to minimize errors in your XSL.
12. Save your changes and run the web page in the browser. Your page should look like the following: | https://docs.lansa.com/14/en/lansa087/content/lansa/wamtut01_0690.htm | 2019-05-19T08:33:23 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.lansa.com |
To debug remotely, you connect the debugger in Studio Pro. The steps below will explain where you can gather all the necessary information.
The debugger supports only debugging of single-instance environments. Multi-instance environments need to be scaled down to one instance before the debugger can be used. Go to Apps and navigate to the project that you want to debug:
Click Environments in the left sidebar, and open the Deploy tab.
3.2 Connecting Studio Pro to the Cloud Environment
Once you have the unique URL and password, there are two methods for connecting Studio Pro to the Cloud Environment.
3.2.1 First Method for Connecting Studio Pro to the Cloud Environment
Go to the Run tab and select Connect debugger…:
In the Connect Debugger dialog box, enter the URL and the Password that you got from the cloud environment (for details, see 3.1 Enabling Debugging in the Cloud):
3.2.2 Second Method for Connecting Studio Pro to the Cloud Environment
- Go to the Debugger dock window.
- Click Connect and enter the URL and password information in the dialog window. | https://docs.mendix.com/howto/monitoring-troubleshooting/debug-microflows-remotely | 2019-05-19T08:21:58 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.mendix.com |
OWL class restrictions
Keep the previously opened interim ontology open.
As previously stated, in OWL we use object properties to describe binary relationships between two individuals (or instances). We can also use these properties to describe new classes (or sets of individuals) using restrictions. A restriction describes a class of individuals based on the relationships that members of the class participate in. In other words, a restriction is a kind of class, in the same way that a named class is a kind of class.
For example, we can use a named class to capture all the individuals that are chromosome parts. But we could also describe the class of chromosome parts as all the instances that are ‘part of’ a chromosome.
In OWL, there are three main types of restrictions that can be placed on classes: quantifier restrictions, cardinality restrictions, and hasValue restrictions. This tutorial will initially focus on quantifier restrictions.
Quantifier restrictions are further categorized into two types, the existential and the universal restriction.
- Existential restrictions describe classes of individuals that participate in at least one relationship along a specified property to individuals that are members of a specified class. For example, the class of individuals that have at least one (some) ‘part of’ relationship to members of the ‘Chromosome’ class. In Protégé, the keyword ‘some’ is used to denote existential restrictions.
- Universal restrictions describe classes of individuals that, for a given property, only have relationships along this property to individuals that are members of a specified class. For example, we can say a cellular component is capable of many functions using the existential quantifier; however, OWL semantics assume that there could be more. We can use the universal quantifier to add closure to the existential. That is, we can assert that a cellular component is capable of these functions, and is only capable of those functions and no other. Another example is that the process of hair growth is found only in instances of the class Mammalia. In Protégé the keyword ‘only’ is used.
In this tutorial, we will deal exclusively with the existential (some) quantifier.
Superclass restrictions
Strictly speaking in OWL, you don’t make relationships between classes, however, using OWL restrictions we essentially achieve the same thing.
We want to capture the knowledge that the named class ‘organelle part’ is part of an organelle. In OWL speak, we want to say that every instance of an ‘organelle part’ is also an instance of the class of things that have at least one ‘part of’ relationship to an ‘organelle’. In OWL, we do this by creating an existential restriction on the ‘organelle part’ class.
In the Entities tab, select ‘organelle part’ in the class hierarchy and look at its current class description in the bottom right box. At the top of this view there are two slots for defining equivalent classes and superclasses (as denoted by the SubClass Of list). ‘organelle part’ already has one superclass named cellular_component.
We will create a restriction on ‘organelle part’ stating ‘organelle part’ has a ‘part of’ relationship to some ‘organelle’. Select the (+) icon next to the SubClass Of slot. Select the Class expression editor pane. We will define this anonymous superclass in Manchester OWL syntax as ‘part_of some organelle’.
The class restriction will be shown in the SubClass of slot as follows.
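For readers who prefer logic notation, the axiom created above can equivalently be written in description-logic syntax (this notation is not shown in Protégé; it is added here only for clarity):

\[ \text{'organelle part'} \;\sqsubseteq\; \exists\, \text{part\_of}.\ \text{organelle} \]

that is, every instance of ‘organelle part’ stands in at least one part_of relationship to some instance of organelle.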
Using Protégé create your own part_of restrictions for the ‘cell part’, ‘intracellular part’ and ‘chromosomal part’ classes. Note: you must use single quotes around text strings that are separated by a space, e.g. ‘intracellular organelle part’.
NOTE: After each edit to the ontology you might want to synchronize the reasoner to make sure you didn’t introduce any inconsistencies into your ontology. The edit, reason, edit, reason iteration becomes particularly important as your ontologies grow more complex. | https://ontology101tutorial.readthedocs.io/en/latest/OWL_ClassRestrictions.html | 2019-05-19T08:20:22 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['_images/Figure51.png', None], dtype=object)
array(['_images/Figure52.png', None], dtype=object)
array(['_images/Figure53.png', None], dtype=object)] | ontology101tutorial.readthedocs.io |
The Library Setting, Disable Patron Credit, allows staff to disable the Patron Credit payment type and to hide patron credit payment actions within the billing interface of a patron’s account.
By default, the payment type Patron Credit is enabled in the staff client. Within the Bills interface of a patron’s account, the Patron Credit payment type, the Credit Available, and the option to Convert Change to Patron Credit are exposed by default in the staff client. They can be hidden by applying the new Library Setting.
After applying changes to this library setting, it is necessary to restart the staff client to see the changes take effect. | http://docs.evergreen-ils.org/2.5/_disable_patron_credit_in_billing_interface.html | 2019-05-19T09:34:23 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.evergreen-ils.org |
An Act to create 16.3075, 20.505 (7) (fp) and 20.505 (7) (hp) of the statutes; Relating to: housing quality standards loans, granting rule-making authority, and making an appropriation. (FE)
Bill Text (PDF)
Fiscal Estimates and Reports
AB125 ROCP for Committee on Housing and Real Estate (PDF)
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2019 Senate Bill 121 - S - Utilities and Housing | http://docs.legis.wisconsin.gov/2019/proposals/ab125 | 2019-05-19T08:37:31 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.legis.wisconsin.gov |
PageLines DMS has some system requirements you should be aware of prior to installing.
PageLines DMS has been designed to work with any modern server environment; the recommended minimum requirements are:
We strongly advise that you always use the latest stable version of Wordpress for security reasons.
PageLines DMS supports all modern web browsers which includes Chrome, FireFox, Safari and Internet Explorer 8 & 9.
To ensure you have a secure and enjoyable browsing experience, we recommend you visit Browse Happy. Browse Happy is a way for you to find out what is the latest version of each major browser. | http://docs.pagelines.com/getting-started/ | 2018-11-13T03:36:58 | CC-MAIN-2018-47 | 1542039741192.34 | [] | docs.pagelines.com |
Example Amazon Lambda Functions and Events for Amazon Config Rules
Each Custom Lambda rule is associated with a Lambda function, which is custom code that contains the evaluation logic for the rule. When the trigger for a Config rule occurs (for example, when Amazon Config detects a configuration change), Amazon Config invokes the rule's Lambda function by publishing an event, which is a JSON object that provides the configuration data that the function evaluates.
For more information about functions and events in Amazon Lambda, see Function and Event Sources in the Amazon Lambda Developer Guide.
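To make the event shape concrete, the following is a minimal sketch of a custom-rule handler written in TypeScript for Node.js. It is illustrative only: the resource-type check and compliance logic are invented for the example, and the handler name is arbitrary; the event fields (invokingEvent, resultToken) and the PutEvaluations call follow the general custom Lambda rule pattern.

import { ConfigServiceClient, PutEvaluationsCommand } from "@aws-sdk/client-config-service";

const configService = new ConfigServiceClient({});

// Invoked by Lambda when Amazon Config publishes an evaluation event.
export const handler = async (event: any) => {
  // invokingEvent is a JSON string carrying the configuration item to evaluate.
  const invokingEvent = JSON.parse(event.invokingEvent);
  const item = invokingEvent.configurationItem;

  // Hypothetical evaluation logic: only EC2 instances are treated as compliant here.
  const compliance = item.resourceType === "AWS::EC2::Instance" ? "COMPLIANT" : "NOT_APPLICABLE";

  // Report the result back to Amazon Config using the resultToken from the event.
  await configService.send(new PutEvaluationsCommand({
    ResultToken: event.resultToken,
    Evaluations: [{
      ComplianceResourceType: item.resourceType,
      ComplianceResourceId: item.resourceId,
      ComplianceType: compliance,
      OrderingTimestamp: new Date(item.configurationItemCaptureTime),
    }],
  }));
};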
Topics | https://docs.amazonaws.cn/en_us/config/latest/developerguide/evaluate-config_develop-rules_examples.html | 2022-09-25T08:52:18 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.amazonaws.cn |
Working with services in the SDK for JavaScript
The Amazon SDK for JavaScript v3 provides access to services that it supports through a collection of client
classes. From these client classes, you create service interface objects, commonly called
service objects. Each supported Amazon service has one or more client
classes that offer low-level APIs for using service features and resources. For example,
Amazon DynamoDB APIs are available through the DynamoDB client class. Each request includes the full request and response lifecycle of an operation on a service object, including any retries that are attempted. A request contains zero or more properties as JSON parameters. The response is encapsulated in an object related to the operation, and is returned to the requestor through one of several techniques, such as a callback function or a JavaScript promise.
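As a brief illustration of this pattern, the following sketch creates a DynamoDB client (service object) and sends one command, receiving the response as a promise. The Region, table name, and key are placeholders; the client and command classes come from the @aws-sdk/client-dynamodb package.

import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

// Create the service object (client) for a specific Region (placeholder).
const client = new DynamoDBClient({ region: "us-west-2" });

async function run() {
  // Build a request: parameters are plain JSON-style properties.
  const command = new GetItemCommand({
    TableName: "ExampleTable",          // placeholder table name
    Key: { id: { S: "item-123" } },     // placeholder key
  });

  // Send the request; the response is returned as a promise.
  const response = await client.send(command);
  console.log(response.Item);
}

run().catch(console.error);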
Topics | https://docs.amazonaws.cn/en_us/sdk-for-javascript/v3/developer-guide/working-with-services.html | 2022-09-25T07:17:49 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.amazonaws.cn |
Using the SourceExpr and DestExpr Properties
In the last section, we’ve discussed how to add table instances to a mapping. In this section, we’ll look at how to access the data of fields in a table. And how to use constant values and filters.
The module allows you to read data from table fields wherever there is a SourceExpr (source expression) property. The SourceExpr can also be used to specify constant values or to call functions. You will find this property on the data lines of mappings of format type NAV and on the data lines of export mappings. But this property is also available for parameters of custom functions and to extend error messages. There are more options, which we will discuss a little bit later in this section.
To write data to table fields there is another property called DestExpr (destination expression). This property also allows you to check data against a specified filter, instead of storing it. We will go into the details in a moment. It is important to know that you can write data to any field of a table instance, even if the table Mode is set to Read. Writing to a table in read mode will call the validation logic of NAV, but we don’t modify the record. On our buffer tables you can use this to store a value for a certain time in the mapping. On other tables it will depend on the validation logic, whether it is safe to use a table in read mode to temporarily store data.
The SourceExpr Property
The source type specifies what kind of data you want to read. The allowed values are:
A constant value that is entered in the mapping.
A field from a table instance, that is one of the parent mapping lines of this mapping line. In case you want to read a sum field you have to be outside the table instance, but below it.
You want to read data from either a built-in or a custom function.
This property was added in Anveo EDI Connect 4.00.
This property is only available if the SrcType is set to Const. This property specifies the data type of the constant value. You have to select the data type to prevent errors due to different locales used during setup and runtime. In older versions you had to ensure that the locale during setup was the same as during runtime to prevent, for example, numbers from being interpreted differently.
You can choose from the following data types:
A constant text value.
A text that can contain special characters, like a carriage return (<CR>). There is a list of all supported special characters.
A boolean value, like True or False.
An option value. You should use the integer value of the option in the database.
An integer value (32 bit).
A decimal value.
A large integer value (64 bit).
The value contains a duration.
The value is a Dynamics code value (uppercase only, does not allow all characters).
Represents a date value.
Represents a time value.
Represents a combined date and time value.
A date formula.
A global unique ID (GUID).
This property is only available if the SrcType is set to Const. Represents the constant value that you want to use.
This property is only available if the SrcType is set to Function. You can use the AssistEdit to specify the object and function. If the function requires parameters, these will be SourceExpr values as well, but they do not support nesting of functions.
This property is only available if the SrcType is set to Field. Select the table instance to read the data from. The table need to be one of the parents of the current mapping line, to read the data of one record. If you want to read sum fields, you have to be under the table instance, but not a child of it.
This property is only available if the SrcType is set to Field. Selects the column / field you want to read from.
You can specify a value translation, to change the selected value to a different target value. There is a section on how-to setup value translations.
The code of the value translation that should be used.
What should happen, if the value is not found in the value translation.
The module does nothing if the translation is missing. It uses the original value without translation.
Output an empty value.
Add an information log entry and use the original value.
Add a warning log entry and use the original value.
Break the mapping immediately and log an error.
Create an error log entry and do not successfully finish the mapping, but continue with the processing to find other errors as well.
This property is hidden by default. You can specify a list of allowed values and create errors if you try to export another value.
The list of allowed values. You can either use the AssistEdit or enter the terms comma separated.
What should happen, if the source value is not in the list of allowed value.
Do not use the advanced validation and ignore any values in the list.
Create an information log entry.
Create a warning log entry.
Break the mapping immediately with an error message.
Return an error on mapping execution, but continue to process the mapping, to find other errors as well.
The DestExpr Property
Select the target for a value. The following values are valid:
Empty means that the value is ignored. You can use this to skip fields on imports or, for example, to ignore a function return value.
The value should be checked against a Dynamics filter. If the filter does not match, the value is not accepted. Depending on the converter this will result in an error message or in skipping a section of the mapping. You'll learn more about the use of filters for specific converters in the documentation of each converter.
The value should be written to a field of a table instance in this mapping.
This property is only available if the DestType is set to Filter. The Dynamics filter the value is checked against. The value will be interpreted as a text value for applying the filter. A typical example would be “BY|IV”, to allow the values “BY” and “IV”.
This property is only available if the DestType is set to Field. Selects the table instance the value should be written to. The table instance has to be one of the parents of the current mapping line.
This property is only available if the DestType is set to Field. The field name/column name of the target field on the table.
This property is only available if the DestType is set to Field. This property is only available as an advanced property. Setting this property to False will skip the Microsoft Dynamics NAV 2009R2 RTC validation trigger for that field.
Be very cautious when using False. You should only deactivate the validation after consulting a programmer. You should not deactivate it on any of the standard Dynamics tables, unless you really know what you are doing. You can make Microsoft Dynamics NAV 2009R2 RTC unusable by skipping validation code.
This property is only available if the DestType is set to Field and you’re using the NAV converter. You can find more information on the converter page. | https://docs.anveogroup.com/en/manual/anveo-edi-connect-installation-and-configuration/mappings/using-sourceexpr-and-destexpr-properties/?product_version=Anveo%20EDI%20Connect%205.00&product_platform=Microsoft%20Dynamics%20NAV%202009R2%20RTC | 2022-09-25T08:47:43 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.anveogroup.com |
ETLrobot IP Addresses
Whitelisting is the fastest and easiest way to allow ETLrobot to connect to your destination database. To whitelist, you must create a rule in a security group that allows ETLrobot access to your database port and instance.
Whitelist the following IPs for your Data Processing country of choice:
Configure your firewall and/or other access control systems to allow the following (an example security-group sketch follows the list):
- Incoming connections to your host and port from the IPs above.
- Outgoing connections from all ports (1024 to 65535) to the IPs above.
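If the destination database runs on AWS, one way to express the security-group rule described above is with the AWS CDK; the TypeScript sketch below is only an illustration. The account, Region, VPC lookup, database port (5432), and CIDR blocks are placeholders — substitute the actual whitelisted ETLrobot IPs for your Data Processing country.

import { App, Stack } from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";

const app = new App();
// fromLookup needs an explicit account/Region (placeholders here).
const stack = new Stack(app, "EtlrobotAccessStack", {
  env: { account: "123456789012", region: "us-east-1" },
});

// Placeholder VPC lookup; use the VPC that hosts your destination database.
const vpc = ec2.Vpc.fromLookup(stack, "Vpc", { isDefault: true });

const sg = new ec2.SecurityGroup(stack, "EtlrobotIngress", {
  vpc,
  description: "Allow ETLrobot to reach the destination database",
});

// Placeholder CIDRs — replace with the whitelisted ETLrobot IPs for your region.
for (const cidr of ["203.0.113.10/32", "203.0.113.11/32"]) {
  sg.addIngressRule(ec2.Peer.ipv4(cidr), ec2.Port.tcp(5432), "ETLrobot ingress");
}

// Outbound traffic is allowed by default, which covers the return connections.
app.synth();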
Catch is used to handle an exception. A Catch can be attached to a Task, Call Subprocess or a Scope. The outgoing connection from a Catch will point to an Error Handler element.
The exception that is caught can be accessed within the error handler by defining a variable name in the Catch element, and then using a #var reference. An element can only have one Catch element attached.
Error Handler
An error handler is a Task, Code, Call Subprocess or Scope element that is used to handle an exception. An error handler always has an incoming connection from a catch, and it must always continue to the same element(s) as the element which the catch is attached to.
If an exception occurs then the execution of the throwing element will stop, and the error handler will kick in. The return type of the Error handler should be the same as the throwing element's, since the return of the error handler will be used in the same way as the return of the throwing element.
An Error handler can end the execution of the whole process by placing a Throw shape as the end element within a Scope.
A catch attached to a Scope element will catch all exceptions within the Scope. Note that the execution of the whole Scope will stop even if the exception is thrown on the very first element within the scope. It is possible to define an error handler for the entire Process by encapsulating everything but the Start element(s) and the final return within a Scope or by using a global error handler.
When sending a message (e.g. an email) about an error, you can query for a specific Process execution graph by the execution GUID, in order to, for example, generate links to the process in error emails. You get the execution GUID in the process via the #process.executionId reference, and the link would be in the format https://<website>/ProcessInstance/Instance/{{#process.executionId}}
For reporting unhandled errors at the Process level, it is also possible to configure a Subprocess to call on unhandled error. | https://docs.frends.com/en/articles/5270926-catch | 2022-09-25T08:46:53 | CC-MAIN-2022-40 | 1664030334515.14 | [array(['https://downloads.intercomcdn.com/i/o/70879735/1dd47c106e63493ed4bc30a5/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/70879741/359f9d0f3fe46c5567116b1b/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/70879758/49a6911a83bfb56155e7f6bd/image.png',
None], dtype=object) ] | docs.frends.com |
xfce4-statusnotifier-plugin - Status Notifier Plugin
xfce4-statusnotifier-plugin provides a panel area for status notifier items (application indicators). Applications may use these items to display their status and interact with the user. This technology is a modern alternative to the systray and follows the freedesktop.org specification regarding status notification.
Usage
- Compile or install xfce4-statusnotifier-plugin
- Right-click the panel → Add New Items
- Add the Status Notifier Plugin
Known Issues
Plugin Doesn't Show Any Items
There is already a running service which works with status notifier items. Most probably it's the indicator-application service which is used by xfce4-indicator-plugin. Make sure this service is not running: remove this indicator from xfce4-indicator-plugin and remove the service from autostart.
Screenshots
Required Packages
- dbusmenu-gtk3
For detailed information on the minimum required versions, check the configure.ac.in file.
Latest Release
- xfce4-statusnotifier-plugin 0.2.3 released (2021/01/27 08:19)
xfce4-statusnotifier-plugin 0.2.3 is now available for download.
What is xfce4-statusnotifier-plugin?
A panel plugin which provides an area for status notifier items (application indicators).
Release notes for 0.2.3
This is a (final?) maintenance release of this plugin for users of Xfce
Source Code Repository
Reporting Bugs
- Reporting Bugs – Open bug reports and how to report new bugs
CaptAIM Plan8 - Climetrics
Asian Institute of Management | Manila, Philippines
GitHub Link:
Website for Live Demo:
Team Name:
CaptAIM Plan8
Participants:
Jishu Basak, Jazel Castillo, Emmanuel Damian, David Felicelda, Masaki Mitsuhashi, Camile Perez, Anand Ramasamy, Jude Teves
Node:
Manila, Philippines (Asian Institute of Management)
Discord channel:
Meet the Team
We are students from Asian Institute of Management (Manila, Philippines) coming from various specializations: Data Science, Innovation & Business, Development Management, and Executive MBA.
Video: CaptAIM Plan8.mp4 (Meet the Team)
Carbon Insight Dashboard
Carbon Insights allows the user to view the carbon emissions that each country emits annually (measured in carbon parts per billion by volume). Users may scroll through the years and hover over any country to see its carbon emission trends throughout the years. Users may use the information to derive local or regional actions that ultimately lower the carbon emissions of the world.
Asset Insights
Asset Insights allows the user to observe the different existing assets that contribute to climate change (such as assets related to gas and oil development, injection, etc.). Users may set filters to refine their searches, such as status (whether these assets are active or inactive) and type (whether they are productive or not). Users may use the information to identify which assets or infrastructure resources are allocated to.
Stock Trends
Climate Financial Markets allows the user to visualize trends related to climate carbon trade. Users may observe the bid–ask difference, as well as information such as balance, equity, and margin, plus free margin, margin level %, and open P/L.
Sentiment Analysis
Sentiments allows users to identify the social impacts of conversations on social media. The platform measures engagements, mentions, and word counts to produce a sentiment score that compares different trending hashtags. Users may use the information to make stock predictions or in the development of financial instruments that take climate-related issues into consideration.
Pitch deck: CaptAIM Plan8 - Climetrics.pdf (Download Pitch Deck here)
References
DAO for Paris Agreement
Climate Cops
References | https://collabathon-docs.openclimate.earth/hacks/team-contributions/captaim-plan8-climetrics | 2022-09-25T08:21:59 | CC-MAIN-2022-40 | 1664030334515.14 | [] | collabathon-docs.openclimate.earth |