The SAX (Symbolic Aggregate approXimation) function transforms a time series data item into a smaller sequence of symbols that can then be analyzed using Teradata Introduction to nPath or Shapelet Functions, or by other hashing or regular-expression pattern matching algorithms.
The SAX algorithm was developed by Eamonn Keogh and Jessica Lin in 2002. For information about the SAX algorithm, see:
Account eligibility
From OCFwiki
The following sets of people are eligible for OCF accounts. If you have any comments, please write to the General Manager.
OCF Account Eligibility Policy
The Open Computing Facility provides accounts and services only to people who meet at least one of the following eligibility criteria as verified by presentation of the specified documentation:
UC Berkeley Students
Present current UC Berkeley student ID card.
UC Berkeley Faculty and Staff
More specifically, any Faculty or Staff Member who is directly associated with UC Berkeley.
Present current UC Berkeley Staff or Faculty ID Card.
Post Doctoral, Visiting Scholars, Research Assistant, etc.
Must have letter from their department on departmental letterhead (*), stating:
- Name
- Appointment Length
- Departmental Support for the Computer Account
Lawrence Berkeley Laboratory (LBL) Staff
Present LBL Staff ID Card.
Graduate Theological Union Students (GTU)
Current Doctoral Students ONLY. Masters program not eligible.
Present student ID card.
ASUC Employees
Present employee ID card.
Math Science Research Institute (MSRI)
More specifically, Postdoc Researchers or Research Professors.
Must have letter from the department on departmental letterhead (*), stating:
- They hold a valid appointment with the University
- Departmental Support for the Computer Account
UC Extension Faculty
(UC Extension Faculty are usually instructors, and not titled "Faculty", primarily due to their part-time role.)
Present an appointment letter marked as independent contractor by agreement for a term.
UC Berkeley Extension Students
More specifically, students enrolled in certificate programs or concurrently enrolled in courses offered by UC Berkeley. (Note: A large number of Extension Courses do not lead towards Certificate programs.)
Qualified UCB-Affiliated Organization Employees
These employees must work on the UC Berkeley campus (e.g. California Alumni Association, International House, Howard Hughes Medical Institute) or be closely associated with the Berkeley campus (e.g. Richmond Field Station).
Must have letter from their organization/unit on letterhead (*), stating:
- Currently an employee of the organization
- Organizational Support for obtaining the Computer Account
Volunteers, Contractors, and Other Individuals
You must work on campus supporting the University.
Must have a letter from the sponsoring department on departmental letterhead (*), stating:
- Individual is affiliated with the department.
- Department supports the individual's request for a Computer Account
Student Groups, Staff Groups, or Campus Support Organizations
Additional restrictions apply, see the Group Account Policy.
- Copy of a completed OSL.
* All letters supporting an account should be on appropriate letterhead, must be signed by someone other than the account applicant, and should include contact information for the person signing the letter for verification purposes.
Social media channels come and go but email will live on after we are gone. If your dispensary hasn’t allocated a marketing budget to email marketing, your business is missing out on significant growth opportunities. Unlike SMS, email marketing has significant advantages.
As our team of engineers and experts collaborated with our dispensary partners to design and develop our latest email marketing technology, we put together a guide on email marketing best practices for both beginners and email marketing pros. If you’re not sure where to start or are interested in brushing up on what you already know, this guide is for you. Whether your business is an online cannabis medical dispensary or a dispensary focused on adult-use, email marketing can be used to increase customer trust and revenue.
Email marketing is older than social media marketing. While consumers may choose which social media platforms suit their lifestyle, email is used by virtually everyone. It’s true. Email is needed to log into social media platforms, book airline reservations, and so much more. With email playing a crucial role in the lives of consumers, it’s important to define how crucial your business is to your contacts.
When customers and patients visit your dispensary, it’s easy to understand each visitor has different needs. The same can be said for email. While it is not realistic to manually write individual emails to thousands of customers, there are a few strategies your team can leverage to ensure customers read your emails and find value in what your company has to offer.
To improve cannabis marketing with email, read on.
Most Effective Email Messages
First Timer Email
When a new customer leaves your dispensary, it is imperative to use email marketing to extend their experience. Why? An experience doesn’t end when a customer leaves the store. As customers use products purchased from your dispensary, your business is top of mind. Use Alpine IQ’s email marketing software to welcome new customers into your brand with high-value creative such as branded videos and stunning photography.
New Inventory Email
What could be more exciting than being the first to try a new product on the market? Nothing. That’s what. When new products are acquired from distributors and cultivators, compose an email to customers. Tell them where the product comes from, what the benefits of the product are and why they should stop in the shop to get a closer look.
Promotional Email
If your business has excess stock of a particular product, send a promotional email. If your business has data that proves a specific product has the highest profit margins, send a promotional email. If the common cold is running rampant in your community and a specific brand of edibles is known to ease muscle aches, send a promotional email. Yes. There is always a good reason to send a promotional email.
Event Email
If foot traffic in your business is down, plan an event. Partner with a local business to offer an incentive for customers who arrive between certain times. For example – if your business offers Lemon Cookie in flower form, give out complimentary lemon cookies from a local bakery when customers purchase half an ounce or more of Lemon Cookie flower.
Newsletter Email
Most newsletters suck. Your business’ newsletter doesn’t have to. Encourage your internal team to create content for your company’s newsletter. Share customer success stories. Build an email list of medical patients who have only shopped with you once or twice. Send your newsletter to that list too and see what your open rates are.
Customer Survey
Your customers are the lifeline of your business. Transform customer complaints and praise into business opportunities by using your email marketing software to send out customer surveys. Dispensary stakeholders can also have one-to-one relationships with customers.
Corporate Social Responsibility
If your business is up to all good, it’s important to let your customers know. Whether your business is planting cannabis seeds to fight climate change or partnering with a local non-profit to raise awareness for a disease, use your email marketing software to build community.
Best Practices
Know Your Audience
Sending adult-use product messaging to medical patients is a recipe for disaster. Before sending an email to a customer or a list of customers, identify the needs of the person you’re sending an email to. If you’ve ever received an email that is anything less than helpful, you’ll understand why it’s important to know who your audience is and what your email can do for them. Prioritize email messaging around customer needs.
Segment Your Audience
Don’t worry. Knowing your audience is easier when you have the ability to segment them. We give you the power to identify and group customers based on many factors. After you’ve segmented audiences, you’re ready to deliver customer-centric messaging that moves your business forward.
Follow The Law
Before you segment audiences and send messages, your business needs to know what’s allowed and what could get your business in trouble. For the most up-to-date information on email law, read FTC’s compliance guide.
Avoid Becoming Spam
Spam filters are designed to protect customers from malevolent senders but they aren’t perfect. To ensure your emails don’t land in your customers’ spam folders, ask your customers to check their spam folders if they don’t receive your business’ emails. You should also ask customers who have received your email to save the dispensary’s email address in their contacts folder; this will ensure emails are received.
Subject Line
Your subject line is the first line of messaging that will either compel a contact to open your email or ignore it. To ensure emails are opened, craft enticing subject lines that evoke curiosity.
Example:
Good:
New Lemon Cookie Flower In Stock
Better:
GSC or Lemon Cookie? Learn the difference
Create a deployment group with CodeDeploy
You can use the CodeDeploy console, the Amazon CLI, the CodeDeploy APIs, or an Amazon CloudFormation template to create deployment groups. For information about using an Amazon CloudFormation template to create a deployment group, see Amazon CloudFormation templates for CodeDeploy reference.
When you use the CodeDeploy console to create an application, you configure its first deployment group at the same time. When you use the Amazon CLI to create an application, you create its first deployment group in a separate step.
As part of creating a deployment group, you must specify a service role. For more information, see Step 2: Create a service role for CodeDeploy.
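For illustration, a deployment group for EC2 instances can be created from the Amazon CLI along the following lines; the application name, deployment group name, tag values, and role ARN shown here are placeholders rather than values from this guide:

# Creates a deployment group that targets EC2 instances tagged Name=MyDemoInstances
aws deploy create-deployment-group \
    --application-name MyDemoApplication \
    --deployment-group-name MyDemoDeploymentGroup \
    --service-role-arn arn:aws:iam::111122223333:role/CodeDeployServiceRole \
    --ec2-tag-filters Key=Name,Value=MyDemoInstances,Type=KEY_AND_VALUE \
    --deployment-config-name CodeDeployDefault.OneAtATime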
Topics
- Create a deployment group for an in-place deployment (console)
- Create a deployment group for an EC2/On-Premises blue/green deployment (console)
- Create a deployment group for an Amazon ECS deployment (console)
- Set up a load balancer in Elastic Load Balancing for CodeDeploy Amazon EC2 deployments
- Set up a load balancer, target groups, and listeners for CodeDeploy Amazon ECS deployments
- Create a deployment group (CLI)
Custom App
What is a Custom App, and When Would I Need to Install One?
A Custom App is any application that is not included in the Faronics Deploy library of 80+ applications.
Custom Applications that are not part of the default Deploy application library can be added to the Applications grid; once added, Deploy displays the version number of the application if installed on any computers. You can then install/uninstall/update the application from the grid.
Not all external applications are compatible out of the box; however, Faronics has Deployment Specialists who can assist by writing a custom wrapper for the external application, which will enable it to work. See Request Assistance From a Deployment Specialist for further information.
To create a Custom App and install it for the first time, see Create a Custom App.
After you have created all the segments, the next step is to choose suitable joint types and link the segments together to form a hierarchical skeletal structure. A joint can have up to six degrees of freedom (DOF), which indicates how freely the joint is able to move in relation to its parent segment.
When you link segments, start with the root segment and link from that until you have a continuous link chain to the leaf segment. Nexus lets you link in any order you like, but it is easier to keep track if you follow the chain.
To display the segment bounding boxes, which enables you to select segments easily in the view pane, in the Options dialog box (F7), ensure the Subjects display is selected and in the
To link segments:
On the Subject Preparation tab of the Tools pane, in the Labeling Template Builder section, from the Link Segments drop-down box, choose the appropriate joint type for the two segments you intend to link and click Link.
Important
When linking segments, Vicon recommends that you use the Free Joint for two segments that do not share any marker and the Ball Joint for segments that share markers. For information on the joint types, see About joint types. Joint types as defined by the Labeling Template Builder are for the labeling skeleton, meaning that the true biomechanical joint type may not necessarily be the best choice for the skeleton. For the majority of skeletons, the Nexus 2 labeler works best when the Ball Joint and Free Joint joint types are used. The advanced joint types are only required for very specific labeling needs.
The mouse cursor changes, and the Select the Parent Segment tool tip is displayed.
In the 3D Perspective view, click on the bounding box of the segment you want to be the parent segment.
If you have trouble selecting segments by clicking on the bounding boxes in the 3D Perspective view, you can also link segments by clicking on their names in the list under the Joints node in the Subjects resources tree and then clicking Link.
The segment bounding box turns red and the mouse cursor label changes to Select the Child Segment.
When all segments are defined, click the Link button again to exit the segment linking utility.
When you have finished linking all the segments, the name of the subject in the Subjects resources tree, which was red when you first created it, is now displayed in black. The red color indicates that the subject is incomplete; for example, it has unlinked segments. The black color indicates that the subject is properly defined (markers, segments, and joints).
On the Subjects tab, expand the Joints node and check that the joints are all correctly connected, and that their icons display the expected Degrees of Freedom (DOFs).
You can now assign the marker and segment properties as required.
To link segments, you can use the following joint types:
Ball Joint: This is a 3 DOF joint with full rotational (but not translational) freedom, used to link two segments that share one marker, and which can therefore rotate freely around all three axes with respect to each other, but cannot translate. This joint type has the position of the child segment defined from the position of the parent joint, but its orientation can vary freely.
Important
Use Ball Joints (3 DOF) for segments that share one marker and Free Joints (6 DOF) for segments that share none.
Quickly search for any vehicle and get access to a wealth of information related to that vehicle.
When you’re moving vehicles in and out of your garage quickly, the last thing that you want to be doing is digging through old Haynes manuals or using non manufacturer specific service schedules.
Below I’ll show you how you can quickly search for any vehicle and get access to some of the most comprehensive technical data available.
You can access technical data in two ways within VGM, through the Search menu or from a Jobsheet (see Job times, technical data and jobsheets for further information), but for now we’ll access it by clicking the ‘Search’ menu, then ‘Technical Data’. This will give you access to the main technical data screen. Once here, to look up a vehicle do the following:
Enter the VRM into the registration box in the top left and click search.
So long as you have technical data credits, VGM will make a lookup against the VRM and populate the drop-down menu next to ‘Subjects’ with all the available categories of data.
Note: If for some reason we can’t return any data regarding the vehicle (if the registration doesn’t exist for example) then no credit is used.
Now that a credit has been used, you have about 24 hours to access all of this information without using an additional credit.
Select which subject you wish to look at, and this will populate the left hand area. From here you can start selecting different items until you find the information you require.
Note: Some items have a + next to them. Clicking the + will show an additional level of categories.
You are now free to look through all the data that you may need. There are handy print buttons next to individual images so you can make a hard copy to take into the workshop.
[clustering]
manager_uri = <manager_uri>
mode = peer
AI Engine users can run into simulator hangs. A common cause is insufficient input data for the requested number of graph iterations, mismatch between production and consumption of stream data, cyclic dependency with stream, cascade stream or asynchronous windows, or wrong order of blocking protocol calls (acquisition of async window, read/write from streams).
You can use the --stop-on-deadlock option on x86simulator to detect such deadlocks. This option will enable x86simulator to automatically detect a broad category of deadlocks, stop the simulation, and print a message stating the simulation has been terminated prematurely because a deadlock has been detected. When x86simulator is invoked with the --stop-on-deadlock option, it detects the deadlock and produces a deadlock diagnosis report.
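For reference, a minimal invocation that enables this check might look like the following; the --pkg-dir value is an assumption based on the default Work directory produced by the AI Engine compiler:

# Run the x86 simulation with automatic deadlock detection enabled
x86simulator --pkg-dir=./Work --stop-on-deadlock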
Additionally, a graph is generated in x86simulator_output/simulator_state_post_analysis.dot. This is a .dot file that encodes a description of the graph in terms of a block diagram where the agents involved in the deadlock are highlighted in red. To get this file transformed into a .png file, you must use the dot program as follows.
dot -Tpng x86simulator_output/simulator_state_post_analysis.dot > simulator_state_post_analysis.png
The --stop-on-deadlock option is not supported for software emulation or for use cases with an external test bench.
Value Data
The
.audit file syntax contains keywords that can be assigned various value types to customize your checks. This section
describes these keywords and the format of the data that can be entered.
This section includes the following information:
- Complex Expressions
- The "check_type" Field
- The "group_policy" Field
- The "info" Field
- The "debug" Field
Data Types
The following types of data can be entered for the checks:
Examples
value_data: 45
value_data: [11..9841]
value_data: [45..MAX]
In addition, numbers can be specified with plus (+) or minus (-) to indicate their "sign" and be specified as hexadecimal values. Hexadecimal and signs can be combined. The following are valid examples within a REGISTRY_SETTING audit for a POLICY_DWORD (the label in parentheses describes the format and is not part of the value_data):
value_data: -1 (signed)
value_data: +10 (signed)
value_data: 10 (unsigned)
value_data: 2401649476 (unsigned)
value_data: [MIN..+10] (signed range)
value_data: [20..MAX] (unsigned range)
value_data: 0x800010AB (unsigned hex)
value_data: -0x10 (signed hex)
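To show where value_data sits in a complete check, the following is a minimal sketch of a custom item; the registry key, item, and range are hypothetical placeholders rather than a recommended policy:

<custom_item>
  type        : REGISTRY_SETTING
  description : "Example DWORD range check"
  value_type  : POLICY_DWORD
  value_data  : [20..MAX]
  reg_key     : "HKLM\SOFTWARE\Example\Settings"
  reg_item    : "ExampleValue"
</custom_item>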
The render mode for the XR device. The render mode controls how the view of the XR device renders in the Game view and in the main window on a host PC.
See GameViewRenderMode for a description of each available render mode.
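For example, a script could switch the Game view to mirror only the right eye of the device; this is a minimal sketch that assumes an XR device is active and uses the UnityEngine.XR API:

using UnityEngine;
using UnityEngine.XR;

public class GameViewMirrorExample : MonoBehaviour
{
    void Start()
    {
        // Mirror only the right eye of the HMD into the Game view on the host PC.
        XRSettings.gameViewRenderMode = GameViewRenderMode.RightEye;
    }
}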
After the kernel boundaries have been finalized, the developer knows exactly how many kernels will be instantiated and therefore how many ports will need to be connected to global memory resources.
At this point, it is important to understand the features of the target platform and what global memory resources are available. For instance, the Alveo™ U200 Data Center accelerator card has 4 x 16 GB banks of DDR4 and 3 x 128 KB banks of PLRAM distributed across three super-logic regions (SLRs). For more information, refer to Vitis Software Platform Release Notes.
If kernels are factories, then global memory banks are the warehouses through which goods transit to and from the factories. The SLRs are like distinct industrial zones where warehouses preexist and factories can be built. While it is possible to transfer goods from a warehouse in one zone to a factory in another zone, this can add delay and complexity.
Using multiple DDRs helps balance the data transfer loads and improves performance. This comes with a cost, however, as each DDR controller consumes device resources. Balance these considerations when deciding how to connect kernel ports to memory banks. As explained in Mapping Kernel Ports to Memory, establishing these connections is done through a simple compiler switch, making it easy to change configurations if necessary.
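As a sketch of that compiler switch, kernel ports can be mapped to specific banks in a configuration file passed to v++ at link time; the kernel instance and port names below are hypothetical:

# connectivity.cfg (hypothetical kernel instance and port names)
[connectivity]
sp=vadd_1.in1:DDR[0]
sp=vadd_1.in2:DDR[1]
sp=vadd_1.out:DDR[2]

The file is then supplied during the link step, for example with v++ --link --config connectivity.cfg, so the mapping can be changed without touching the kernel code.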
After refining the architectural details, the developer should have all the information necessary to start implementing the kernels, and ultimately, assembling the entire application.
License key renewals and subscriptions
#Automatic renewals
All license keys are automatically renewed (with the exception of lifetime licenses) and bill the account that you initially paid with when purchasing the product. You can choose to turn off automatic renewals by cancelling the subscription from within your account dashboard. The Lifetime Membership and other lifetime purchase options will not be billed any additional times, and are a one-time fee.
#Renewal details
You can always check the license subscription status by going to the payment details.
Here is an example:
#Renew the license manually
To renew your license key manually:
- Log in to your Aspen Grove Studios account and go to
- Click Renew near the product name and go through the checkout process.
#Renewals and billing
You can find more about renewals in this section.
Optional boolean value. Enables remote ssh access to your pipeline instance using tmate. Both the hosted service and self-hosted services are supported. This feature is disabled by default and only works on Linux, i.e., where the platform's os is set to linux.
DRONE_TMATE_ENABLED=true
Note that you can also configure a self-hosted tmate server using the below configuration parameters. Please see the official tmate documentation to learn more about self-hosting a tmate server.
DRONE_TMATE_ENABLED=true
DRONE_TMATE_HOST=tmate.company.com
DRONE_TMATE_PORT=2200
DRONE_TMATE_FINGERPRINT_RSA=SHA256:iL3StSCmPU+7p2IoD8y0huMXRVFIZyGFZa8r+lO3U5I
DRONE_TMATE_FINGERPRINT_ED25519=SHA256:gXLaN8IUxUMmlm/xu7M2NEFMlbUr5UORUgMi86Kh+tI
I am trying to run a script in Powershell 7 using PnP.PowerShell version 1.5.0 and I am getting error:
Could not load file or assembly 'Microsoft.Identity.Client, Version=4.21.0.0, Culture=neutral, PublicKeyToken=0a613f4dd989e8ae'. Could not find or load a specific file. (0x80131621)
My connect statement is as follows:
Connect-PnPOnline -ClientId $env:PnPClientId -Url $URL -Tenant ($env:Tenant + ".onmicrosoft.com") -Thumbprint $env:PnPThumbprint
This seems to be an issue with PowerShell 7 as Windows PowerShell still seems to be working on this same PC using PnP.PowerShell.
Anyone have suggestions on what I might need to change? I have users running scripts in Windows PowerShell so hoping for a uniform statement that works in both PowerShell versions.
This logbook format is like no other designed to meet the requirements of BS 5839-1.
A logbook designed by fire alarm professionals to be used by fire alarm professionals and their clients.
The issue of this logbook to the user is a simple and slick way to comply with the requirements of BS 5839-1 as this logbook provides documented guidance to the user in relation to:
- recommendations for investigation of a fire alarm or fault signal
- recommendations for the investigation of a false alarm
- routine weekly and monthly testing
- service and maintenance of the system in accordance with Section 6 of BS 5839-1
- avoidance of false alarms
- need to keep a clear space around all fire detectors and manual call points
- need to avoid contamination of detectors during contractors activities
- ensuring that changes to the building do not affect the standard of protection
- other responsibilities as detailed within Section 7 of BS 5839-1
Ever considered how much time Service Engineers waste on site obtaining the information they need to access the panel, engage with the alarm receiving centre, or establish the system category?
The logbook contains many useful features that are additional to "standard logbooks", including methods to record the system category, BAFE Compliance Certificate details, agreed variations to BS 5839, designed standby battery capacity, panel codes, alarm receiving centre codes, and details of the designer, installer, commissioner and maintainer, along with many more useful record-keeping options that make this logbook a really useful information station for the Service Engineer and the User.
The logbook even has a facility for making record of modifications to the system, changes to system category, changes of premises management, service contractor and alarm receiving centre details.
Specification
- A4 by standard
- High quality durable front and back cover
- 32 pages
Delivery
- 3-4 working days
Product Data Sheet
Deploy an Azure Service Fabric cluster across Availability Zones
Availability Zones in Azure are a high-availability offering that protects your applications and data from datacenter failures. An Availability Zone is a unique physical location equipped with independent power, cooling, and networking within an Azure region.
To support clusters that span across Availability Zones, Azure Service Fabric provides the two configuration methods as described in the article below. Availability Zones are available only in select regions. For more information, see the Availability Zones overview.
Sample templates are available at Service Fabric cross-Availability Zone templates.
Topology for spanning a primary node type across Availability Zones
Note
The benefit of spanning the primary node type across availability zones is really only seen for three zones and not just two.
- The cluster reliability level set to
Platinum
- A single public IP resource using Standard SKU
- A single load balancer resource using Standard SKU
- A network security group (NSG) referenced by the subnet in which you deploy your virtual machine scale sets
Note
The virtual machine scale set single placement group property must be set to
true.
The following sample node list depicts FD/UD formats in a virtual machine scale set spanning zones:
Distribution of service replicas across zones
When a service is deployed on the node types that span Availability Zones, the replicas are placed to ensure that they land in separate zones. The fault domains on the nodes in each of these node types are configured with the zone information (that is, FD = fd:/zone1/1, etc.). For example, for five replicas or service instances, the distribution is 2-2-1, and the runtime will try to ensure equal distribution across zones.
User service replica configuration
Stateful user services deployed on the node types across Availability Zones should be configured like this: replica count with target = 9, min = 5. This configuration helps the service work even when one zone goes down because six replicas will be still up in the other two zones. An application upgrade in this scenario will also be successful.
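As an illustration, such a service could be created with PowerShell as sketched below; the application, service, and type names are placeholders, not values required by Service Fabric, and an existing cluster connection (Connect-ServiceFabricCluster) is assumed:

# Hypothetical names; replace with your own application, service, and service type.
New-ServiceFabricService -ApplicationName "fabric:/MyApp" `
    -ServiceName "fabric:/MyApp/MyStatefulService" `
    -ServiceTypeName "MyStatefulServiceType" `
    -Stateful -HasPersistedState `
    -PartitionSchemeSingleton `
    -TargetReplicaSetSize 9 `
    -MinReplicaSetSize 5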
Cluster ReliabilityLevel
This value defines the number of seed nodes in the cluster and the replica size of the system services. A cross-Availability Zone setup has a higher number of nodes, which are spread across zones to enable zone resiliency.
A higher
ReliabilityLevel value ensures that more seed nodes and system service replicas are present and evenly distributed across zones, so that if a zone fails, the cluster and the system services aren't affected.
ReliabilityLevel = Platinum (recommended) ensures that there are nine seed nodes spread across zones in the cluster, with three seeds in each zone.
Zone-down scenario
When a zone goes down, all of the nodes and service replicas for that zone appear as down. Because there are replicas in the other zones, the service continues to respond. Primary replicas fail over to the functioning zones. The services appear to be in warning states because the target replica count isn't yet achieved and the virtual machine (VM) count is still higher than the minimum target replica size.
The Service Fabric load balancer brings up replicas in the working zones to match the target replica count. At this point, the services appear healthy. When the zone that was down comes back up, the load balancer will spread all of the service replicas evenly across the zones.
Upcoming Optimizations
- To provide reliable infrastructure updates, Service Fabric requires the virtual machine scale set durability to be set at least to Silver. This enables the underlying virtual machine scale set and Service Fabric runtime to provide reliable updates. This also requires each zone to have at minimum of 5 VMs. We are working to bring this requirement down to 3 & 2 VMs per zone for primary & non-primary node types respectively.
- All the below mentioned configurations and upcoming work, provide in-place migration to the customers where the same cluster can be upgraded to use the new configuration by adding new nodeTypes and retiring the old ones.
Networking requirements
Public IP and load balancer resource
To enable the
zones property on a virtual machine scale set resource, the load balancer and the IP resource referenced by that virtual machine scale set must both use a Standard SKU. Creating a load balancer or IP resource without the SKU property creates a Basic SKU, which does not support Availability Zones. A Standard SKU load balancer blocks all traffic from the outside by default. To allow outside traffic, deploy an NSG to the subnet.
{ "apiVersion": "2018-11-01", "type": "Microsoft.Network/publicIPAddresses", "name": "[concat('LB','-', parameters('clusterName')]", "location": "[parameters('computeLocation')]", "sku": { "name": "Standard" } }
{ "apiVersion": "2018-11-01", "type": "Microsoft.Network/loadBalancers", "name": "[concat('LB','-', parameters('clusterName')]", "location": "[parameters('computeLocation')]", "dependsOn": [ "[concat('Microsoft.Network/networkSecurityGroups/', concat('nsg', parameters('subnet0Name')))]" ], "properties": { "addressSpace": { "addressPrefixes": [ "[parameters('addressPrefix')]" ] }, "subnets": [ { "name": "[parameters('subnet0Name')]", "properties": { "addressPrefix": "[parameters('subnet0Prefix')]", "networkSecurityGroup": { "id": "[resourceId('Microsoft.Network/networkSecurityGroups', concat('nsg', parameters('subnet0Name')))]" } } } ] }, "sku": { "name": "Standard" } }
Note
It isn't possible to do an in-place change of SKU on the public IP and load balancer resources. If you're migrating from existing resources that have a Basic SKU, see the migration section of this article.
NAT rules for virtual machine scale sets
The inbound network address translation (NAT) rules for the load balancer should match the NAT pools from the virtual machine scale set. Each virtual machine scale set must have a unique inbound NAT pool.
{ "inboundNatPools": [ { "name": "LoadBalancerBEAddressNatPool0", "properties": { "backendPort": "3389", "frontendIPConfiguration": { "id": "[variables('lbIPConfig0')]" }, "frontendPortRangeEnd": "50999", "frontendPortRangeStart": "50000", "protocol": "tcp" } }, { "name": "LoadBalancerBEAddressNatPool1", "properties": { "backendPort": "3389", "frontendIPConfiguration": { "id": "[variables('lbIPConfig0')]" }, "frontendPortRangeEnd": "51999", "frontendPortRangeStart": "51000", "protocol": "tcp" } }, { "name": "LoadBalancerBEAddressNatPool2", "properties": { "backendPort": "3389", "frontendIPConfiguration": { "id": "[variables('lbIPConfig0')]" }, "frontendPortRangeEnd": "52999", "frontendPortRangeStart": "52000", "protocol": "tcp" } } ] }
Outbound rules for a Standard SKU load balancer
The Standard SKU load balancer and public IP introduce new abilities and different behaviors to outbound connectivity when compared to using Basic SKUs. If you want outbound connectivity when you're working with Standard SKUs, you must explicitly define it with either a Standard SKU public IP addresses or a Standard SKU load balancer. For more information, see Outbound connections and What is Azure Load Balancer?.
Note
The standard template references an NSG that allows all outbound traffic by default. Inbound traffic is limited to the ports that are required for Service Fabric management operations. The NSG rules can be modified to meet your requirements.
Important
Each node type in a Service Fabric cluster that uses a Standard SKU load balancer requires a rule allowing outbound traffic on port 443. This is necessary to complete cluster setup. Any deployment without this rule will fail.
1. Enable multiple Availability Zones in single virtual machine scale set
This solution allows users to span three Availability Zones in the same node type. This is the recommended deployment topology as it enables you to deploy across availability zones while maintaining a single virtual machine scale set.
A full sample template is available on GitHub.
Configuring zones on a virtual machine scale set
To enable zones on a virtual machine scale set, include the following three values in the virtual machine scale set resource:
The first value is the
zones property, which specifies the Availability Zones that are in the virtual machine scale set.
The second value is the
singlePlacementGroup property, which must be set to
true. The scale set that's spanned across three Availability Zones can scale up to 300 VMs even with
singlePlacementGroup = true.
The third value is
zoneBalance, which ensures strict zone balancing. This value should be
true. This ensures that the VM distributions across zones are not unbalanced, which means that when one zone goes down, the other two zones have enough VMs to keep the cluster running.
A cluster with an unbalanced VM distribution might not survive a zone-down scenario because that zone might have the majority of the VMs. Unbalanced VM distribution across zones also leads to service placement issues and infrastructure updates getting stuck. Read more about zoneBalancing.
You don't need to configure the
FaultDomain and
UpgradeDomain overrides.
{ "apiVersion": "2018-10-01", "type": "Microsoft.Compute/virtualMachineScaleSets", "name": "[parameters('vmNodeType1Name')]", "location": "[parameters('computeLocation')]", "zones": [ "1", "2", "3" ], "properties": { "singlePlacementGroup": true, "zoneBalance": true } }
Note
- Service Fabric clusters should have at least one primary node type. The durability level of primary node types should be Silver or higher.
- An Availability Zone spanning virtual machine scale set should be configured with at least three Availability Zones, no matter the durability level.
- An Availability Zone spanning virtual machine scale set with Silver or higher durability should have at least 15 VMs.
- An Availability Zone spanning virtual machine scale set with Bronze durability should have at least six VMs.
Enable support for multiple zones in the Service Fabric node type
The Service Fabric node type must be enabled to support multiple Availability Zones.
The first value is
multipleAvailabilityZones, which should be set to
true for the node type.
The second value is
sfZonalUpgradeMode and is optional. This property can't be modified if a node type with multiple Availability Zones is already present in the cluster. This property controls the logical grouping of VMs in upgrade domains (UDs).
- If this value is set to
Parallel: VMs under the node type are grouped into UDs and ignore the zone info in five UDs. This setting causes UDs across all zones to be upgraded at the same time. This deployment mode is faster for upgrades, but we don't recommend it because it goes against the SDP guidelines, which state that the updates should be applied to one zone at a time.
- If this value is omitted or set to
Hierarchical: VMs are grouped to reflect the zonal distribution in up to 15 UDs. Each of the three zones has five UDs. This ensures that the zones are updated one at a time, moving to next zone only after completing five UDs within the first zone. This update process is safer for the cluster and the user application.
This property only defines the upgrade behavior for Service Fabric application and code upgrades. The underlying virtual machine scale set upgrades are still parallel in all Availability Zones. This property doesn't affect the UD distribution for node types that don't have multiple zones enabled.
The third value, vmssZonalUpgradeMode, is optional and can be updated at any time. This property defines the upgrade scheme for the virtual machine scale set to happen in parallel or sequentially across Availability Zones.
- If this value is set to
Parallel: All scale set updates happen in parallel in all zones. This deployment mode is faster for upgrades, but we don't recommend it because it goes against the SDP guidelines, which state that the updates should be applied to one zone at a time.
- If this value is omitted or set to
Hierarchical: This ensures that the zones are updated one at a time, moving to next zone only after completing five UDs within the first zone. This update process is safer for the cluster and the user application.
Important
The Service Fabric cluster resource API version should be 2020-12-01-preview or later.
The cluster code version should be at least 8.1.321 or later.
{ "apiVersion": "2020-12-01-preview", "type": "Microsoft.ServiceFabric/clusters", "name": "[parameters('clusterName')]", "location": "[parameters('clusterLocation')]", "dependsOn": [ "[concat('Microsoft.Storage/storageAccounts/', parameters('supportLogStorageAccountName'))]" ], "properties": { "reliabilityLevel": "Platinum", "sfZonalUpgradeMode": "Hierarchical", "vmssZonalUpgradeMode": "Parallel", "nodeTypes": [ { "name": "[parameters('vmNodeType0Name')]", "multipleAvailabilityZones": true } ] } }
Note
- Public IP and load balancer resources should use the Standard SKU described earlier in the article.
- The
multipleAvailabilityZones property on the node type can only be defined when the node type is created and can't be modified later. Existing node types can't be configured with this property.
- When
sfZonalUpgradeMode is omitted or set to
Hierarchical, the cluster and application deployments will be slower because there are more upgrade domains in the cluster. It's important to correctly adjust the upgrade policy timeouts to account for the upgrade time required for 15 upgrade domains. The upgrade policy for both the app and the cluster should be updated to ensure that the deployment doesn't exceed the Azure Resource Service deployment time limit of 12 hours. This means that deployment shouldn't take more than 12 hours for 15 UDs (that is, shouldn't take more than 40 minutes for each UD).
- Set the cluster reliability level to
Platinum to ensure that the cluster survives the one zone-down scenario.
Tip
We recommend setting
sfZonalUpgradeMode to
Hierarchical or omitting it. Deployment will follow the zonal distribution of VMs and affect a smaller amount of replicas or instances, making them safer.
Use
sfZonalUpgradeMode set to
Parallel if deployment speed is a priority or only stateless workloads run on the node type with multiple Availability Zones. This causes the UD walk to happen in parallel in all Availability Zones.
Migrate to the node type with multiple Availability Zones
For all migration scenarios, you need to add a new node type that supports multiple Availability Zones. An existing node type can't be migrated to support multiple zones. The Scale up a Service Fabric cluster primary node type article includes detailed steps to add a new node type and the other resources required for the new node type, such as IP and load balancer resources. That article also describes how to retire the existing node type after a new node type with multiple Availability Zones is added to the cluster.
Migration from a node type that uses basic load balancer and IP resources: This process is already described in a sub-section below for the solution with one node type per Availability Zone.
For the new node type, the only difference is that there's only one virtual machine scale set and one node type for all Availability Zones instead of one each per Availability Zone.
Migration from a node type that uses the Standard SKU load balancer and IP resources with an NSG: Follow the same procedure described previously. However, there's no need to add new load balancer, IP, and NSG resources. The same resources can be reused in the new node type.
2. Deploy zones by pinning one virtual machine scale set to each zone
This is the generally available configuration right now. To span a Service Fabric cluster across Availability Zones, you must create a primary node type in each Availability Zone supported by the region. This distributes seed nodes evenly across each of the primary node types.
The recommended topology for the primary node type requires this:
- Three node types marked as primary
- Each node type should be mapped to its own virtual machine scale set located in a different zone.
- Each virtual machine scale set should have at least five nodes (Silver Durability).
The following diagram shows the Azure Service Fabric Availability Zone architecture:
Enable zones on a virtual machine scale set
To enable a zone on a virtual machine scale set, include the following three values in the virtual machine scale set resource:
- The first value is the
zones property, which specifies which Availability Zone the virtual machine scale set is deployed to.
- The second value is the
singlePlacementGroup property, which must be set to
true.
- The third value is the
faultDomainOverride property in the Service Fabric virtual machine scale set extension. This property should include only the zone in which this virtual machine scale set will be placed. Example:
"faultDomainOverride": "az1". All virtual machine scale set resources must be placed in the same region because Azure Service Fabric clusters don't have cross-region support.
{ "apiVersion": "2018-10-01", "type": "Microsoft.Compute/virtualMachineScaleSets", "name": "[parameters('vmNodeType1Name')]", "location": "[parameters('computeLocation')]", "zones": [ "1" ], "properties": { "singlePlacementGroup": true }, "virtualMachineProfile": { "extensionProfile": { "extensions": [ { "name": "[concat(parameters('vmNodeType1Name'),'_ServiceFabricNode')]", "properties": { "type": "ServiceFabricNode", "autoUpgradeMinorVersion": false, "publisher": "Microsoft.Azure.ServiceFabric", "settings": { "clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]", "nodeTypeRef": "[parameters('vmNodeType1Name')]", "dataPath": "D:\\\\SvcFab", "durabilityLevel": "Silver", "certificate": { "thumbprint": "[parameters('certificateThumbprint')]", "x509StoreName": "[parameters('certificateStoreValue')]" }, "systemLogUploadSettings": { "Enabled": true }, "faultDomainOverride": "az1" }, "typeHandlerVersion": "1.0" } } ] } } }
Enable multiple primary node types in the Service Fabric cluster resource
To set one or more node types as primary in a cluster resource, set the
isPrimary property to
true. When you deploy a Service Fabric cluster across Availability Zones, you should have three node types in distinct zones.
{ "reliabilityLevel": "Platinum", "nodeTypes": [ { "name": "[parameters('vmNodeType0Name')]", "applicationPorts": { "endPort": "[parameters('nt0applicationEndPort')]", "startPort": "[parameters('nt0applicationStartPort')]" }, "clientConnectionEndpointPort": "[parameters('nt0fabricTcpGatewayPort')]", "durabilityLevel": "Silver", "ephemeralPorts": { "endPort": "[parameters('nt0ephemeralEndPort')]", "startPort": "[parameters('nt0ephemeralStartPort')]" }, "httpGatewayEndpointPort": "[parameters('nt0fabricHttpGatewayPort')]", "isPrimary": true, "vmInstanceCount": "[parameters('nt0InstanceCount')]" }, { "name": "[parameters('vmNodeType1Name')]", "applicationPorts": { "endPort": "[parameters('nt1applicationEndPort')]", "startPort": "[parameters('nt1applicationStartPort')]" }, "clientConnectionEndpointPort": "[parameters('nt1fabricTcpGatewayPort')]", "durabilityLevel": "Silver", "ephemeralPorts": { "endPort": "[parameters('nt1ephemeralEndPort')]", "startPort": "[parameters('nt1ephemeralStartPort')]" }, "httpGatewayEndpointPort": "[parameters('nt1fabricHttpGatewayPort')]", "isPrimary": true, "vmInstanceCount": "[parameters('nt1InstanceCount')]" }, { "name": "[parameters('vmNodeType2Name')]", "applicationPorts": { "endPort": "[parameters('nt2applicationEndPort')]", "startPort": "[parameters('nt2applicationStartPort')]" }, "clientConnectionEndpointPort": "[parameters('nt2fabricTcpGatewayPort')]", "durabilityLevel": "Silver", "ephemeralPorts": { "endPort": "[parameters('nt2ephemeralEndPort')]", "startPort": "[parameters('nt2ephemeralStartPort')]" }, "httpGatewayEndpointPort": "[parameters('nt2fabricHttpGatewayPort')]", "isPrimary": true, "vmInstanceCount": "[parameters('nt2InstanceCount')]" } ] }
Migrate to Availability Zones from a cluster by using a Basic SKU load balancer and a Basic SKU IP
To migrate a cluster that's using a load balancer and IP with a basic SKU, you must first create an entirely new load balancer and IP resource using the standard SKU. It isn't possible to update these resources.
Reference the new load balancer and IP in the new cross-Availability Zone node types that you want to use. In the previous example, three new virtual machine scale set resources were added in zones 1, 2, and 3. These virtual machine scale sets reference the newly created load balancer and IP and are marked as primary node types in the Service Fabric cluster resource.
To begin, add the new resources to your existing Azure Resource Manager template. These resources include:
- A public IP resource using Standard SKU
- A load balancer resource using Standard SKU
- An NSG referenced by the subnet in which you deploy your virtual machine scale sets
- Three node types marked as primary
- Each node type should be mapped to its own virtual machine scale set located in a different zone.
- Each virtual machine scale set should have at least five nodes (Silver Durability).
An example of these resources can be found in the sample template.
New-AzureRmResourceGroupDeployment ` -ResourceGroupName $ResourceGroupName ` -TemplateFile $Template ` -TemplateParameterFile $Parameters
When the resources finish deploying, you can disable the nodes in the primary node type from the original cluster. When the nodes are disabled, the system services migrate to the new primary node type that you deployed previously.
Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName ` -KeepAliveIntervalInSec 10 ` -X509Credential ` -ServerCertThumbprint $thumb ` -FindType FindByThumbprint ` -FindValue $thumb ` -StoreLocation CurrentUser ` -StoreName My Write-Host "Connected to cluster" $nodeNames = @("_nt0_0", "_nt0_1", "_nt0_2", "_nt0_3", "_nt0_4") Write-Host "Disabling nodes..." foreach($name in $nodeNames) { Disable-ServiceFabricNode -NodeName $name -Intent RemoveNode -Force }
After the nodes are all disabled, the system services will run on the primary node type, which is spread across zones. You can then remove the disabled nodes from the cluster. After the nodes are removed, you can remove the original IP, load balancer, and virtual machine scale set resources.
foreach($name in $nodeNames){ # Remove the node from the cluster Remove-ServiceFabricNodeState -NodeName $name -TimeoutSec 300 -Force Write-Host "Removed node state for node $name" } $scaleSetName="nt0" Remove-AzureRmVmss -ResourceGroupName $groupname -VMScaleSetName $scaleSetName -Force $lbname="LB-cluster-nt0" $oldPublicIpName="LBIP-cluster-0" $newPublicIpName="LBIP-cluster-1" Remove-AzureRmLoadBalancer -Name $lbname -ResourceGroupName $groupname -Force Remove-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname -Force
Next, remove the references to these resources from the Resource Manager template that you deployed.
Finally, update the DNS name and public IP.
$oldprimaryPublicIP = Get-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname $primaryDNSName = $oldprimaryPublicIP.DnsSettings.DomainNameLabel $primaryDNSFqdn = $oldprimaryPublicIP.DnsSettings.Fqdn Remove-AzureRmLoadBalancer -Name $lbname -ResourceGroupName $groupname -Force Remove-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname -Force $PublicIP = Get-AzureRmPublicIpAddress -Name $newPublicIpName -ResourceGroupName $groupname $PublicIP.DnsSettings.DomainNameLabel = $primaryDNSName $PublicIP.DnsSettings.Fqdn = $primaryDNSFqdn Set-AzureRmPublicIpAddress -PublicIpAddress $PublicIP
Remove-NetNat
Removes NAT objects.
Syntax
Parameter Set: Query (cdxml)
Remove-NetNat [[-Name] <String[]> ] [-AsJob] [-CimSession <CimSession[]> ] [-PassThru] [-ThrottleLimit <Int32> ] [-Confirm] [-WhatIf] [ <CommonParameters>]
Parameter Set: InputObject (cdxml)
Remove-NetNat -InputObject <CimInstance[]> [-AsJob] [-CimSession <CimSession[]> ] [-PassThru] [-ThrottleLimit <Int32> ] [-Confirm] [-WhatIf] [ <CommonParameters>]
Detailed Description
The Remove-NetNat cmdlet removes Network Address Translation (NAT) objects. NAT modifies IP address and port information in packet headers. When you remove a NAT object, NAT no longer does that translation. NAT drops any existing connections created through that NAT object.
Specify a value for the Name parameter to remove specific NAT objects. To remove all the NAT objects from a computer, do not include the Name parameter. You can use the Get-NetNat cmdlet to specify NAT objects to delete.
Parameters
-InputObject <CimInstance[]>
Specifies an array of NAT objects. To obtain a NAT object, use the Get-NetNat cmdlet.
-Name <String[]>
Specifies an array of names of NAT objects.
Examples
Example 1: Remove all NAT objects on a computer
This command removes all the NAT objects on the current computer.
PS C:\> Remove-NetNat
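A second, hypothetical usage removes a single NAT object by name; the name shown is an example, not a predefined object:

PS C:\> Remove-NetNat -Name "ContainerNat"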
Command Line Operations
This section includes command line operations for Nessus and Nessus Agents.
Tip: During command line operations, prompts for sensitive information, such as a password, do not show characters as you type. However, the command line records the data and accepts it when you press the Enter key.
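For example, changing a Nessus user's password from the command line prompts for the new password without echoing it; the path assumes a default Linux installation and the user name is a placeholder:

# Prompts for the new password; typed characters are not displayed.
/opt/nessus/sbin/nessuscli chpasswd admin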
This section includes the following topics:
Launch a Scan
In addition to configuring Schedule settings for a scan, you can manually start a scan run.
To launch a scan:
In the top navigation bar, click Scans.
The My Scans page appears.
In the scans table, in the row of the scan you want to launch, click the launch button.
Nessus launches the scan.
What to do next:
If you need to stop a scan manually, see Stop a Running Scan.
Edit a Vulnerability Management Scan Configuration
Required Tenable.io Vulnerability Management User Role: Scan Operator, Standard, Scan Manager, or Administrator
Required Scan Permissions: Can Configure
To edit a scan configuration:
In the upper-left corner, click the
button.
The left navigation plane appears.
In the left navigation plane, in the Vulnerability Management section, click Scans.
The Scans page appears.
In the Folders section, click a folder to load the scans you want to view.
The scans table updates to display the scans in the folder you selected.
(Optional) Search for the scan you want to edit. For more information, see Tenable.io Tables.
In the scans table, click the scan you want to edit.
The scan details appear.
Click the edit button next to the scan name.
The Edit a Scan page appears.
Change the scan configuration:. | https://docs.tenable.com/tenableio/Content/Scans/EditAScanConfiguration.htm | 2022-08-08T01:28:46 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.tenable.com |
About EAA logs
This guide provides an overview of the different EAA data feeds generated by Enterprise Application Access (EAA) and how to consume them either using API, or with your SIEM. It also describes the contents from each log field and explain their meanings in a dictionary of data available in the logs.
The Enterprise Application Access application has a full suite of APIs. You need to build scripts to interact with the service. You can either write your own code to interact with the service or use pre-existing tools such as our Log Streamer, CLI or Splunk application.
Our EAA Splunk application is compatible up to Splunk 7.x Enterprise. EAA Splunk application users are encouraged to migrate to Log Streamer to benefit from additional data feeds, compatibility with Splunk Free Edition.
For information on how to download and install the Splunk app, see Splunk installation manual.
In case of any issues contact support.
Updated 6 months ago | https://techdocs.akamai.com/eaa/docs/logs-overview | 2022-08-08T00:26:20 | CC-MAIN-2022-33 | 1659882570741.21 | [] | techdocs.akamai.com |
OS Deployment Control Grid
Overview
The OS Deployment feature allows Administrators to deploy and capture Images of the Windows Operating System. Below is a screenshot of the OS Deployment View. (For more information, see
OS Deployment View
).
OS Deployment View
To use the
OS Deployment
feature in Faronics Deploy, it is
mandatory
to set up an
Imaging Server
.
An
Imaging
Server
can be installed on
Windows
8
,
10
,
Windows Server 2012 R2
,
2016
computer.
For
WDS-Based
deployment, a
Server OS
is
required.
For
PXE booting
, it is
mandatory
to install the
Imaging Serve
r on a
Windows Server OS
.
The
Imaging Server must be installed
within the
same network
as the
managed
computers
.
Images
can then be imported into the Imaging Server (
ISO
files
) or captured from a managed computer and stored.
If there is
no network connectivity
between the managed computers and the Imaging Server, the process of
Applying an Image
or
Capturing an Image
will
fail.
If you have
multiple
locations
of managed networks, you will need to set up an
Imaging Server
for
each location
.
Once the
Image
has been imported into the
Imaging Server
, its metadata/summary is uploaded to the Deploy cloud console under the OS deployment page.
A
Deployment Package
can now be created with the desired Install Settings to remotely Image a computer or capture its Image.
Imaging Wizard
For first time users, clicking on the OS Deployment page will start the Setup Imaging Wizard. The
Imaging Wizard
includes the following options/steps:
Step 1: Install Imaging Server
Step 2: Add Images and Drivers
Step 3: Create Install Settings
Step 4: Create Deployment Package
Unless Steps 1 and 2 have been completed, you will not be able to proceed further with the wizard.
If at least one Imaging Server has been set up and an Image has been added, the wizard will no longer be shown as the default; instead, you will be shown the OS Deployment page.
Users can initiate the Setup Imaging wizard again by clicking on the
Add Imaging Server
option.
OS Deployment View Grids
Computers grid
Imaging Servers grid
Deployment Packages grid
Install Settings grid
Images grid
Drivers grid.
Computers Grid
Computers Grid
The Computers grid under OS Deployment displays the list of all computers where the Faronics Deploy agent has been installed, allowing administrators to Image the computer(s) or Capture an Image of the managed computer.
Image Computer: Clicking on the
Image Computer
button will allow you to apply an Image to one or multiple managed computers.
Capture Image: Clicking on the
Capture Image
button will allow you to capture an Image of the selected computer.
Along with the name of the computer, the grid additionally displays the Policy associated with the computer, the Group, Tags, OS Type, Last Deployed Package, Last Deployed On, and the Live Status of Image Deployment on that computer.
Imaging Servers Grid
The
Imaging Server
grid displays the list of all Imaging Servers that have been set up under this particular user account and site along with the Server Name where the Imaging Server has been set up, the IP addresses of the Server, the total Images available on the Imaging Server, the Last Reported timestamp and the Status if it is Online or Offline.
Deployment Package Grid
Deployment Packages Grid
Deployment Packages packages are custom packages created for OS deployment purposes. It allows you to combine different Images, Install Settings, and Drivers for different deployment scenarios.
For example, a Deployment Package for Dell computers can include Image “A” from Imaging Server 1, and Install Settings “ABC”, and Driver Group “XYZ” specific for Dell computers. Similarly, a Deployment Package for HP computers would include the same Image “A” but different Install Settings and a different Driver Group.
Deployment Package can be created based on Hardware, Departments, or any other categorization to simplify the deployment process.
To create a new Deployment Package, click on the (+) icon on top of the Deployment Packages grid. There is no limit on the number of Deployment Packages that can be created.
Install Settings Grid
Install Settings Grid
Install
Settings
can be used to modify Windows settings in your Images during the imaging process. Install Settings allow you to customize General Settings, Regional Setting, Out-of-Box Experience Settings, Disk Settings, and Local User Account Settings, which are then applied during the imaging process.
Install Settings can be created based on Hardware, Departments, or any other categorization to simplify the deployment process.
To create a new Install Settings, click on the (+) icon on top of the Install Settings grid. There is no limit on the number of Install Settings that can be created.
Images Grid
Images Grid
The
Images
grid displays the list of all available Images which have been imported via ISO files or captured using the Capture Image feature and stored across all Imaging Servers.
Images can only be imported locally on the Imaging Server using the Faronics Deploy Imaging Server utility. The number of Images that can be imported or captured depends on the available disk space on the Imaging Server.
The grid further displays the Image Name, the Description of the Image, the Operating System of the Image, the Architecture - whether it is 32bit or 64bit, the type of Image, the Size of the Image, the Creation Date, and the Imaging Server where it is residing.
Driver Group Grid
Driver Group Grid
The
Driver Group
grid displays the list of all Driver Groups that have been created using the different Drivers imported and stored across all Imaging Servers.
Drivers can only be imported locally on the Imaging Server using the Faronics Deploy Imaging Server utility. The number of Drivers that can be imported depends on the available disk space on the Imaging Server.
The grid also displays the Driver Group's Name, the Number of Drivers, the Imaging Server where the Drivers are stored, and the Last Updated Date.
To create a new Driver Group, click on the (+) icon on top of the Driver Group grid. There is no limit on the number of Driver Groups that can be created.
Patch Scan (Using a Policy)
Next - OS DEPLOYMENT
Imaging Utility Requirements
Last modified
1yr ago
Copy link
Outline
Overview
Imaging Wizard
OS Deployment View Grids
Computers Grid
Imaging Servers Grid
Deployment Package Grid
Install Settings Grid
Images Grid
Driver Group Grid | https://docs.faronics.com/faronicsdeploy/os-deployment/navigating-the-os-deployment-control-grid | 2022-08-08T01:16:27 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.faronics.com |
Hi, I was wondering if anyone can describe the steps for adding an ATL window class as a control to a dialog resource. I found an article discussing it here, but that involves MFC, so I was hoping to get some clarification on how to do it with ATL.
So far I have (a) a dialog class derived from
CDialogImpl, which includes a dialog resource, and (b) another class derived from
CWindowImpl, which I'd like to add as the custom control in the dialog.
I've added
DECLARE_WND_CLASS to the latter class, so that it has an identifiable class name. From what I understand, this class name should be used for the custom control in the dialog resource editor.
But I'm not sure how to go about doing the rest. Do I still call
Create on the control class? And if so, where? It seems that no matter what I try, ATL throws an assertion error about some HWND not existing.
Thank you very much for any assistance. | https://docs.microsoft.com/en-us/answers/questions/61198/how-to-create-a-custom-control-in-a-dialog-with-at.html | 2022-08-08T02:29:04 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.microsoft.com |
Is BeaverDash wpmu / network / multisite ready?
We offer a separate version of BeaverDash that includes network-wide activation and settings while removing the sub-site settings panel. The standard BeaverDash Pro, Agency and Team Tangible Membership versions of BeaverDash will not be suitable for a network environment as each sub-site would need to be managed and activated separately.
You can purchase a single-network license here
You can purchase a multi-network license here | https://docs.tangibleplugins.com/article/5-beaverdash-network | 2022-08-08T01:55:55 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.tangibleplugins.com |
When installing EmpowerID, you have two options, you can install EmpowerID using the MSI or you can use the EmpowerID Configurator to create a batch file that is configured with the installation settings for your environment. Then when ready, you can execute the batch file and silently install EmpowerID on any network-reachable dedicated EmpowerID Web server.
From SQL Server, restore the EmpowerID Database. The user account must have the right to restore a SQL database.
This opens the EmpowerID Configurator Utility. You use this utility to add the EmpowerID configuration settings that will be used in the installation batch file.
You should see two files in the folder, eid.install, which is an encrypted XML file with the configuration settings you specified and install, which is the installation batch file.
When the installation completes, EmpowerID generates a text file named output.log with the results of the install (shown below with the results of the installation highlighted), starts the EmpowerID Web Role Windows service and begins the process of GAC'ing the EmpowerID assemblies. | https://docs.empowerid.com/2017/admin/installationandconfiguration/installingempoweridsilently | 2021-02-25T05:19:02 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.empowerid.com |
How to use
You can access “Flash Actions” from 3 places in PrestaBay module.
- “Selling List” page — on this level “Flash Action” performed among all selected selling lists.
- “Specific Selling List (Edit)” — “Flash Action” on this level is always performed with all products from this Selling List that matches required conditions. For example, if you try to “Send to eBay All ‘Not Active’ Items” then the module will get all products that currently “In Stock” and not listed yet on ebay.
- “Dashboard” — this allows you to perform actions with ALL listings in your installation. For performing actions it’s also got only products that match criteria
Use Flash Actions in Selling List overview page
- Mark all Selling List which products you want to use in Flash Action
- Select one of available Flash Action
- Wait before operation finished
Use Flash Actions in Selling List edit mode
- Select one of available Flash Action
- Wait before operation finished
Use Flash Actions in Flash Dashboard
- Select one of available Flash Action
- Wait before operation finished
Type of Flash Actions
The module allows you to perform the following types of flash actions: Send Products to ebay, Full Revise ebay Listings, QTY/Price Revise ebay listings, Relist finished listings, Stop ebay listings. | https://docs.salest.io/article/76-how-to-use | 2021-02-25T05:36:10 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.salest.io |
So, why might you want to use an clunky, amateurish distribution like Shedbuilt on your device? Here are a few ideas:
There's no better way to learn how operating systems work than slapping one together yourself. Shedbuilt gives you the option of assembling the entire system yourself, aided by simple tools that eliminate much of the tedium. And, because it runs on $10 single-board computers, Shedbuilt makes tinkering with components at each level of the stack cheap and risk-free. It's the perfect way to introduce an inquisitive teen to system design, administration and programming.
With Shedbuilt you can configure a minimal system for your IoT project, then compile a clean system image for distribution with a single command. Push your package repository to GitHub, and deploying updates becomes as simple as committing to a branch. The same tools used to tinker in the development phase can be repurposed to provide security and transparency when you ship to end-users.
Are you curmudgeonly? Do you feel that the increasing speed and complexity of personal computers have done little to improve their utility? Do you yearn for the days of your youth when you poured countless hours into your Gentoo installation, shunning friends and family? Well, with Shedbuilt, you can relive the good old days on hardware that's more capable than what you had in 1999 but costs no more than a Chipotle burrito with guac. | https://docs.shedbuilt.net/intro/use-cases | 2021-02-25T05:38:44 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.shedbuilt.net |
Limitations of the S3A Committers
There are limitations of the S3A committers associated with custom file output formats, MapReduce API output format, and non-availability of Hive support.
Custom file output formats and their committers
Output formats which implement their own committers do not automatically switch to the new committers. If such a custom committer relies on renaming files to commit output, then it will depend on S3Guard for a consistent view of the object store, and take time to commit output proportional to the amount of data and the number of files.
To determine if this is the case, find the subclass of
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat which implements
the custom format, to see if it subclasses the
getOutputCommitter() to
return its own committer, or has a custom subclass
of
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.
It may be possible to migrate such a committer to support store-specific committers, as was
done for Apache Parquet support in Spark. Here a subclass of Parquet's
ParquetOutputCommitter was implemented to delegates all operations to the
real committer.
MapReduce V1 API Output Format and Committers
Only the MapReduce V2 APIs under
org.apache.hadoop.mapreduce support the
new committer binding mechanism. The V1 APIs under the
org.apache.hadoop.mapred package only bind to the file committer and
subclasses. The v1 APIs date from Hadoop 1.0 and should be considered obsolete. Please
migrate to the v2 APIs, not just for the new committers, but because the V2 APIs are still
being actively developed and maintained.
No Hive Support
There is currently no Hive support for the S3A committers. To safely use S3 as a destination of Hive work, you must use S3Guard. | https://docs.cloudera.com/runtime/7.2.7/cloud-data-access/topics/cr-cda-limitations-of-the-s3a-committers.html | 2021-02-25T05:23:01 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.cloudera.com |
$ ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> (1)
install-config.yamlfile for AWS
In OKD version 4, availability 4,.
See About remote health monitoring for more information about the Telemetry service..
For installations of a private OKD cluster that are only accessible from an internal network and are not visible to the Internet, you must manually generate your installation configuration file.
Obtain the OKD: pullSecret: '{"auths": ...}' (1) fips: false (10) sshKey: ssh-ed25519 AAAA... (11) publish: Internal (12) into.
Validating an installation.
If necessary, you can opt out of remote health reporting. | https://docs.okd.io/latest/installing/installing_aws/installing-aws-private.html | 2021-02-25T05:37:56 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.okd.io |
You are viewing an old version of this page. View the current version.
Compare with Current
View Page History
Version 1
Current »
The Router Mediator is deprecated. You can use the Filter Mediator or Conditional Router Mediator instead.
Powered by a free Atlassian Confluence Community License granted to WSO2, Inc.. Evaluate Confluence today. | https://docs.wso2.com/pages/viewpage.action?pageId=36021890 | 2021-02-25T04:43:13 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.wso2.com |
The goal of this feature is pretty straight-forward. When a user connects their node to the site, we want them to be able to see their alias and channel balance, so that it is clear to them how much funds they have available to pay for upvotes.
To implement this feature, we first updated the backend to make this data available from our API. Then we updated the frontend to fetch this info from the backend.
Let’s go to the
feat-2 branch to see what’s changed.
git checkout feat-2
To update our backend, we’ll need to add a new route to handle requests to fetch the node’s info.
source: /backend/routes.ts
/*** GET /api/info*/export const getInfo = async (req: Request, res: Response) => {const { token } = req.body;if (!token) throw new Error('Your node is not connected!');// find the node that's making the requestconst node = db.getNodeByToken(token);if (!node) throw new Error('Node not found with this token');// get the node's pubkey and aliasconst rpc = nodeManager.getRpc(node.token);const { alias, identityPubkey: pubkey } = await rpc.getInfo();const { balance } = await rpc.channelBalance();res.send({ alias, balance, pubkey });};
In the
routes.ts file, we added a new route handler function
getInfo() which receives the user’s token, and first validates that it is valid. Then it uses the
NodeManager class to get the RPC connection to the
lnd node. With the rpc, we can not make two calls to lnd to fetch the alias and pubkey from getInfo() and the balance from channelBalance(). Finally, we return this data to the client.
source: /backend/index.ts
app.get('/api/info', catchAsyncErrors(routes.getInfo));
We updated the backend entrypoint file to instruct express to use the
getInfo() route handler for
GET requests to
/api/info.
With the backend updated, we can now fetch the alias and balance from the backend to display in the UI. We’ll start with the api wrapper then work our way up to the UI.
source: /src/lib/api.ts
export const getInfo = async () => {return await httpGet('info');};
In the API wrapper module, we just added a function to make the http request and return the result.
source: /src/store/store.ts
init = async () => {// try to fetch the node's info on startuptry {await this.fetchInfo();this.connected = true;} catch (err) {// don't display an error, just disconnectthis.connected = false;}// fetch the posts from the backendtry {this.posts = await api.fetchPosts();} catch (err) {this.error = err.message;}// connect to the backend WebSocket and listen for eventsconst ws = api.getEventsSocket();ws.addEventListener('message', this.onSocketMessage);};...fetchInfo = async () => {const info = await api.getInfo();this.alias = info.alias;this.balance = parseInt(info.balance);this.pubkey = info.pubkey;};
In the mobx store, we added the
fetchInfo() function that will retrieve the
alias,
pubkey and
balance, then update the app state with the returned values. We also updated the
init() function to fetch the node info when the page first loads. If it succeeds, then we set the
connected flag to true.
source: /src/App.tsx
<Navbar.Collapse<Nav className="ml-auto">{!store.connected ? (<Nav.Item><NavLink onClick={store.gotoConnect}>Connect to LND</NavLink></Nav.Item>) : (<><Navbar.Text><Badge variant="info" pill{store.balance.toLocaleString()} sats</Badge></Navbar.Text><Dropdown id="basic-nav-dropdown" alignRight><Dropdown.Toggle as={NavLink}>{store.alias}</Dropdown.Toggle><Dropdown.Menu><Dropdown.Item onClick={store.disconnect}>Disconnect</Dropdown.Item></Dropdown.Menu></Dropdown></>)}</Nav></Navbar.Collapse>
Finally, we updated the App component to display the node’s alias and balance in the Navbar when it is connected. We also nested the Disconnect link under a dropdown menu.
If you go back to your browser, refresh the page and connect your node, you should now see the alias and balance at the top right of the screen.
This was a simple feature to implement, which was mostly plumbing code to get the data from
lnd, through the backend, to the frontend, and finally displayed in the browser. The important takeaway here is that communicating with the Lightning Network is not very different from communicating with any other third-party APIs.
In the next feature, we’ll work on updating the Post creation process to add some Lightning functionality. | https://docs.lightning.engineering/build-a-lapp/add-features/display-node-alias-and-balance | 2021-02-25T05:41:05 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.lightning.engineering |
Requesting a New Data Exchange¶
To request a new Data Exchange for your Snowflake account, contact Snowflake Support and provide the following information:
Description: A description of the business case to use the Snowflake Data Exchange.
Name for the New Data Exchange: The name is a unique identifier for the Data Exchange that is used in SQL. The name cannot include spaces, or special characters, and must be unique if you have an existing Data Exchange for a Snowflake account in another region.
Data Exchange Display Name: The display name is displayed in the User Interface to the Data Exchange members.
Account URL: The URL of an account for which the Data Exchange is created.
Organization ID: ID of your Snowflake organization (if you are using the Organizations feature in Snowflake).
Attention
Enabling a Data Exchange for your account may take up to 2 business days. When you request a Data Exchange to be enabled, please be sure to provide Snowflake account URL. Providing incorrect or incomplete information may delay the process. | https://docs.snowflake.com/en/user-guide/data-exchange-requesting.html | 2021-02-25T05:36:50 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.snowflake.com |
Hello, world¶
This guide will take you step by step through creating a new Vapor project, building it, and running the server.
If you have not yet installed Swift or Vapor Toolbox, check out the install section.
New Project¶
The first step is to create a new Vapor project on your computer. Open up your terminal and use Toolbox's new project command. This will create a new folder in the current directory containing the project.
vapor new hello -n
Tip
The
-n flag gives you a bare bones template by automatically answering no to all questions.
Once the command finishes, change into the newly created folder and open Xcode.
cd hello open Package.swift
Xcode Dependencies¶
You should now have Xcode open. It will automatically begin downloading Swift Package Manager dependencies.
At the top of the window, to the right of the Play and Stop buttons, click on your project name to select the project's Scheme, and select an appropriate run target—most likely, "My Mac".
Build & Run¶
Once the Swift Package Manager dependencies have finished downloading, click the play button to build and run your project.
You should see the Console pop up at the bottom of the Xcode window.
[ INFO ] Server starting on
Visit Localhost¶
Open your web browser, and visit localhost:8080/hello or
You should see the following page.
Hello, world!
Congratulations on creating, building, and running your first Vapor app! 🎉 | https://docs.vapor.codes/4.0/hello-world/ | 2021-02-25T05:18:59 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.vapor.codes |
Enable Self-Service WorkSpace Management Capabilities for Your Users
In Amazon WorkSpaces, you can enable self-service WorkSpace management capabilities for your users to provide them with more control over their experience. It can also reduce your IT support staff workload for Amazon WorkSpaces. When you enable self-service capabilities, you can allow users to perform one or more of the following tasks directly from their Windows, macOS, or Linux client for Amazon WorkSpaces:
Cache their credentials on their client. This lets them reconnect to their WorkSpace without re-entering their credentials.
Restart (reboot) their WorkSpace.
Increase the size of the root and user volumes on their WorkSpace.
Change the compute type (bundle) for their WorkSpace.
Switch the running mode of their WorkSpace.
Rebuild their WorkSpace.
To enable one or more of these capabilities for your users, perform the following steps.
To enable self-service management capabilities for your users
Open the Amazon WorkSpaces console at
.
In the navigation pane, choose Directories.
Select your directory, and choose Actions, Update Details.
Expand User Self-Service Permissions. Enable or disable the following options as required to determine the WorkSpace management tasks that users can perform from their client:
Remember me — Users can choose whether to cache their credentials on their client by selecting the Remember Me or Keep me logged in check box on the login screen. The credentials are cached in RAM only. When users choose to cache their credentials, they can reconnect to their WorkSpaces without re-entering their credentials. To control how long users can cache their credentials, see Set the Maximum Lifetime for a Kerberos Ticket.
Restart WorkSpace from client — Users can restart (reboot) their WorkSpace. Restarting disconnects the user from their WorkSpace, shuts it down, and reboots it. The user data, operating system, and system settings are not affected.
Increase volume size — Users can expand the root and user volumes on their WorkSpace to a specified size without contacting IT support. Users can increase the size of the root volume (for Windows, the C: drive; for Linux, /) up to 175 GB, and the size of the user volume (for Windows, the D: drive; for Linux, /home) up to 100 GB. WorkSpace root and user volumes come in set groups that can't be changed. The available groups are [Root(GB), User(GB)]: [80, 10], [80, 50], [80, 100], [175 to 2000, 100 to 2000]. For more information, see Modify a WorkSpace.
For a newly created WorkSpace, users must wait 6 hours before they can increase the size of these drives. After that, they can do so only once in a 6-hour period. While a volume size increase is in progress, users can perform most tasks on their WorkSpace. The tasks that they can't perform are: changing their WorkSpace compute type, switching their WorkSpace running mode, restarting their WorkSpace, or rebuilding their WorkSpace. When the process is finished, the WorkSpace must be rebooted for the changes to take effect. This process might take up to an hour.
Note
If users increase the volume size on their WorkSpace, this will increase the billing rate for their WorkSpace.
Change compute type — Users can switch their WorkSpace between compute types (bundles). For a newly created WorkSpace, users must wait 6 hours before they can switch to a different bundle. After that, they can switch to a larger bundle only once in a 6-hour period, or to a smaller bundle once in a 30-day period. When a WorkSpace compute type change is in progress, users are disconnected from their WorkSpace, and they can't use or change the WorkSpace. The WorkSpace is automatically rebooted during the compute type change process. This process might take up to an hour.
Note
If users change their WorkSpace compute type, this will change the billing rate for their WorkSpace.
Switch running mode — Users can switch their WorkSpace between the AlwaysOn and AutoStop running modes. For more information, see Manage the WorkSpace Running Mode.
Note
If users switch the running mode of their WorkSpace, this will change the billing rate for their WorkSpace.
Rebuild WorkSpace from client — Users can rebuild the operating system of a WorkSpace to its original state. When a WorkSpace is rebuilt, the user volume (D: drive) is recreated from the latest backup. Because backups are completed every 12 hours, users' data might be up to 12 hours old. For a newly created WorkSpace, users must wait 12 hours before they can rebuild their WorkSpace. When a WorkSpace rebuild is in progress, users are disconnected from their WorkSpace, and they can't use or make changes to their WorkSpace. This process might take up to an hour.
Choose Update or Update and Exit. | https://docs.aws.amazon.com/workspaces/latest/adminguide/enable-user-self-service-workspace-management.html | 2021-02-25T06:16:18 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.aws.amazon.com |
Selling Profile — Payment Tab
This tab contains information about payment methods that the seller will accept when the buyer pays for the item.
Access to fields in this tab available only after your choose eBay Marketplace.
- Business Policy - Payment — list of all available Business Payment Policy for selected ebay account. Please notice, if you select Business Policy for one of the sections Payment, Shipping, Return you will need to select it also for other sections. Please check separate page of manual related to “Business Policy”
- Payment Methods (Payment Policy not selected) — identifies the payment method (such as PayPal) that the seller will accept when the buyer pays for the item. At least one payment method must be specified.
- PayPal E-mail (Payment Policy not selected) — valid PayPal email address for the PayPal account that the seller will use if they offer PayPal as a payment method for the listing. Required if you choose PayPal as a “Payment Methods”.
- Requiring Immediate Payment (Payment Policy not selected) — define that eBay items will require immediate payment after purchase. Only sellers with Premier or Business PayPal accounts can specify immediate payment for an item. Not all eBay categories support this option. Work when only PayPal specify as a payment method.
- COD cost (Payment Policy not selected)— only for eBay Italy. The field is shown only when select related payment method. Please set into this field cost for payment method “Contrassegno”
- Payment Instructions (Payment Policy not selected) — these instructions appear on eBay's View Item page and on eBay's checkout page when the buyer pays for the item. | https://docs.salest.io/article/27-selling-profile-payment-tab | 2021-02-25T04:26:14 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['https://involic.com/images/prestabay-manual/prestashop-ebay-module-selling-profile-payment-tab-60.png',
'PrestaShop ebay module — Selling Profile — Payment Tab PrestaShop ebay module — Selling Profile — Payment Tab'],
dtype=object) ] | docs.salest.io |
>>, 2018 to 12 A.M. November 13, 2018..
For example, if you specify a time range of
Last 24 hours in the Time Range Picker and in the Search bar you specify
earliest=-30m latest=now, the search only looks at events that have a timestamp within the last 30 minutes.
This applies to any of the options you can select in the Time Range Picker,
However, this does not apply to subsearches.
Time ranges and subsearches
Time ranges selected from the Time Range Picker apply to the main search and to subsearches, unless a time range is specified in the Search 2019,
Search from the beginning of the week to the time of your search
This example searches for Web access errors from the beginning of the week to the time that you run your search. Though not specified,
latest=now is assumed with this search
..
Search the current business week
This example searches for Web access errors from the current business week, where
w1 is Monday and
w6.! | https://docs.splunk.com/Documentation/Splunk/7.1.3/Search/Specifytimemodifiersinyoursearch | 2021-02-25T06:10:43 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Get-Exchange
Assistance Config
Syntax
Get-ExchangeAssistance-ExchangeAssistanceConfig -Identity Contoso.com
This example shows the configuration information that the web management interface uses to locate the source of the documentation for Contoso.com.
-------------------------- Example 2 --------------------------
Get-ExchangeAssistanceConfig | Format-Table
This example shows the configuration information for all organizations and formats the information into a table. identity of the organization.. | https://docs.microsoft.com/en-us/powershell/module/exchange/organization/Get-ExchangeAssistanceConfig?view=exchange-ps | 2018-05-20T11:35:34 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.microsoft.com |
- .
Considerations When Deploying a Replica Set.
Changed in version 3.6: Starting in MongoDB 3.6, MongoDB binaries,
mongod and
mongos, bind to localhost (
127.0.0.1) by default. If the
net.ipv6 configuration file setting or the
--ipv6
command line option is set for the binary, the binary additionally binds
to the IPv6 address
::1. ip:
Connectivity¶
Ensure that network traffic can pass securely between all members of the set and all clients in the network .. you bind to other ip addresses, consider enabling access control and other security measures listed in Security Checklist to prevent unauthorized access.
For
<ip address>, specify the ip address or hostname for your
mongod instance that remote clients (including the other
members of the replica set) can use to connect to the instance.
Alternatively, you can also specify the
replica set name and the
ip addresses. | https://docs.mongodb.com/manual/tutorial/deploy-replica-set/ | 2018-05-20T12:17:30 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.mongodb.com |
How to Set Up a Scene
Setting up your scene can be compared to building a set for a television show. This is the point when you position each scene element such as the camera frame, the background elements and the characters.
Positioning the Camera
A new camera layer is added to the scene and appears in the Timeline view.
The selected camera frame is highlighted in purple.
Positioning Objects
You can display a rotation handle on the bounding box when transforming a layer. In the Preferences dialog box, select the Camera tab and then select the Use Rotation Lever with Transformation Tools option. This preference is off by default.
Repositioning the Pivot
Transformations, such as rotation, scale, skew and flip, are made relative to the pivot point position. You can reposition this pivot point anywhere using the advanced animation tools.
The pivot point appears in the Camera view.
All transformations, including existing ones will be recalculated from this new pivot postion. | https://docs.toonboom.com/help/harmony-11/workflow-standalone/Content/_CORE/Getting_Started/010_CT_Scene_Setup.html | 2018-05-20T12:23:59 | CC-MAIN-2018-22 | 1526794863410.22 | [array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object)
array(['../../Resources/Images/HAR/_Skins/stage.png', None], dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/_Skins/draw.png',
'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/_Skins/sketch.png', None], dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/_Skins/stageXsheet.png', None],
dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/Stage/SceneSetup/an_addcamera_timeline.png',
None], dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/Stage/SceneSetup/HAR11_cam_frame.png',
None], dtype=object)
array(['../../Resources/Images/HAR/Stage/SceneSetup/HAR11_cam_frame_move.png',
None], dtype=object)
array(['../../Resources/Images/HAR/Stage/SceneSetup/HAR11_cam_frame_rot.png',
None], dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/EDU/HAR/Student/Steps/an_transform_rotate.png',
None], dtype=object)
array(['../../Resources/Images/HAR/Stage/SceneSetup/rotate_box.png', None],
dtype=object)
array(['../../Resources/Images/HAR/Stage/SceneSetup/an_transform_rotate.png',
None], dtype=object)
array(['../../Resources/Images/HAR/Stage/SceneSetup/rotation_handle.png',
None], dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../Resources/Images/HAR/Stage/SceneSetup/an_perm_pivotpoint.png',
None], dtype=object)
array(['../../Resources/Images/HAR/Stage/SceneSetup/an_perm_pivotpoint_02.png',
None], dtype=object) ] | docs.toonboom.com |
You can view the certificates known to the vCenter Certificate Authority (VMCA) to see whether active certificates are about to expire, to check on expired certificates, and to see the status of the root certificate. You perform all certificate management tasks using the certificate management CLIs.
About this task.
Procedure
- Log in to vCenter Server as [email protected] or another user of the CAAdmins vCenter Single Sign-On group.
- From the Home menu, select Administration.
- Click Nodes, and select the node for which you want to view or manage certificates.
- Click the Manage tab, and click Certificate Authority.
- Click the certificate type for which you want to view certificate information.
- Select a certificate and click the Show Certificate Details button to view certificate details.
Details include the Subject Name, Issuer, Validity, and Algorithm. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.psc.doc/GUID-C0A0BD94-7AC1-4F40-BAB7-560B5AF0FC41.html | 2018-05-20T12:04:30 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.vmware.com |
Localisation
Gherkin is localised for many spoken languages; each has their own localised equivalent of these keywords.
Gherkin uses a set of special keywords to give structure and meaning to executable specifications. Each keyword is translated to many spoken languages; in this reference we’ll use English.
Most lines in a Gherkin document start with one of the keywords.
Comment lines are allowed anywhere in the file. They begin with zero or more spaces,
followed by a hash sign (
#) and some text. Comments do have to start on a new line.
Either spaces or tabs may be used for indentation. The recommended indentation level is two spaces.:
Feature
Example(
Scenarioand
Scenario Outlineare synonyms)
Given,
When,
Then,
And,
But(steps)
Background
Combinations(
Examplesis a synonym)
There are a few secondary keywords as well:
"""(Doc Strings)
|(Data Tables) default in html reports).
Feature: Guess the word The word guess game is a turn-based game for two players. The Maker makes a word for the Breaker to guess. The game is over when the Breaker guesses the Maker's word. Example: Maker starts a game
The name and the optional description have no special meaning to Cucumber. Their purpose is to provide a place for you to document important aspects of the feature, such as a brief explanation and a list of business rules (general acceptance criteria).
The free format description for
Feature ends when you start a line with the keyword
Scenario or
Scenario Outline.
You can place tags above
Feature to group related features,
independent of your file and directory structure.
Free-form descriptions (as described above for
Feature) can also be placed underneath
Example,
Background,
Scenario and
Scenario Outline.
You can write anything you like, as long as no line starts with a keyword.
This.
Examples follow this same pattern:
Givensteps)
Whensteps)
Thensteps)
Each step starts with
Given,
When,
Then,
And, or
Given,
When,
Then,
And or
But step with the same text as another step.
Cucumber considers the following steps duplicates:
Given there is money in my account Then there is money in my account
This might seem like a limitation, but it forces you to come up with a less ambiguous, more clear domain language:
Given my account has a balance of £430 Then my account should have a balance of £430.
It’s okay to have several
Given steps (just use
And or
But for number 2 and upwards to make it more readable).
Examples:.
Examples:
Then steps are used to describe an expected outcome, or result.
The step definition of a
Then step should use an assertion to
compare the actual outcome (what the system actually does) to the expected outcome
(what the step says the system is supposed to do).
An observation should be on an observable output. That is, something that comes out of the system (report, user interface, message), and not something deeply buried inside it (like a database).
Examples:
While it might be tempting to implement
Then steps to just look in the database - resist that temptation!
You should only verify outcome that is observable for the user (or external system), and databases usually are not.
If you have several
Given’s,
When’s, or
Thens, you could write:
Example: Multiple Givens Given one thing Given another thing Given yet another thing When I open my eyes Then I should see something Then I shouldn't see something else
Or, you could make it read more fluidly by writing:
Example: Multiple Givens Given one thing And another thing And yet another thing When I open my eyes Then I should see something But I shouldn't see something else
Occasionally you’ll find yourself repeating the same
Given steps in all of the scenarios in a feature.
Since it is repeated in every scenario, this is an indication that those steps
are not essential to describe the scenarios; they are incidental details. You can literally move such
Given steps to the background, by grouping them under a
Background section.
A
Background allows you to add some context to the scenarios in the feature. It can contain one or
more
Given steps.
A
Background is run before each scenario, but after any Before hooks. In your feature file, put the
Background before the first
Scenario.
You can only have one set of
Background steps per feature. If you need different
Background steps for different scenarios, you’ll need to split them into different feature files.
For example:
Feature: Multiple site support Only blog owners can post to a blog, except administrators, who can post to all blogs. Background: Given a global administrator named "Greg" And a blog named "Greg's anti-tax rants" And a customer named "Dr. Bill" And a blog named "Expensive Therapy" owned by "Dr. Bill" Scenario: Dr. Bill posts to his own blog Given I am logged in as Dr. Bill When I try to post to "Expensive Therapy" Then I should see "Your article was published." Scenario: Dr. Bill tries to post to somebody else's blog, and fails Given I am logged in as Dr. Bill When I try to post to "Greg's anti-tax rants" Then I should see "Hey! That's not your blog!" Scenario: Greg posts to a client's blog Given I am logged in as Greg When I try to post to "Expensive Therapy" Then I should see "Your article was published."
For a less explicit alternative to
Background, check out tagged hooks.
Backgroundto set up complicated states, unless that state is actually something the client needs to know.
Given I am logged in as a site owner.
Backgroundsection short.
Backgroundis more than 4 lines long, consider moving some of the irrelevant details into higher-level steps.
Backgroundsection vivid.
"User A",
"User B",
"Site 1", and so on.
Backgroundsection has scrolled off the screen, the reader no longer has a full overview of whats happening. Think about using higher-level steps, or splitting the
*.featurefile.
The
Scenario Outline keyword can be used to run the same
Scenario multiple times,
with different combinations of values.
Copying and pasting scenarios to use different values quickly becomes tedious and repetitive:
We can collapse these two similar scenarios into a
Scenario Outline.
Scenario outlines allow us to more concisely express these scenarios through the use
of a template with
< >-delimited parameters:
Scenario Outline: eating Given there are <start> cucumbers When I eat <eat> cucumbers Then I should have <left> cucumbers Examples: | start | eat | left | | 12 | 5 | 7 | | 20 | 5 | 15 |
A
Scenario Outline must contain an
Examples section. Its steps are interpreted as a template
which is never directly run. Instead, the
Scenario Outline is run once for each row in
the
Examples
Doc Strings and
Data Tables.
Doc Strings are handy for passing a larger piece of text to a step definition. translates to over 70 languages.
Here is a Gherkin scenario written in Norwegian:
# language: no Funksjonalitet: Gjett et ord Eksempel: Ordmaker starter et spill Når Ordmaker starter et spill Så må Ordmaker vente på at Gjetter blir med Example: Gjetter blir med Gitt at Ordmaker har startet et spill med ordet "bløtt" Når Gjetter blir med på Ordmakers spill Så må Gjetter gjette et ord på 5 bokstaver
A
# language: header on the first line of a feature file tells Cucumber what
spoken language to use - for example
# language: fr for French.
If you omit this header, Cucumber will default to English (
Some Cucumber implementations also let you set the default language in the
configuration, so you don’t need to place the
# language header in every file.
You can help us improve this documentation. Edit this page. | http://docs.cucumber.io/gherkin/reference/ | 2018-05-20T11:35:52 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.cucumber.io |
What's New in Developer Content
This section contains a summary of the new features that apply to the developer content in the 2007 Microsoft Office system. Our COM documentation has existed for many versions and for the 2007 version, we decided to focus on three main areas:
- Visual Studio look and feel
- Task-based code samples
- Object model changes
Visual Studio look and feel
In previous versions, the organization of our table of content was far from intuitive, and the feedback from our Visual Studio customers was that they often felt disoriented in Office Help. The Visual Studio look and feel goals are to have our docs look and operate more like the Visual Studio documentation. We also want our COM documentation to match the look and feel of our managed code documentation, such as the Project SDK, or the SharePoint Server SDK.
The following picture of a Microsoft Office Excel Developer topic shows the new look and feel.
The refreshed table of content design groups all the relevant members for a given object under that object node entry in the table of content as seen in the following figure.
As in Visual Studio help, we added Members table topics that presents high level information for all the members corresponding to a given object. The following picture illustrates the concept.
Task-based code samples
The second focus area in the developer content in the 2007 Microsoft Office system is the implementation of task-based code samples or how-to topics. The idea is to provide code samples, based on a task taxonomy. We want to be sure that if developers know what they want to do, but have no idea where to look in the object model (as is often the case with new users), we provide an answer.
We provide a list of categories such as: working with sheets, working with PivotTables, working with XML, or working with data. Then, for each of those, we provide sub-categories. For example, "Exporting Data" is a sub-category of “Working with Data”. Each sub-category contains a list of discrete tasks, such as exporting to CSV, exporting to a tab delimited text file, or exporting to XML.
The Outlook Developer documentation and the SharePoint Server SDK both contain many new how-to code samples. Look for how-to entries in the table of contents and expect these topics to expand in other Office help content during subsequent updates. To view the most recent content, visit the Microsoft Office Developer Center.
We actively monitor discussions and questions from the newsgroups to identify tasks to document. If you have specific tasks for which you want to provide a code sample, send us an e-mail message at [email protected].
Object model changes
In previous versions of Office, the documentation provided accurate information about the new items in each object model. For the 2007 Office system, we also provide information about the various changes from one version of the object model to another, starting with Microsoft Office 97. For more information about new and updated members, see the appropriate topics in What's New in the table of content, as pictured below.
In the sample below, we provide both the old and the new syntax to highlight syntax changes.
We believe that adopting the Visual Studio Help Model, adding more task-base code samples and providing in-depth information about object model changes represent significant progress in our developer documentation. If you have feedback on any of the decisions we made, feel free to send us an e-mail message at [email protected]. We look forward to hearing from you.
- The Office Developer Docs Group | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/aa433891(v=office.12) | 2018-05-20T12:25:40 | CC-MAIN-2018-22 | 1526794863410.22 | [array(['images/aa433891.lookandfeel_za10155596%28en-us%2coffice.12%29.jpg',
'New 2007 Office System Developer content look and feel New 2007 Office System Developer content look and feel'],
dtype=object)
array(['images/aa433891.toc_za10155600%28en-us%2coffice.12%29.jpg',
'New 2007 Office System Developer Table of Content New 2007 Office System Developer Table of Content'],
dtype=object)
array(['images/aa433891.memberstable_za10155597%28en-us%2coffice.12%29.jpg',
'New 2007 Office System Developer content Members table New 2007 Office System Developer content Members table'],
dtype=object)
array(['images/aa433891.omdelta_za10155598%28en-us%2coffice.12%29.jpg',
'Object models delta entry points Object models delta entry points'],
dtype=object)
array(['images/aa433891.syntaxchange_za10155599%28en-us%2coffice.12%29.jpg',
'Example of Syntax Diff between versions Example of Syntax Diff between versions'],
dtype=object) ] | docs.microsoft.com |
Close a work order task as incomplete Only agents can close work order tasks assigned to them. About this task If the problem was only partially fixed or resolved, use the Close Incomplete option. Procedure Navigate to Field Service > Agent > Assigned to me. Open a work order task. Click Close Incomplete. In the Create a follow on task field, do one of the following: Select Yes to have a new work order task cloned and, if this task is the last remaining task to complete, prevent the work order from closing. After the follow on task is done and marked Closed Complete, the status of the work order changes to Closed Complete even though the task that generated the follow on task was marked Closed Incomplete. Select No to close the work order task. In the Reason for the incomplete closure field, enter information about why the task could not be completed. The information entered is added to the Work Notes field on the work order task form. The status of all unused parts automatically changes to In-Stock. The state of the parent work order automatically changes to Closed - Incomplete if all work order tasks have a state of Closed - Complete or Canceled and at least one task has a state of Closed - Incomplete. | https://docs.servicenow.com/bundle/kingston-customer-service-management/page/product/planning-and-policy/task/t_CloseAWorkOrderTaskAsIncomplete.html | 2018-05-20T12:11:03 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.servicenow.com |
.
.
If there are several functions whose velocity you want to adjust at the same time, such as the hand, forearm and arm of a cut-out character, you can apply the same velocity parameters to all the selected keyframes in one frame. | https://docs.toonboom.com/help/harmony-14/advanced/cut-out-animation/about-ease.html | 2018-05-20T12:23:49 | CC-MAIN-2018-22 | 1526794863410.22 | [array(['../Resources/Images/EDU/HAR/Student/Steps/an_velocity_0.png',
None], dtype=object) ] | docs.toonboom.com |
WMI Provider Events and Errors
This topic contains cause and resolution information for a number of WMI errors related to the SQL Server.
In This Section
The error message topics in this section provide an explanation of the error message, possible causes, and any actions you can take to correct the problem.
- 0x8007052f
Logon failure: user account restriction. Possible reasons are blank passwords not allowed, login hour restrictions, or a policy restriction has been enforced.
See Also | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/cc879254(v=sql.105) | 2018-05-20T12:40:32 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.microsoft.com |
REST test step: Assert Response Payload Assert the HTTP response payload has the specified relationship to the specified value. You specify the value. Note: The entire payload is used to look for a match. A large payload can affect performance. body and the value you specify. Response body The value of the response body to use in the test. Must contain the name and value to be compared as it appears in the response payload. The field must not contain any curly braces. This field is not shown if the operation is is not empty. To check the short description in the response payload{"result":{"number":"INC0010040","short_description":"Test ATF Incident"}}the Response body should contain "short_description":"Test ATF Incident" These formats are incorrect and the step fails. {"short_description":"Test ATF Incident"} "{"short_description":"Test ATF Incident"}" short_description: Test ATF Incident short_description:"Test ATF Incident" | https://docs.servicenow.com/bundle/kingston-application-development/page/administer/auto-test-framework/reference/atf-assert-response-payload.html | 2018-05-20T12:16:44 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.servicenow.com |
The MarkLogic Data Movement SDK supports long-running write, read, delete, or transform jobs. Long-running write jobs are enabled by WriteBatcher, which accepts documents handed to its add or addAs methods from many threads. Long-running read, delete, or transform jobs are enabled by QueryBatcher, which can perform actions on all uris matching a query or on all uris provided by an Iterator<String>.
When using QueryBatcher, your custom listeners provided to
onUrisReady can do anything with each batch of
uris and will usually use the MarkLogic Java Client
API to do things. However, to simplify common use cases, the
following listeners are also provided:
ApplyTransformListener - Modifies documents in-place in the database by applying a server-side transform
ExportListener - Downloads each document for further processing in Java
ExportToWriterListener - Downloads each document and writes it to a Writer (could be a file, HTTP response, in-memory Writer, etc.)
DeleteListener - Deletes each batch of documents from the server
UrisToWriterListener - Writes each uri to a Writer (could be a file, HTTP response, etc.)
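For instance, wiring up one of the provided listeners from the list above usually amounts to a single onUrisReady call. The following is a minimal sketch (not an official example) that deletes every document matching a query using DeleteListener; it assumes the same dataMovementManager and query variables used in the QueryBatcher example later in this description:

QueryBatcher deleteBatcher = dataMovementManager.newQueryBatcher(query)
    .withConsistentSnapshot()                 // recommended when deleting the matches of a query
    .onUrisReady(new DeleteListener())        // provided listener deletes each batch of uris
    .onQueryFailure(queryBatchException -> queryBatchException.printStackTrace());
JobTicket deleteTicket = dataMovementManager.startJob(deleteBatcher);
deleteBatcher.awaitCompletion();
dataMovementManager.stopJob(deleteTicket);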
When you need to perform actions on server documents beyond what can be done with the provided listeners, register your custom code with onUrisReady and your code will be run for each batch of uris. For example:
QueryBatcher qhb = dataMovementManager.newQueryBatcher(query) .withBatchSize(1000) .withThreadCount(20) .withConsistentSnapshot() .onUrisReady(batch -> { for ( String uri : batch.getItems() ) { if ( uri.endsWith(".txt") ) { client.newDocumentManager().delete(uri); } } }) .onQueryFailure(queryBatchException -> queryBatchException.printStackTrace()); JobTicket ticket = dataMovementManager.startJob(qhb); qhb.awaitCompletion(); dataMovementManager.stopJob(ticket);
When you need to write a very large volume of documents and mlcp cannot meet your requirements, use WriteBatcher.For Example:
WriteBatcher whb = dataMovementManager.newWriteBatcher() .withBatchSize(100) .withThreadCount(20) .onBatchSuccess(batch -> { logger.debug("batch # {}, so far: {}", batch.getJobBatchNumber(), batch.getJobResultsSoFar()); }) .onBatchFailure((batch,throwable) -> throwable.printStackTrace() ); JobTicket ticket = dataMovementManager.startJob(whb); // the add or addAs methods could be called in separate threads on the // single whb instance whb.add ("doc1.txt", new StringHandle("doc1 contents")); whb.addAs("doc2.txt", "doc2 contents"); whb.flushAndWait(); // send the two docs even though they're not a full batch dataMovementManager.stopJob(ticket);
As demonstrated above, listeners should be added to each instance of QueryBatcher or WriteBatcher. Ad-hoc listeners can be written as Java 8 lambda expressions. More sophisticated custom listeners can implement the appropriate listener interface or extend one of the provided listeners listed above.
QueryBatchListener (onUrisReady) instances are necessary to do something with the uris fetched by QueryBatcher. What a custom QueryBatchListener does is completely up to it, but any operation which operates on uris offered by any part of the Java Client API could be used, as could any read or write to an external system. QueryFailureListener (onQueryFailure) instances handle any exceptions encoutnered fetching the uris. WriteBatchListener (onBatchSuccess) instances handle any custom tracking requirements during a WriteBatcher job. WriteFailureListener (onBatchFailure) instances handle any exceptions encountered writing the batches formed from docs send to the WriteBatcher instance. See the javadocs for each provided listener for an explantion of the various listeners that can be registered for it to call. See javadocs, the Java Application Developer's Guide, source code for provided listeners, cookbook examples, and unit tests for more examples of listener implementation ideas.
Since listeners are called asynchronously by all threads in the
pool inside the QueryBatcher or WriteBatcher instance, they must
only perform thread-safe operations. For example, accumulating to a
collection should only be done with collections wrapped as
synchronized Collections rather than directly using
un-synchronized collections such as HashMap or ArrayList which are
not thread-safe. Similarly, accumulating to a string should use
StringBuffer insted of StringBuilder since StringBuffer is
synchronized (and thus thread-safe). We also recommend
java.util.concurrent.atomic classes.
Listeners should handle their own exceptions as described below in Handling Exceptions in Listeners.
logger.error("Exception thrown by an onBatchSuccess listener", throwable);
This achieves logging of exceptions without allowing them to prevent the job from continuing.
A QueryFailureListener or WriteFailureListener will not be notified of exceptions thrown by other listeners. Instead, these failure listeners are notified exclusively of exceptions in the operation of QueryBatcher or WriteBatcher.
If you wish a custom QueryBatchListener or WriteBatchListener to
trap its own exceptions and pass them along to callbacks registered
with it for exception handling, it can of course do that in a
custom way. Examples of this pattern can be seen in the interface
of
ApplyTransformListener.
Every time you create a new QueryBatcher or WriteBatcher it
comes with some pre-installed listeners such as
HostAvailabilityListener
and a listener to track counts for JobReport. If you wish to remove
these listeners and their associated functionality call one of the
following:
setUrisReadyListeners,
setQueryFailureListeners,
setBatchSuccessListeners, or
setBatchFailureListeners. Obviously, removing the
functionality of HostAvailabilityListener means it won't do its job
of handling black-listing hosts or retrying batches that occur when
a host is unavailable. And removing the functionality of the
listeners that track counts for JobReport means JobReport should no
longer be used. If you would just like to change the settings on
HostAvailabilityListener or NoResponseListener, you can do
something like the following:
HostAvailabilityListener.getInstance(batcher) .withSuspendTimeForHostUnavailable(Duration.ofMinutes(60)) .withMinHosts(2);
We have made efforts to provide helpful logging as you use QueryBatcher and WriteBatcher. Please make sure to enable your slf4j-compliant logging framework. | http://docs.marklogic.com/javadoc/client/com/marklogic/client/datamovement/package-summary.html | 2018-05-20T11:55:09 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.marklogic.com |
Vision¶
pydocusign is a Python client for DocuSign [1] signature SAAS platform.
As pydocusign author, I basically needed Python bindings for DocuSign API, but I did not find one either on or. So initiating a new one looked like a fair start. I published it under BSD so that it can be used, improved and maintained by a community.
So, feel free to report issues, request features or refactoring! See Contributing for details.
Notes & references | http://pydocusign.readthedocs.io/en/latest/about/vision.html | 2018-05-20T11:33:30 | CC-MAIN-2018-22 | 1526794863410.22 | [] | pydocusign.readthedocs.io |
From 18 August 2017 onwards, we introduced a new version of the meal plan auto-generator in NutriAdmin. The new version should be superior in almost 100% of the cases when compared to the previous one. Some of the key improvements are:
- Generated meal plans include recipes as opposed to just lists of ingredients
- Plans are generated much faster
- Results are more accurate and of higher quality in general
- We are expanding the variety of diets and options so that it should eventually be better than before.
If you still want to use the old meal plan auto-generator, this is still possible. We have hidden this functionality by default since the majority of users will prefer using the newer improved algorithm, but you can access the old algorithm by following these steps:
Step 1: Click on Settings.
Step 2: Click on Meal Plans, Recipes, and Foods.
Step 3: Scroll to the bottom of the page and click on Show BOTH the old algorithm and the new one.
Step 4: Click on Save Changes on the top of the page.
Step 5: Go to Meal Plans on the side menu.
Step 6: Click on New Meal Plan.
Step 7: You should now have the option to choose between using the old auto-generation algorithm and the new one.
| https://docs.nutriadmin.com/how-to-access-the-old-algorithm-for-meal-plan-auto-generation | 2018-05-20T11:48:02 | CC-MAIN-2018-22 | 1526794863410.22 | [array(['uploads/2017-08-18 08.04.53 pm-1503083267483.png',
'settings menu'], dtype=object)
array(['uploads/2017-08-18 08.05.02 pm-1503083267503.png',
'meal plans tab'], dtype=object)
array(['uploads/2017-08-18 08.05.20 pm-1503083268594.png',
'old algorithm activate'], dtype=object)
array(['uploads/2017-08-18 08.05.26 pm-1503083268594.png',
'save settings'], dtype=object)
array(['uploads/2017-08-18 08.05.36 pm-1503083269249.png',
'mea plans menu'], dtype=object)
array(['uploads/2017-08-18 08.05.43 pm-1503083269249.png',
'new meal plan button'], dtype=object)
array(['uploads/2017-08-18 08.05.59 pm-1503083269705.png',
'using both algorithms'], dtype=object) ] | docs.nutriadmin.com |
Managing an Amazon Aurora DB Cluster
In the following sections, you can find information about managing performance, scaling, fault tolerance, backup, and restoring for an Amazon Aurora DB cluster.
Managing Performance and Scaling for Aurora DB Clusters
You can use the following options to manage performance and scaling for Aurora DB clusters and DB instances:
Storage Scaling
Aurora storage automatically scales with the data in your cluster volume. As your data grows, your cluster volume storage grows in 10 gibibyte (GiB) increments up to 64 TiB.
The size of your cluster volume is checked on an hourly basis to determine your storage costs. For pricing information, see the Amazon RDS product page. the DB cluster..
Fault Tolerance for an Aurora DB Cluster fails, Aurora automatically fails over to a new primary instance in one of two ways:
By promoting an existing Aurora Replica to the new primary instance
By creating a new primary instance
If the DB cluster has one or more Aurora Replicas, then an Aurora Replica is promoted to the primary instance during a failure event. A failure event results in a brief interruption, during which read and write operations fail with an exception. However, service is typically restored in less than 120 seconds, and often less than 60 seconds. To increase the availability of your DB cluster, we recommend that you create at least one or more Aurora Replicas in two or more different Availability Zones.
You can customize the order in which your Aurora Replicas are promoted to the primary instance after a failure by assigning each replica a priority. Priorities range from 0 for the highest priority to 15 for the lowest priority. If the primary instance fails, Amazon RDS promotes the Aurora Replica with the highest priority to the new primary instance. You can modify the priority of an Aurora Replica at any time. Modifying the priority doesn't trigger a failover.
More than one Aurora Replica can share the same priority, resulting in promotion tiers..
If the DB cluster doesn't contain any Aurora Replicas, then the primary instance is recreated during a failure event. A failure event results in an interruption during which read and write operations fail with an exception. Service is restored when the new primary instance is created, which typically takes less than 10 minutes. Promoting an Aurora Replica to the primary instance is much faster than creating a new primary instance.
Note
Amazon Aurora also supports replication with an external MySQL database, or an RDS MySQL DB instance. For more information, see Replication Between Aurora and MySQL or Between Aurora and Another Aurora DB Cluster.
Backing Up and Restoring an Aurora DB Cluster
In the following sections, you can find information about Aurora backups and how to restore your Aurora DB cluster using the AWS Management Console.
Backups
Aurora backs up your cluster volume automatically and retains restore data for the length of the backup retention period. Aurora backups are continuous and incremental so you can quickly restore to any point within the backup retention period. No performance impact or interruption of database service occurs as backup data is being written. You can specify a backup retention period, from 1 to 35 days, when you create or modify a DB cluster.
If you want to retain a backup beyond the backup retention period, you can also take a snapshot of the data in your cluster volume. Storing snapshots incurs the standard storage charges for Amazon RDS. For more information about RDS storage pricing, see Amazon RDS Pricing.
Because Aurora retains incremental restore data for the entire backup retention period, you only need to create a snapshot for data that you want to retain beyond the backup retention period. You can create a new DB cluster from the snapshot.
Restoring Data
You can recover your data by creating a new Aurora DB cluster from the backup data that Aurora retains, or from a DB cluster snapshot that you have saved. You can quickly restore a new copy of a DB cluster created from backup data to any point in time during your backup retention period. The continuous and incremental nature of Aurora backups during the backup retention period means you don't need to take frequent snapshots of your data to improve restore times.
To determine the latest or earliest restorable time for a DB instance, look for
the
Latest Restorable Time or
Earliest Restorable Time
values on the RDS console. For information about viewing these values, see Viewing an Amazon Aurora DB Cluster. The latest restorable time
for a DB cluster is the most recent point at which you can restore your DB cluster,
typically within 5 minutes of the current time. The earliest restorable time specifies
how far back within the backup retention period that you can restore your cluster
volume.
You can determine when the restore of a DB cluster is complete by checking the
Latest Restorable Time and
Earliest Restorable Time
values. The
Latest Restorable Time and
Earliest Restorable
Time values return NULL until the restore operation is complete. You
can't request a backup or restore operation if
Latest Restorable
Time or
Earliest Restorable Time returns NULL.
To restore a DB cluster to a specified time using the AWS Management Console
Open the Amazon Aurora console at.
In the navigation pane, choose Instances. Choose the primary instance for the DB cluster that you want to restore.
Choose Instance actions, and then choose Restore to point in time.
In the Launch DB Instance window, choose Custom under Restore time.
Specify the date and time that you want to restore to under Custom.
Type a name for the new, restored DB instance for DB instance identifier under Settings.
Choose Launch DB Instance to launch the restored DB instance.
A new DB instance is created with the name you specified, and a new DB cluster is created. The DB cluster name is the new DB instance name followed by
–cluster. For example, if the new DB instance name is
myrestoreddb, the new DB cluster name is
myrestoreddb-cluster.
Backtracking a DB Cluster
You can also backtrack a DB server to "rewind" the DB cluster to a previous point in time, after configuring the DB cluster for backtracking. Unlike restoring a DB server, backtracking a DB server doesn't require creating a new Aurora DB cluster, which makes backtracking much faster. However, you must configure backtracking before you can use it, and it has a few other limitations. For more information, see Backtracking an Aurora DB Cluster.
Database Cloning for Aurora
You can also use database cloning to clone the databases of your Aurora DB cluster to a new DB cluster, instead of restoring a DB cluster snapshot. The clone databases use only minimal additional space when first created. Data is copied only as data changes, either on the source databases or the clone databases. You can make multiple clones from the same DB cluster, or create additional clones even from other clones. For more information, see Cloning Databases in an Aurora DB Cluster.
Amazon Aurora DB Cluster and DB Instance Parameters
You manage your Amazon Aurora DB cluster in the same way that you manage other Amazon RDS DB instances, by using parameters in a DB parameter group. Amazon Aurora differs from other DB engines in that you have a cluster of DB instances. As a result, some of the parameters that you use to manage your Amazon Aurora DB cluster apply to the entire cluster. Other parameters apply only to a particular DB instance in the DB cluster.
Cluster-level parameters are managed in DB cluster parameter groups. Instance-level parameters are managed in DB parameter groups.
Although each DB instance in an Aurora DB cluster is compatible with a specific database engine, some of the database engine parameters must be applied at the cluster level. You manage these using DB cluster parameter groups. Cluster-level parameters are not found in the DB parameter group for an instance in an Aurora DB cluster and are listed later in this topic.
The DB cluster and DB instance parameters available to you in Aurora vary depending on database engine compatibility. | https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html | 2018-05-20T12:03:49 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.aws.amazon.com |
Advanced HoloLens Emulator and Mixed Reality Simulator input
Most emulator users will only need to use the basic input controls for the HoloLens emulator or the Windows Mixed Reality simulator. The details below are for advancd users who have found a need to simulate more complex types of input.
Concepts
To get started controlling the virtual input to the HoloLens emulator and Windows Mixed Reality simulator, you should first understand a few concepts.
Motion is controlled with both rotation and translation (movement) along three axes.
- Yaw: Turn left or right.
- Pitch: Turn up or down.
- Roll: Roll side-to-side.
- X: Move left or right.
- Y: Move up or down.
- Z: Move forward or backward.
Gesture and motion controller input are mapped closely to how they physical devices:
- Action: This simulates the action of pressing the forefinger to the thumb or pulling the action button on a controller. For example, the Action input can be used to simulate the air-tap gesture, to scroll through content, and to press-and-hold.
- Bloom or Home: The HoloLens bloom gesture or a controller's Home button is used to return to the shell and to perform system actions.
You can also control the state of simulated sensor input:
- Reset: This will return all simulated sensors to their default values.
- Tracking: Cycles through the positional tracking modes. This includes:
- Default: The OS chooses the best tracking mode based upon the requests made of the system.
- Orientation: Forces Orientation-only tracking, regardless of the requests made of the system.
- Positional: Forces Positional tracking, regardless of the requests made of the system.
Types of input
The following table shows how each type of input maps to the keyboard, mouse, and Xbox controller. Each type has a different mapping depending on the input control mode; more information on input control modes is provided later in this document.
Input control modes
The emulator can be controlled in multiple modes, which impact how the controls are interpreted. The input modes are:
- Default mode: The default mode combines the most common operations for ease of use. This is the most commonly used mode.
- Hands or controller mode: The HoloLens emulator simulates gesture input with hands, while the Windows Mixed Reality simulator simulates tracked controllers. To enter this mode, press and hold an alt key on the keyboard: use left alt for the left hand/controller, and/or use right alt for the right hand/controller. You can also press and hold a shoulder button on the Xbox controller to enter this mode: press the left shoulder for the left hand/controller, and/or press the right shoulder for the right hand/controller.
- Hands are typically not visible to the HoloLens emulator - they are made visible briefly when performing gestures such as air-tap and bloom using the default input mode. This is a difference from tracked controllers in the Mixed Reality simulator. The corresponding Hand is also made visible when you enter hands mode, or when you click "Turn On" in the Simulation tab, which is located in the Additional Tools pane. * Head mode: The head mode applies controls, where appropriate, exclusively to the head. To enter head mode, press and hold the H key on the keyboard.
The following table shows how each input mode maps each type of input:
Controlling an app
This article has described the complete set of input types and input modes that are available in the HoloLens emulator and Windows Mixed Reality simulator. The following set of controls is suggested for day-to-day use: | https://docs.microsoft.com/en-us/windows/mixed-reality/advanced-hololens-emulator-and-mixed-reality-simulator-input | 2018-05-20T12:17:14 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.microsoft.com |
Publishing processes via OGC API - Processes¶
OGC API - Processes provides geospatial data processing functionality in a standards-based fashion (inputs, outputs).
pygeoapi implements OGC API - Processes functionality by providing a plugin architecture, thereby allowing developers to implement custom processing workflows in Python.
A sample
hello-world process is provided with the pygeoapi default configuration.
Asynchronous support¶
By default, pygeoapi implements process execution (jobs) as synchronous mode. That is, when jobs are submitted, the process is executed and returned in real-time. Certain processes that may take time to execute, or be delegated to a scheduler/queue, are better suited to an asynchronous design pattern. This means that when a job is submitted in asynchronous mode, the server responds immediately with a reference to the job, which allows the client to periodically poll the server for the processing status of a given job.
pygeoapi provides asynchronous support by providing a ‘manager’ concept which, well, manages job execution. The manager concept is implemented as part of the pygeoapi Customizing pygeoapi: plugins architecture. pygeoapi provides a default manager implementation based on TinyDB for simplicity. Custom manager plugins can be developed for more advanced job management capabilities (e.g. Kubernetes, databases, etc.).
server: manager: name: TinyDB connection: /tmp/pygeoapi-process-manager.db output_dir: /tmp/
Putting it all together¶
To summarize how pygeoapi processes and managers work together:
- process plugins implement the core processing / workflow functionality - manager plugins control and manage how processes are executed
Processing examples¶
list all processes -
describe the
hello-worldprocess -
show all jobs for the
hello-worldprocess -
execute a job for the
hello-worldprocess -
curl -X POST "" -H "Content-Type: application/json" -d "{\"inputs\":[{\"id\":\"name\",\"type\":\"text/plain\",\"value\":\"hi there2\"}]}"
execute a job for the
hello-worldprocess with a raw response -
curl -X POST "" -H "Content-Type: application/json" -d "{\"inputs\":[{\"id\":\"name\",\"type\":\"text/plain\",\"value\":\"hi there2\"}]}"
execute a job for the
hello-worldprocess in asynchronous mode -
curl -X POST "" -H "Content-Type: application/json" -d "{\"mode\": \"async\", \"inputs\":[{\"id\":\"name\",\"type\":\"text/plain\",\"value\":\"hi there2\"}]}"
Todo
add more examples once OAProc implementation is complete | https://docs.pygeoapi.io/en/latest/data-publishing/ogcapi-processes.html | 2021-01-15T21:15:11 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.pygeoapi.io |
You create a server pool for vRealize Automation in NSX Data Center for vSphere. The server pool determines the load balancing algorithm and combines resources from the pool members.
You add the three vRealize Automation nodes as members of the server pool. service gateway to open its network settings.
- Click the Load Balancer tab and click Pools.
- Click Add and, on the General tab of the New pool dialog box, enter these values to configure the load-balancing profile.
- Click the Members tab of the New pool dialog box.
- To add each vRealize Automation cluster node to the pool, click Add, enter the values for the node, and click OK.
- On New pool dialog box, click Add. | https://docs.vmware.com/en/VMware-Validated-Design/services/deployment-of-vrealize-suite-2019-on-vmware-cloud-foundation-310/GUID-5F5C74C5-94E4-41B6-B62F-5A62CC783F07.html | 2021-01-15T21:46:25 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.vmware.com |
When the Workspace ONE Access nodes are configured in the DMZ behind a load balancer, all nodes must be configured to communicate with each other. The firewall rules are configured to allow the nodes to talk to each other on port 5262.
For the cert proxy service to work to direct requests correctly, the load balancer should be configured as follows.
- SSL re-encryption enabled.
- Publicly trusted certificate installed on the load balancer.
- X-Forwarded-For header enabled.
- RemotePort header enabled.
- Port 443 configured with a self-signed certificate on each node.
- Port 5262 configured for the cert proxy service, with SSL pass-through configured for certificate authentication. The SSL handshake is between the device and the service.
- Port 5263 configured as another instance of the cert proxy service to receive internal admin requests from the service. | https://docs.vmware.com/en/VMware-Workspace-ONE/services/WS1_android_sso_config/GUID-C1B0BEE6-69C1-4477-B88E-89A53DD57F5D.html | 2021-01-15T21:45:36 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.vmware.com |
You can view and configure native management packs and other solutions that are already installed and configure adapter instances from the other accounts page.
Note: You must activate solutions before configuring them. For more information, see Solutions Repository
The Other Accounts page includes a toolbar of options.
Click All Filters and select All to enter your criteria or filter them according to name, collector, description, solution, or adapter.
The other accounts page lists the solutions that were added and configured so that vRealize Operations Cloud can collect data. To add another account, click Add Account and select one of the solutions. For more information see, Adding Other Accounts.
Manage the Cloud Solutions
To add and configure the cloud accounts, see Manage Other Accounts | https://docs.vmware.com/en/VMware-vRealize-Operations-Cloud/services/config-guide/GUID-7798A1B5-1823-4FF4-BE0B-855DE8F011E0.html | 2021-01-15T21:49:14 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.vmware.com |
Create a super metric when you want to check the health of your environment, but cannot find a suitable metric to perform the analysis.
Procedure
- On the menu, click Administration and in the left pane click .
- Click the Add icon.The Manage Super Metric wizard opens.
- Enter a meaningful name for the super metric such as Worst VM CPU Usage (%) in the Name text box.Note: It is important that you have an intuitive name as it appears in dashboards, alerts, and reports. For meaningful names, always use space between words so that it is easier to read. Use title case for consistency with the out of the box metrics and add the unit at the end.
- Provide a brief summary of the super metric in the Description text box.Note: Information regarding the super metric, like why it was created and by whom can provide clarity and help you track your super metrics with ease.
- Select the unit of the super metrics from the Unit drop-down and click Next.Note: The super metrics unit configured here can be changed in the metrics charts, widgets, and views.The Create a formula screen appears.
- Create the formula for the super metric.For example, to add a super metric that captures the average CPU usage across all virtual machines in a cluster, perform the following steps.
- Select the function or operator. This selection helps combine the metric expression with operators and/or functions. In the super metric editor, enter avg and select the avg function.You can manually enter functions, operators, objects, object types, metrics, metrics types, property, and properties types in the text box and use the suggestive text to complete your super metric formula.
Alternatively, select the function or operator from the Functions and Operators drop-down menus.
- To create a metric expression, enter Virtual and select Virtual Machine from the object type list.
- Add the metric type, enter usage, and select the CPU|Usage (%) metric from the metric type list.Note: The expression ends with depth=1 by default. If the expression ends with depth=1, that means that the metric is assigned to an object that is one level above virtual machines in the relationship chain. However, since this super metric is for a cluster which is two levels above virtual machine in the relationship chain, change the depth to 2.
The depth can also be negative, this happens when you need to aggregate the parents of a child object. For example, when aggregating all the VMs in a datastore, the metric expression ends with depth=-1, because VM is a parent object of datastore. But, if you want to aggregate all the VMs at a Datastore Cluster level, you need to implement 2 super metrics. You cannot directly aggregate from VM to Datastore Cluster, because both are parents of a datastore. For a super metric to be valid, depth cannot be 0 (-1+1=0). Hence, you need to create the first super metric (with depth=-1) for the aggregate at the datastore level, and then build the second super metric based on the first (with depth = 1).The metric expression is created.
- To calculate the average CPU usage of powered on virtual machines in a cluster, you can add the
whereclause. Enter where=””.Note: The where clause cannot point to another object, but can point to a different metric in the same object. For example, you cannot count the number of VMs in a cluster with the CPU contention metric > SLA of that cluster. The phrase "SLA of that cluster " belongs to the cluster object, and not to the VM object. The right operand must also be a number and cannot be another super metric or variable. The where clause cannot be combined using AND, OR, NOT, which means you cannot have
where="VM CPU>4 and VM RAM>16"in your super metric formula.
- Position the pointer between the quotation marks, enter Virtual, and select the Virtual Machine object type and the System|Powered ON metric type.
- To add the numeric value for the metric, enter ==1.
- To view hints and suggestions, click ctrl+space and select the adapter type, objects, object types, metrics, metrics types, property, and properties types to build your super metric formula.
- Click the This object icon.
If the This object icon is selected during the creation of a metric expression, it means that the metric expression is associated to the object for which the super metric is created.
- You can also use the Legacy template to create a super metric formula without the suggestive text.To view the super metric formula in a human-readable format, click the Show Formula Description icon. If the formula syntax is wrong, an error message appears.Note: If you are using Internet Explorer, you are automatically directed to the legacy template.
- Verify that the super metric formula has been created correctly.
The Assign to Object Types screen appears.
- Expand the Preview section.
- In the Objects text box, enter and select a Cluster.A metric graph is displayed showing values of the metric collected for the object. Verify that the graph shows values over time.
- Click the Snapshots icon.You can save a snapshot, or download the metric chart in a .csv format.
- Click the Monitoring Objects icon.If enabled, only the objects that are being monitored are used in the formula calculation.
- Click Next.
- Associate the super metric with an object type. vRealize Operations Cloud calculates the super metric for the target objects and displays it as a metric for the object type.
- In the Assign to an Object Type text box, enter Cluster and select the Cluster Compute Resource object type.After one collection cycle, the super metric appears on each instance of the specified object type. For example, if you define a super metric to calculate the average CPU usage across all virtual machines and assign it to the cluster object type, the super metric appears as a super metric on each cluster.
- Click Next.The Enable in a Policy screen appears.
- Enable the super metric in a policy, wait for at least one collection cycle till the super metric begins collecting and processing data, and then review your super metric on the All Metrics tab.
- In the Enable in a Policy section, you can view the policies related to the object types you assigned your super metric to. Select the policy in which you want to enable the super metric. For example, select the Default Policy for Cluster.
- Click Finish.You can now view the super metric you created and the associated object type and policy on the Super Metrics page. | https://docs.vmware.com/en/VMware-vRealize-Operations-Cloud/services/config-guide/GUID-8F4DC5C7-DB37-4967-A109-118DB8E74060.html | 2021-01-15T20:47:16 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.vmware.com |
eBECAS.portal API documentation version v1.5.0.0 eBECAS API for clients to search for and retrieve details of students, enrolments and other details. Please visit
Survey
You can define a set of questions in Edmiss and setup the response options. Please navigate to Edmiss – Main – Utilities – Surveys Survey Setup You can search for existing surveys and add new Surveys. You can modify a survey add, move, delete and modify sections and questions for a selected survey. A section […]
Pathway Applications
Pathway […]
Online Payments Using Flywire
Fly […] | https://docs.edmiss.com/category/operations/ | 2021-01-15T21:44:19 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.edmiss.com |
Russia documents
Dear Adoptive Parents,
Welcome to our Registration and Dossier Preparation Directory. This site was created to help you with document preparation and to try to make the process as pain-free as possible.
If you are not yet assigned to any region, please click on the Basic Registration link below:
If you are already assigned to a particular region, please click on the name of your region below to find your specific Regional Registration and Dossier package and instructions:
If you need to print your State Adoption Laws, please visit our Adoption State Laws Directory:
You can use the Menu on the left-hand side to navigate from page to page.
Best wishes in your adoption journey!
Sincerely,
The CHI Russia Team
Please remember to check your fingerprint and CIS approval expiration dates. As a reminder, please put these dates on a calendar, refrigerator or any other place where you will be sure to see them. It is necessary that your approvals are current and that you follow the necessary steps to prevent them from expiring.>>
It is not a secret that Russia has undergone quite a few changes in recent years. Part of this transformation means that prices for goods and services have sky-rocketed. While we also experience slight inflation in the United States, it is nowhere near the rate of Russia, where the average annual inflation rate has been at around 12% for several years now and is expected to hit 14% in 2009. In addition to this, the exchange rate of US dollar to the Russian ruble is constantly shifting, making it difficult to predict where it will be at any given time. As a result, it was determined that any family that applied prior to January 01, 2008, will have to pay the $1,000 Currency Adjustment/Inflation Fee in order to “correct” the differences between Russian and US economies. This adjustment will be calculated in the final fees.
Additional links: | http://docs.childrenshope.net/ | 2009-07-04T14:58:53 | crawl-002 | crawl-002-013 | [] | docs.childrenshope.net |
Table of Contents
Product Index
Growing Up gives you the ability to change your Genesis 8 Male character’s age range from a five-year-old to an adult.
The morphs were created based on average human proportions and scale relative to age, so more accurate results could be achieved. Use the age presets, or simply create your own youth character by combining the morphs. Also included are head and nose neutralize morphs so you can blend in your favorite character faces with more. | http://docs.daz3d.com/doku.php/public/read_me/index/47623/start | 2021-10-16T05:15:09 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.daz3d.com |
Product Index
When your crew member becomes tired from working all day onboard a starship. They need someplace to rest their heads. Introducing the Sci-Fi Starship Quarters prop.
The Sci-Fi Starship Quarters prop comes with a starship interior room, fully equipped with a bed with animatable covers, a generating station, a computer, desk, and chair for studying or just internet surfing, a shower, toilet, and sink.
Also, you get 4 book props with 2 different textures for each making a total of 8 different books, a Sci-Fi sphere object, and a cup for drinking with, as well as a Sci-Fi filler object as a barrel.
Included are 3 presets for bed cover texture, 4 presets for floor rug texture, 4 presets for wall picture and 2 presets for cup texture. Plus 175 texture, height, roughness, and normal maps. Also included are 2 HDRI maps of planets so you can have a great view of outer space from your bedroom. | http://docs.daz3d.com/doku.php/public/read_me/index/64299/start | 2021-10-16T05:23:22 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.daz3d.com |
I wanted to include a 3D game but I don't own any 3D assets and it seemed improper to include Unity's Standard Assets so, here are the instructions to create a 3D example from scratch. I'm only guessing it will take around 20-30 minutes (5 minutes if you've already done it once).
It will hopefully also be a good tutorial on how to take something that's already designed to work with controllers and make it work with HappyFunTimes
Follow these steps
Make a new project
Open the asset store
Download HappyFunTimes
Import the standard character assets
Create an empty GameObject for a level manager
Rename it "LevelManager" and add a player spawner, then click the little circle
on the far right of
prefab to spawn for player
Select the "ThirdPersonContoller" prefab (setting the
prefab to spawn for player)
Select the prefab in the hierarchy, It's in
Assets/Standard Assets/Characters/ThirdPersonCharacter/Prefabs.
Add an
HFTInput script to the prefab.
Select the
ThirdPersonUserControl script in
Assets/Standard Assets/Characters/ThirdPersonCharacter/Scripts.
Duplicate it (Cmd-D / Ctrl-D or Edit->Duplicate from the menus), Rename it
ThirdPersonUserControlHFT,
and move it to the
Asset folder (or somewhere outside of Standard Assets)
Edit the script (see the gamepad docs)
The steps are
ThirdPersonUserControllerHFT
private HFTInput m_hftInput;and
private HFTGamepad m_gamepad;
Startset
m_hftInput = GetComponent<HFTInput>();and
m_gamepad = GetComponent<HTFGamepad>();.
m_gamepad.OnDisconnect += Remove;
Removemethod that calls
Destroy.
Updatecheck for
m_hftInput.GetButtonDown("fire1");
FixedUpdateadd in
+ m_hftInput.GetAxis("Horizontal")and
- m_hftInput.GetAxis("Vertical")
m_hftInput.GetButton("fire2")to the crouch check
Here's a diff of the changes (click here for a new tab).
Select the prefab again
Delete the
ThirdPersonUserControl script on the prefab, and
add the
ThirdPersonUserControlHFT script on the prefab
Create a Plane GameObject
Add a Box Collider
Set the Box center and size. center =
x:0, y:-0.5, z:0, size =
x:10, y:1, z:10
Run it
Hopefully that shows how simple it is to get started.
Let's had a little more code to show names and colors.
Here are the changes to the script (click here for a new tab).
We added a
nameTransform Transform public field. We need to set that to a transform inside the prefab
so we'll follow these steps
ThirdPersonControllerprefab into the scene hierarchy
name(or anything)
Name Transformfield of our
ThirdPersonUserControlHFTscript.
GameObject->Apply Changes to Prefab
Here's a silent video of the steps
Now run it. There should be a "name" box showing the players name in a color matching the player's controller.
NOTE, this way of showing names is not the recommended Unity way. The point of this example is to show how to use HappyFunTimes to set colors and read names, not how to use Unity. To display names the Unity way please search for examples.
Similarly in the example above all players spawn at the same place and if they fall over the edge of the world they fall forever. Spawning at random locations and dying or resetting if the player falls off the world are standard Unity game development issues and have nothing to do with HappyFuntTimes. While I want to be helpful I can't teach you Unity. I can only teach you HappyFunTimes. For Unity issues please use the Unity Forums and/or Unity Answers. Also see the samples.
The Simple and Clean examples in
Assets/HappyFunTimes/MoreSamples spawn players at random locations given a globally defined area.
For Spawn points you can apply ideas from the example above. On the LevelManager (or some global object)
you make an array of
Transforms.
public Transform[] spawnPoints;
You then make a bunch of GameObjects, as many as you want, one for each spawn point and add them to the list
of spawn points on the LevelManager in Unity just like we added the
name GameObject to the
nameTransform above. When a player
is added, then in their
Start method (or
InitNetPlayer functions if you're not using the HFTGamepad/HFTInput)
you'd find that list from the LevelManager and either pick one of the spawn points at random or pick the next one or whaever fits
your game. Again, these are Unity game development issues not HappyFunTimes issues. I hope that doesn't
come across as harsh but I also hope you see my point. | http://docs.happyfuntimes.net/docs/unity/3d-characters.html | 2021-10-16T06:22:59 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.happyfuntimes.net |
Metadata Search Syntax and Properties
Search in the Navigator Metadata component is implemented by an embedded Solr engine that supports the syntax described in LuceneQParserPlugin.
Search Syntax
You construct search strings by specifying the value of a default property, property name-value pairs, or user-defined name-value pairs using the syntax:
- Property name-value pairs - propertyName:value, where
- propertyName is one of the properties listed in Search Properties.
- value is a single value or range of values specified as [value1 TO value2]. In a value, * is a wildcard. In property values you must escape special characters :, -, /, and * with the backslash character \ or enclose the property value in quotes. For example, fileSystemPath:/tmp/hbase\-staging.
- User-defined name-value pairs - up_propertyName:value.
To construct complex strings, join multiple property-value pairs using the or and and operators.
Example Search Strings
- Filesystem path /user/admin - fileSystemPath:\/user\/admin
- Descriptions that start with the string "Banking" - description:Banking*
- Sources of type MapReduce or Hive - sourceType:MAPREDUCE or sourceType:HIVE
- Directories owned by hdfs in the path /user/hdfs/input - owner:HDFS and type:directory and fileSystemPath:\/user\/hdfs\/input
- Job started between 20:00 to 21:00 UTC - started:[2013-10-21T20:00:00.000Z TO 2013-10-21T21:00:00.000Z]
- User-defined key-value project-customer1 - up_project:customer1
Search Properties
A reference for the search schema properties.
The full list of properties are:
Default Properties
The following properties can be searched by simply specifying a property value: type, fileSystemPath, inputs, jobId, mapper, mimeType, name, originalName, outputs, owner, principal, reducer, tags. | https://docs.cloudera.com/documentation/enterprise/5-3-x/topics/navigator_search.html | 2021-10-16T04:59:28 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.cloudera.com |
(PHP 4, PHP 5, PHP 7)
imap_fetchbody — Fetch a particular section of the body of the message
imap_fetchbody ( resource $imap_stream , int $msg_number , string $section [, int $options = 0 ] ) : string
Fetch of a particular section of the body of the specified messages. Body parts are not decoded by this function.
imap_stream
An IMAP stream returned by imap_open().
msg_number
The message number
section
The part number. It is a string of integers delimited by period which index into a body part list as per the IMAP4 specification
options
A bitmask with one or more of the following:
FT_UID- The
msg_numberis a UID
FT_PEEK- Do not set the \Seen flag if not already set
FT_INTERNAL- The return string is in internal format, will not canonicalize to CRLF.
Returns a particular section of the body of the specified messages as a text string.
© 1997–2020 The PHP Documentation Group
Licensed under the Creative Commons Attribution License v3.0 or later. | https://docs.w3cub.com/php/function.imap-fetchbody | 2021-10-16T06:06:56 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.w3cub.com |
This section allows you to define password requirements for the local user accounts.
Bemerkung
Zammad does not allow you to change your LDAP password, instead, it will set a password in its local database which might confuse your users. This will be addressed in the future by #1169.
Warnung
💪 Exception for strong passwords 💪
Please note that below password policies do not affect administrators setting passwords on user accounts. While this seems strange and not safe we believe that an administrator knowing an user’s password is insecure as well.
The suggested workflow is either:
-
to use third party logins to not require local passwords at all - or -
-
to require your user to reset the password upon first login.
This way administrators are not required to set a user’s password at all!
Maximum failed logins¶
You can choose a value between
4 and
20. This defines how often a login
to a user account may fail until Zammad will lock it.
Your users can always use the „forgot password“ function to change their
password and unlock their account.
The default value is
10.
Bemerkung
Beside changing the user’s password, you can also unlock accounts via
Hinweis
Failed logins via LDAP no longer lock accounts.
2 lower and 2 upper characters¶
You can add complexity to passwords by enforcing at least 2 upper and lower case characters.
The default value is
no.
Minimum length¶
This defines the minimum password length required for users to provide
(from
4 to
20).
The default value is
6. | https://admin-docs.zammad.org/de/latest/settings/security/password.html | 2021-10-16T05:48:10 | CC-MAIN-2021-43 | 1634323583423.96 | [] | admin-docs.zammad.org |
.
Get deployment notifications
Amazon EventBridge event rules provide you with notifications about state changes for your Greengrass group deployments. EventBridge delivers a near real-time stream of system events that describes changes in AWS resources. AWS IoT Greengrass sends these events to EventBridge on an at least once basis. This means that AWS IoT Greengrass might send multiple copies of a given event to ensure delivery. Additionally, your event listeners might not receive the events in the order that the events occurred.
Amazon EventBridge is an event bus service that you can use to connect your applications with data from a variety of sources, such as Greengrass core devices and deployment notifications. For more information, see What is Amazon EventBridge? in the Amazon EventBridge User Guide.
AWS IoT Greengrass emits an event when group deployments change state. You can create an EventBridge rule that runs for all state transitions or transitions to states you specify. When a deployment enters a state that initiates a rule, EventBridge invokes the target actions defined in the rule. This allows you to send notifications, capture event information, take corrective action, or initiate other events in response to a state change. For example, you can create rules for the following use cases:
Initiate post-deployment operations, such as downloading assets and notifying personnel.
Send notifications upon a successful or failed deployment.
Publish custom metrics about deployment events.
AWS IoT Greengrass emits an event when a deployment enters the following states:
Building,
InProgress,
Success, and
Failure.
Monitoring the status of a bulk deployment operation is not currently supported. However, AWS IoT Greengrass emits state-change events for individual group deployments that are part of a bulk deployment.
Group deployment status change event
The event for a deployment state change uses the following format:
{ "version":"0", "id":" cd4d811e-ab12-322b-8255-EXAMPLEb1bc8", "detail-type":"Greengrass Deployment Status Change", "source":"aws.greengrass", "account":"123456789012", "time":"2018-03-22T00:38:11Z", "region":"us-west-2", "resources":[], "detail":{ "group-id": "284dcd4e-24bc-4c8c-a770-EXAMPLEf03b8", "deployment-id": "4f38f1a7-3dd0-42a1-af48-EXAMPLE09681", "deployment-type": "NewDeployment|Redeployment|ResetDeployment|ForceResetDeployment", "status": "Building|InProgress|Success|Failure" } }
You can create rules that apply to one or more groups. You can filter rules by one or more of the following deployment types and deployment states:
- Deployment types
NewDeployment. The first deployment of a group version.
ReDeployment. A redeployment of a group version.
ResetDeployment. Deletes deployment information stored in the AWS Cloud and on the AWS IoT Greengrass core. For more information, see Reset deployments.
ForceResetDeployment. Deletes deployment information stored in the AWS Cloud and reports success without waiting for the core to respond. Also deletes deployment information stored on the core if the core is connected or when it next connects.
- Deployment states
Building. AWS IoT Greengrass is validating the group configuration and building deployment artifacts.
InProgress. The deployment is in progress on the AWS IoT Greengrass core.
Success. The deployment was successful.
Failure. The deployment failed.
It's possible that events might be duplicated or out of order. To determine the order
of
events, use the
time property.
AWS IoT Greengrass doesn't use the
resources property, so it's always empty.
Prerequisites for creating EventBridge rules
Before you create an EventBridge rule for AWS IoT Greengrass, do the following:
Familiarize yourself with events, rules, and targets in EventBridge.
Create and configure the targets invoked by your EventBridge rules. Rules can invoke many types of targets, including:
Amazon Simple Notification Service (Amazon SNS)
AWS Lambda functions
Amazon Kinesis Video Streams
Amazon Simple Queue Service (Amazon SQS) queues
For more information, see What is Amazon EventBridge? and Getting started with Amazon EventBridge in the Amazon EventBridge User Guide.
Configure deployment notifications (console)
Use the following steps to create an EventBridge rule that publishes an Amazon SNS topic when the deployment state changes for a group. This allows web servers, email addresses, and other topic subscribers to respond to the event. For more information, see Creating a EventBridge rule that triggers on an event from an AWS resource in the Amazon EventBridge User Guide.
Open the Amazon EventBridge console
and choose Create rule.
Under Name and description, enter a name and description for the rule.
Under Define pattern, configure the rule pattern.
Choose Event pattern.
Choose Pre-defined pattern by service.
For Service provider, choose AWS.
For Service name, choose Greengrass.
For Event type, choose Greengrass Deployment Status Change.
Note
The AWS API Call via CloudTrail event type is based on AWS IoT Greengrass integration with AWS CloudTrail. You can use this option to create rules initiated by read or write calls to the AWS IoT Greengrass API. For more information, see Logging AWS IoT Greengrass API calls with AWS CloudTrail.
Choose the deployment states that initiate a notification.
To receive notifications for all state change events, choose Any state.
To receive notifications for some state change events only, choose Specific state(s), and then choose the target states.
Choose the deployment types that initiate a notification.
To receive notifications for all deployment types, choose Any state.
To receive notifications for some deployment types only, choose Specific state(s), and then choose the target deployment types.
Under Select event bus, keep the default event bus options.
Under Select targets, configure your target. This example uses an Amazon SNS topic, but you can configure other target types to send notifications.
For Target, choose SNS topic.
For Topic, choose your target topic.
Choose Add target.
Under Tags - optional, define tags for the rule or leave the fields empty.
Choose Create.
Configure deployment notifications (CLI)
Use the following steps to create an EventBridge rule that publishes an Amazon SNS topic when the deployment state changes for a group. This allows web servers, email addresses, and other topic subscribers to respond to the event.
Create the rule.
Replace
group-idwith the ID of your AWS IoT Greengrass group.
aws events put-rule \ --name TestRule \ --event-pattern "{\"source\": [\"aws.greengrass\"], \"detail\": {\"group-id\": [\"
group-id\"]}}"
Properties that are omitted from the pattern are ignored.
Add the topic as a rule target.
Replace
topic-arnwith the ARN of your Amazon SNS topic.
aws events put-targets \ --rule TestRule \ --targets "Id"="1","Arn"="
topic-arn"
Note
To allow Amazon EventBridge to call your target topic, you must add a resource-based policy to your topic. For more information, see Amazon SNS permissions in the Amazon EventBridge User Guide.
For more information, see Events and event patterns in EventBridge in the Amazon EventBridge User Guide.
Configure deployment notifications (AWS CloudFormation)
Use AWS CloudFormation templates to create EventBridge rules that send notifications about state changes for your Greengrass group deployments. For more information, see Amazon EventBridge resource type reference in the AWS CloudFormation User Guide.
See also
Deploy AWS IoT Greengrass groups to an AWS IoT Greengrass core
What is Amazon EventBridge? in the Amazon EventBridge User Guide | https://docs.aws.amazon.com/greengrass/v1/developerguide/deployment-notifications.html | 2021-10-16T06:53:26 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.aws.amazon.com |
Phase
Description
You can use the
request-validation policy to validate an incoming HTTP request according to defined rules.
A rule is defined for an input value. This input value supports Expression Language expressions and is validated against constraint
rules.
Constraint rules can be:
NOT_NULL— Input value is required
MIN— Input value is a number and its value is greater than or equal to a given parameter
MAX— Input value is a number and its value is lower than or equal to a given parameter
DATE— Input value is valid according to the date format pattern given as a parameter
PATTERN— Input value is valid according to the pattern given as a parameter
SIZE— Input value length is between two given parameters
ENUM— Field value included in ENUM
By default, if none of the rules can be validated, the policy returns a
400 status code.
Configuration
Example configuration
"policy-request-validation": { "rules": [ { "constraint": { "parameters": [ ".*\\\\.(txt)$" ], "type": "PATTERN" }, "input": "{#request.pathInfos[2]}" } ], "status": "400" }: | https://docs.gravitee.io/apim/3.x/apim_policies_request_validation.html | 2021-10-16T06:54:09 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.gravitee.io |
The if block will run whatever code is inserted inside it, if the condition attached to the top of the block is fulfilled e.g. if button a pressed > set all rgb red
if do The basic if block executes the code in the "do" position based on if the condition in the "if" position is met. The if section accepts the jigsaw puzzle shaped blocks such as button press event, maths and logic blocks or even sensor data from the units.
if else This if block allows us to run code if more than one condition is met. If the first condition is not met then it will default to whatever is placed in the "else" position. More conditions can be added by clicking the small cog on the if block and inserting "else if" blocks.
true Values of true or false (A.K.A "Boolean") can be used in the if condition to determine whether some predefined condition has been met. For example if you created a variable called game over and gave it the value True and placed this block in setup before the loop, and then used the = block in your if condition to check whether gameover is equal to true then the code placed next to "do" would run.
The if condition is highly necessary for any program of reasonable complexity, it allows the program to go in multiple directions based on events input by the user or some other predefined variable
This block is used in combination with the if conditions and takes two inputs and compares them against each other. They can be compared with the following operators = is equal to, < less than, > more than etc ..
Use the data to establish a relationship and connect to the if block as a judgment condition. For example, when the gyroscope X coordinate is greater than 90, the RGB bar is lit.
Perform logical operations on "and, or, not" for two logical relations
and When the left and right logical relations both hold, the result of the logical operation is True, otherwise it is False.
or When the left and right logical relations have a, the result of the logical operation is True, otherwise False
not Invert the logical result of an expression, that is, notTrue=False, notFalse=True
Add the relationship that needs to be logically added to both sides, modify the operation type
As the name suggests, a conditional loop refers to a loop that needs to satisfy certain conditions. When it meets the conditions we set, it loops through the contents of the program in Block.
repeat n time Set the number of cycles
repeat while Determine whether the condition is true or not, and when inception, infinite loop
Add repaet to the program, set the number of loops (loop conditions), add the program that needs to loop
Simply put, data iteration is to assign a number of numbers, one after the other, to the same variable, and once for each assignment, run the contents of do once.
for each item i in list Iterate over the contents of an array onto the variable i and run the contents of do once per iteration.
count with i from a to b by c Increased from a to b , the number of each increment is c , and the result of each increase is iterated to the variable i, and once per iteration, the content of do is run once.
break out of loop You can choose to jump out of the entire loop, or jump out of this loop, and execute when you execute the block.
Add an iterative block to the program, set the iteration parameters, and the do program that runs after each iteration. Example: Iterate the brightness of the RGB bar from 0 to 100.
Functions are a tool that help us wrap our code into one neat package that we can give a name, and then call that name anywhere in our program and it will run the code contained within it. Functions can help to keep our code neat and concise and avoid repeating the same things over and over.
Select functions from the blocks menu and drag it to the coding area. Enter a new name for your function in the text box provided on the block.
When we add a function to the coding area a new block will appear in the function blocks menu. We can add this block to other parts of our code and it will represent whatever code is put inside the main function block. | https://docs.m5stack.com/en/uiflow/logic?id=if | 2021-10-16T06:08:10 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.m5stack.com |
Secure Email to Whitelisted Domain(s)
WebChart provides the ability to securely email documents to specific whitelisted domains. Securely emailing is a network setup usually via VPN and/or a secure connection between MIE and the WebChart system domain, before they can be marked as whitelisted (in a system setting) to email to. WebChart must have a secure connection before having the ability to email out documents. Please contact your MIE Implementer for setup information.
The ability to send documents via ‘Direct’ HISP connection does not rely on this setup. To email documents out to other domains must first have an established secure connection within WebChart (contact your MIE Implementer for that setup), then a system setting set, then security permission. Sending documents via “direct” for meaningful use is still valid via the send method.
Permissions and Security
Initially, the WebChart system must be setup with secure connection to other email domains. Work with your MIE Implementer and MIE developer for this connection. Once a secure connection between MIE and the WebChart system domain is established, the MIE Implementer or developer will set that domain into the system setting Webchart, Email, Whitelist Domains so the WebChart system knows which domains are setup as secure domains. You would never add gmail.com or those type of domains here in the system setting. MIE must establish a secure connection (via specific methods) to the domain you wish users to be able to email out to.
Lastly, the WebChart users must have security access to securely email documents if they are permitted to email out documents from WebChart to any whitelisted domains.
This security permission is different than the security permission to email patients. This security setting is for being able to securely email documents/records from patient charts to others who use a whitelisted domain email address.
Securely Email an Individual Document
When in the patient’s chart and you wish to email a specific (one) document, click the Print or Fax option on the top right of the opened document screen view.
The sending screen will open and here you select the radio button titled Secure Email in order to email this document to someone who has an email address within a specified whitelisted domain (whose domain connection was securely established previously with
WebChart
and is set in the system setting).
Once the secure email radio button is selected, the bottom portion of that screen will show a few fields; Recipients, Subject, free text box.
- Recipients: Begin typing in the email address to whom you want to send the document to. Hit the tab keyboard key or the addbutton. If need to add another email address to email the same document to, continue again. Otherwise even if just sending to one email address, you need to hit the tab key or the add button. Doing that will trigger the WebChart system to process if that email domain is a secure connected whitelist domain or a ‘direct’ HISP domain connection or if the email address is insecure and cannot be transmitted to.
If the email address (you are sending to) has a secure connection to WebChart and has been set in the system setting by the MIE developer or MIE Implementer, once you tab or add that email in this screen, you will see an envelope icon next to the recipient’s email address. The envelope icon means it will send via email and this email address domain has been specified as whitelisted in your WebChart system settings.
If the email address (you are sending to) has a HISP ‘direct’ secure connection to WebChart , you will see a lock icon next to the recipient’s email address. The document will be transmitted via ‘direct’ protocol exchange methods and not a regular secure whitelist email connection. It will also ‘send from’ the sender’s ‘direct’ address. ‘Direct’ HISP connection is separate from a whitelist domain connection. Contact your MIE Implementer to be set up with ‘direct’ HISP secure connection exchange.
In review, the WebChart system can only email documents via secure whitelisted domains that were previously setup with secure connection by MIE developers otherwise you will see the pop-up message that you have entered in an insecure email address (domain) and WebChart cannot send the document via email.
If you hover your mouse over the (?) help bubble in the recipients field, it will help explain the recipient must have a ‘direct’ protocol exchange email address or an email address (domain) of what has been setup as whitelisted in your system. It will list those domains that were setup with secure connection in this (?) help bubble so you know what email address domains are secure to send documents to.
- Email From: This is the address the recipient of the email will see as the ‘From’. It will auto-populate your email address from your WebChart username screen.
- Subject: Type in a subject for the email. The email will attach the document you selected, but this is the ‘subject line’ of the email that the recipient will receive in their email inbox.
- Free Text Box: Type in any free text which will be received by the recipient in the body of the email (along with the attached document you selected to email out).
Click SEND EMAIL button when wish to generate the email to be sent out.
WebChart will generate the email and your screen will show the processing. It displays the document that was sent in the email and once completed it will display the statement that the Email Sent Successfully. Again, only specific storage types can be emailed out, so if you (for example) send out an email with a lab result document, this screen will show that document as unsuccessful attaching that document, but the email and remainder of other documents (if applicable storage type) will be sent out successfully (note: documents that are HTML storage type will be received in the email as a PDF attachment). System Setting E-Chart, Email, Attach TIFF as PDF (is enabled by default) when using secure email will force tiffs to be PDFs when attached to a message.
You can X out of that screen to get back to the chart you were in.
Here is an example of what the recipient receives:
The from is the email address specified from the field when you generated the email. The document is an attachment in the email (in this example it was a word document of an Ultrasound report from the patient’s chart). The subject has your typed subject from the field when you generated the email. The free text notes field is in the body of the email to the recipient. WebChart then adds automatically (to the body of the email) the name of the patient/employee the attachment is in regards to and the date of service (DOS) and what the document type/description is from WebChart .
Audit Log of Generated Emails
WebChart keeps a history of when a document was emailed. When you go find the individual document and click on the properties
When you scroll all the way to the bottom you will see the bucket Email History on Document xxx. Here you can see when and who emailed out this document.
The generated email itself stores as a separate document within the patient/employee chart. It stores as a document named ‘email’ with the date of service as the date the email was generated. This keeps record of what/when was emailed out as it’s own separate document in the patient/employee chart.
If you click to open the stored email document that stored to the patient/employee chart it will display the specific email that was send out (generated) from the system and what document was emailed out.
| https://docs.webchartnow.com/functions/system-administration/system-controls/secure-email-to-whitelisted-domain-s.html | 2021-10-16T05:59:54 | CC-MAIN-2021-43 | 1634323583423.96 | [array(['secure-email-to-whitelisted-domain-s.images/image1.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image3.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image5.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image4.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image7.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image6.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image9.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image8.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image12.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image10.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image11.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image13.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image14.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image15.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image16.png', None],
dtype=object)
array(['secure-email-to-whitelisted-domain-s.images/image17.png', None],
dtype=object) ] | docs.webchartnow.com |
Contributing
Interested in contributing to AppSignal? Great!
Helping out is possible in many ways. By reporting issues, fixing bugs, adding features and even fixing typos. Let us know if you have any questions about contributing to any of our projects.
We're very happy sending anyone who contributes Stroopwaffles. Have look at everyone we sent a package to so far on our Stroopwaffles page.
Open Source projects
All open source projects are available on our AppSignal GitHub organization.
Our main projects include:
- AppSignal Ruby library
- AppSignal Elixir library
- AppSignal Front-end JavaScript library
- AppSignal Node.js library
- AppSignal Documentation - That's this website!
- AppSignal Examples applications
Other projects we have open sourced, and are used internally by other projects, include:
- AppSignal Rust agent Early stage proof of concept of AppSignal Rust integration.
- sql_lexer Rust library to lex and sanitize SQL queries.
- probes.rs Rust library to read out system stats from a machine running Linux.
- public_config Parts of AppSignal.com configuration that are public, such as magic dashboards.
Using git and GitHub
We organize most of our git repositories on GitHub using a
main and
develop branch. The
main branch corresponds to the current stable
release of a project. The
develop branch is used for development of features
that will end up in the next minor release of a project.
Feature branches are used to submit bug fixes and new features using GitHub Pull Requests. When submitting a Pull Request the changes are tested against a variety of Ruby/Elixir versions and dependencies on Travis CI. The submitted change is also reviewed by two or more AppSignal project members before it is accepted.
Versioning
AppSignal is very open about changes to its product. Changes to integrations and the application itself are all visible on AppSignal.com/changelog. Big updates will also be posted on our blog.
All AppSignal integration projects use Semantic Versioning for versioning of releases. Documentation and other non-version specific projects do not use explicit versioning, but will mention related version specific content, such as in which version a feature was introduced.
Every stable and unstable release is tagged in git with a version tag and can be found on every project's GitHub page under "Releases".
Reporting bugs
Report a bug by opening an issue on the GitHub project page of the respective project. If the bug report contains some sensitive information that is necessary for the complete report you can also contact us to submit the bug.
When the bug is a security sensitive issue, please refer to the Security issues section.
When submitting a bug fix please create a Pull Request on the project's GitHub
page please submit the change against the
main branch.
Feature requests
Missing a feature or integration in AppSignal? Please let us know when something comes to mind by sending us an email or submitting an issue on the project's GitHub page.
It's also possible to submit a feature on one of our Open Source projects by
creating a Pull Request targeted on the project's
develop branch.
Security issues
If you think you've found a security issue with regards to our application, network or integrations, please let us know immediately by sending an email to [email protected].
Code of Conduct
Everyone interacting in AppSignal's codebases and issue trackers is expected to follow the contributor code of conduct. Please report unacceptable behavior to [email protected]. | https://docs.appsignal.com/appsignal/contributing/ | 2021-10-16T05:58:26 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.appsignal.com |
How to SSH to your device
#How-to
In this quick guide, we will explore how you can SSH into your AutoPi device.
First connect to the hotspot. By default, the SSID is "AutoPi" followed by the last 13 characters of your unit ID, like so "AutoPi-XXXXXXXXXXXX". Also, by default, the password is the first 13 chars of your device ID, including dashes.
Like so: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
tip
Notice this is not the unit ID like the SSID. You are able to view the device ID in the basic device settings (Account > Devices).
Connect to the device via SSH, using the details below. This can be done via most computers (you need an SSH client), or even mobile devices with an SSH client installed.
Host: local.autopi.io
User: pi
Password: autopi2018
Example:
ssh [email protected]
#Quick notes
#Execute commands
All AutoPi native commands (commands normally run from the Cloud terminal) can be run from the
SSH prompt by adding the
autopi function in front. An example is:
autopi power.status
This can be handly when doing local development.
#Allow SSH connections to the device from your local WiFi
When you are not connected to the device hotspot, but the device is instead configured to connect to your home WiFi, you will need to change the advanced configuration to allow this SSH connection. This is by default disabled as it can be insecure if you are using a public WiFi connection.
You can find the setting to enable this, in Settings > Advanced > WiFi > Client > Allow Ssh.
In such a case, the host will be the IP address that has been assigned to the device by your router.
#Discussion
If you'd like to discuss this topic with us or other fellow community members, you can do so on our community page dedicated for this guide: Guide: How to SSH to your device. | https://docs.autopi.io/guides/how-to-ssh-to-your-device/ | 2021-10-16T04:50:33 | CC-MAIN-2021-43 | 1634323583423.96 | [] | docs.autopi.io |
Customizing Tabs
Last updated May 5th, 2019 | Page history | Improve this page | Report an issue
When you use Form Customization to customize the forms in the MODX manager, you can either allow/disallow access to specific fields, or you can allow/disallow access to entire tabs or parts of tabs.
When a Tab is not a Tab When setting up Form Customization rules under Security --> Form Customization, then editing a behavior set, there is a list of "Tabs", but the list doesn't correspond directly to the familiar "Document", "Settings", "Template Variables", and "Resource Groups". Rather, the tab ids listed here refer to entire tabs or parts of tabs.
Tab Regions¶
These are the various tab regions whose visibility can be toggled.
Adding New Tabs¶
Adding a new tab is quite simple. Edit your Set, and open the Regions tab. Click the Create New Tab button, give it an ID like "my-new-tab" and a description.
Any time you need to refer to the tab region, for example if moving Template Variables into the new tab, use the ID you gave it. | https://docs.modx.com/3.x/en/building-sites/client-proofing/form-customization/tabs | 2021-10-16T04:51:30 | CC-MAIN-2021-43 | 1634323583423.96 | [array(['/3.x/en/building-sites/client-proofing/form-customization/fc_new_tab.jpg',
None], dtype=object) ] | docs.modx.com |
EOL: An End of Life announcement was issued for SentryOne Test on March 15, 2021. See the Solarwinds End of Life Policy for more information.
Visual Studio Extension
Installing the Visual Studio Extension
Note: Before starting the installation process, verify that the installation environment meets all System Requirements.
Install the SentryOne Test Visual Studio Extension by completing the following:
- Download the SentryOne Test Visual Studio Extension installer from the Visual Studio Marketplace.
- After downloading the SentryOne.Test.vsix, execute the file to display the Visual Studio Extension installer. Select the versions of Visual Studio you wish to contain SentryOne Test, then select Install.
Success: Once the installation has completed, the selected Visual Studio editions are ready to create a new SentryOne Test project!
Remote Agent
Installing the Remote Agent
SentryOne Test Remote Agent Setup Wizard
To install the SentryOne Test Remote Agent, complete the following steps:
1. Select the Download icon on the SentryOne Test Dashboard, and then select Remote Agent. Select Run on the windows download pop-up alert to display the SentryOne Test Remote Agent Setup Wizard.
2. Select Next to continue the installation and display the End User License Agreement. Read through the End User- License agreement, select I accept the terms in the License agreement, then select Next to continue.
3. Choose where to install the SentryOne Test Remote agent. By default, the remote agent installs at C:\Program Files (x86)\SentryOne\SentryOne Test\RemoteAgent\. Select Next to choose the default location, and display the Ready to Install SentryOne Test Remote Agent page.
Note: To choose a custom install location, select Change, and then browse to the desired destination location. Select OK to finalize the selection.
4. Select Install to begin the installation. After the installation completes, select Finish to finalize the installation and open the SentryOne Test Remote Agent Configuration Tool.
SentryOne Test Remote Agent Configuration Tool
1. Verify the connection paths for the desired test frameworks, and then select Next to continue the configuration.
2. Enter the Service instance name, then verify the Target host API URL. Select Next to display the Authenticate with Target Host page.
3.Select Open Authentication Tool, then log into your SentryOne account to Authenticate the targets. Select Next to display the Ready to Install page.
4. On the Ready to install page, review the configuration values, then select Next to complete the installation.
5. Once installation has completed, select Start license manager to open the Pragmatic works License manager. Select Install a license to open the Activate Software window.
6. Select Use an Activation Key, type or paste your activation key in the Activation key text box, then select Activate to activate the license.
7. Once your license activation is successful, select Start Service to enable the Remote Agent on your machine.
Success: Your SentryOne Test Remote Agent is now ready to use!
SentryOne Test On-Premises
Installing SentryOne Test On-Premises
Note: SentryOne Test is offered as a SaaS solution. If you are performing an on-premises installation (instead of using test.sentryone.com), follow the instructions on this tab.
After downloading the SentryOne Test On-Premises installer, install SentryOne Test on your machine by completing the following steps:
1. Launch the SentryOne Test On-Premises Installer (SentryOneTestInstaller.msi), then select Next.
2. Select I accept the terms in the License Agreement, then select Next to continue the installation.
3. Enter the Server name, User name, and password for the server connection. Select Test Connection to verify the connection. After a successful connection, select Next to continue the installation.
Note: If you want to use Windows Authentication, leave the User name and Password fields empty.
4. Enter the host name for the server connection, then select Next to continue the installation.
Note: You can enter multiple host names and separate them by a comma ( , ).
5. Select Install to start the installation.
6. Select Finish to complete the installation and close the installer.
Important: Access the host site at http://{YourHostName}:44301. Access the SentryOne Test API endpoint at: http://{YourHostName}:44320. This is the API endpoint needed to integrate a SentryOne Test project with the server, and to install a remote agent. | https://docs.sentryone.com/help/sentryone-test-installation | 2021-10-16T06:16:03 | CC-MAIN-2021-43 | 1634323583423.96 | [array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c74535f6e121cdf02b9dde3/n/s1-test-remote-agent-setup-wizard-destination-folder-20185.png',
'SentryOne Test Remote Agent Setup Destination Folder Version 2018.5 SentryOne Test Remote Agent Setup Destination Folder'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c74538d8e121cf34019669f/n/s1-test-remote-agent-setup-wizard-change-destination-folder-20185.png',
'SentryOne Test Remote Agent Setup Change destination folder Version 2018.5 SentryOne Test Remote Agent Setup Change destination folder'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c7455c16e121ce004b9ddfa/n/s1-test-remote-agent-configuration-tool-verify-paths-20185.png',
'SentryOne Test Remote Agent Configuration Tool Verify paths for test framework executables Version 2018.5 SentryOne Test Remote Agent Configuration Tool Verify paths for test framework executables'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c7455e4ec161cc22bdbb630/n/s1-test-remote-agent-configuration-tool-set-service-instance-name-20185.png',
'SentryOne Test Remote Agent Configuration Tool Set service instance name and target host Version 2018.5 SentryOne Test Remote Agent Configuration Tool Set service instance name and target host'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c745c25ad121c1d606e7a72/n/s1-test-activate-software-licensing-use-an-activation-key-20185.png',
'SentryOne Test Activate a License Version 2018.5 SentryOne Test Activate a License'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c745c66ec161ca035dbb599/n/s1-test-remote-agent-configuration-tool-start-service-20185.png',
'SentryOne Test Remote Agent Configuration Tool Start Service Version 2018.5 SentryOne Test Remote Agent Configuration Tool Start Service'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d1f786a6e121c3b6bc292e0/n/s1-test-on-prem-installer-select-next-1931.png',
'SentryOne Test On Premises Installer select Next Version 19.3.1 SentryOne Test On Premises Installer select Next'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d1f78978e121c967b091067/n/s1-test-on-prem-installer-license-agreement-1931.png',
'SentryOne Test On Premises Installer License Agreement Version 19.3.1 SentryOne Test On Premises Installer License Agreement'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d1f7918ad121c34065bf5ae/n/s1-test-on-prem-installer-host-name-1931.png',
'SentryOne Test On Premises Installer Host Name Version 19.3.1 SentryOne Test On Premises Installer Host Name'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d1f79476e121c7f65c29384/n/s1-test-on-prem-installer-install-1931.png',
'SentryOne Test On Premises Installer select Install Version 19.3.1 SentryOne Test On Premises Installer select Install'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d1f7959ec161c1c683ec67f/n/s1-test-on-prem-installer-finish-1931.png',
'SentryOne Test On Premises Installer select Finish Version 19.3.1 SentryOne Test On Premises Installer select Finish'],
dtype=object) ] | docs.sentryone.com |
:
Next topic: Architecture: BlackBerry MVS high availability
Previous topic: BlackBerry MVS configuration
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/43631/Arch_BBMVS_GW+SM_1967486_11.jsp | 2015-04-18T05:17:56 | CC-MAIN-2015-18 | 1429246633799.48 | [array(['ARCH_BBMVS_Standalone_GW_and_SM_1968229_11.jpg', None],
dtype=object) ] | docs.blackberry.com |
Message-ID: <885377136.667.1429333915602.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_666_982117254.1429333915602" ------=_Part_666_982117254.1429333915602 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
The groovy native launcher is a native program for launching gro=
ovy scripts. It compiles to an executable binary file, e.g. groovy.exe on W=
indows. Note that you still need to have groovy and a JRE or JDK installed,=
i.e. the native launcher is a native executable replacement for the startu=
p scripts (groovy.bat, groovy).
The native launcher is included in the Groovy Windows installer. For oth= er platforms, if your package management system does not have it, you will = have to compile it yourself. This is not hard, you just need to have SCons = (and Python) installed.=20
Essentially the launcher does the same thing that the normal Java launch= er (java executable) - it dynamically loads the dynamic library containing = the JVM and hands the execution over to it. It does not start a separate pr= ocess (i.e. it does not call the java executable).=20
The native launcher aims to compile and run on any os and any JDK/JRE &g= t;=3D 1.4. If you are using a combination of os+jdk/jre that is not support= ed, please post a JIRA enhancement request and support will be added.= =20
At the moment, the following platforms have been tested:=20
At the moment, the following JDKs/JREs have been tested=20
The current version of the native launcher works with any version of Gro= ovy.=20
Here are precompiled binaries for Windows:=20
They are not guaranteed to be completely up to date with the sources of = HEAD in the Subversion repository, but they should work.=20
Hopefully we will have precompiled binaries for all supported platforms = in the future.=20
The same binaries work for all the Groovy executables (groovy, groovyc, = groovysh...). Just copy / (soft)link to the executable with the name of the= executable you want, e.g. on Windows=20
copy groovy.exe groovyc.exe=20
and on Linux, Mac OS X, Solaris, etc.:=20
ln -s groovy groovyc=20
In addition the build produces launchers for gant and grails.=20
A note about gant: the gant launcher is meant for standalone gant instal= lation. To use gant installed into your groovy installation (e.g. by groovy= windows installer) use the renamed groovy executable as described above.= p>=20
To get the sources please see the native launcher git repository.=20
The executables are compiled using SCons. For Windows there is a SCon= s installer that can be used after having installed Python - The SCons = build framework is written in Python. SCons is part of Cygwin and can be in= stalled using the usual installer. The same goes for MacPorts on Mac OS X, = though there is a disk image installer as well. For Ubuntu, Debian, Fedora,= SuSE, etc. SCons is packages and so the usual package management can be us= ed to install. The build has only been tested with SCons 0.98 and greater, = it may not work with earlier versions of SCons.=20
Once you have SCons installed then simply typing:=20
scons=20
will compile things for the platform you are compiling on. Type:=20
scons -h=20
for a short help message about the native launcher build and=20
scons -H=20
for a message about using scons.=20
The native launcher, if compiled with default options, will look up groo= vy and java installation locations as described below. However, on some pla= tforms (linux and other *nix variants) it may be desirable for performance = and security reasons to "hardwire" these locations, i.e. set them= to fixed values at compile time.=20
Currently, native launcher supports setting three things at compile time= (together or separately): GROOVY_HOME, GROOVY_STARTUP_JAR and JAVA_HOME by= passing in the values to scons build via "extramacros" option.= p>=20
Example:=20
scons extramacros=3D"GROOVY_HOME=3D/usr/share/groovy GROOV= Y_STARTUP_JAR=3D/usr/share/groovy/lib/groovy-1.5.1.jar"=20
To test that the binary you compiled uses preset locations, run groovy w= / environment variable __JLAUNCHER_DEBUG set, e.g.=20
__JLAUNCHER_DEBUG=3Dtrue build/groovy -v=20
The launcher will print debug info to stderr, among other things how it = obtained the locations of groovy and java installations.=20
On Windows you can either compile with the Microsoft cl compiler and lin= ker or you can use GCC, either the MinGW version of the Cygwin ver= sion.=20
If you are not already using Cygwin, then you may want to investigate us= ing MSYS and the MinGW toolchain.=20
Compiling with the Cygwin or MinGW GCC produces executables that depend = only dlls that are found on Windows by default. If you compile with Visual = Studio, you will need an extra dll that may or may not be found on a partic= ular windows system. The dll you need depends on the Visual Studio version,= see here for details.=20
Try running the generated executable - if there's no complaint abo= ut a missing dll, you're fine.=20
To use the native launcher, you need to either place the executable in t= he bin directory of groovy installation OR set the GROOVY_HOME environment = variable to point to your groovy installation.=20
The launcher primarily tries to find the groovy installation by seeing w= hether it is sitting in the bin directory of one. If not, it resorts to usi= ng GROOVY_HOME environment variable. Note that this means that GROOVY_HOME = environment variable does not need to be set to be able to run groovy.= =20
The native launcher uses the following order to look up java installatio= n to use:=20
To put it another way - JAVA_HOME does not need to be set.=20
The native launcher accepts accepts all the same parameters as the .bat = / shell script launchers, and a few others on top of that. For details, typ= e=20
groovy -h=20
Any options not recognized as options to groovy are passed on to the jvm= , so you can e.g. do=20
groovy -Xmx250m myscript.groovy=20
The -client (default) and -server options to designate the type of jvm t= o use are also supported, so you can do=20
groovy -Xmx250m -server myscript.groovy=20
Note that no aliases like -hotspot, -jrockit etc. are accepted - it's ei= ther -client or -server=20
You can freely mix jvm parameters and groovy parameters. E.g. in the fol= lowing -d is param to groovy and -Dmy.prop=3Dfoo / -Xmx200m are params to t= he jvm:=20
groovy -Dmy.prop=3Dfoo -d -Xmx200m myscript.groovy=20
The environment variable JAVA_OPTS can be used to set jvm options you wa=
nt to be in effect every time you run groovy, e.g. (win example)
set = JAVA_OPTS=3D-Xms100m -Xmx200m
You can achieve the same effect by using environment variable JAVA_TOOL_= OPTIONS, see un.com/j2se/1.5.0/docs/guide/jvmti/jvmti.html#tooloptions and<= /a>=20
Note that if you set the same option from the command line that is alrea= dy set in JAVA_OPTS, the one given on the command line overrides the one gi= ven in JAVA_OPTS.=20
By default, the Windows version of the native launcher only understands = Windows style paths if compiled using the Microsoft compiler or the MinGW G= CC. If you compile using Cygwin GCC then by default, Cygwin and Windows sty= le paths are understood. The variable cygwinsupport controls the behaviour.= If you need to exolicitly set whether the Cygwin path code is included in = the build then you can set an option on the command line:=20
scons cygwinsupport=3DFalse=20
allowed values are True and False. Alternatively if you want to set the = value explicitly for every build you can add a line like:=20
cygwinsupport =3D False=20
to the file local.build.options in the same directory as the SConstruct = file.=20
Cygwin path support is a little experimental, but there are no known pro= blem at the moment. If you use it, could you please report back success or = any problems via the Groovy user mailing list.=20
Similarly to java.exe and javaw.exe on a jdk, the build process produces= groovy.exe and groovyw.exe on windows. The difference is the same as w/ ja= va)= .=20
If you want to run your groovy scripts on windows so that they seem like= any other commands (i.e. if you have myscript.groovy on your PATH, you can= just type myscript), you have to associate groovy script files with the gr= oovy executable. If you use the groovy windows installer it will do this fo= r you. Otherwise, do as follows:=20
Why have a native launcher, why aren't the startup scripts (groovy.bat, = groovy.sh) sufficient? Here are some reasons:=20
Also, the launcher has been written so that the source can be used to ea= sily create a native launcher for any Java program.=20
LD_LIBRARY_PATH=3D$JAVA_HOME/jre/lib/sparc/server:$JAVA_HOME/jre/lib/spa= rc:$LD_LIBRARY_PATH groovy -server myscript.groovy=20
If you have expertise with any of the following and want to help, please= email me at antti dot karanta (at) hornankuusi dot fi:=20 | http://docs.codehaus.org/exportword?pageId=66099 | 2015-04-18T05:11:55 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.codehaus.org |
ball install as below.
Tarball install
To install Splunk on a FreeBSD, the disk partition has enough space to hold the uncompressed volume of the data you plan to keep indexed.
After you install
To ensure that Splunk
A restart of the OS is required for the changes to effect.
If your server has less than 2 GB of memory, reduce the values accordingly.
What gets installed
To see the list of Splunk packages:
pkg_info -L splunk
To list all packages:
pkg_info
Start Splunk
Splunk can run as any user on the local system. If you run Splunk as a non-root user, make sure that Splunk has the appropriate permissions to read the inputs that you specify.
To start Splunk from the command line interface, run the following command from
$SPLUNK_HOME/bin directory (where $SPLUNK_HOME is the directory into which you installed Splunk):
./splunk start
By convention, this document uses:
$SPLUNK_HOMEto identify the path to your Splunk installation.
$SPLUNK_HOME/bin/to indicate the location of the command line interface.
Startup options
The first time you start Splunk after a new installation, you must accept the license agreement. To start Splunk and accept the license in one step:
$SPLUNK_HOME/bin/splunk start --accept-license
Note: There are two dashes before the
accept-license option.
Launch Splunk Web and log in
After you start Splunk, what comes next?
Manage your license
If you are performing a new installation of Splunk or switching from one license type to another, you must install or update your license.
Uninstall Splunk
Use your local package management commands to uninstall Splunk. In most cases, files that were not originally installed by the package will be retained. These files include your configuration and index files which are under your installation directory.
To uninstall Splunk from the default location:
pkg_delete splunk
To uninstall Splunk from a different location:
pkg_delete -p /usr/splunk splunk
This documentation applies to the following versions of Splunk: 4.3 , 4.3.1 , 4.3.2 , 4.3.3 , 4.3.4 , 4.3.5 , 4.3.6 , 4.3.7 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/4.3.2/Installation/InstallonFreeBSD | 2015-04-18T04:53:07 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.splunk.com |
About BlackBerry Travel
The.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/48994/amc1357083970574.jsp | 2015-04-18T05:08:30 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.blackberry.com |
Difference between revisions of "Can articles be assigned to multiple categories or sections?"
From Joomla! Documentation
Revision as of 18:59, 18 September 2012
No, Articles and other content items cannot be assigned to multiple categories or sections. In Joomla! 1.0 and 1.5, content items are restricted to a single category in a single section. In Joomla! 1.6 and newer you. | https://docs.joomla.org/index.php?title=Can_articles_be_assigned_to_multiple_categories_or_sections%3F&diff=75424&oldid=73485 | 2015-04-18T05:04:30 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
Messaging#
Once a channel is provisioned and thing is connected to it, it can start to publish messages on the channel. The following sections will provide an example of message publishing for each of the supported protocols.
HTTP#
To publish message over channel, thing should send following request:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Authorization: <thing_token>"<channel_id>/messages -d '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]'
Note that if you're going to use senml message format, you should always send messages as an array.
For more information about the HTTP messaging service API, please check out the API documentation.
MQTT#
To send and receive messages over MQTT you could use Mosquitto tools, or Paho if you want to use MQTT over WebSocket.
To publish message over channel, thing should call following command:
mosquitto_pub -u <thing_id> -P <thing_key> -t channels/<channel_id>/messages -h localhost -m '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]'
To subscribe to channel, thing should call following command:
mosquitto_sub -u <thing_id> -P <thing_key> -t channels/<channel_id>/messages -h localhost
If you want to use standard topic such as
channels/<channel_id>/messages with SenML content type (JSON or CBOR), you should use following topic
channels/<channel_id>/messages.
If you are using TLS to secure MQTT connection, add
--cafile docker/ssl/certs/ca.crt
to every command.
CoAP#
CoAP adapter implements CoAP protocol using underlying UDP and according to RFC 7252. To send and receive messages over CoAP, you can use Copper CoAP user-agent. To set the add-on, please follow the installation instructions provided here. Once the Mozilla Firefox and Copper are ready and CoAP adapter is running locally on the default port (5683), you can navigate to the appropriate URL and start using CoAP. The URL should look like this:
coap://localhost/channels/<channel_id>/messages?auth=<thing_auth_key>
To send a message, use
POST request.
To subscribe, send
GET request with Observe option set to 0. There are two ways to unsubscribe:
1) Send
GET request with Observe option set to 1.
2) Forget the token and send
RST message as a response to
CONF message received by the server.
The most of the notifications received from the Adapter are non-confirmable. By RFC 7641:
Server must send a notification in a confirmable message instead of a non-confirmable message at least every 24 hours. This prevents a client that went away or is no longer interested from remaining in the list of observers indefinitely.
CoAP Adapter sends these notifications every 12 hours. To configure this period, please check adapter documentation If the client is no longer interested in receiving notifications, the second scenario described above can be used to unsubscribe.
WS#
Mainflux supports MQTT-over-WS, rather than pure WS protocol. this bring numerous benefits for IoT applications that are derived from the properties of MQTT - like QoS and PUB/SUB features.
There are 2 reccomended Javascript libraries for implementing browser support for Mainflux MQTT-over-WS connectivity:
As WS is an extension of HTTP protocol, Mainflux exposes it on port
80, so it's usage is practically transparent.
Additionally, please notice that since same port as for HTTP is used (
80), and extension URL
/mqtt should be used -
i.e. connection URL should be
ws://<host_addr>/mqtt.
For quick testing you can use HiveMQ UI tool.
Here is an example of a browser application connecting to Mainflux server and sending and receiving messages over WebSocket using MQTT.js library:
<script src=""></script> <script> // Initialize a mqtt variable globally console.log(mqtt) // connection option const options = { clean: true, // retain session connectTimeout: 4000, // Timeout period // Authentication information clientId: '14d6c682-fb5a-4d28-b670-ee565ab5866c', username: '14d6c682-fb5a-4d28-b670-ee565ab5866c', password: 'ec82f341-d4b5-4c77-ae05-34877a62428f', } var channelId = '08676a76-101d-439c-b62e-d4bb3b014337' var topic = 'channels/' + channelId + '/messages' // Connect string, and specify the connection method by the protocol // ws Unencrypted WebSocket connection // wss Encrypted WebSocket connection const connectUrl = 'ws://localhost/mqtt' const client = mqtt.connect(connectUrl, options) client.on('reconnect', (error) => { console.log('reconnecting:', error) }) client.on('error', (error) => { console.log('Connection failed:', error) }) client.on('connect', function () { console.log('client connected:' + options.clientId) client.subscribe(topic, { qos: 0 }) client.publish(topic, 'WS connection demo!', { qos: 0, retain: false }) }) client.on('message', function (topic, message, packet) { console.log('Received Message:= ' + message.toString() + '\nOn topic:= ' + topic) }) client.on('close', function () { console.log(options.clientId + ' disconnected') }) </script>
N.B. Eclipse Paho lib adds sub-URL
/mqtt automaticlly, so procedure for connecting to the server can be something like this:
var loc = { hostname: 'localhost', port: 80 } // Create a client instance client = new Paho.MQTT.Client(loc.hostname, Number(loc.port), "clientId") // Connect the client client.connect({onSuccess:onConnect});
Subtopics#
In order to use subtopics and give more meaning to your pub/sub channel, you can simply add any suffix to base
/channels/<channel_id>/messages topic.
Example subtopic publish/subscribe for bedroom temperature would be
channels/<channel_id>/messages/bedroom/temperature.
Subtopics are generic and multilevel. You can use almost any suffix with any depth.
Topics with subtopics are propagated to NATS broker in the following format
channels.<channel_id>.<optional_subtopic>.
Our example topic
channels/<channel_id>/messages/bedroom/temperature will be translated to appropriate NATS topic
channels.<channel_id>.bedroom.temperature.
You can use multilevel subtopics, that have multiple parts. These parts are separated by
. or
/ separators.
When you use combination of these two, have in mind that behind the scene,
/ separator will be replaced with
..
Every empty part of subtopic will be removed. What this means is that subtopic
a///b is equivalent to
a/b.
When you want to subscribe, you can use NATS wildcards
* and
>. Every subtopic part can have
* or
> as it's value, but if there is any other character beside these wildcards, subtopic will be invalid. What this means is that subtopics such as
a.b*c.d will be invalid, while
a.b.*.c.d will be valid.
Authorization is done on channel level, so you only have to have access to channel in order to have access to it's subtopics.
Note: When using MQTT, it's recommended that you use standard MQTT wildcards
+ and
#.
For more information and examples checkout official nats.io documentation | http://docs.mainflux.io/messaging/ | 2021-11-27T03:06:15 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.mainflux.io |
What is a codemark?
Quite simply, a codemark is a discussion connected to the code. It could be a question, a suggestion, a bug report, or documentation. All of these discussions are saved, anchored to the blocks of code they refer to, so that they can be leveraged in the future. It could be a new developer joining the team, a developer trying to fix a bug in someone else’s code, or even just you trying to remember why you made that change six months ago. Whatever the case, CodeStream helps you understand the code by surfacing the discussions in a contextual way.
Even as a file changes over time, the codemarks remain connected to the code. Add some new lines of code above the code block, make edits to the code, or even cut-and-paste the entire block to a different section of the file, and you’ll see the codemark move along with your changes.
Create a codemark
To create a codemark, select a block of code in your editor and then click one of the icons that appears in the CodeStream pane next to your selection.
If you're using a JetBrains IDE, such as IntelliJ, you can also create a codemark via the + button that appears in the editor's gutter when you select a block of code. When you're viewing a diff, for either a feedback request or a pull request, the button will also appear when you hover over the gutter to make it easy to comment on a single line.
Even when the CodeStream pane is closed or not in view, you can create a codemark via the CodeStream options in either the lightbulb or context menus.
You can also look for the + menu at the top of the CodeStream pane.
Need to reach teammates that don’t spend a lot of time in the IDE? Or maybe some teammates that aren’t yet on CodeStream? You can optionally share a codemark out to Slack or Microsoft Teams. The Slack integration even allows your teammates to reply directly from Slack.
Comment codemarks
Comment codemarks are the all-purpose codemark for linking any type of discussion to a block of code. Ask a question. Make a suggestion. Document some code. Make note of key sections of the codebase. The possibilities are endless.
Issue codemarks
When something needs to get done there’s a better chance of it happening if it’s captured as an issue with someone’s name attached. Assign issues as a way of reporting bugs or manage your tech debt by capturing items as tracked issues instead of inline FIXMEs.
If your team uses Asana, Azure DevOps, Bitbucket (cloud), Clubhouse, GitHub (cloud or Enterprise), GitLab (cloud or Self-Managed), Jira (cloud or Server), Linear, Trello, or YouTrack (cloud) for tracking issues, you can create an issue on one of those services directly from CodeStream. Select the service you use from the dropdown at the top of the codemark form.
After going through the authentication process with the selected service, you can select a destination for your issue. For example, with Jira you'll be able to select the appropriate issue type and project.
Once the issue has been created on CodeStream, it includes a link to the issue that was created on the external service. In the example, you'll see the URL for the issue on Jira.
The issue on Jira includes a link to open the relevant code in your IDE.
Bring the right people into the discussion
When you create a codemark, CodeStream automatically mentions the people that most recently touched the code you're commenting on. They may be the best people to answer your question, but you can, of course, remove those mentions and manually mention someone else if appropriate.
It may be the case that the people that have touched the code aren't yet on CodeStream, in which case CodeStream will provide checkboxes to have them notified via email. They can reply to the email to have their comment posted to CodeStream and, of course, they can install CodeStream to participate from their IDE.
Work with different versions of the code
Maybe you’re on a feature branch, have local changes, or simply haven’t pulled in a while. There are countless reasons why the code you’re looking at might be different than what a teammate is looking at. As a result, there will be plenty of times when the code referenced in a codemark doesn’t match what you have locally.
CodeStream recognizes these situations and includes the original version of the code block (such as, at the time the codemark was created), the current version, and a diff.
Keep in mind that with CodeStream you can discuss any line of code, in any source file, at any time, even if it’s code that you just typed into your editor and haven’t yet committed. CodeStream empowers you to discuss code at the very earliest stages of the development process.
Resolve codemarks
Although not required, both comment and issue codemarks can be resolved. The codemarks section of the CodeStream pane breaks out codemarks into open, resolved and archived sections. Green, purple, and gray icons are used to represent those different states. If you see a lot of open/green codemarks in the CodeStream pane, that means that your teammates are being blocked by discussions and issues that haven't been resolved.
You can add a comment at the same time you resolve the codemark and you can also archive the codemark at the same time.
Advanced features
Advanced features include multiple range codemarks, file attachments, tags, and related codemarks.
Multiple ranges
Many discussions about code involve more than just one block of code and concepts are often best presented when you can refer to multiple code locations at once. Here are a few examples of multi-range codemark at work:
- A change to a function is being contemplated that will impact its name. Each instance of the function call can now be referenced in one discussion.
- A React component and its CSS styling aren’t interacting well and you want to ask the team for input. You might select the div and the CSS rules you think should apply, so your teammates know exactly what you’re talking about.
- Clients which make API calls to the server might get an unexpected result. Select the code where you’re making the API call, and the handler in the API server, to connect the two actions together.
To create a multi-range codemark, click + Add Code Block.
Then select another block of code from the same file, a different file, or even a different repo.
You can intersperse the difference code blocks in your post by referring to each one as
[#N] (or click the pin icon from one of the code blocks to insert the markdown for you), as in the following example.
Here's how that example is rendered.
Once you've created the codemark, you can jump between the different locations by clicking the jump icon at the bottom right of each code block.
When you edit a codemark, you can add and remove code blocks and you can change the location of any of the code blocks by clicking the dashed square icon.
File attachments
Enrich your discussions about code by attaching files directly to code blocks. Think about how much more compelling your comments and documentation become when you attach:
- A spec to guide the development of a new feature
- A log file to help debug an issue in the code
- A mockup to help clarify some UI work
- A screenshot to highlight a problem
When creating a code comment or issue, you can attach a file by dragging-and-dropping onto the description field, pasting from your clipboard, or by clicking the paperclip icon.
Images can even be displayed inline using markdown. Click the pin icon to the right of the attachment and CodeStream will insert the markdown for you.
Now your teammate knows exactly what you’re looking for.
You can click on files in the attachments section to either download it or open it in the appropriate application.
Add tags
Look for the tag icon inside the codemark compose box to either select a tag or create a tag using any combination of color and text label.
Tags are a great way to broadly organize and group your organization's codemarks and the possibilities here are endless.
You can also filter by tag on the Filter & Search page.
Related codemarks
Click the CodeStream icon in the codemark compose form to select other related codemarks to attach them to the current codemark. This establishes a connection between different parts of a codebase. For example, when a change to one part of the codebase would require a change to another part, identify the dependency by creating two related codemarks.
Once you’ve added the related codemarks they’ll be displayed in a related section and you can click on any one to jump to that codemark and the corresponding section of the code.
Manage codemarks
Click the ellipses menu for any codemark and you'll see options to manage the codemark.
- Share: In addition to sharing to Slack or Teams at the time you create a codemark, you can also share it anytime later.
- Follow/Unfollow: Follow a codemark to be notified when its updated. Unfollow to stop receiving notifications.
- Copy link: Get a permalink for the codemark to share it anywhere.
- Archive: If there’s a codemark that you don’t think is important enough to be on permanent display in a given file, but you don’t want to completely delete it, you can archive it instead. Settings in the codemarks section allow you to easily see all archived codemarks.
- Edit: Only the codemark's author can edit it.
- Delete: Only the codemark's author can delete it, but we encourage you to archive instead of deleting unless you're positive the codemark won't have any future value.
- Inject as Inline Comment: If you'd like a specific codemark to become part of the repo use this option to have it added as an inline comment. You can select the appropriate format, and then indicate if you want to include timestamps, replies, or to have the comment wrapped at 80 characters. You can also elect to have the codemark archived once it's been added as an inline comment.
- Reposition codemark: In most cases, a codemark will automatically remain linked to the block of code it refers to as the file changes over time. For example, if you cut the block of code and paste it at a different location in the file, the codemark will move right along with it. There are some scenarios, however, that CodeStream isn't able to handle automatically. For example, if you pasted the block of code into a different file. In these cases, the Reposition codemark allows you to select the new location of the block of code so that the codemark is displayed. | https://docs.newrelic.com/docs/codestream/how-use-codestream/discuss-code/ | 2021-11-27T03:41:16 | CC-MAIN-2021-49 | 1637964358078.2 | [array(['/db0d26bd4904db971c0013448d774b96/DiscussCode1-VSC.gif',
'New Codemark'], dtype=object)
array(['/d9d0742326141a84c4cce3b108bd8aab/Compose-JB.gif',
'New Codemark in JetBrains'], dtype=object) ] | docs.newrelic.com |
Display security trace results
Contributors
You can display the security trace results generated for file operations that match security trace filters. You can use the results to validate your file access security configuration or to troubleshoot SMB and NFS file access issues.
An enabled security trace filter must exist and operations must have been performed from an SMB or NFS client that matches the security trace filter to generate security trace results.
You can display a summary of all security trace results, or you can customize what information is displayed in the output by specifying optional parameters. This can be helpful when the security trace results contain a large number of records.
If you do not specify any of the optional parameters, the following is displayed:
storage virtual machine (SVM) name
Node name
Security trace index number
Security style
Path
Reason
User name
The user name displayed depends on how the trace filter is configured:
You can customize the output by using optional parameters. Some of the optional parameters that you can use to narrow the results returned in the command output include the following:
See the man page for information about other optional parameters that you can use with the command.
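For example, to narrow the output to a single SVM and node you might run something like the following (a sketch; the parameter names follow standard ONTAP show-command conventions, so confirm them in the man page):
vserver security trace trace-result show -vserver vs1 -node node1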
Display security trace filter results by using the vserver security trace trace-result show command.
vserver security trace trace-result show -user-name domain\user
Vserver: vs1

Node     Index   Filter Details        Reason
-------- ------- --------------------- -----------------------------
node1    3       User:domain\user      Access denied by explicit ACE
                 Security Style:mixed
                 Path:/dir1/dir2/
node1    5       User:domain\user      Access denied by explicit ACE
                 Security Style:unix
                 Path:/dir1/
The Workload Visibility screen lets developers view the details and status of their Kubernetes Workloads to understand their structure, and debug issues.
Developers must perform the following actions to see their Workloads on the dashboard:
Define a Backstage Component with a
backstage.io/kubernetes-label-selector annotation. See Components in the Catalog Operations documentation.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: petclinic
  description: Spring PetClinic
  annotations:
    'backstage.io/kubernetes-label-selector': 'app.kubernetes.io/part-of=petclinic-server'
spec:
  type: service
  lifecycle: demo
  owner: default-team
  system:
Commit and push the Component definition to a Git repository that is registered as a Catalog Location. See Adding Catalog Entities in the Catalog Operations documentation.
Create a Kubernetes Workload with a label matching the Component's selector, in a cluster available to the Tanzu Application Platform GUI. A Workload is one of the following:
v1/Service
apps/v1/Deployment
serving.knative.dev/v1/Service
For example:
$ cat <<EOF | kubectl apply -f -
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: petclinic
  namespace: default
  labels:
    'app.kubernetes.io/part-of': petclinic-server
spec:
  template:
    metadata:
      labels:
        'app.kubernetes.io/part-of': petclinic-server
    spec:
      containers:
        - image: springcommunity/spring-framework-petclinic
EOF
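To confirm that the Workload carries the label the Component selector expects, a quick optional check from the command line (this sketch assumes kubectl points at the same cluster and namespace as above):
$ kubectl get services.serving.knative.dev -l app.kubernetes.io/part-of=petclinic-server -n default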
You can view the list of running Workloads and details about their status, type, namespace, cluster, and public URL if applicable for the Workload type.
To view the list of your running Workloads:
To view the Knative services details of your Workloads, select the Workload with 'Knative Service' type. In this page, additional information is available for Knative workloads including status, an ownership hierarchy, incoming routes, revisions, and pod details. | https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/0.3/tap-0-3/GUID-tap-gui-plugins-workload-visibility.html | 2021-11-27T03:42:23 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.vmware.com |
On the installer system navigate to the VMware Telco Cloud Operations Deploy Tool folder. Run the uninstall command to power off and delete the deployed virtual machines. The command provides options to uninstall either the complete footprint (all VMs) or one (or more) VMs specificed by name.
Prerequisites
The uninstall command uses the deploy.settings file to access the vCenter details to delete the deployed VMs. Verify the deploy.settings file has the SAME parameter values that were used when the VMs were deployed. The names of the VMs to be deleted can be provided on the command line as shown below. Alternatively, you can specify that ALL the deployed VMs be deleted, in which case the names are read from the deployed.vms file. If the deployed.vms file is not available, then the deploy.settings file is checked for the VM names.
Before uninstallation, the command checks that each VM exists, and only then powers off the VM and deletes it from the vCenter inventory. Progress is displayed on the terminal. If a VM cannot be deleted, the command will alert the user.
- To get help on the command:
$ ./uninstall option
where
option = -h OR --help
- To uninstall all the VMs, enter the following command. The names of the VMs to be deleted are obtained from the deploy.settings file.
$ ./uninstall option
where
option = -a OR --all
- To uninstall one or more VMs:
$ ./uninstall vm1_name vm2_name ....
- Force option — To avoid the confirmation prompts before deletion of VMs, use the force option -f or --force. This flag can appear either before or after the other arguments, as shown in the following examples:
$ ./uninstall -f -a
$ ./uninstall --all --force
$ ./uninstall --force vm1 vm2
$ ./uninstall vm1 vm2 vm3 -f
and so forth.
Known Issues:
Redeploying the VMware Telco Cloud Operations cluster with the same static IP addresses
It may be necessary to redeploy the entire cluster while testing or learning about its capabilities. If you use the same static IP addresses when redeploying, you should run the $ ssh-keygen -R control_plane_node_ip_address command to remove the previous control plane node key from your known_hosts file and then begin the deployment procedure. Otherwise, you will see a security warning about a possible man-in-the-middle attack because you now have a new key from the control plane node for the same IP address. | https://docs.vmware.com/en/VMware-Telco-Cloud-Operations/1.3.0/deployment-guide-130/GUID-4CF601E6-3C30-4920-A50D-A32F296BC7B6.html | 2021-11-27T03:25:35 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.vmware.com |
send-backup-paths
omp send-backup-paths—Have OMP advertise backup routes to vEdge routers (on vSmart controllers only). By default, OMP advertises only the best route or routes. If you configure the send-backup-paths command, OMP also advertises the first non-best route in addition to the best route or routes.
vManage Feature Template
For vSmart controllers only:
Configuration ► Templates ► OMP
Options
None
Example
Configure a vSmart control to advertise the first non-best route to vEdge routers, in addition to the best route or routes:
vSmart# config Entering configuration mode terminal vSmart(config)# omp send-backup-paths vSmart(config-omp)# commit and-quit Commit complete. vSmart#
Operational Commands
show omp peers
show omp routes
show omp services
show omp summary
show omp tlocs
Release Information
Command introduced in Viptela Software Release 15.2.
Additional Information
See the Configuring Unicast Overlay Routing article for your software release.
send-path-limit | https://sdwan-docs.cisco.com/Product_Documentation/Command_Reference/Command_Reference/Configuration_Commands/send-backup-paths | 2021-11-27T03:01:05 | CC-MAIN-2021-49 | 1637964358078.2 | [] | sdwan-docs.cisco.com |
- Create a VPN Interface Bridge feature template to enable integrated routing and bridging (IRB).. | https://sdwan-docs.cisco.com/Product_Documentation/vManage_Help/Release_18.2/Configuration/Templates/Bridge | 2021-11-27T02:57:15 | CC-MAIN-2021-49 | 1637964358078.2 | [] | sdwan-docs.cisco.com |
application environment for python¶
welcome to the ae namespace documentation.
the portions (modules and sub-packages) of this freely extendable namespace (PEP 420) provide helper functions and classes for your python application, resulting in less code for you to write and maintain.
- namespace portions guidelines:
pure python, fully typed (PEP 526) and documented
100 % test coverage
multi thread save
highly configurable logging
- core helpers:
data processing and validation
file handling
i18n (localization)
configuration settings
console
logging
database access
networking
- GUI helpers:
context help
app tours
user preferences (font size, color, theming)
QR code
sideloading
add new namespace portion¶
follow the steps underneath to add a new portion to the ae namespace. steps 7 to 9 can be automated with the script register_portion.py, which can also be used or adapted to migrate existing packages from other projects into this namespace:
choose a not-existing/unique name for the new portion (referred as <portion-name> in the next steps).
create a new project folder ae_<portion-name> in the source code directory of your local machine.
in the new project folder create/prepare a local virtual environment with pyenv.
create new test module in the tests folder within the new code project folder and then code your unit tests.
create the ae namespace sub-folder within the new code project folder.
within the ae folder create new module or sub-package, implement the new portion and set the __version__ variable to ‘0.0.1’.
copy all files from the folder portions_common_root of this package into the new code project folder.
init a new git repository on your local machine and add all files.
create new project/repository with the name ae_<portion-name> (in the GitLab group ae-group).
hack/test/fix/commit until all tests succeed.
complete type annotations and the docstrings to properly document the new portion.
finally push the new namespace portion repository onto the GitLab project ae-group/ae_<portion-name> created in step 9.
Note
if you want to put the portion separately into your own repository (see step 11 above) then you also have to add two protected vars PYPI_USERNAME and PYPI_PASSWORD (mark also as masked) and provide the user name and password of your PYPI account (on Gitlab.com at Settings/CI_CD/Variables).
Hint
with the push to GitLab (done in step 12 above) the CI jobs will automatically run all tests and then publish the new portion onto PYPI.
finally, request registration of the new portion from one of the ae namespace maintainers - this way the new portion will be included in the ae namespace package documentation on ReadTheDocs.
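putting steps 2 to 7 together, the resulting project layout looks roughly like this (only a sketch - file names are placeholders):
ae_<portion-name>/
    ae/
        <portion_name>.py          # portion module, with __version__ = '0.0.1'
    tests/
        test_<portion_name>.py     # unit tests aiming at 100 % coverage
    ...                            # common files copied from portions_common_root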
change common files of all portions¶
any change on one of the common files (situated in the portions_common_root folder of this repository) has to be done here.
use the script portions_common_deploy.py to distribute any changes to all the portion repositories.
Note
all changes to the common files of this root package have first to be pushed to the public git repository before they can be distributed to the namespace portion packages.
registered namespace package portions¶
the following list contains all registered portions of the ae namespace. most portions are single module files, some of them are sub-packages.
Hint
portions with no dependencies are at the beginning of the following list. portions that depend on other portions of the ae namespace are listed closer to the end.
We have just rolled out a new feature called “Interactive requests”.
This feature will help teachers and schools connect with parents through the phone app “School Co”.
This feature will allow school teachers to create instant questions or interactions that they want the parents to respond to directly.
Interactive requests will allow teachers and administrators at schools to create a question with multiple choices, and those questions will be targeted to a specific classroom or a specific set of children or students. The parents of those students will receive those questions on their phones, see the choices, and be able to make their choices by a specific deadline.
Once parents have made their choices, those choices will immediately show up for the teachers so they can implement whatever the parents have chosen for their children.
How to create a request?
The way the feature works is as follows:
- First go to the “School Co” app
- Click on Interactive requests from the side menu.
- Click Add New Request at the bottom
- Choose which location this will be targeted to then choose the room
- Then choose the list of students the questions will be sent to (or their parents in the case of young children)
- Select the deadline date and time by which the parents need to respond by
- Enter the questions
- Add the list of choices then hit submit
This is a feature that we think will allow for greater communication between schools and parents.
ODL Parent Developer Guide¶
Parent POMs¶
Overview¶
The ODL Parent component for OpenDaylight provides a number of Maven parent POMs which allow Maven projects to be easily integrated in the OpenDaylight ecosystem. Technically, the aim of projects in OpenDaylight is to produce Karaf features, and these parent projects provide common support for the different types of projects involved.
These parent projects are:
odlparent-lite — the basic parent POM for Maven modules which don't produce artifacts (e.g. aggregator POMs)
odlparent — the common parent POM for Maven modules containing Java code
bundle-parent — the parent POM for Maven modules producing OSGi bundles
The following parent projects are deprecated, but still used in Carbon:
feature-parent — the parent POM for Maven modules producing Karaf 3 feature repositories
karaf-parent — the parent POM for Maven modules producing Karaf 3 distributions
The following parent projects are new in Carbon, for Karaf 4 support (which won’t be complete until Nitrogen):
single-feature-parent — the parent POM for Maven modules producing a single Karaf 4 feature
feature-repo-parent — the parent POM for Maven modules producing Karaf 4 feature repositories
karaf4-parent — the parent POM for Maven modules producing Karaf 4 distributions
odlparent-lite¶
This is the base parent for all OpenDaylight Maven projects and modules. It provides the following, notably to allow publishing artifacts to Maven Central:
license information;
organization information;
issue management information (a link to our Bugzilla);
continuous integration information (a link to our Jenkins setup);
default Maven plugins (maven-clean-plugin, maven-deploy-plugin, maven-install-plugin, maven-javadoc-plugin with HelpMojo support, maven-project-info-reports-plugin, maven-site-plugin with Asciidoc support, jdepend-maven-plugin);
distribution management information.
It also defines two profiles which help during development:
q (-Pq), the quick profile, which disables tests, code coverage, Javadoc generation, code analysis, etc. — anything which is not necessary to build the bundles and features (see this blog post for details);
addInstallRepositoryPath (-DaddInstallRepositoryPath=…/karaf/system), which can be used to drop a bundle in the appropriate Karaf location, to enable hot-reloading of bundles during development (see this blog post for details).
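For example, a typical local development loop might combine both options (the Karaf path below is a placeholder):
mvn clean install -Pq
mvn clean install -Pq -DaddInstallRepositoryPath=/path/to/karaf/system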
For modules which don’t produce any useful artifacts (e.g. aggregator POMs), you should add the following to avoid processing artifacts:
<build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-deploy-plugin</artifactId> <configuration> <skip>true</skip> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-install-plugin</artifactId> <configuration> <skip>true</skip> </configuration> </plugin> </plugins> </build>
odlparent¶
This inherits from
odlparent-lite and mainly provides dependency and
plugin management for OpenDaylight projects.
If you use any of the following libraries, you should rely on
odlparent to provide the appropriate versions:
Akka (and Scala)
Apache Commons:
commons-codec
commons-fileupload
commons-io
commons-lang
commons-lang3
commons-net
Apache Shiro
Guava
JAX-RS with Jersey
JSON processing:
GSON
Jackson
Logging:
Logback
SLF4J
Netty
OSGi:
Apache Felix
core OSGi dependencies (core, compendium…)
Testing:
Hamcrest
JSON assert
JUnit
Mockito
Pax Exam
PowerMock
XML/XSL:
Xerces
XML APIs
Note
This list is not exhaustive. It is also not cast in stone; if you would like to add a new dependency (or migrate a dependency), please contact the mailing list.
odlparent also enforces some Checkstyle verification rules. In
particular, it enforces the common license header used in all
OpenDaylight code:
/* * Copyright © ${year} ${holder} and others. All rights reserved. * * This program and the accompanying materials are made available under the * terms of the Eclipse Public License v1.0 which accompanies this distribution, * and is available at */
where “
${year}” is initially the first year of publication, then
(after a year has passed) the first and latest years of publication,
separated by commas (e.g. “2014, 2016”), and “
${holder}” is
the initial copyright holder (typically, the first author’s employer).
If you need to disable this license check, e.g. for files imported
under another license (EPL-compatible of course), you can override the
maven-checkstyle-plugin configuration.
features-test does this
for its
CustomBundleUrlStreamHandlerFactory class, which is
ASL-licensed:
<plugin> <artifactId>maven-checkstyle-plugin</artifactId> <executions> <execution> <id>check-license</id> <goals> <goal>check</goal> </goals> <phase>process-sources</phase> <configuration> <configLocation>check-license.xml</configLocation> <headerLocation>EPL-LICENSE.regexp.txt</headerLocation> <includeResources>false</includeResources> <includeTestResources>false</includeTestResources> <sourceDirectory>${project.build.sourceDirectory}</sourceDirectory> <excludes> <!-- Skip Apache Licensed files --> org/opendaylight/odlparent/featuretest/CustomBundleUrlStreamHandlerFactory.java </excludes> <failsOnError>false</failsOnError> <consoleOutput>true</consoleOutput> </configuration> </execution> </executions> </plugin>
bundle-parent¶
This inherits from
odlparent and enables functionality useful for
OSGi bundles:
maven-javadoc-plugin is activated, to build the Javadoc JAR;
maven-source-plugin is activated, to build the source JAR;
maven-bundle-plugin is activated (including extensions), to build OSGi bundles (using the "bundle" packaging).
In addition to this, JUnit is included as a default dependency in “test” scope.
features-parent¶
This inherits from
odlparent and enables functionality useful for
Karaf features:
karaf-maven-plugin is activated, to build Karaf features — but for OpenDaylight, projects need to use "jar" packaging (not "feature" or "kar");
features.xml files are processed from templates stored in src/main/features/features.xml;
Karaf features are tested after build to ensure they can be activated in a Karaf container.
The features.xml processing allows versions to be omitted from certain feature dependencies, and replaced with "{{VERSION}}". For example:
<features name="odl-mdsal-${project.version}" xmlns="" xmlns: <repository>mvn:org.opendaylight.odlparent/features-odlparent/{{VERSION}}/xml/features</repository> [...] <feature name='odl-mdsal-broker-local' version='${project.version}' description="OpenDaylight :: MDSAL :: Broker"> <feature version='${yangtools.version}'>odl-yangtools-common</feature> <feature version='${mdsal.version}'>odl-mdsal-binding-dom-adapter</feature> <feature version='${mdsal.model.version}'>odl-mdsal-models</feature> <feature version='${project.version}'>odl-mdsal-common</feature> <feature version='${config.version}'>odl-config-startup</feature> <feature version='${config.version}'>odl-config-netty</feature> <feature version='[3.3.0,4.0.0)'>odl-lmax</feature> [...] <bundle>mvn:org.opendaylight.controller/sal-dom-broker-config/{{VERSION}}</bundle> <bundle start-mvn:org.opendaylight.controller/blueprint/{{VERSION}}</bundle> <configfile finalname="${config.configfile.directory}/${config.mdsal.configfile}">mvn:org.opendaylight.controller/md-sal-config/{{VERSION}}/xml/config</configfile> </feature>
As illustrated, versions can be omitted in this way for repository dependencies, bundle dependencies and configuration files. They must be specified traditionally (either hard-coded, or using Maven properties) for feature dependencies.
karaf-parent¶
This allows building a Karaf 3 distribution, typically for local testing
purposes. Any runtime-scoped feature dependencies will be included in the
distribution, and the
karaf.localFeature property can be used to
specify the boot feature (in addition to
standard).
single-feature-parent¶
This inherits from
odlparent and enables functionality useful for
Karaf 4 features:
karaf-maven-plugin is activated, to build Karaf features, typically with "feature" packaging ("kar" is also supported);
feature.xml files are generated based on the compile-scope dependencies defined in the POM, optionally initialized from a stub in src/main/feature/feature.xml.
Karaf features are tested after build to ensure they can be activated in a Karaf container.
The feature.xml processing adds transitive dependencies by default, which
allows features to be defined using only the most significant dependencies
(those that define the feature); other requirements are determined
automatically as long as they exist as Maven dependencies.
configfiles need to be defined both as Maven dependencies (with the
appropriate type and classifier) and as
<configfile> elements in the
feature.xml stub.
Other features which a feature depends on need to be defined as Maven dependencies with type “xml” and classifier “features” (note the plural here).
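For example, such a dependency might look like this (only a sketch; the group and artifact IDs are placeholders):
<dependency>
    <groupId>org.example</groupId>
    <artifactId>some-other-feature</artifactId>
    <classifier>features</classifier>
    <type>xml</type>
</dependency>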
feature-repo-parent¶
This inherits from
odlparent and enables functionality useful for
Karaf 4 feature repositories. It follows the same principles as
single-feature-parent, but is designed specifically for repositories
and should be used only for this type of artifacts.
It builds a feature repository referencing all the (feature) dependencies listed in the POM.
Features (for Karaf 3)¶
The ODL Parent component for OpenDaylight provides a number of Karaf 3 features which can be used by other Karaf 3 features to use certain third-party upstream dependencies.
These features are:
Akka features (in the features-akka repository):
odl-akka-all — all Akka bundles;
odl-akka-scala-2.11 — Scala runtime for OpenDaylight;
odl-akka-system-2.4 — Akka actor framework bundles;
odl-akka-clustering-2.4 — Akka clustering bundles and dependencies;
odl-akka-leveldb-0.7 — LevelDB;
odl-akka-persistence-2.4 — Akka persistence;
general third-party features (in the features-odlparent repository):
odl-netty-4 — all Netty bundles;
odl-guava-18 — Guava 18;
odl-guava-21 — Guava 21 (not intended for use in Carbon);
odl-lmax-3 — LMAX Disruptor;
odl-triemap-0.2 — Concurrent Hash-Trie Map.
To use these, you need to declare a dependency on the appropriate
repository in your
features.xml file:
<repository>mvn:org.opendaylight.odlparent/features-odlparent/{{VERSION}}/xml/features</repository>
and then include the feature, e.g.:
<feature name='odl-mdsal-broker-local' version='${project.version}' description="OpenDaylight :: MDSAL :: Broker"> [...] <feature version='[3.3.0,4.0.0)'>odl-lmax</feature> [...] </feature>
You also need to depend on the features repository in your POM:
<dependency>
    <groupId>org.opendaylight.odlparent</groupId>
    <artifactId>features-odlparent</artifactId>
    <classifier>features</classifier>
    <type>xml</type>
</dependency>
For the time being you also need to depend separately on the individual JARs as compile-time dependencies to build your dependent code; the relevant dependencies are managed in odlparent's dependency management.
odl-netty: [4.0.37,4.1.0) or [4.0.37,5.0.0);
odl-guava: [18,19) (if your code is ready for it, [19,20) is also available, but the current default version of Guava in OpenDaylight is 18);
odl-lmax: [3.3.4,4.0.0)
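In practice that means declaring the JAR without a version and letting odlparent's dependency management supply it, for example (a sketch):
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
</dependency>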
Features (for Karaf 4)¶
There are equivalent features to all the Karaf 3 features, for Karaf 4.
The repositories use "features4" instead of "features", and the features use odl4 instead of odl.
The following new features are specific to Karaf 4:
Karaf wrapper features (also in the features4-odlparent repository) — these can be used to pull in a Karaf feature using a Maven dependency in a POM:
odl-karaf-feat-feature — the Karaf feature feature;
odl-karaf-feat-jdbc — the Karaf jdbc feature;
odl-karaf-feat-jetty — the Karaf jetty feature;
odl-karaf-feat-war — the Karaf war feature.
To use these, all you need to do now is add the appropriate dependency in your feature POM; for example:
<dependency>
    <groupId>org.opendaylight.odlparent</groupId>
    <artifactId>odl4-guava-18</artifactId>
    <classifier>features</classifier>
    <type>xml</type>
</dependency>
We no longer use version ranges; the feature dependencies all use the odlparent version (but you should rely on the artifacts POM).
.../data/salience/sentiment
The
salience/sentiment directory contains data files that are relevant to the sentiment analysis performed by Salience Engine. Click on the name of a file for more detailed information below:
context.dat
This datafile contains a list of words with information about whether they are "factual" words or emotive words. This helps the contextual model determine if sentiment was intended or not.
contextual.bin
This binary model has been added to support the Use Polarity Model option. When this option is enabled, phrases that contain sentiment-bearing terms but are used in a non-sentiment manner are not included in sentiment analysis.
For example, the phrase "Good morning to everyone." contains positive sentiment words (good), but does not actually convey explicitly positive sentiment, it's just a phrase.
This binary file cannot be modified by users.
default.bin: Customizing model-based sentiment
Model-based sentiment analysis was introduced in Salience 4. The default data directory ships with a basic sentiment model,
default.bin, trained on generic business content. Salience provides a command line tool called the SentimentModelBuilder that users can use to create sentiment models using their own content.
general.hsd: Customizing phrase-based sentiment
The most common method of adjusting sentiment analysis within Salience Engine is through HSD files. An HSD file provides a listing of sentiment-bearing phrases and human-judged sentiment weights for these phrases. Phrase-based sentiment analysis can also be adjusted through negators and intensifiers.
The default data directory provides an HSD file developed by Lexalytics for use with general content. Customization of phrase-based sentiment analysis begins with this HSD file. It can be copied into an equivalent location in a
user directory and used as the base for a custom HSD.
Users that are looking to customize sentiment analysis should review sentiment output, particularly the phrases that are contributing to sentiment results. Phrases can be added or edited in the HSD file with appropriate sentiment weights.
Phrases that should be explicitly excluded from sentiment analysis can be indicated with a tilde (~). This is different from applying a sentiment weight of zero, which still considers the phrase in sentiment calculations, but the zero weight has a neutralizing effect. This also differs from the use of the tilde (~) operator in other data files such as pattern files, where the operator is used to control case-sensitivity.
jerk chicken~
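Putting these pieces together, a small custom HSD might contain entries like the following (tab-delimited; the phrases and weights are purely illustrative):
flimsy<tab>-0.6
works like a charm<tab>0.8
good morning~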
Starting in Salience 6.1, queries can be used in any line of the HSD. If a line has a query operator (such as AND, OR, and *), the line will be treated as a query. Any text in the document matching a non-negated portion of the query will be assigned the provided sentiment value. For example:
tinny WITH sound -0.7 would penalize documents containing tinny in the same sentence as sound.
best* 0.5 would match anything starting with best (best, bested, bestest)
IMPORTANT
-Multiple HSD files can be placed within
user directories for customizing sentiment
-See Sentiment Options for API methods for setting the HSD file(s) to use
-The tilde (~) operator does not control case-sensitivity in HSD files (which are case-insensitive), it indicates that the specified phrase should not be considered (blacklisted) with respect to sentiment calculations
general.lsf
This binary file is the underlying database used for phrase-based sentiment analysis. Phrases that exist within the LSF are overridden by any that also appear in HSD files that are added through API calls. This file cannot be modified by users.
hsd.ptn
This file provides the part-of-speech patterns that indicate a sentiment-bearing phrase. Modifications to this file are not recommended because of the effect on sentiment analysis quality and performance. Please contact Lexalytics support if you have questions about adjusting the default sentiment phrase patterns.
intensifiers.dat
This file provides a tab-delimited list of intensifying phrases with accompanying multipliers. When an intensifier occurs before a sentiment-bearing phrase, the multiplier is applied to the sentiment weight of the sentiment-bearing phrase.
For example, assume the following HSD entry:
good<tab>0.4
And the following entry in intensifiers.dat:
very<tab>1.5
An occurrence of the phrase "very good" would contribute a sentiment weight of 0.6 to document-level (or entity, theme, or topic sentiment where applicable) sentiment.
Modifications or extensions to the list of intensifiers should be made in a
user directory (eg.
[user directory]/salience/sentiment/intensifiers.dat).
Object Info Node¶.
Outputs¶
- Location
Location of the object in world space.
- Color
Object color, same as Color in the.
- Object Index
Object pass index, same as Pass Index in the.
- Material Index
Material pass index, same as Pass Index in the.
- Random
Random number unique to a single object instance.
Note
Note that this node only works for material shading nodes; it does nothing for light and world shading nodes. | https://docs.blender.org/manual/en/2.93/render/shader_nodes/input/object_info.html | 2021-11-27T02:51:18 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.blender.org |
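For scripted setups, the node can also be created through the Python API; the following is only a sketch and assumes a node-based material named "Material" with the default Principled BSDF node:
import bpy

mat = bpy.data.materials["Material"]        # assumed material name
nodes = mat.node_tree.nodes
links = mat.node_tree.links

info = nodes.new("ShaderNodeObjectInfo")    # adds the Object Info node
bsdf = nodes["Principled BSDF"]             # default Principled BSDF node

# vary a shader input per object instance via the Random output
links.new(info.outputs["Random"], bsdf.inputs["Roughness"])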
Date: 16 Oct 2020 13:53:02 -0400 From: "John Levine" <[email protected]> To: [email protected] Cc: [email protected] Subject: Re: sh scripting question Message-ID: <[email protected]> In-Reply-To: <CAHzLAVGS_8kXtWoP3TfpTtz-zP57rpYEKrx3L089=eRRszsY-Q@mail.gmail.com>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
In article <CAHzLAVGS_8kXtWoP3TfpTtz-zP57rpYEKrx3L089=eRRszsY-Q@mail.gmail.com> you write: >On Thu, Oct 15, 2020 at 8:40 PM Robert Huff <[email protected]> wrote: >> I have a file ("files.list") with a list of filenames, similar to >> >> /path A/path B/FreeBSD is great.txt >> >> (note the embedded spaces) This is the sensible answer. Messing with IFS is very fragile. >$ while read line; do >> echo $line >> done < f.in >/path A/path B/FreeBSD is great.txt >/path T/path Q/FreeBSD rocks.txt The "read" command can set more than one variable so while read a b c do ... done < files.list will set $a to the first word on each line, $b to the second, $c to the rest of the line, which looks like what you wanted to do here.
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=461039+0+/usr/local/www/mailindex/archive/2020/freebsd-questions/20201018.freebsd-questions | 2021-11-27T03:28:48 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.freebsd.org |
Fakes¶
What are fakes?¶
Fakes are JavaScript objects that emulate the interface of a given Solidity contract. You can use fakes to customize the behavior of any public method or variable that a smart contract exposes.
When should I use a fake?¶
Fakes are a powerful tool when you want to test how a smart contract will interact with other contracts. Instead of initializing a full-fledged smart contract to interact with, you can simply create a fake that can provide pre-programmed responses.
Fakes are especially useful when the contracts that you need to interact with are relatively complex.
For example, imagine that you’re testing a contract that needs to interact with another (very stateful) contract.
Without
smock, you’ll probably have to:
Deploy the contract you need to interact with.
Perform a series of transactions to get the contract into the relevant state.
Run the test.
Do this all over again for each test.
This is annoying, slow, and brittle.
You might have to update a bunch of tests if the behavior of the other contract ever changes.
Developers usually end up using tricks like state snapshots and complex test fixtures to get around this problem.
Instead, you can use
smock:
Create a
fake.
Make your
fakereturn the value you want it to return.
Run the test.
Using fakes¶
Initialization¶
Initialize with a contract factory¶
const myContractFactory = await hre.ethers.getContractFactory('MyContract'); const myFake = await smock.fake(myContractFactory);
Initialize with a contract instance¶
const myContractFactory = await hre.ethers.getContractFactory('MyContract'); const myContract = await myContractFactory.deploy(); const myFake = await smock.fake(myContract);
Take full advantage of typescript and typechain¶
const myFake = await smock.fake<MyContract>('MyContract');
Every fake comes with a
wallet property in order to make easy to sign transactions
myContract.connect(myFake.wallet).doSomething();
Making a function return¶
Returning a dynamic value¶
myFake.myFunction.returns(() => { if (Math.random() < 0.5) { return 0; } else { return 1; } });
Returning a value based on arguments¶
myFake.myFunction.whenCalledWith(123).returns(456); await myFake.myFunction(123); // returns 456
Returning a value with custom logic¶
myFake.getDynamicInput.returns(arg1 => arg1 * 10); await myFake.getDynamicInput(123); // returns 1230
Making a function revert¶
Reverting at a specific call count¶
myFake.myFunction.returns(1234); myFake.myFunction.revertsAtCall(1, 'Something went wrong'); await myFake.myFunction(); // returns 1234 await myFake.myFunction(); // reverts with 'Something went wrong' await myFake.myFunction(); // returns 1234
Resetting function behavior¶
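A minimal sketch (this assumes smock's standard reset() helper, which clears programmed behavior and call history):
myFake.myFunction.returns(42); await myFake.myFunction(); // returns 42 myFake.myFunction.reset(); await myFake.myFunction(); // back to the default behavior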
Asserting call count¶
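For example (assuming the sinon-chai style matchers that smock wires up):
expect(myFake.myFunction).to.have.been.calledOnce; expect(myFake.myFunction).to.have.callCount(3);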
Asserting call arguments¶
Called with specific arguments¶
expect(myFake.myFunction).to.have.been.calledWith(123, true, 'abcd');
Called with struct arguments¶
expect(myFake.myFunction).to.have.been.calledWith({ myData: [1, 2, 3, 4], myNestedStruct: { otherValue: 5678 } });
Called at a specific call index with arguments¶
expect(myFake.myFunction.atCall(2)).to.have.been.calledWith(1234, false);
Asserting call order¶
Called before other function¶
expect(myFake.myFunction).to.have.been.calledBefore(myFake.myOtherFunction);
Called after other function¶
expect(myFake.myFunction).to.have.been.calledAfter(myFake.myOtherFunction);
Called immediately before other function¶
expect(myFake.myFunction).to.have.been.calledImmediatelyBefore(myFake.myOtherFunction); | https://smock.readthedocs.io/en/latest/fakes.html | 2021-11-27T02:58:37 | CC-MAIN-2021-49 | 1637964358078.2 | [] | smock.readthedocs.io |
Kubernetes v1.21 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
The its adoption and innovation, run
kubectl get pods, and contribute in a thousand other vital ways. Read on to learn how you can get involved and become part of this amazing community. | https://v1-21.docs.kubernetes.io/community/ | 2021-11-27T02:11:42 | CC-MAIN-2021-49 | 1637964358078.2 | [] | v1-21.docs.kubernetes.io |
Release 0.1.7¶
Features¶
[Issue 33] Support reading and writing parquet format via pyspark job.
[Issue 57, Issue 58] Amorphic API: Support Aurora Views and Materialized Redshift view.
[Issue 59] Support reading and writing json format via pyspark job and reading using pythonshell job.
Enhancement¶
Changes¶
Bug Fix¶
[Issue 51] Dataset reload failed because the _SUCCESS file was written before the actual data. For now, the user can pass a reload_wait param in seconds, which will wait before triggering the reload.
[Issue 54] Amorphic API: Support new variables to dataset creation. | https://amorphicutils.readthedocs.io/en/v0.1.8/release-notes/release_0.1.7.html | 2021-11-27T02:16:25 | CC-MAIN-2021-49 | 1637964358078.2 | [] | amorphicutils.readthedocs.io |
Migrating to Amazon OpenSearch Service
Index snapshots are a popular way to migrate from a self-managed OpenSearch cluster to Amazon OpenSearch Service. Broadly, the process consists of the following steps:
Take a snapshot of the existing cluster, and upload the snapshot to an Amazon S3 bucket.
Create an OpenSearch Service domain.
Give OpenSearch Service permissions to access the bucket, and give your user account permissions to work with snapshots.
Restore the snapshot on the OpenSearch Service domain.
This walkthrough provides more detailed steps and alternate options, where applicable.
Take and upload the snapshot
Although you can use the repository-s3
opensearch.yml, restart each node, add your
Amazon credentials, and finally take the snapshot. The plugin is a great option for
ongoing use or for migrating larger clusters.
For smaller clusters, a one-time approach is to take a shared file system snapshot
To take a snapshot and upload it to Amazon S3
Add the
path.reposetting to
opensearch.ymlon all nodes, and then restart each node.
path.repo: ["
/my/shared/directory/snapshots"]
Register the snapshot repository:
PUT _snapshot/migration-repository
{
  "type": "fs",
  "settings": {
    "location": "/my/shared/directory/snapshots"
  }
}
Take the snapshot:
PUT _snapshot/migration-repository/migration-snapshot
{
  "indices": "migration-index1,migration-index2,other-indices-*",
  "include_global_state": false
}
Install the Amazon CLI, and run aws configure to add your credentials.
Navigate to the snapshot directory. Then run the following commands to create a new S3 bucket and upload the contents of the snapshot directory to that bucket:
aws s3 mb s3://bucket-name --region us-west-2
aws s3 sync . s3://bucket-name --sse AES256
Depending on the size of the snapshot and the speed of your internet connection, this operation can take a while.
Create a domain
Although the console is the easiest way to create a domain, in this case, you already have the terminal open and the Amazon CLI installed. Modify the following command to create a domain that fits your needs:
aws opensearch create-domain \
  --domain-name migration-domain \
  --engine-version OpenSearch_1.0 \
  --cluster-config InstanceType=c5.large.search,InstanceCount=2 \
  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=100 \
  --node-to-node-encryption-options Enabled=true \
  --encryption-at-rest-options Enabled=true \
  --domain-endpoint-options EnforceHTTPS=true,TLSSecurityPolicy=Policy-Min-TLS-1-2-2019-07 \
  --advanced-security-options Enabled=true,InternalUserDatabaseEnabled=true,MasterUserOptions='{MasterUserName=master-user,MasterUserPassword=master-user-password}' \
  --access-policies '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["*"]},"Action":["es:ESHttp*"],"Resource":"arn:aws:es:us-west-2:123456789012:domain/migration-domain/*"}]}' \
  --region us-west-2
As is, the command creates an internet-accessible domain with two data nodes, each with 100 GiB of storage. It also enables fine-grained access control with HTTP basic authentication and all encryption settings. Use the OpenSearch Service console if you need a more advanced security configuration, such as a VPC.
Before issuing the command, change the domain name, master user credentials, and account number. Specify the same Amazon Web Services Region that you used for the S3 bucket and an OpenSearch/Elasticsearch version that is compatible with your snapshot.
Snapshots are only forward-compatible, and only by one major version. For example, you can't restore a snapshot from a 2.x cluster on a 1.x cluster or a 6.x cluster, only a 2.x or 5.x cluster. Minor version matters, too. You can't restore a snapshot from a self-managed 5.3.3 cluster on a 5.3.2 OpenSearch Service domain. We recommend choosing the most recent version of OpenSearch or Elasticsearch that your snapshot supports. For a table of compatible versions, see Using a snapshot to migrate data.
Provide permissions to the S3 bucket
In the Amazon Identity and Access Management (IAM) console, create a role with the following permissions and trust relationship. When creating the role, choose S3
as the Amazon Service. Name the role
OpenSearchSnapshotRole so it's easy to find.
Permissions
{ "Version": "2012-10-17", "Statement": [{ "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::
bucket-name" ] }, { "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::
bucket-name/*" ] } ] }
Trust relationship
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Service": "es.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
Then give your personal IAM user or role—whatever you used to configure the
Amazon CLI earlier—permissions to assume
OpenSearchSnapshotRole. Create
the following policy and attach it to your identity:
Permissions
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": "iam:PassRole", "Resource": "arn:aws:iam::
123456789012:role/OpenSearchSnapshotRole" } ] }
Map the snapshot role in OpenSearch Dashboards (if using fine-grained access control)
If you enabled fine-grained access control, even
if you use HTTP basic authentication for all other purposes, you need to map the
manage_snapshots role to your IAM user or role so you can work
with snapshots.
To give your identity permissions to work with snapshots
Log in to Dashboards using the master user credentials you specified when you created the OpenSearch Service domain. You can find the Dashboards URL in the OpenSearch Service console. It takes the form of https://domain-endpoint/_dashboards/.
From the main menu choose Security, Roles, and select the manage_snapshots role.
Choose Mapped users, Manage mapping.
Add the domain ARN of your personal IAM user or role in the appropriate field. The ARN takes one of the following formats:
arn:aws:iam::
123456789123:user/
user-name
arn:aws:iam::
123456789123:role/
role-name
Select Map and confirm the user or role shows up under Mapped users.
Restore the snapshot
At this point, you have two ways to access your OpenSearch Service domain: HTTP basic authentication with your master user credentials or Amazon authentication using your IAM credentials. Because snapshots use Amazon S3, which has no concept of the master user, you must use your IAM credentials to register the snapshot repository with your OpenSearch Service domain.
Most programming languages have libraries to assist with signing requests, but the simpler approach is to use a tool like Postman.
To restore the snapshot
Regardless of how you choose to sign your requests, the first step is to register the repository:
PUT _snapshot/migration-repository
{
  "type": "s3",
  "settings": {
    "bucket": "bucket-name",
    "region": "us-west-2",
    "role_arn": "arn:aws:iam::123456789012:role/OpenSearchSnapshotRole"
  }
}
Then list the snapshots in the repository, and find the one you want to restore. At this point, you can continue using Postman or switch to a tool like curl.
Shorthand
GET _snapshot/migration-repository/_all
curl
curl -XGET -u 'master-user:master-user-password' https://domain-endpoint/_snapshot/migration-repository/_all
Restore the snapshot.
Shorthand
POST _snapshot/migration-repository/migration-snapshot/_restore
{
  "indices": "migration-index1,migration-index2,other-indices-*",
  "include_global_state": false
}
curl
curl -XPOST -u 'master-user:master-user-password' https://domain-endpoint/_snapshot/migration-repository/migration-snapshot/_restore \
  -H 'Content-Type: application/json' \
  -d '{"indices":"migration-index1,migration-index2,other-indices-*","include_global_state":false}'
Finally, verify that your indices restored as expected.
Shorthand
GET _cat/indices?v
curl
curl -XGET -u 'master-user:master-user-password' https://domain-endpoint/_cat/indices?v
At this point, the migration is complete. You might configure your clients to use the new OpenSearch Service endpoint, resize the domain to suit your workload, check the shard count for your indices, switch to an IAM master user, or start building visualizations in OpenSearch Dashboards. | https://docs.amazonaws.cn/en_us/opensearch-service/latest/developerguide/migration.html | 2021-11-27T02:29:53 | CC-MAIN-2021-49 | 1637964358078.2 | [array(['images/migration2.png', None], dtype=object)] | docs.amazonaws.cn |
Domain Name System (DNS) is a mechanism which allows searching for a domain name by its IP address and vice versa, as well as other information that resource records contain. For more information please refer to the article Domain resource record. A domain name is a set of symbols which is used to identify an IP address.
ISPmanager Lite can be used as the master DNS-server.
Perform the following steps to create a domain name:
- Navigate to Domains → Domain names → Create domain.
- When you are logged in as the administrator, you can enable the option Administrators are owners to hide this newly created domain name for users or select its Owner.
- Enter a Domain name.
- Enter the IP addresses that correspond to the domain name. They are specified in the A-records of the domain zone (see the example records after this list).
- Enter the Name servers that will manage the domain zone. They are specified in the NS-records of the domain zone.
- If you want a WWW-domain to be created for this domain, enter the Local IP addresses where the web servers accept requests to the domain's web pages and check the box Create WWW-domain. A WWW-domain allows users to open web pages in a browser. For more information please refer to the article How to create a WWW-domain.
- If you want to create mailboxes on your domain, check the box Create a mail domain.
- Click Ok.
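For illustration, after these steps the domain zone will typically contain A and NS records along these lines (the names and addresses are placeholders):
example.com.     A    203.0.113.10
example.com.     NS   ns1.example.com.
example.com.     NS   ns2.example.com.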
Note
ISPmanager checks that the domain name doesn't contain the restricted names listed in Domains → Reserved names. If a record is created for a domain name from that list, the domain name won't be created.
Note | https://docs.ispsystem.com/ispmanager6-lite/domain-names/create-a-domain-name | 2021-11-27T02:47:28 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.ispsystem.com |
Hi All,
Goal: Setup a cloud environment that allows cloud users to be able to log into the Windows Virtual Desktop
Context:
I have signed up for the 90 day trial Azure AD Premium P2 license which also supplies the Microsoft 365 E5 Developer (without Windows and Audio Conferencing).
Also using my admin account created within the trial tenant, I have signed up for the 12month of free services with USD200 credit.
I have configured the Azure AD DS (no errors when provisioned). Kept the default domain name. I have set-up the Windows Virtual Desktop following the set-up wizard.
Issue:
I have successfully signed into my workspace using a cloud user credential via web client (). When attempting to launch the session desktop, it prompts me to re-enter my credentials in which it returns sign in error (see attached image)
Troubleshoot steps:
Updated my cloud user password after AAD DS was created
Created new cloud user
Recreated the Host pool - Multisession
If anyone could provide some assistance, it would be much appreciated.
[Attached image: 126604-screenshot-2.png]
Enable custom domains.
This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain <tenant-name>.b2clogin.com.
Custom domain overview
You can enable custom domains for Azure AD B2C by using Azure Front Door. Azure Front Door is a global entry point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. You can render Azure AD B2C content behind Azure Front Door, and then configure an option in Azure Front Door to deliver the content via a custom domain in your application's URL.
Watch this video to learn about Azure AD B2C custom domain.
The following diagram illustrates Azure Front Door integration:
- From an application, a user selects the sign-in button, which takes them to the Azure AD B2C sign-in page. This page specifies a custom domain name.
- The web browser resolves the custom domain name to the Azure Front Door IP address. During DNS resolution, a canonical name (CNAME) record with a custom domain name points to your Front Door default front-end host (for example,
contoso-frontend.azurefd.net).
- The traffic addressed to the custom domain (for example,
login.contoso.com) is routed to the specified Front Door default front-end host (
contoso-frontend.azurefd.net).
- Azure Front Door invokes Azure AD B2C content using the Azure AD B2C
<tenant-name>.b2clogin.comdefault domain. The request to the Azure AD B2C endpoint includes the original custom domain name.
- Azure AD B2C responds to the request by displaying the relevant content and the original custom domain.
Important
The connection from the browser to Azure Front Door should always use IPv4 instead of IPv6.
When using custom domains, consider the following:
- You can set up multiple custom domains. For the maximum number of supported custom domains, see Azure AD service limits and restrictions for Azure AD B2C and Azure subscription and service limits, quotas, and constraints for Azure Front Door.
- Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see Front Door pricing.
- To use Azure Front Door Web Application Firewall, you need to confirm your firewall configuration and rules work correctly with your Azure AD B2C user flows.
- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name <tenant-name>.b2clogin.com (unless you're using a custom policy and you block access).
- If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.
Prerequisites
- Create a user flow so users can sign up and sign in to your application.
- Register a web application and enable ID token implicit grant.
- Complete the steps in Get started with custom policies in Active Directory B2C
- Register a web application and enable ID token implicit grant.
Step 1. Add a custom domain name to your Azure AD B2C tenant
Every new Azure AD B2C tenant comes with an initial domain name, <domainname>.onmicrosoft.com. You can't change or delete the initial domain name, but you can add a custom domain.
Follow these steps to add a custom domain to your Azure AD B2C tenant:
Add your custom domain name to Azure AD.
Important
For these steps, be sure to sign in to your Azure AD B2C tenant and select the Azure Active Directory service.
Add your DNS information to the domain registrar. After you add your custom domain name to Azure AD, create a DNS TXT or MX record for your domain. Creating this DNS record for your domain verifies ownership of your domain name.
The following examples demonstrate TXT records for login.contoso.com and account.contoso.com:
The TXT record must be associated with the subdomain, or hostname of the domain. For example, the login part of the contoso.com domain. If the hostname is empty or
@, Azure AD will not be able to verify the custom domain you added. In the following examples, both records are configured incorrectly.
Tip
You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use Azure DNS zone, or App Service domains.
Verify your custom domain name. Verify each subdomain, or hostname you plan to use. For example, to be able to sign-in with login.contoso.com and account.contoso.com, you need to verify both subdomains and not the top-level domain contoso.com.
After the domain is verified, delete the DNS TXT record you created.
Step 2. Create a new Azure Front Door instance
Follow these steps to create a Front Door for your Azure AD B2C tenant. For more information, see creating a Front Door for your application.
To choose the directory that contains the Azure subscription that you’d like to use for Azure Front Door and not the directory containing your Azure AD B2C tenant, select the Directories + subscriptions icon in the portal toolbar.
On the Portal settings | Directories + subscriptions page, find your Azure AD directory in the Directory name list, and then select Switch.
From the home page or the Azure menu, select Create a resource. Select Networking > See All > Front Door.
In the Basics tab of Create a Front Door page, enter or select the following information, and then select Next: Configuration.
2.1 Add frontend host
The frontend host is the domain name used by your application. When you create a Front Door, the default frontend host is a subdomain of
azurefd.net.
Azure Front Door provides the option of associating a custom domain with the frontend host. With this option, you associate the Azure AD B2C user interface with a custom domain in your URL instead of a Front Door owned domain name. For example, https://login.contoso.com.
To add a frontend host, follow these steps:
In Frontends/domains, select + to open Add a frontend host.
For Host name, enter a globally unique hostname. The host name is not your custom domain. This example uses contoso-frontend. Select Add.
2.2 Add backend and backend pool
A backend refers to your Azure AD B2C tenant name,
tenant-name.b2clogin.com. To add a backend pool, follow these steps:
Still in Create a Front Door, in Backend pools, select + to open Add a backend pool.
Enter a Name. For example, myBackendPool. Select Add a backend.
The following screenshot demonstrates how to create a backend pool:
In the Add a backend blade, select the following information, and then select Add.
*Leave all other fields default.
The following screenshot demonstrates how to create a custom host backend that is associated with an Azure AD B2C tenant:
To complete the configuration of the backend pool, on the Add a backend pool blade, select Add.
After you add the backend to the backend pool, disable the Health probes.
2.3 Add a routing rule
Finally, add a routing rule. The routing rule maps your frontend host to the backend pool. The rule forwards a request for the frontend host to the Azure AD B2C backend. To add a routing rule, follow these steps:
In Add a rule, for Name, enter LocationRule. Accept all the default values, then select Add to add the routing rule.
Select Review + Create, and then Create.
Step 3. Set up your custom domain on Azure Front Door
In this step, you add the custom domain you registered in Step 1 to your Front Door.
3.1 Create a CNAME DNS record
Before you can use a custom domain with your Front Door, you must first create a canonical name (CNAME) record with your domain provider to point to your Front Door's default frontend host (say contoso-frontend.azurefd.net).
A CNAME record is a type of DNS record that maps a source domain name to a destination domain name (alias). For Azure Front Door, the source domain name is your custom domain name, and the destination domain name is your Front Door default hostname you configure in step 2.1.
After Front Door verifies the CNAME record that you created, traffic addressed to the source custom domain (such as login.contoso.com) is routed to the specified destination Front Door default frontend host, such as
contoso-frontend.azurefd.net. For more information, see add a custom domain to your Front Door.
Save your changes.
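If your DNS zone happens to be hosted in Azure DNS, the same CNAME record can be created from the command line; the following Azure CLI sketch assumes a zone named contoso.com in a resource group named my-rg:
az network dns record-set cname set-record \
  --resource-group my-rg \
  --zone-name contoso.com \
  --record-set-name login \
  --cname contoso-frontend.azurefd.net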
3.2 Associate the custom domain with your Front Door
After you've registered your custom domain, you can then add it to your Front Door.
On the Front Door designer page, under the Frontends/domains, select + to add a custom domain.
For Frontend host, the frontend host to use as the destination domain of your CNAME record is pre-filled and is derived from your Front Door: <default hostname>.azurefd.net. It cannot be changed.
For Custom hostname, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. For example, login.contoso.com.
Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated.
After the custom domain name is verified, under the Custom domain name HTTPS, select Enabled.
For the Certificate management type, select Front Door management, or Use my own certificate. If you choose the Front Door managed option, wait until the certificate is fully provisioned.
Select Add.
3.3 Update the routing rule
In the Routing rules, select the routing rule you created in step 2.3.
Under the Frontends/domains, select your custom domain name.
Select Update.
From the main window, select Save.
Step 4. Configure CORS
If you customize the Azure AD B2C user interface with an HTML template, you need to Configure CORS with your custom domain.
Configure Azure Blob storage for Cross-Origin Resource Sharing with the following steps:
- In the Azure portal, navigate to your storage account.
- In the menu, select CORS.
- For Allowed origins, enter https://your-domain-name. Replace your-domain-name with your domain name. For example, https://login.contoso.com. Use all lowercase letters when entering your tenant name.
- For Allowed Methods, select both GET and OPTIONS.
- For Allowed Headers, enter an asterisk (*).
- For Exposed Headers, enter an asterisk (*).
- For Max age, enter 200.
- Select Save.
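If you prefer scripting to the portal, the same CORS rule can be applied with the Azure CLI. This is a sketch; the origin and the storage account name (mystorageaccount) are placeholders to replace with your own values:

    az storage cors add --services b --methods GET OPTIONS \
        --origins https://login.contoso.com \
        --allowed-headers "*" --exposed-headers "*" \
        --max-age 200 --account-name mystorageaccount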
Test your custom domain
In the Azure portal, search for and select Azure AD B2C.
Under Policies, select User flows (policies).
Select a user flow, and then select Run user flow.
For Application, select the web application named webapp1 that you previously registered. The Reply URL should show https://jwt.ms.
Copy the URL under Run user flow endpoint.
To simulate a sign-in with your custom domain, open a web browser and use the URL you copied. Replace the Azure AD B2C domain (<tenant-name>.b2clogin.com) with your custom domain.
For example, instead of a URL that begins with:
https://<tenant-name>.b2clogin.com/
use one that begins with:
https://login.contoso.com/
Verify that the Azure AD B2C page loads correctly. Then, sign in with a local account.
Repeat the test with the rest of your policies.
Configure your identity provider
When a user chooses to sign in with a social identity provider, Azure AD B2C initiates an authorization request and takes the user to the selected identity provider to complete the sign-in process. The authorization request specifies the
redirect_uri with the Azure AD B2C default domain name:
https://<tenant-name>.b2clogin.com/<tenant-name>/oauth2/authresp
If you configured your policy to allow sign-in with an external identity provider, update the OAuth redirect URIs with the custom domain. Most identity providers allow you to register multiple redirect URIs. We recommend adding redirect URIs instead of replacing them so you can test your custom policy without affecting applications that use the Azure AD B2C default domain name.
In the following redirect URI:
https://<custom-domain-name>/<tenant-name>/oauth2/authresp
- Replace <custom-domain-name> with your custom domain name.
- Replace <tenant-name> with the name of your tenant, or your tenant ID.
The following example shows a valid OAuth redirect URI:
https://login.contoso.com/contoso.onmicrosoft.com/oauth2/authresp
If you choose to use the tenant ID, a valid OAuth redirect URI would look like the following sample:
https://login.contoso.com/11111111-1111-1111-1111-111111111111/oauth2/authresp
The SAML identity provider's metadata would look like the following sample:
https://<custom-domain-name>.b2clogin.com/<tenant-name>/<your-policy>/samlp/metadata?idptp=<your-technical-profile>
Configure your application
After you configure and test the custom domain, you can update your applications to load the URL that specifies your custom domain as the hostname instead of the Azure AD B2C domain.
The custom domain integration applies to authentication endpoints that use Azure AD B2C policies (user flows or custom policies) to authenticate users. These endpoints may look like the following sample:
https://<custom-domain>/<tenant-name>/<policy-name>/v2.0/.well-known/openid-configuration
https://<custom-domain>/<tenant-name>/<policy-name>/oauth2/v2.0/authorize
https://<custom-domain>/<tenant-name>/<policy-name>/oauth2/v2.0/token
Replace:
- custom-domain with your custom domain
- tenant-name with your tenant name or tenant ID
- policy-name with your policy name. Learn more about Azure AD B2C policies.
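For example, a single-page application using MSAL.js would point its authority at the custom domain and list that domain as a known authority. This is a sketch; the client ID, policy name, and redirect URI are placeholders:

    import { PublicClientApplication } from "@azure/msal-browser";

    const msalInstance = new PublicClientApplication({
      auth: {
        clientId: "00000000-0000-0000-0000-000000000000",   // your app registration ID (placeholder)
        authority: "https://login.contoso.com/contoso.onmicrosoft.com/B2C_1_signupsignin",
        knownAuthorities: ["login.contoso.com"],             // trust the custom domain host
        redirectUri: "https://jwt.ms",                       // placeholder redirect URI
      },
    });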
The SAML service provider metadata may look like the following sample:
https://<custom-domain>/<tenant-name>/<policy-name>/Samlp/metadata
(Optional) Use tenant ID
You can replace your B2C tenant name in the URL with your tenant ID GUID so as to remove all references to “b2c” in the URL. You can find your tenant ID GUID in the B2C Overview page in Azure portal.
For example, change
https://login.contoso.com/contoso.onmicrosoft.com/
to
https://login.contoso.com/<tenant ID GUID>/
If you choose to use tenant ID instead of tenant name, be sure to update the identity provider OAuth redirect URIs accordingly. For more information, see Configure your identity provider.
Token issuance
The token issuer name (iss) claim changes based on the custom domain being used. For example:
https://<domain-name>/11111111-1111-1111-1111-111111111111/v2.0/
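For instance, a decoded token issued through the custom domain might carry an issuer claim like the following (the audience and expiry values are placeholders):

    {
      "iss": "https://login.contoso.com/11111111-1111-1111-1111-111111111111/v2.0/",
      "aud": "00000000-0000-0000-0000-000000000000",
      "exp": 1700000000
    }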
Block access to the default domain name
After you add the custom domain and configure your application, users will still be able to access the <tenant-name>.b2clogin.com domain. To prevent access, you can configure the policy to check the authorization request "host name" against an allowed list of domains. The host name is the domain name that appears in the URL. The host name is available through
{Context:HostName} claim resolvers. Then you can present a custom error message.
- Get the example of a conditional access policy that checks the host name from GitHub.
- In each file, replace the string
yourtenantwith the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is contosob2c, all instances of
yourtenant.onmicrosoft.combecome
contosob2c.onmicrosoft.com.
- Upload the policy files in the following order:
B2C_1A_TrustFrameworkExtensions_HostName.xmland then
B2C_1A_signup_signin_HostName.xml.
Troubleshooting
Azure AD B2C returns a page not found error
- Symptom - After you configure a custom domain, when you try to sign in with the custom domain, you get an HTTP 404 error message.
- Possible causes - This issue could be related to the DNS configuration or the Azure Front Door backend configuration.
- Resolution:
- Make sure the custom domain is registered and successfully verified in your Azure AD B2C tenant.
- Make sure the custom domain is configured properly. The
CNAMErecord for your custom domain must point to your Azure Front Door default frontend host (for example, contoso-frontend.azurefd.net).
- Make sure the Azure Front Door backend pool configuration points to the tenant where you set up the custom domain name, and where your user flow or custom policies are stored.
Azure AD B2C returns the resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
- Symptom - After you configure a custom domain, when you try to sign in with the custom domain, you get the resource you are looking for has been removed, had its name changed, or is temporarily unavailable error message.
- Possible causes - This issue could be related to the Azure AD custom domain verification.
- Resolution: Make sure the custom domain is registered and successfully verified in your Azure AD B2C tenant.
Identity provider returns an error
- Symptom - After you configure a custom domain, you're able to sign in with local accounts. But when you sign in with credentials from external social or enterprise identity providers, the identity provider presents an error message.
- Possible causes - When Azure AD B2C takes the user to sign in with a federated identity provider, it specifies the redirect URI. The redirect URI is the endpoint to where the identity provider returns the token. The redirect URI is the same domain your application uses with the authorization request. If the redirect URI is not yet registered in the identity provider, it may not trust the new redirect URI, which results in an error message.
- Resolution - Follow the steps in Configure your identity provider to add the new redirect URI.
Frequently asked questions
Can I use Azure Front Door advanced configuration, such as Web application firewall Rules?
While Azure Front Door advanced configuration settings are not officially supported, you can use them at your own risk.
When I use Run Now to try to run my policy, why can't I see the custom domain?
Copy the URL, change the domain name manually, and then paste it into your browser.
Which IP address is presented to Azure AD B2C? The user's IP address, or the Azure Front Door IP address?
Azure Front Door passes the user's original IP address. It's the IP address that you'll see in the audit reporting or your custom policy.
Can I use a third-party web application firewall (WAF) with B2C?
To use your own web application firewall in front of Azure Front Door, you need to configure and validate that everything works correctly with your Azure AD B2C user flows or custom policies.
Can my Azure Front Door instance be hosted in a different subscription than my Azure AD B2C tenant?
Yes, Azure Front Door can be in a different subscription.
Next steps | https://docs.microsoft.com/en-us/azure/active-directory-b2c/custom-domain?pivots=b2c-user-flow | 2021-11-27T04:08:30 | CC-MAIN-2021-49 | 1637964358078.2 | [array(['media/custom-domain/custom-domain-user-experience.png',
'Screenshot demonstrates an Azure AD B2C custom domain user experience.'],
dtype=object)
array(['media/custom-domain/custom-domain-network-flow.png',
'Diagram shows the custom domain networking flow.'], dtype=object)] | docs.microsoft.com |
Here's what you get with Phone System
This article describes Phone System features. For more information about using Phone System as your Private Branch Exchange (PBX) replacement, and options for connecting to the Public Switched Telephone Network (PSTN), see What is Phone System.
Clients are available for PC, Mac, and mobile, which provides features on devices from tablets and mobile phones to PCs and desktop IP phones. For more information, see Get clients for Microsoft Teams.
Note
For details about Teams phone systems on different platforms, see Teams features by platform.
To use Phone System features, your organization must have a Phone System license. For more information about licensing, see Microsoft Teams add-on licensing.
Be aware that most features require you to assign the Phone System license and ensure that users are "voice enabled." To assign the license, use the Set-CsUser cmdlet and set the enterprisevoiceenabled parameter to $true. A few features, such as cloud auto attendant, do not require a user to be voice enabled. Exceptions are called out in the table below.
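For example, a sketch of the Set-CsUser call mentioned above (the sign-in address is a placeholder):

    Set-CsUser -Identity "user@contoso.com" -EnterpriseVoiceEnabled $true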
Phone System features
Phone System provides the following features. Unless otherwise noted, features are available in both Teams and Skype for Business Online.
Availability in GCC High and DoD clouds
The following capabilities are not yet available in GCC High and DoD Clouds.
- Call settings for secondary ringer, voicemail, and enhanced delegation
- Transfer to voicemail mid call
- Call phone number from search bar
- Music on hold
- Azure AD reverse number lookup | https://docs.microsoft.com/en-us/microsoftteams/here-s-what-you-get-with-phone-system?toc=/skypeforbusiness/sfbotoc/toc.json&bc=/skypeforbusiness/breadcrumb/toc.json&redirectSourcePath=%252fen-us%252farticle%252fHere-s-what-you-get-with-Cloud-PBX-bc9756d1-8a2f-42c4-98f6-afb17c29231c | 2021-11-27T02:07:44 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.microsoft.com |
Qore comes with its own CMake support files.
The basic step for including Qore in a CMake project is to put the following line into the CMakeLists.txt project configuration:
find_package(Qore)
CMake allows various build types to be set via CMAKE_BUILD_TYPE. The Qore package sets CMAKE_BUILD_TYPE to debug if it is not defined on input.
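For example, to override the default you can pass the build type explicitly when configuring the project:

    cmake -DCMAKE_BUILD_TYPE=Release ..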
The Qore CMake support files are located in the CMAKE_PREFIX/lib[LIB_SUFFIX]/cmake/Qore directory.
find_package(Qore) sets some compiler defines automatically.
More variables are set when the Qore Binary Modules Support for CMake functionality is in use.
There are some helper macros for Developing Qore Modules located in the QoreMacros.cmake file. The macros expect the module source code to be organized in a particular directory structure.
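A typical layout consistent with the template files mentioned below (directory and file names other than Doxyfile.in and cmake_uninstall.cmake.in are assumptions) looks like this:

    my-module/
        CMakeLists.txt
        cmake/
            cmake_uninstall.cmake.in
        docs/
            Doxyfile.in
        src/
            my-module.qpp
            my-module.cpp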
The location of the rest of the files, including the source code, is irrelevant. If Doxyfile.in or cmake_uninstall.cmake.in is not present, the corresponding make target (make docs or make uninstall) will not be created.
See the Qore Binary Module Example for a real-world example.
This macro generates C++ files and documentation headers with the Qore preprocessor (QPP). It takes the files given in in_files_list for processing and sets out_files_list to the paths of the new C++ files stored in CMAKE_BINARY_DIR.
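A minimal usage sketch, assuming the macro is named qore_wrap_qpp as shipped in QoreMacros.cmake; the file and variable names are placeholders:

    # generate C++ sources and documentation headers from the .qpp input
    qore_wrap_qpp(CPP_SOURCES src/my-module.qpp)
    # CPP_SOURCES now lists the generated .cpp files under CMAKE_BINARY_DIR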
This macro sets up the environment for building binary modules.

Arguments:

- target_name = the base name of the module, the same as specified in add_library
- module_version = a version string used in compiler defines later

Macro results:

- the PACKAGE_VERSION define is set to the module_version value
- QORE_INCLUDE_DIR and CMAKE_BINARY_DIR are appended to include_directories
- the target_name target is configured to use the proper module API and filename mask
- an install target is created that installs the module into QORE_MODULES_DIR
- a make uninstall target is created when possible
- a make docs target is created when possible
make docs notes
The macro calls find_package(Doxygen) in this case. If the Doxygen tool is found and the Doxyfile.in template is present, the make docs target is created.
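A usage sketch, assuming the macro is named qore_binary_module; the module name and version string are placeholders:

    add_library(my-module MODULE src/my-module.cpp ${CPP_SOURCES})
    qore_binary_module(my-module "1.0.0")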
This macro creates a make dist target that produces distributable tarballs in TBZ2 format.
This macro prints a configuration summary: text lines with information about the architecture, build type, compiler flags, etc.
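Putting the macros together, a minimal CMakeLists.txt for a module could look like the following sketch. The macro names (qore_wrap_qpp, qore_binary_module, qore_dist, qore_config_info), the project name, and the source file names are assumptions to adapt to your module:

    cmake_minimum_required(VERSION 3.5)
    project(qore-my-module)

    set(VERSION "1.0.0")

    find_package(Qore REQUIRED)

    # generate C++ sources from the .qpp input files
    qore_wrap_qpp(CPP_SOURCES src/my-module.qpp)

    add_library(my-module MODULE src/my-module.cpp ${CPP_SOURCES})

    # set up the Qore module API, install location, docs and uninstall targets
    qore_binary_module(my-module "${VERSION}")

    # optional: tarball target and configuration summary
    qore_dist("${VERSION}")
    qore_config_info()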