content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
How Microsoft/DevDiv uses TFS - Chapter 4
In the previous posts, I spoke about how we used TFS to implement the process.
In this post, I'll talk about how we went about planning a release.
On the feature record, we had a "Planning" tab:
Zooming in a bit:
What we did is have people enter an estimated cost for each feature in the work item. Then we pulled them into a stack-ranking spreadsheet that looked like this:
This is a TFS-bound Excel spreadsheet with some formatting options. Note the following:
- The ballpark estimates (and all the other data) are pulled directly from TFS. This was great, because all the estimates were entered separately, but could be pulled into a single place for planning
- We stack-ranked all the features, top-to-bottom.
- We added some logic to the spreadsheet to compare total cost with team capacity. Teams turned yellow if they used up more than 70% of their capacity, and turned red if they used up over 100%.
This gave us a very quick view of what could and could not be done, without a lot of work or schedule-crunching. It helped us determine where the cut-line was, and we played around by moving certain features up and down, to get a line we felt comfortable with. For example, some larger features were just moved down, simply because it allowed several smaller, key features to be above the cut-line.
Honestly, after a while, it felt like a video game. We called it the yellow/red game, because it was a trick to see how far we could push down the yellow and red! :-)
Next post: How we tracked progress | https://docs.microsoft.com/en-us/archive/blogs/teams_wit_tools/how-microsoftdevdiv-uses-tfs-chapter-4 | 2020-02-17T02:21:33 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
9.1.3. How can the root user login¶
Keeping in line with general best practice, SIMP does not allow
root to
login to the system remotely or at local terminals by default.
However, there may be cases where you need to login as
root for perfectly
valid reasons.
9.1.3.1. Enabling Terminal Logins¶
Important
If you are working on a system that was not installed from an ISO, you
should do this before running
simp bootstrap. Otherwise, unless you have
set up other users, you may not be able to log into your system. | https://simp.readthedocs.io/en/latest/help/FAQ/20_Root_Logins.html | 2020-02-17T00:17:17 | CC-MAIN-2020-10 | 1581875141460.64 | [] | simp.readthedocs.io |
Intro
Chorus can supplement coaching workflow gaps by using Natural Language Processing (NLP) to coach Reps on basic conversation skills. With Chorus taking the wheel on behavior tracking and recommendations, a coaching culture can scale to the reps' desired intake of feedback without overwhelming the busy manager’s calendar.
Self-Coaching feedback from Chorus in an easy/structured workflow that delivers relevant content and suggested behavior changes on-demand.
Where to Find it
Each recording in Chorus has a set of metrics that measure the conversation skills of the sales rep. There are privacy rules in place so that only the owner of the recording can see their personalized coaching metrics. Admins are the only users can see the coaching metrics of others.
Sales Reps also get an overview trend of how they are performing on their recent calls. This view can be found under the Coaching tab on the left navigation bar.
How to use it
Self-Coaching Points and My Conversation Trends give you a snapshot view of how you stand up to best practices.
- Longest Monologue: How long do you speak on a call without stopping to engage the prospect?
- Filler Words: Are you being concise with your word choice, or finding yourself reverting to informal filler words?
- Engaging Questions: How many times did you get a 30+ second response from the prospect?
- Next Steps: You know you need them on each call…Did you define next steps before hanging up?
Use the Recording-level view as quick look into a single conversation. Green, Yellow and Red messages show the user how they compare to Chorus-identified benchmarks and guide them to the best practices for each metric. Each of the four self-coaching areas are designed to be easily identified in any moment of a call. For example, do you ever catch yourself in a rambling answer and know you need to stop talking? Chorus is going to remind you on each call now, and track how your mindfulness is leading to improvement.
To zoom out a layer and understand your habits and skills in an aggregated format, visit the Coaching tab. This view shows you the averages across meetings from the last 7 days.
Use the arrow on the right side of each card to see how that compares to the previous 7 day period. This is a simple viewport into how you’re improving. Check back each week (or each day!) to see how you’re tracking towards those all-green goals!
Other Tips
If you don’t see My Conversation Trends when you click on the Coaching tab, you might not be a Sales Rep user-type in Chorus. Reach out to your Chorus Admin to inquire.
Please sign in to leave a comment. | https://docs.chorus.ai/hc/en-us/articles/360025822273-Self-Coaching | 2020-02-17T00:52:51 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['/hc/article_attachments/360040739913/1.png', '1.png'],
dtype=object)
array(['/hc/article_attachments/360040739953/2.png', '2.png'],
dtype=object)
array(['/hc/article_attachments/360039916714/3.png', '3.png'],
dtype=object) ] | docs.chorus.ai |
Product News
Stay up-to-date with the latest cleverbridge product news.
Increase Conversion Rates with Apple Pay and Amazon Pay
Market trends show that eWallets, a type of payment app that enables customers to purchase items either online or at a store with a smartphone, will become the leading payment method over the next years. As your ecommerce service provider, we are always committed to accommodating the many different ways that your customers choose to pay. As a result, we are happy to announce that Apple Pay and Amazon Pay are now available on the cleverbridge platform.
Main benefits:
- Increase your conversion rate
- Improve the payment approval rate
- Keep your customers happy by offering their preferred payment methods
To get access to Apple Pay and Amazon Pay now, contact the Client Experience team.
We are excited to inform you that the Products & Plans module is now available in the Subscription Commerce Manager, our web-based admin tool. This module enables your ecommerce managers to conveniently manage your product information online.
Main functionalities:
- Go online to view and update the product and plan data that is displayed to your customers.
- Create new products and plans using any computer or browser.
- Set up free trial periods for subscription products in a simpler, more user-friendly UI.
The Product & Plans module is available right now at subcomm.cleverbridge.com. Just use your cleverbridge credentials to log in.
Our Popup Order Process creates a modern, intuitive checkout experience that reduces cart abandonment and drives conversions.
The cleverbridge Popup Order Process offers the following features:
- Buy quickly and directly from a product page through an intuitive lightbox cart
- Maintain brand consistency by installing the code snippet directly on your hosted pages
- Continue the compliance and payment processing benefits of operating your business within a Merchant of RecordThe merchant of record is the entity that holds legal title to goods or services and from which the customer purchases these goods or services. environment
Piet Smet, Integration Specialist, discusses more in the Product Moment below.
If you’re ready to learn more or have questions, feel free to reach out to the Client Experience Team.
On September 14, 2019, a new requirement for authenticating online payments will be introduced in Europe as part of the second Payment Services Directive (PSD2). The goal of this new requirement is to reduce fraud and make online payments more secure.
Our Payments & Clearing team has taken the necessary steps to ensure compliance with the new regulation. Here’s what you need to know:
- We have implemented a Cardholder Initiated Transactions (CIT)/Merchant Initiated Transactions (MIT) framework that will only trigger 3D Secure when required.
- We are utilizing an up-to-date Bank Identification Number (BIN) database to identify issuers based outside of the European Economic Area (EEA) or transactions that use anonymous cards, in both scenarios no Strong Customer Authentication (SCA) mandate is required.
- We are using preferred acquirers with low fraud rates to utilize Transaction Risk Analysis (TRA) exemption for transactions below €100.
We’re aware that there are some exemptions to this mandate. Our team is actively working to identify transactions that are exempt from SCA so we can lessen the impact on your business.
Minmin Zhu, Product Owner of Payments & Clearing, discusses more in the Product Moment below.
If you have questions about PSD2 and the steps we’ve taken to ensure compliance, you can contact our Client Experience Team.
Fueled by powerful data, laser-focused messaging and impeccable timing, our Cart Abandonment Solution will allow you to claim orders that would otherwise be lost.
cleverbridge Cart Abandonment Email Campaigns offer the following features:
- Expanded sales funnels to reach a global audience
- Customizable campaigns sent at highly-targeted cadences
- A managed service to handle integration, reporting and optimization
Ashley Kolpak, Email Marketing Specialist, discusses more in the Product Moment below.
If you’re ready to learn more or have questions, feel free to reach out to [email protected].
Discover how you can increase your Digital Marketing return on investment by leveraging the cleverbridge Performance Marketing Platform powered by Partnerize!
In less than two minutes, Nick Oswald, Performance Marketing Manager at cleverbridge, discusses:
- The real-time reporting feature in Partnerize
- How to ensure you’re making the right decisions with the right publishers
- Why the new platform is worth the investment
Watch now and feel free to reach out to [email protected] with any questions.
It is easier than ever to get the help that you need to use our desktop applications.
You can now find the complete Commerce Assistant (CA) and Business Intelligence (BI) help in the cleverbridge Documentation.
You can now open the cleverbridge Documentation directly from the CA and BI.
You can also find new and improved context-sensitive help features in these applications.
In the CA, the new contextual help panel now gives you access to:
- Updated content and a more modern design
- Inline links which open the cleverbridge Documentation in a browser
- Ability to print help topics directly from the CA
- Dropdowns which allow users to hide/disclose help content
In the BI, all context-sensitive help links now open the cleverbridge Documentation directly.
Additionally, don't forget that with these changes, you benefit from the following:
- All CA/BI help content is identical, no matter where you access it
- You can now easily access information about the entire platform from the desktop tools
- You can now access help for all of our online and offline tools in the cleverbridge Documentation - CA, BI, Subscription Commerce Manager (SCM).
As a new vetted partner on the Salesforce AppExchange, we have the knowledge, experience, and the power to enhance your customer lifecycle journey by optimizing online selling and streamlining sales processes.
Whether you’re a B2B or a B2C company, our ecommerce solutions bolt onto the tools you use every day to enable you to do things like, automate orders, reduce churn and accelerate payment.
We explain more in our Product Moment video here:
If you’d like to drive additional sales and accelerate payments, connect with our Client Experience team now at [email protected].
We are thrilled to let you know you can now view your reports however and whenever you need them.
As we describe in this video, your reports created in the BI can be accessed on the go in our new web-based Subscription Commerce Manager.
Reporting is available right now at subcomm.cleverbridge.com. Just use your BI credentials to log in. This reporting tool is the first piece of functionality that we’ll continue to build out in this web-based management tool.
Create business value faster with cleverbridge Integration Services.
With clicks not code, you can use our adaptable, out-of-the-box solutions to automate business processes and connect applications to transmit data across mission-critical platforms.
As a result, you’ll drive sales, automate renewals, ensure data integrity, and respond to your customers at the most opportune time.
Our modern, flexible solution also grows with your business and features the following:
- Bi-Directional Data Exchange, which allows you to accomplish integration tasks like synchronizing, transmitting, and correcting real-time data across multiple applications
- Migration Services, which allow you to extract, transform, and load data simply and efficiently. You can also copy data from an old application to a new one or load production data to a test instance.
- Integration Services Connectivity, which allows you to rapidly integrate a broad range of business applications without any coding, such as, Marketo, Oracle Marketing Cloud, Eloqua, ExactTarget, HubSpot, Salesforce and Oracle Netsuite.
Making Your Integration a Success
Using our intuitive, drag-and-drop UI, your team can quickly map, deploy, and troubleshoot your most valuable data, without relying on valuable IT or engineering resources.
You’ll be able to choose from a basic field mapping screen for simple integrations, or an advanced process definition editor with custom queries, if-then logic, source and target lookups for each child loops, and the ability to update source rows based on data returned from the target.
cleverbridge can also integrate with critical business systems in various ways, including the following:
- Creating and updating leads, contacts, accounts and opportunities in your CRM system based on purchase data or outbound notifications
- Creating new transactions from your CRM system based off of a pre-defined CPQ process and assign opportunity ID or other identifying information
- Creating and managing your subscribers in a targeted marketing automation system based on outbound notifications
- Renewal automation
- Automate invoicing
Benefits Span the Entire Integration Lifecycle
Integration Services helps your ecommerce initiatives succeed by giving you advantages across the entire integration lifecycle, including the design, build, deploy, run, and adapt stages.
Our Integration Services requires minimal development effort and are scalable. Also, you’ll be able to save on implementation costs and focus your sales team efforts on high-value orders.
Other key features include:
- “Self-documenting” graphics development environment
- Full debugger
- Collaboration features that allow you to better manage integration projects
- Management console for tracking all of your integrations
These features bring lifecycle management benefits to your organization such as:
- Shorter deployment time/faster time to market
- Lower total cost of ownership
- Faster response to run-time issues at critical moments
- Staffing flexibility
- Reduced tracking and management of target applications
- Agile customization of integrations
- Automate responses to reduce churn
Our online training, extensive documentation, and free live technical support help you through each step in the process. Contact Client Experience or your Account Manager to learn more or explore how our Integration Services would work for you.
New functionality in our Revenue Retention Tools now enables you to reduce payment failures and churn, improve the customer renewal experience, and maximize online revenue. The new functionality, combined with our proven multi-step approach, offers you more control over retention strategies and enables you to maximize payment acceptance and renewal rates, while providing a better customer renewal experience. New revenue retention enhancements include:
- Account Updater support for American Express® – Now supporting MasterCard®, Visa® and AMEX®, the tool identifies when the information for a customer’s card on file has changed and automatically locates and updates the credit card number, expiration date, account status, etc. In the U.S., on average 23 percent of invalid/expired credit cards from all three major companies are recovered with Account Updater, which results in renewal rates of up to 85 percent.
- BIN-based Expiration Date Checker – Now taking into account the Bank Identification Number (BIN) of the customer’s credit card, the tool verifies whether a declined credit card is expired and automatically extends the expiration date for reauthorization. As a result, companies are able achieve at least a 38 percent verification rate for expired credit cards, and an overall increase in subscription renewal rate.
- Retry Logic support for multiple retries – The tool reprocesses declined payments after a certain number of days in case a customer exceeded their credit card limit for that billing period or experienced network errors. By adding multiple retries after 1-3-7-14 days, a cleverbridge client was able to recover 65 percent of its declined customer payments.
- Dynamic Transaction Routing – To increase the authorization rate, all customer payments that fail due to generic decline, invalid CVV or an invalid transaction (which represents 78 percent of all payment failures) are now instantly re-routed to another acquirer for a reattempt. With this intelligent transaction routing, companies are able to recover at least 10 percent of failed payments.
Our comprehensive set of Revenue Retention Tools equips you with everything they need to effectively process more renewals and retain more customers. In addition to Account Updater, Expiration Date Checker, and Retry Logic, our Revenue Retention Tools also include Customer Notifications and Winback Campaigns.
For more information on how cleverbridge’s Revenue Retention Tools help your business process more renewals and reduce churn, visit.
Find out how our Performance Marketing Platform can connect you to Brazilian partners to drive traffic and brand awareness.
Brazil is easily Latin America’s largest ecommerce market. With online sales volume of over 20 billion USD, it presents a large market opportunity to grow online revenue. And performance marketing is a low risk, high ROI digital marketing channel that can be used to break into lucrative new markets, such as Brazil.
Use the cleverbridge Performance Marketing Platform to:
- Connect with strategic publishers with Brazilian traffic to drive brand awareness
- Incentive Publisher based on performance (sign-up, sale, click)
- Gain actionable insights like which publisher brings the most/least value?
- Predict your future performance through AI
- Send localized marketing content, including newsletters and emails
Performance Marketing for Brazil includes:
- A tailored go-to-market strategy, including best practices to enter into the Brazilian market
- Localized communication to publishers
- Tailored publisher recruitment & activation campaigns
- Predictive reporting & insights
For more information on expanding into Brazil, including our new pricing, reach out to Client Experience.
Did you find this helpful?
Sorry about that
Why wasn't this helpful? (check all that apply)
Thanks for your feedback. Want to tell us more?
Send an email to [email protected].
Great!
Thanks for taking the time to give us some feedback. | https://docs.cleverbridge.com/public/all/additional-resources/news.htm | 2020-02-17T02:39:43 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['../Resources/Images/news/apple-pay.png', None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['https://play.vidyard.com/keNMmwTPNpRBfMhYiZqM11.jpg', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['https://play.vidyard.com/YfGxRJqHeJuEZnUHSMyKu3.jpg', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['https://play.vidyard.com/1z1D7HaqzBsXCBpXKnPre9.jpg', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['https://play.vidyard.com/69RzEmJif6WMwFr9nb1jYc.jpg', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/revenue-retention-tools-process-flow.png',
None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/performance-marketing-brazil.png', None],
dtype=object) ] | docs.cleverbridge.com |
dhtmlxSidebar uses the icons of the DHTMLX library by default. However, you can use any other icon font pack, if necessary. For this, you need to include the desired icon font on a page and apply icons for Sidebar controls.
For example, you can use the Font Awesome icon pack by including link to its CDN after the source files of dhtmlxSidebar as follows:
<script type="text/javascript" src="../../codebase/sidebar.js"></script> <link rel="stylesheet" href="../../codebase/sidebar.css"> ">
Then you can use the name of the icon as the value of the icon property in the object with the control parameters for Sidebar:
var sidebarData = [ { type: "button", icon: "fas fa-bold", twoState: true, value: "Bold" }, { type: "button", icon: "fas fa-underline", twoState: true, value: "Underline" }, { type: "button", icon: "fas fa-italic", twoState: true, value: "Italic" }, { type: "button", icon: "fas fa-strikethrough", twoState: true, value: "Strikethrough" } ];
You can use the Material Design icon pack by including link to its CDN in the same way.
There is a possibility to make changes in the look and feel of a sidebar. For this you need to take the following steps:
<style> .my-first-class { /*some styles*/ } .my-second-class { /*some styles*/ } </style>
Back to topBack to top
<style> .my_first_class { /*some styles*/ } .my_second_class { /*some styles*/ } </style> var sidebar = new dhx.Sidebar({ css:"my_first_class my_second_class" }); | https://docs.dhtmlx.com/suite/sidebar__customization.html | 2020-02-17T02:07:57 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.dhtmlx.com |
All content with label adaptor+api+client+configuration+distribution+gridfs+hash_function+index+infinispan+interface+jta+mvcc+query+replication.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, rehash, recovery, transactionmanager, dist, release, partitioning, deadlock, archetype, jbossas, lock_striping, guide, schema, listener,
state_transfer, cache, s3, amazon, memcached, grid, jcache, test, xsd, ehcache, maven, documentation, wcm, write_behind, 缓存, ec2, s, hibernate, getting, aws, custom_interceptor, clustering, setup, eviction, concurrency, out_of_memory, examples, jboss_cache, import, events, batch, buddy_replication, loader, colocation, xa, cloud, remoting, tutorial, notification, murmurhash2, xml, jbosscache3x, read_committed, meeting, started, cachestore, data_grid, cacheloader, hibernate_search, cluster, development, br, websocket, async, transaction, interactive, xaresource, build, gatein, hinting, searchable, demo, scala, installation, command-line, migration, non-blocking, rebalance, filesystem, jpa, tx, gui_demo, eventing, snmp, deployer, shell, client_server, murmurhash, infinispan_user_guide, standalone, repeatable_read, snapshot, hotrod, webdav, docs, batching, consistent_hash, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - adaptor, - api, - client, - configuration, - distribution, - gridfs, - hash_function, - index, - infinispan, - interface, - jta, - mvcc, - query, - replication )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/adaptor+api+client+configuration+distribution+gridfs+hash_function+index+infinispan+interface+jta+mvcc+query+replication | 2020-02-17T01:51:57 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.jboss.org |
Vista RTM on my Toshiba M400: smooth update from RC2
During the last couple of weeks, I was still using Vista RC2 on my M400 since I was on a roadshow, with all my demos built on that pre-release version, and I did not want to run the risk of ending up without a demo machine in the middle of the tour after a potentially failed update...
So I waited until yesterday, when the tour was over, and tried the update from Vista Ultimate RC2 to RTM. The M400 is notorious for having had lots of driver problems during the Beta phase (RC1 was almost unusable on my machine, BSOD at least once a day), RC2 was really good though, lacking a bit of performance, but otherwise being perfectly usable. But of course you don't run on pre-RTM bits forever. I was a bit worried whether I would have to re-install everything from scratch, since the update from RC2 was not officially supported (and our internal documents say explicitly "don't do this on the M400"), and I've heard a few stories, esp. on the RAID driver killing a machine occasionally. After doing the necessary backups (data mostly, I did not use Windows Easy Transfer although I've heard it works quite well on Vista), I launched the setup from within my running RC2 installation, it went ahead, offered to update, and complained about a few incompatible apps (which you then have to uninstall, which is a nuisance because (a) this can fail, in which case you are not able to proceed with the install at all, and (b) you have to restart the setup, incl. entering the product key again). But after that it just started doing the usual thing (copying files, etc.), which took a loooong time (it correctly says "this may take a few hours to complete" which it did), and booted into the RTM version. Sweet! I had to manually install a lot of drivers, but now everything works fine (incl. biometrics, bluetooth, HDD protection, etc.). And I am happy. "Windows Experience Index" is 3.1, which is pretty low even compared to my 3-year old Inspiron 8600 (3.2), but is due to the Intel graphics chipset which is not for high-performance games etc. Aero Glass (incl. 3D Flip) work really well, though, so for business use it's a great machine. | https://docs.microsoft.com/en-us/archive/blogs/frankpr/vista-rtm-on-my-toshiba-m400-smooth-update-from-rc2 | 2020-02-17T02:24:12 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['http://img.microsoft.com/library/media/1033/windows/images/homepage/wv_netProg_55x55.jpg',
None], dtype=object) ] | docs.microsoft.com |
Support for additional languages
Important
Some of the functionality described in this release plan has not been released. Delivery timelines may change and projected functionality may not be released (see Microsoft policy). Learn more: What's new and planned
Feature details
Power Virtual Agents is scaling up to support a larger set of languages. In addition to English, bots will be able to understand and converse in French, German, Spanish, Italian, Portuguese, and Chinese (a bot will only support one language). We will continue to increase support for additional languages over time. Note that the exact order and timing of language availability remain subject to change. | https://docs.microsoft.com/en-us/power-platform-release-plan/2020wave1/power-virtual-agents/support-additional-languages | 2020-02-17T00:45:04 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
You can extract features of an image object and recognize it from specific images. You can also track the image object in your application.
The main image recognition and tracking features include:
Recognizing images
You can recognize image objects in specific images, and extract image object features.
Tracking images
You can track image objects using the camera preview images and a specific image tracking model.
To enable your application to use the image recognition and tracking functionality:
Install the NuGet packages for Media Vision and Camera.
To use the methods and properties of the image recognition and tracking classes and to handle camera preview, include the Tizen.Multimedia and Tizen.Multimedia.Vision namespaces in your application:
using Tizen.Multimedia; using Tizen.Multimedia.Vision;
Define the configuration settings:
For configuring image object and feature extraction, create an instance of the Tizen.Multimedia.Vision.ImageFillConfiguration class and set its attributes accordingly:
static ImageFillConfiguration configFill = new ImageFillConfiguration(); /// Set the scale factor of the image being recognized configFill.ObjectScaleFactor = 1.2; /// Set the maximum amount of image key points to be detected configFill.ObjectMaxKeyPoints = 1000;
For image recognition, create an instance of the Tizen.Multimedia.Vision.ImageRecognitionConfiguration class and set its attributes accordingly:
static ImageRecognitionConfiguration configRecog = new ImageRecognitionConfiguration(); /// Set the scene scale factor configRecog.SceneScaleFactor = 1.2; /// Set the maximum amount of key points to be detected in a scene configRecog.SceneMaxKeyPoints = 3000;
For image tracking, create an instance of the Tizen.Multimedia.Vision.ImageTrackingConfiguration class and set its attributes accordingly:
static ImageTrackingConfiguration configTrack = new ImageTrackingConfiguration(); /// Set the history amount configTrack.HistoryAmount = 5; /// Set the expected offset configTrack.ExpectedOffset = 0.5;
To recognize an image (the target) in another (the scene):
To prepare the target image being recognized, create an instance of the Tizen.Multimedia.Vision.MediaVisionSource class with raw image buffer data and its corresponding width, height, and color space parameters:
/// Assume that there is a decoded raw data buffer of the byte[] type, and /// it has 640x480 resolution with an RGB888 color space MediaVisionSource sourceTarget = new MediaVisionSource(bytes, width, height, ColorSpace.Rgb888);
Create an instance of the Tizen.Multimedia.Vision.ImageObject class and use its
Fill() method to fill it with the
Tizen.Multimedia.Vision.MediaVisionSource instance:
static ImageObject obj = new ImageObject(); obj.Fill(sourceTarget); /// If you want to apply configuration options to the fill operation: /// obj.Fill(sourceTarget, configFill); /// If you want a specific label for the ImageObject instance, set it manually /// Otherwise the label automatically increments with each fill operation obj.SetLabel(1);
To prepare the scene where the target image is to be recognized, create a
Tizen.Multimedia.Vision.MediaVisionSource instance which stores the scene:
/// Assume that there is a decoded raw data buffer of the byte[] type, and /// it has 640x480 resolution with an RGB888 color space MediaVisionSource sourceScene = new MediaVisionSource(bytes, width, height, ColorSpace.Rgb888);
To recognize the target inside the scene, use the
RecognizeAsync() method of the Tizen.Multimedia.Vision.ImageRecognizer class:
/// You can recognize multiple targets ImageObject[] targets = new ImageObject[1] (obj); var results = await ImageRecognizer.RecognizeAsync(sourceScene, targets); foreach (ImageRecognitionResult imageResult in results) { if (imageResult.Success) Log.info(LogUtils.TAG, imageResult, Region.ToString(); else Log.info(LogUtils.Tag, "ImageRecognition: " + imageResult.Success.ToString()); }
To track images:
To prepare the camera and create an image tracking model:
Define a camera preview event handler for the
Preview event of the Tizen.Multimedia.Camera class and create an instance of that class:
/// Define a camera preview event handler static void PreviewCallback(object sender, PreviewEventArgs e) { PreviewData preview = e.Preview; SinglePlane singlePlane = (SinglePlane)preview.Plane; if (preview.Format == CameraPixelFormat.Rgb888) { MediaVisionSource source = new MediaVisionSource(singlePlane.Data, preview.width, preview.height, ColorSpace.Rgb888); } } /// Create the Tizen.Multimedia.Camera instance static Camera camera = null; try { camera = new Camera(CameraDevice.Rear); } catch (NotSupportedException) { Log.Info("Image Tracking Sample", "NotSupported"); }
Set the camera display, register the camera preview event handler, and start the camera preview with the
StartPreview() method:
/// Set the camera display camera.Display = new Display(new Window("Preview")); /// Register the camera preview event handler camera.Preview += PreviewCallback; IList previewFormats = camera.Feature.SupportedPreviewPixelFormats.ToList(); foreach (CameraPixelFormat previewFormat in previewFormats) { camera.Setting.PreviewPixelFormat = previewFormat; break; } /// Start the camera preview camera.StartPreview();
Create the image tracking model as an instance of the Tizen.Multimedia.Vision.ImageTrackingModel class:
static ImageTrackingModel model = new ImageTrackingModel();
Create a target image as an instance of the Tizen.Multimedia.Vision.MediaVisionSource class.
Create an instance of the Tizen.Multimedia.Vision.ImageObject class and use its
Fill() method to fill it with the target image.
static MediaVisionSource sourceTarget = new MediaVisionSource(bytes, width, height, ColorSpace.Rgb888); static ImageObject obj = new ImageObject(); obj.Fill(sourceTarget);
Set the target of the image tracking model with the
SetTarget() method of the
Tizen.Multimedia.Vision.ImageTrackingModel class, which takes the
Tizen.Multimedia.Vision.ImageObject instance as its parameter:
model.SetTarget(obj)
To track the target, use the
TrackAsync() method of the Tizen.Multimedia.Vision.ImageTracker class:
/// Assume that "frames" contains a sequence of decoded images as /// Tizen.Multimedia.Vision.MediaVisionSource instances foreach (MediaVisionSource frame in frames) { var result = await ImageTracker.TrackAsync(frame, model); /// If you want to apply configuration options to the tracking operation: /// var result = await ImageTracker.TrackAsync(frame, model, configTrack); if (result == null) { continue; } }
When image tracking is no longer needed, deregister the camera preview event handler, stop the camera preview, and destroy the camera instance:
camera.Preview -= PreviewCallback; camera.StopPreview(); camera.Dispose(); | https://docs.tizen.org/application/dotnet/guides/multimedia/image-recognition | 2020-02-17T00:56:02 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.tizen.org |
Height foot Computes the intersection point of the height and the base of a triangle. Syntax height_foot(Point, Point, Point) height_foot(Triangle, Integer) Description height_foot(Point, Point, Point) Given three points , and , computes the intersection point of the base and the height of the triangle with (the base of the triangle is the segment ). height_foot(Triangle, Integer) Given a triangle and an integer , computes the intersection point of the base and the height of the triangle with height at -th vertex. Related functions Height, Orthocenter Table of Contents Syntax Description Related functions | https://docs.wiris.com/en/calc/commands/geometry/height_foot?do=login§ok=86733bf66af9174d1f7f929514ef427c | 2020-02-17T01:19:19 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.wiris.com |
All content with label cloud+coherence+data_grid+distribution+gridfs+import+index+infinispan+interactive+jcache+jsr-107+locking+murmurhash+repeatable_read+scala.
Related Labels:
podcast, publish, datagrid, interceptor, server, rehash, replication, transactionmanager, dist, release, partitioning, query, deadlock, intro, contributor_project, archetype, lock_striping, jbossas, demos,
guide, schema, listener, state_transfer, cache, s3, amazon, memcached, grid, test, api, xsd, ehcache, maven, documentation, youtube, userguide, write_behind, ec2, 缓存, streaming, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, large_object, concurrency, out_of_memory, jboss_cache, events, hash_function, configuration, batch, buddy_replication, loader, colocation, remoting, mvcc, tutorial, notification, presentation, murmurhash2, read_committed, xml, jbosscache3x, meeting, cachestore, cacheloader, resteasy, hibernate_search, cluster, development, br, permission, websocket, async, transaction, xaresource, build, hinting, searchable, demo, installation, ispn, client, migration, non-blocking, rebalance, filesystem, jpa, tx, user_guide, gui_demo, eventing, student_project, client_server, testng, infinispan_user_guide, standalone, snapshot, hotrod, webdav, docs, batching, consistent_hash, jta, faq, 2lcache, docbook, lucene, jgroups, rest, hot_rod
more »
( - cloud, - coherence, - data_grid, - distribution, - gridfs, - import, - index, - infinispan, - interactive, - jcache, - jsr-107, - locking, - murmurhash, - repeatable_read, - scala )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/cloud+coherence+data_grid+distribution+gridfs+import+index+infinispan+interactive+jcache+jsr-107+locking+murmurhash+repeatable_read+scala | 2020-02-17T01:24:30 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.jboss.org |
Troubleshoot adding and deleting organization users
Azure DevOps Services | Azure DevOps Server 2019 | TFS 2018 | TFS 2017 | TFS 2015 | TFS 2013
Permissions
Q: Why can't I manage users?
A: To access and manage users at the organization level, you must be a member of the Project Collection Administrators group or the organization Owner. To get added, see Set permissions at the project- or collection-level.
Visual Studio subscriptions
Q: When do I select "Visual Studio/MSDN Subscriber"?
A: Assign this access level to users who have active, valid Visual Studio subscriptions. Azure DevOps automatically recognizes and validates Visual Studio subscribers who have Azure DevOps as a benefit. You need the email address that's associated with the subscription.
For example, if a user selects Visual Studio/MSDN Subscriber, but the user doesn't have a valid, active Visual Studio subscription, they can work only as a Stakeholder.
Q: Which Visual Studio subscriptions can I use with Azure DevOps?
A: See Azure DevOps benefits for Visual Studio subscribers.
Q: Why won't my Visual Studio subscription validate?
A: See Why won't Azure DevOps recognize my Visual Studio subscription?
Q: Why do Visual Studio subscriber access levels change after a subscriber signs in?
A: Azure DevOps recognizes Visual Studio subscribers. Azure DevOps automatically assigns a user access that's based on the user's subscription and not on the current access level that's assigned to the user.
Q: What happens if a user's subscription expires?
A: If no other access levels are available, users can work as Stakeholders. To restore access, a user must renew their subscription.
Q: What happened to Visual Studio Online Professional?
A: In 2016, we replaced Visual Studio Online Professional with the Visual Studio Professional monthly subscription. Customers who'd been purchasing Visual Studio Online Professional were able to continue purchasing it after that point, but it wasn't available to new customers. On September 30, 2019, we'll officially retire Visual Studio Online Professional. As a courtesy, billing for it stopped after August 1, 2019.
When Visual Studio Online Professional is retired, any users that are still assigned to it are assigned to the best Azure DevOps access level available to your organization. As a result, your Professional users’ access may be downgraded to Basic or Stakeholder. To avoid being downgraded, buy a Visual Studio Professional monthly subscription and assign your Professional users to it. The monthly subscription has the same monthly cost as Visual Studio Online Professional.
Follow these instructions to identify if you have Professional users, buy a monthly subscription, and assign them to it by September 30, 2019:
Organization settings.
Select Users and filter by access level to show only Professional users.
Buy a Visual Studio Professional monthly subscription.
Assign your Professional users to the subscription in the Visual Studio subscriptions administration portal.
If you don’t complete these steps by September 30, 2019, and your users are downgraded to Basic or Stakeholder access, you may restore their Professional access at any time by following the instructions above.
User access
Q: What does "Last Access" mean in the All Users view?
The value in Last Access is the last date a user accessed any resources or services. Accessing Azure DevOps includes using organizationname.visualstudio.com directly and using resources or services indirectly. For example, you might use the Azure Artifacts extension, or you can push code to Azure DevOps from a Git command line or IDE.
Q: Can a user who has paid for Basic access join other organizations?
A: No, a user can join only the organization for which the user has paid for Basic access. But a user can join any organization where free users with Basic access are still available. The user can also join as a user with Stakeholder access for free. does a user lose access to some features?
A: A user can lose access for the following reasons (although the user can continue to work as a Stakeholder):
The user's Visual Studio subscription has expired. Meanwhile, the user can work as a Stakeholder, or you can give the user Basic access until the user renews their subscription. After the user signs in, Azure DevOps restores access automatically.
The Azure subscription used for billing is no longer active. All purchases made with this subscription are affected, including Visual Studio subscriptions. To fix this issue, visit the Azure account portal.
The Azure subscription used for billing was removed from your organization. Learn more about linking your organization.
Your organization has more users with Basic access than the number of users that you're paying for in Azure. Your organization includes five free users with Basic access. If you need to add more users with Basic access, you can pay for these users.
Otherwise, on the first day of the calendar month, users who haven't signed in to your organization for the longest time lose access first. If your organization has users who don't need access anymore, remove them from your organization.
Azure Active Directory and your organization
Q: Why do I have to add users to a directory?
A: Your organization authenticates users and controls access through Azure Active Directory (Azure AD). All users must be directory members to get access.
If you're a directory administrator, you can add users to the directory. If you're not an administrator, work with your directory administrator to add users. Learn more about how to control access by using a directory.:. This is an advanced process we don't advise, but it allows the user to query Azure AD from the Azure DevOps organization thereafter., as pictured below. criteria above. have seen it take several hours or even days before this change is reflected inside Azure DevOps. If it doesn't fix your Azure DevOps issue immediately, give it some time and keep trying..
More | https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/faq-add-delete-users?cid=kerryherger&view=azure-devops | 2020-02-17T01:59:36 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['/en-us/azure/devops/media/connect-your-organization-to-azure-ad.png',
'Check for a connected directory in Organization settings = Not connected'],
dtype=object)
array(['/en-us/azure/devops/media/organization-connected-azure-ad.png',
'Check for a connected directory in Organization settings = Connected'],
dtype=object) ] | docs.microsoft.com |
Microsoft Search
Why Microsoft Search?
Get an enterprise search experience that increases productivity and saves time by delivering more relevant search results for your organization
Featured topics
Make content searchable
See which features are available for admins and users, including what you'll find when you search
-
-
Use Microsoft Search
View articles and videos to help train your users to be more productive with Microsoft Search | https://docs.microsoft.com/pl-pl/microsoftsearch/ | 2020-02-17T00:42:12 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
panda3d.vision.ARToolKit¶
- class
ARToolKit¶
ARToolKit is a software library for building Augmented Reality (AR) applications. These are applications that involve the overlay of virtual imagery on the real world. It was developed by Dr. Hirokazu Kato. Its ongoing development is being supported by the Human Interface Technology Laboratory (HIT Lab) at the University of Washington, HIT Lab NZ at the University of Canterbury, New Zealand, and ARToolworks, Inc, Seattle. It is available under a GPL license. It is also possible to negotiate other licenses with the copyright holders.
This class is a wrapper around the ARToolKit library.
Inheritance diagram
- static
make(camera: NodePath, paramfile: Filename, markersize: float) → ARToolKit¶
Create a new ARToolKit instance.
Camera must be the nodepath of a panda camera object. The panda camera’s field of view is initialized to match the field of view of the physical webcam. Each time you call analyze, all marker nodepaths will be moved into a position which is relative to this camera. The marker_size parameter indicates how large you printed the physical markers. You should use the same size units that you wish to use in the panda code.
- Return type
-
setThreshold(n: float) → None¶
As part of its analysis, the ARToolKit occasionally converts images to black and white by thresholding them. The threshold is set to 0.5 by default, but you can tweak it here.
attachPattern(pattern: Filename, path: NodePath) → None¶
Associates the specified glyph with the specified NodePath. Each time you call analyze, ARToolKit will update the NodePath’s transform. If the node is not visible, its scale will be set to zero.
analyze(tex: Texture, do_flip_texture: bool) → None¶
Analyzes the non-pad region of the specified texture. This causes all attached nodepaths to move. The parameter do_flip_texture is true by default, because Panda’s representation of textures is upside down from ARToolKit. If you already have a texture that’s upside-down, however, you should set it to false. | https://docs.panda3d.org/1.10/cpp/reference/panda3d.vision.ARToolKit | 2020-02-17T00:31:43 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.panda3d.org |
Getting Started with Eggplant Performance for JMeter
Eggplant Performance for JMeter is a free, fully functional version of Eggplant’s load and performance testing solution that you can use with JMeter test plans. Eggplant Performance adds strong test composition, environment management, dynamic control, and result analytics to JMeter’s existing capabilities.
This guide covers the following topics:
- Quick Start: If you're familiar with JMeter and software performance testing, jump in to this section, which features step-by-step instructions for importing your JMeter test plans, running them in Eggplant Performance Test Controller, and viewing results in Eggplant Performance Analyzer.
- Terminology and Functionality Comparison: No two tools use the same terminology, and how you use the tools can differ vastly even when your end goal is the same. This section highlights key conceptual differences between Eggplant Performance and JMeter and provides direct comparisons between common JMeter test elements (samplers, controllers, etc.) and their Eggplant Performance equivalents.
Quick Start
This guide assumes you have already installed Eggplant Performance for JMeter. If you need help with installation, see Installing the Eggplant Performance Applications before proceeding.
To use Eggplant Performance for JMeter, you need a JMeter test plan. You can use your own test plan or download the sample test plan used in this guide.
Creating a New Workspace
After Eggplant Performance for JMeter has been installed, launch the Eggplant Performance Studio component. Studio prompts you to create a new workspace, locate an existing one, or import a previously exported workspace:
Select Create New, then use the Create a workspace dialog box to give the workspace a name and browse to a directory:
A folder for the workspace is created in the directory you specify.
Your newly created workspace is opened in Studio, revealing a collection of six folders. You can think of these as somewhat similar to JMeter's test elements, in that each folder offers some functionality that you'll likely want to use at some point.
The folder of interest is of course the JMeter Test Plans folder. Before you can make use of it, you need to create a project.
Creating a New Project
A project in Eggplant Performance is typically where you maintain assets related to a specific test or series of tests for a specific application under test. Every test in Eggplant Performance is run from within a specific project.
To create a new project, click Create a new project in the central pane of the Studio UI, then give the project a suitable name:
Importing the JMeter Test Plan
After you create a project, you can import your JMeter test plan. Right-click the JMeter Test Plans folder in the workspace, then select Import JMeter test plan.
A dialog box appears to indicate that the test plan .jmx file has been copied into the workspace.
Note: Eggplant Performance will not modify the original .jmx file. If you want to update the test plan without repeating the import process, you can open and edit the .jmx file that was copied into the workspace.
You and locate imported test plans on disk in the workspace folder. Go to Tools > Explore > Workspace files on Studio's main menu, then locate the JMeter folder. Alternatively, you can right-click the test plan in the workspace, then select Open test plan in JMeter.
Creating a Test
Next, you need to create a test for the test plan. The test is a key component of Eggplant Performance The test definition is what Test Controller needs in order to execute anything.
The main purpose of the test in the context of JMeter is to let you distribute the load generation across one or more injectors. It also lets you define KPIs and enable server monitoring.
To create a test, right-click the test plan node, then select Create test.
The new test appears in the project under the Tests node. Selecting it reveals details about the test in the center pane of the UI:
Running the Test
At this point, you're all set to run your test in Test Controller. To start the test, right-click the test in Studio, then select Open test in Test Controller. This action launches Test Controller and loads the test. If the test was already running, Test Controller reloads the test (in case you've made any changes to it).
Opening the test in Test Controller does not launch the test, however. To start the test, click the green Start test button
on the toolbar to open the Start test run dialog box:
On this dialog box, you see that Test Controller performs several pre-test checks to ensure that all the required assets and configurations are in place for the defined test. If everything is ready, you can click Start at the bottom of the dialog box to start the test.
Customizing the Dashboard
Assuming the test started successfully, Test Controller soon updates its runtime dashboards with information being collected by the VUs. The default view typically consists of the following tables and charts:
- VU concurrency chart
- Transactions Summary
- Injector status (useful for keeping an eye on hardware utilization)
- HTTP response code rate
There are, however, many additional interesting charts to display, available in the tree view on the left-hand side of the UI. For more information about the available charts and reports, see Real-Time Test Monitoring in Test Controller.
The following video shows how you can modify the UI to suit your needs. Note that the UI configuration is stored on a per-test basis.
Launching Analyzer and Analyzing a Test Run
Test Controller displays charts and tables of data. However, Analyzer is the Eggplant Performance component that is best suited for digging deep into collected test results. Analyzer is installed separately by default, so make sure you download and install it before proceeding.
After you install Analyzer, you can launch it either from its desktop shortcut or by using the toolbar button in Studio:
After you launch Analyzer, click the Analyze a test run button in the center pane to open the Test Run Browser dialog box. You'll be presented with your workspace, including any completed test runs:
Select the test run you want to analyze, then click Import.
After the import process completes, you should see several folders appear in the left-hand pane with your test run at the top. When you select the test run, the right-hand pane displays several tabs with detailed information about the test that was executed, such as when it was started and stopped, the VU Group composition, which injectors were used, errors and warnings encountered, and so forth.
Creating an Analysis View
The last step involves creating an analysis view, which is a collection of charts and tabular data that are generated based on a specific analysis template. To create an analysis view, right-click the test run in the left-hand tree view, then select New analysis view to launch the Create an Analysis View wizard.
The analysis view lets you focus your analysis on a particular period of time within the test. For example, you might be interested in only response time statistics when the test was at "steady state," which would exclude the ramp-up and ramp-down periods. To change the focus period, which is called the time slice, drag the vertical sliders to the desired position.
Click Next, then the wizard presents a screen where you select which analysis view template to use. There are two built-in templates: Default and Default Web. Select the Default Web template to generate HTTP request-level information in addition to what is generated by the Default template.
Click Next and the wizard switches to the Apply Grouping screen. This page is most useful for grouping error messages together that would otherwise be logged separately, such as when the error messages contain dynamic values. For more information, see Using Grouping in Eggplant Performance Analyzer.
For now, you can click Next, which leads to the final step of the wizard. Give the analysis view a name, then click Finish to complete the analysis view creation.
Viewing Analyzed Results
You've now generated charts and tabular data from your JMeter test. You can explore the different charts by selecting them in the tree. It is often useful to view two or more charts at the same time on the same X-axis. To do so, hold down CTRL and select multiple nodes as depicted in the following screenshot:
Next Steps
Now that you're familiar with the end-to-end process of importing a JMeter test plan, running a test, and analyzing its results, you might want to explore the other functionality available in Eggplant Performance. Here are some useful links to get you started:
- Server Monitoring: Gathering client-side metrics gives you only half the picture. Make sure to define monitoring targets so that you can see the hardware utilization your VUs incur on the system you're testing.
- Web Log Viewer: The Web Log is a useful debugging utility that exposes the lower-level HTTP request and response data. By default, only 1 in 100 VUs has a web log enabled, but you can modify this number in the test's Runtime Settings. The first VU in your test should feature a globe icon
that can be used to open the web log. See Eggplant Performance Test Controller Viewers for information about how to open the web log directly from a VU that has logging enabled.
Terminology and Functionality Comparison
Eggplant Performance frequently uses different terms for concepts you might be familiar with from JMeter tests. Use the following tables to familiarize yourself with how common JMeter concepts are referenced in Eggplant Performance.
High-Level Concepts
Common Elements | http://docs.eggplantsoftware.com/epp/9.0.0/integration/epp-jm-getting-started.htm | 2020-02-17T00:38:30 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['../../../Resources/Images/epp-find-workspace-dialog.png',
'The Find a workspace dialog box in Eggplant Performance Studio The Find a workspace dialog box in Eggplant Performance Studio'],
dtype=object)
array(['../../../Resources/Images/epp-create-workspace-dialog.png',
'The Create a workspace dialog box in Eggplant Performance Studio The Create a workspace dialog box in Eggplant Performance Studio'],
dtype=object)
array(['../../../Resources/Images/epp-create-test-menu.png',
'Creating a test from a test plan in Eggplant Performance Studio Creating a test from a test plan in Eggplant Performance Studio'],
dtype=object)
array(['../../../Resources/Images/epp-start-test-run-dialog.png',
'The Start test run dialog box in Eggplant Performance Test Controller'],
dtype=object)
array(['../../../Resources/Images/epp-launch-analyzer-button.png',
'The launch Analyzer button in Eggplant Performance Studio The launch Analyzer button in Eggplant Performance Studio'],
dtype=object) ] | docs.eggplantsoftware.com |
Introduction¶
Concept of operations¶
Event Management¶
From an event management point of view MozDef relies on Elastic Search for:
- event storage
- event archiving
- event indexing
- event searching
This means if you use MozDef for your log management you can use the features of Elastic Search to store millions of events, archive them to Amazon if needed, index the fields of your events, and search them using highly capable interfaces like Kibana.
MozDef differs from other log management solutions that use Elastic Search in that it does not allow your log shippers direct contact with Elastic Search itself. In order to provide advanced functionality like event correlation, aggregation and machine learning, MozDef inserts itself as a shim between your log shippers (rsyslog, syslog-ng, beaver, nxlog, heka, logstash) and Elastic Search. This means your log shippers interact with MozDef directly and MozDef handles translating their events as they make they’re way to Elastic Search.
Event Pipeline¶
The logical flow of events is:
+–––––––––––+ +––––––––––––––+ | MozDef +––––––––––––––+ | +––––––––––+ | FrontEnd | Elastic | | shipper +–––––––+–––––––––––+ | Search | ++++++++++++ | cluster | ++++++++++++ | | | shipper +–––––––+–––––––––––+ | | +––––––––––+ | MozDef +-–––––––––––––+ | | FrontEnd | | +–––––––––––+ | | +––––––––––––––+
Choose a shipper (logstash, nxlog, beaver, heka, rsyslog, etc) that can send JSON over http(s). MozDef uses nginx to provide http(s) endpoints that accept JSON posted over http. Each front end contains a Rabbit-MQ message queue server that accepts the event and sends it for further processing.
You can have as many front ends, shippers and cluster members as you with in any geographic organization that makes sense for your topology. Each front end runs a series of python workers hosted by uwsgi that perform:
- event normalization (i.e. translating between shippers to a common taxonomy of event data types and fields)
- event enrichment
- simple regex-based alerting
- machine learning on the real-time event stream
Event Enrichment¶
To facilitate event correlation, MozDef allows you to write plugins to populate your event data with consistent meta-data customized for your environment. Through simple python plug-ins this allows you to accomplish a variety of event-related tasks like:
- further parse your events into more details
- geoIP tag your events
- correct fields not properly handled by log shippers
- tag all events involving key staff
- tag all events involving previous attackers or hits on a watchlist
- tap into your event stream for ancilary systems
- maintain ‘last-seen’ lists for assets, employees, attackers
Event Correlation/Alerting¶
Correlation/Alerting is currently handled as a series of queries run periodically against the Elastic Search engine. This allows MozDef to make full use of the lucene query engine to group events together into summary alerts and to correlate across any data source accessible to python. | https://mozdef.readthedocs.io/en/latest/introduction.html | 2020-02-17T02:06:12 | CC-MAIN-2020-10 | 1581875141460.64 | [] | mozdef.readthedocs.io |
Todo List Application in One File¶
This:
- provides views to list, insert and close tasks
- uses route patterns to match your URLs to view code functions
- uses Mako Templates to render your views
- stores data in an SQLite database
Here's a screenshot of the final application:
Step 1 - Organizing the project¶
Note
For help getting Pyramid set up, try the guide Installing Pyramid.
To use Mako templates, you need to install the
pyramid_mako add-on as
indicated under Major Backwards Incompatibilities under What's New In
Pyramid 1.5.
In short, you'll need to have both the
pyramid and
pyramid_mako
packages installed. Use
easy_install pyramid pyramid_mako or
pip
install pyramid and
pip install pyramid_mako to install these
packages.
Before getting started, we will create the directory hierarchy needed for our application layout. Create the following directory layout on your filesystem:
/tasks /static /templates
Note that the
tasks directory will not be used as a Python package; it will
just serve as a container in which we can put our project.
Step 2 - Application setup¶
To begin our application, start by adding a Python source file named
tasks.py to the
tasks directory. We'll add a few basic imports within
the newly created file.
Then we'll set up logging and the current working directory path.
Finally, in a block that runs only when the file is directly executed (i.e., not imported), we'll configure the Pyramid application, establish rudimentary sessions, obtain the WSGI app, and serve it.
We now have the basic project layout needed to run our application, but we still need to add database support, routing, views, and templates.
Step 3 - Database and schema¶
To make things straightforward, we'll use the widely installed SQLite database
for our project. The schema for our tasks is simple: an
id to uniquely
identify the task, a
name not longer than 100 characters, and a
closed
boolean to indicate whether the task is closed. as indicated by the
emphasized lines.
To make the process of creating the database slightly easier, rather than
requiring a user to execute the data import manually with SQLite, we'll create
a function that subscribes to a Pyramid system event for this purpose. By
subscribing a function to the
ApplicationCreated event, for each time we
start the application, our subscribed function will be executed. Consequently,
our database will be created or updated as necessary when the application is
started..
To make those changes active, we'll have to specify the database location in
the configuration settings and make sure our
@subscriber decorator is
scanned by the application at runtime using
config.scan().
We now have the basic mechanism in place to create and talk to the database in
the application through
request.db.
Step 4 - View functions and routes¶:
Note that our imports are sorted alphabetically within the
pyramid
Python-dotted name which makes them easier to find as their number increases.
We'll now add some view functions to our application for listing, adding, and closing todos.
List view¶.
New view¶. Insert the
following code immediately after the
list_view.
Warning
Be sure to use question marks when building SQL statements via
db.execute, otherwise your application will be vulnerable to SQL
injection when using string formatting.
Close view¶
This view lets the user mark a task as closed, flashes a success message, and
redirects back to the
list_view page. Insert the following code immediately
after the
new_view.
NotFound view¶
This view lets us customize the default
NotFound view provided by Pyramid,
by using our own template. The
NotFound view is displayed by Pyramid when
a URL cannot be mapped to a Pyramid view. We'll add the template in a
subsequent step. Insert the following code immediately after the
close_view.
Adding routes¶
We finally need to add some routing elements to our application configuration if we want our view functions to be matched to application URLs. Insert the following code immediately after the configuration setup code.
We've now added functionality to the application by defining views exposed through the routes system.
Step 5 - View templates¶
The views perform the work, but they need to render something that the web browser understands: HTML. We have seen that the view configuration accepts a renderer argument with the name of a template. We'll use one of the templating engines, Mako, supported by the Pyramid add-on, pyramid_mako.
We'll also use Mako template inheritance. Template inheritance makes it possible to reuse a generic layout across multiple templates, easing layout maintenance and uniformity.
Create the following templates in the
templates directory with the
respective content:
layout.mako¶>
list.mako¶>
new.mako¶>
notfound.mako¶
This template extends the master
layout.mako template. We use it as the
template for our custom
NotFound view.
# -*- coding: utf-8 -*- <%inherit <div id="notfound"> <h1>404 - PAGE NOT FOUND</h1> The page you're looking for isn't here. </div>
Step 6 - Styling your templates¶.
Step 7 - Running the application¶. Open a web browser to the URL to view and interact with the app. | http://pyramid-cookbook.readthedocs.io/en/latest/sample_applications/single_file_tasks.html | 2018-02-17T21:09:55 | CC-MAIN-2018-09 | 1518891807825.38 | [array(['../_images/single_file_tasks.png',
'../_images/single_file_tasks.png'], dtype=object)] | pyramid-cookbook.readthedocs.io |
.
Using the BTM API. database instead of line 8.
Using the Resource Loader
A connection factory configuration utility is also bundled with BTM. It is convenient to use it rather than create your connection factory way is to bind the
PoolingConnectionFactoryyourself or via some application server specific configuration. This is the approach used with Jetty.
- Yet another way (in case the JNDI context is read only, like in Tomcat) is to bind a bitronix.tm.resource.ResourceFactory object, passing it a javax.naming.Reference containing a javax.naming.StringRefAddr containing the connection factory's
uniqueNameas
addrTypesomewhere in your JNDI tree. The
bitronix.tm.resource.ResourceFactoryclass will just return the connection factory with the specified
uniqueName. This is explained more in-depth in the Tomcat integration page.
- The last way is to call bitronix.tm.resource.ResourceRegistrar.get(String uniqueName). This is the least preferred method as this ties your code to BTM which you probably want to avoid. | http://docs.codehaus.org/pages/viewpage.action?pageId=9240652 | 2015-05-22T11:50:21 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.codehaus.org |
Difference between revisions of "Administrators"
From Joomla! Documentation
Revision as of 20:55, 13 September 2012 Miscellaneous Management Tasks
- 13 Administrators Documentation Projects and Open Tasks
- 14 Other Ideas and Suggestions
<translate>
Joomla! Administrator's Manual
</translate> <translate> Note from the Doc Team: The intention is that the list of topics you see below should be task-orientated and not a "feature list" for Joomla! website administration. These items should address real and common activities that an Administrator will need to perform. </translate>
Managing a Joomla! Website
>
For more information, see Article Management.
{{:Menu Management/Reading list/<translate> en</translate>}}
</translate>
<translate> The Joomla! Documentation Wiki needs your help! Below is a list of pages/articles needed for this page. If you know of a topic which needs to be addressed, just add it to the List of Red Links below by using the following wikimarkup. </translate>
<translate> List of Red Links for needed Administrator Articles:
- none listed yet</translate>
<translate>
Other Ideas and Suggestions
List your ideas and suggestions here:
- We need more ideas and suggestions for improving this page. -The Doc Team
</translate> | https://docs.joomla.org/index.php?title=Administrators&diff=74366&oldid=11418 | 2015-05-22T12:32:14 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.joomla.org |
Prevent additional devices from connecting to your mobile hotspot
- On the home screen, click the connections area at the top of the screen, or click the Manage Connections icon.
- Click Networks and Connections > Mobile Hotspot Connections.
- Select the Don't allow any more devices to connect checkbox.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/41695/1329198.jsp | 2015-05-22T11:36:44 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.blackberry.com |
Sample Application Overview
Local Navigation
Files in the sample application
LocalizationDemo.java
This file defines the LocalizationDemo class , the LocalizationDemoScreen class, and the InfoScreen class.
The LocalizationDemo class extends the UiApplication class, and contains the following constructor and methods:
- main(String[ ] args) : contains a new instance of the application named theApp, provides the entry point to the sample application, and starts the main thread using theApp.enterEventDispatcher( )
- LocalizationDemo() : constructs a new LocalizationDemo object and pushes a LocalizationDemoScreen object on to the display stack
The LocalizationDemoScreen class extends the MainScreen class, and implements the LocalizationDemoResource and FieldChangeListener interfaces. It contains the following constructor and methods:
- LocalizationDemoScreen() : initializes the screen by creating the RichTextField, SeparatorField, and LabelField objects
- fieldChanged(Field field, int context) : implementation of the FieldChangeListener interface listens for a BlackBerry® device user to select a locale and displays the menu when the ObjectChoiceField changes
The InfoScreen class extends MainScreen, implements LocalizationDemoResource and displays the information for the currently selected locale.Resource files
Next topic: System requirements
Previous topic: Featured interfaces
Was this information helpful? Send us your comments. | http://docs.blackberry.com/zh-cn/developers/deliverables/33805/Localization_sample_app_files_1791764_11.jsp | 2015-05-22T11:31:48 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.blackberry.com |
.
Lately, I've had noticeably more people asking me about Tapestry and why one should choose it over the other (Java) web frameworks. To me, Tapestry is a good compromise, just like Java is. Linus Torvalds, my fellow country man, has famously said "performance almost always matters". There are so many aspects to web development, and performance is often seen as one of the smallest of your problems because in the end "it always comes down to the database". However, a high performing framework solves many other problems. Today, a typical, reasonably well-implemented Java web application on a modest hardware can serve hundreds of concurrent requests, thousands of concurrent users and tens of thousands of users a day from a single server. Most start-ups never need to worry about the scaling out problem until they actually have the money to pay for it. Unfortunately, you can also easily make the implementation horribly slow, suffering from scalability problems from the get-go and even more unfortunately, it's easier to go wrong with some Java frameworks than with others. For what Tapestry offers, the performance of the framework itself, both in terms of cpu and memory consumption is simply phenomenal. Performance matters.
However, I really don't want to make this post about Tapestry's performance. As soon as you mention one thing about a particular framework, people tend to place it in that category and forget about everything else.. Tapestry doesn't force you to a certain development model - such as using sessions, always post, single url, ajax-only, thick RIA etc. If you just need to handle a specific case, such as building a single-page, desktop-like application for web, you could pick GWT, Flex or Vaadin, but if you are a building a generic, mixed static/dynamic content site with multiple pages you'd undoubtedly pick entirely different set of tools. Tapestry though, is an "enabling" technology - you could use it together with all three aforementioned RIA frameworks. You could also use and people have used Tapestry-IoC alone in non-web desktop applications. Not a whole lot of other "web" frameworks can claim suitability for such diverse use cases. Sadly, comprehensiveness of a framework can be a somewhat difficult area to objectively compare so each framework usually resorts to toting their best features to prove their superiority over others.
One criteria I personally use a lot in comparing effectiveness of competing solutions is their expressiveness and succinctness. Now, everybody knows that Java is a butt-ugly language (though it makes up on other departments, like performance and comprehensiveness).. Minimal effort required for creating Tapestry components makes it easy to refactor your application logic into reusable elements, rather than having to repeat yourself. Patterns in object-oriented languages are a well studied and accepted principle, but only a few (IoC) frameworks besides Tapestry IoC manages to have a framework level support for implementing common ones, such as chain of command, strategy and pipelines.
For Tynamo, I've said it before but I just don't think we could have achieved the same CRUD functionality with any other framework. Certainly anything can be done, but the cost of it would have both been far higher and we would have needed to build much more infrastructure. When we moved from Tapestry 4 to Tapestry 5 (and from Trails to Tynamo), it was amazing to see how we were able to simplify our implementation and remove huge amounts of code while keeping the concept unchanged and making it all more modular at the same time. Using a different stack, you could probably get closest to what tapestry-model is with a combination of Wicket and Spring, but allowing the same level of extensibility would undoubtedly be more cumbersome. Back in Trails, we actually had one person working on a pure Spring (MVC + core) implementation of the same concept but it died a slow death. As the documentation states, tapestry-model produced "default model is highly customizable, you can change pretty much anything you need, and make the changes specific to type, page or instance - a feature that very few other CRUD frameworks offer". The big difference is that when you need to customize the model, you don't have to rewrite it all, you'll be just customizing the pages and overriding components as needed.
Perhaps we've gone a bit overboard with modularity, but since it's just that simple with Tapestry, most of our modules are independently usable but seamlessly work together in the same web application as soon as you add them to the classpath. Today, Tynamo is much more than just tapestry-model, the CRUD framework. Tapestry-security, tapestry-conversations and tapestry-resteasy are all steadily gaining popularity and based on the page views, it seems that tapestry-security is poised to become our most popular module offering at some point. On that note, I have a few new supplemental modules for tapestry-security coming up which should be of interest to others as well, but more on that in a separate post. For now, I hope I've been able to give some answers to why at Tynamo, we think we've made the right choice with Tapestry and I'm confident that 2011 will be the best year yet both for Tapestry and Tynamo! | http://docs.codehaus.org/display/TYNAMO/2011/01/ | 2015-05-22T11:44:15 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.codehaus.org |
-rc-1 is the latest packaged preview of the next version of Groovy. Visit the Roadmap to find out further details.
Download zip: Binary Release | Source Release
Download Windows-Installer: Binary Release
Download documentation: JavaDoc and zipped online documentation
Other ways to get Groovy
If you're on Windows, you can also use the NSIS Windows installer.
If you use Ubuntu or a Debian based Linux, you can download binary packages: Groovy Ubuntu packages.
You may download other distributions of Groovy from this site.
If you prefer to live on the bleeding edge, you can also grab the source code from SVN.
If you wish to embed Groovy in your application, you may just prefer to point to your favourite maven repositories or the codehaus snapshot maven repository.
If you are an IDE user, you can just grab the latest IDE plugin and follow the plugin installation instructions. | http://docs.codehaus.org/pages/viewpage.action?pageId=16580697 | 2015-05-22T11:44:23 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.codehaus.org |
Download and buy work apps
In the
BlackBerry World
for Work storefront, tap an item.
In the upper-right corner of the screen, tap the button with the price displayed on it.
To change your payment method before you pay for an item, in the
Bill Through:
drop-down list, tap a payment method.
Tap
Purchase
.
Parent topic:
BlackBerry World for Work | http://docs.blackberry.com/en/smartphone_users/deliverables/50498/amc1354635666079.html | 2015-05-22T12:01:35 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.blackberry.com |
The Agent is a special-purpose thread-safe non-blocking implementation inspired by Agents in Clojure...
For latest update, see the Agent section of the User Guide and the respective Demos. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=131432512 | 2015-05-22T11:34:01 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.codehaus.org |
Getting Started Page Index/1.5 From Joomla! Documentation (Redirected from Page Index) Index to the other documents in this series Start Here: Introduction to Getting Started with Joomla! for version 1.5 Beyond the basics: doing more Jooomla! Books, links and helpful resources Retrieved from ‘’ Categories: Navigation boxesArchived version Joomla! 1.5 | https://docs.joomla.org/Page_Index | 2015-05-22T11:47:31 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.joomla.org |
Sends.
For more information about formatting messages, see Send Custom Platform-Specific Payloads in Messages to Mobile Devices.
For information about the common parameters that all actions use, see Common Parameters.
The message you want to send to the topic.
If.
See the Examples section for the format of the JSON object.
Constraints: Messages must be UTF-8 encoded strings at most 256 KB in size (262144 bytes, not 262144 characters).
JSON-specific constraints:
Publishcall to return an error (no partial delivery).
Type: String
Required: Yes
Message attributes for Publish action.
Type: String to MessageAttributeValue map
Required: No:
Either TopicArn or EndpointArn, but not both.
Type: String
Required: No
The topic you want to publish to.
Type: String
Required: No
The following
element is
returned in a structure named
PublishResult.
Unique identifier assigned to the published message.
Length Constraint: Maximum 100 characters
Type: String
For information about the errors that are common to all actions, see Common Errors.
Indicates that the user has been denied access to the requested resource.
HTTP Status Code: 403
Exception error indicating endpoint disabled.
HTTP Status Code: 400
Indicates an internal service error.
HTTP Status Code: 500
Indicates that a request parameter does not comply with the associated constraints.
HTTP Status Code: 400
Indicates that a request parameter does not comply with the associated constraints.
HTTP Status Code: 400
Indicates that the requested resource does not exist.
HTTP Status Code: 404
Exception error indicating platform application disabled.
HTTP Status Code: 400
The following example publishes the same message to all protocols: > | http://docs.aws.amazon.com/sns/latest/api/API_Publish.html | 2015-05-22T11:30:21 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.aws.amazon.com |
Peek at your BBM notifications
From almost any screen in BBM, you can quickly see all of your BBM notifications. For example, while looking at a picture someone shared, you can peek at how many new BBM updates or invitations you have.
- On
or
, slide your finger right. You might need to hide the keyboard to see the icons.
- To return to what you were doing, slide your finger left.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/47561/laf1348847280825.jsp | 2015-05-22T11:56:57 | CC-MAIN-2015-22 | 1432207924991.22 | [array(['mes1344622640483_lowres_en-us.png',
'Illustration of a tap and slide on the left side of the screen, over the back icon.'],
dtype=object) ] | docs.blackberry.com |
Timeindexindexindexing can be as diverse as audio and video selection and processing, server log file maintenance and management, financial time-series processing, sensor data storage and processing.
Timeindexing is suitable in many business sectors, including: | http://docs.codehaus.org/display/TIX/Home | 2015-05-22T11:50:58 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.codehaus.org |
.7
Groovy 1.7, the latest major and stable version of the popular dynamic language for the JVM, has been released. To learn more about the novelties, make sure to read the release notes. In a nutshell, Groovy 1.7 provides support for Anonymous Inner Classes and Nested Classes, annotations, SQL, Groovy console and Grape enhancements, the nicer Power Assert assertion, an AST Viewer and an AST Builder, a fully rewritten GroovyScriptEngine, and much more!_1<<
Samples
A simple hello world script:
A more sophisticated version using Object Orientation:
Leveraging existing Java libraries:
On the command line:
Documentation [more]
Getting Started Guide
How to install and begin using Groovy as well as introductory tutorials.
User Guide
Provides information about using the Groovy language including language facilities, libraries and programming guidelines.
Cookbook Examples
Illustrates larger examples of using Groovy in the Wild with a focus on applications or tasks rather than just showing off the features, APIs or modules.
Developer Guide
Contains information mainly of interest to the developers involved in creating Groovy and its supporting modules and tools.
Testing Guide
Contains information of relevance to those writing developer tests or systems and acceptance tests.
Advanced Usage Guide
Covers topics which you don't need to worry about initially when using Groovy but may want to dive into to as you strive for Guru. | http://docs.codehaus.org/pages/viewpage.action?pageId=201719960 | 2015-05-22T11:46:20 | CC-MAIN-2015-22 | 1432207924991.22 | [array(['/download/attachments/1866/JAX-Innovation-Award.png?version=1&modificationDate=1177665250968&api=v2',
None], dtype=object)
array(['/download/attachments/1866/groovydukemed.jpg?version=1&modificationDate=1176732937408&api=v2',
None], dtype=object) ] | docs.codehaus.org |
.
Setting up the source folders
There are several ways to set up your maven project to recognize Groovy source files:..
Project Lomb.
The compiler plugin was originally described here and here, but these posts are no longer updated and this page will always contain the more recent information. | http://docs.codehaus.org/pages/viewpage.action?pageId=231735675 | 2015-05-22T11:47:50 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.codehaus.org |
public interface DroolsJaxbHelperProvider
DroolsJaxbHelperProvider is used by the DroolsJaxbHelperFactory to "provide" it's concrete implementation. This class is not considered stable and may change, the user is protected from this change by using the Factory api, which is considered stable.
This api is experimental and thus the classes and the interfaces returned are subject to change.
String[] addXsdModel(Resource resource, KnowledgeBuilder kbuilder, com.sun.tools.xjc.Options xjcOpts, String systemId) throws IOException
resource- The resource to the XSD model
kbuilder- the KnowledgeBuilder where the generated .class files will be placed
xjcOpts- XJC Options
systemId- XJC systemId
IOException
javax.xml.bind.JAXBContext newJAXBContext(String[] classNames, Map<String,?> properties, KnowledgeBase kbase) throws javax.xml.bind.JAXBException
classNames- An array of class names that can be resolved by this JAXBContext
properties- JAXB properties
kbase- The KnowledgeBase
javax.xml.bind.JAXBException | http://docs.jboss.org/jbpm/v5.2/javadocs/org/drools/builder/help/DroolsJaxbHelperProvider.html | 2015-05-22T12:30:14 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.jboss.org |
This document contains information for an outdated version (2.4) and may not be maintained any more. If some of your projects still use this version, consider upgrading as soon as possible.
2.4.0-alpha1 (2009-11-11)
Changelog
New Features
- [rev:91044] Added Session::destroy() as a means to remove the current session using session_destroy()
- [rev:90036] Allow Text/Varchar fields to be configured to differentiate between NULL and empty string. (#4178, petebd)
- [rev:89827] If there is no Name set, but there is an author, use the author's name (from r89650)
- [rev:89221] batch actions for setting/resetting embargo/expiry (from r85397)
- [rev:89194] SiteConfig (from r85339)
- [rev:89193] Add a simple interface for administrating permission roles. (from r85297)
- [rev:89190] SiteConfig (from r85339)
- [rev:89189] Add a simple interface for administrating permission roles. (from r85297)
- [rev:89176] Add another permission code that allows users to edit siteconfig without having admin priveleges (from r87261)
- [rev:89157] Virtual pages now copy allowed children from the page they are
- [rev:88992] Added MigrateSiteTreeLinkingTask to allow plain HTML links to be migrated into shortcode links. From: Andrew Short
- [rev:88516] Added a SideReport to display all pages with broken page or file links. From: Andrew Short
- [rev:88510] Re-instated broken link highlighting by manually checking all shortcodes in HtmlEditorField->Field(), and adding a class to broken ones. From: Andrew Short
- [rev:88508] Added RequestHandler->allowedActions() to return a unified representation (including extensions) of all allowed actions on a controller.
- [rev:88505] Added RequestHandler->hasAction() and Controller->hasAction() to check if a specific action is defined on a controller.
- [rev:88503] Updated SiteTree::get_by_link() to integrate with translatable, and allow it to work across languages by implementing Translatable->alternateGetByLink().
- [rev:88496] Refactored RootURLController to allow nested home pages.
- [rev:88492] Updated HtmlEditorField to use DOMDocument to more reliably parse image tracking and shortcode link tracking data. From: Andrew Short
- [rev:88484] Added SiteTree::get_by_link() to fetch the SiteTree object associated with a nested link.
- [rev:88483] Allow you to access nested pages by falling over to a child page in ContentController if one is available. From: Andrew Short
- [rev:88481] Allow you to link to SiteTree? objects in HTMLText or HTMLVarchar fields by using a "[sitetree_link id=n]" shortcode. From: Andrew Short
- [rev:88474] Refactored ViewableData. The main changes are:
- [rev:88472] Added the Shortcode API (ShortcodeParser) to allow you to replace simple BBCode-like tags in a string with the results of a callback. From: Andrew Short
- [rev:88468] Added utility methods to enable and disable nested URLs to SiteTree. From: Andrew Short
- [rev:88104] added extend() call to enable FieldHolder() html to be customized via extensions.
- [rev:85789] Added Widget_Controller class to enable nested forms within Wiget class.
API Change
- )
- [rev:90097] replaced Database::alteration_message() with DB::alteration_message()
- [rev:90076]75]59] Added dev/tests/build, which runs everything, meaning that dev/tests/all doesn't need to run PhpSyntaxTes
- [rev:89988] Add extra classes to WidgetHolder (#3855, patch from jshipman)
- [rev:89841] Fixed change in r89716 to be more semantic with FileIFrameField
- [rev:89726] TableListField customQuery and customCsvQuery won't automatically include ID, ClassName, and RecordClassName fields (from r87354)
- [rev:89708] Change the way that Database::requireField() gets field type information from the underlying database driver. (from r82793)
- [rev:89209] Added SapphireTest::logInWithPermission() (from r89012)
- [rev:89205] Don't automatically set a default action on complex table fiels. It leads to too many accidental clicks when trying to click a non-default action. Still allow for people to explicitly select a default action. (from r88961)
- [rev:89187] Added PermissionRole and PermissionRoleCode, along with relevant tests for the permission system. (from r85173)
- [rev:88991] Updated Form->FormAction() to use Controller::join_links() rather than relying on the action parameter (to preserve b/c). From: Andrew Short
- [rev:88797] HTTPRequest and HTTPResponse no longer inherit from Object, since they should not be extended. From: Andrew Short
- [rev:88700] SSViewer and SQLQuery no longer inherit from Object, since they should not be extended. From: Andrew Short
- [rev:88632] Added Debug::$friendly_error_header and Debug::$friendly_error_detail for customising the friendly error message. (from r69855)
- [rev:88507] Decoupled ErrorPage::response_for() from the request and updated it so it will only return a response if an appropriate error page can be found.
- [rev:88503] Moved lang_filter enabling & disabling into static methods on Translatable, and renamed to locale_filter.
- [rev:88495] #3724: Unified the Link() method to accept an action parameter. From: Andrew Short
- [rev:88296] support for advanced database options now included
- [rev:88295] The advancedOptions variable now passed to the database connection
- [rev:88294] $database_extensions static variable now supported
- [rev:88293] The advancedOptions variable now passed to the database connection
- [rev:88123] Requiring TRANSLATEALL or TRANSLATE
<locale>permission for authors without administrative access to edit translations
- [rev:87894] array brackets removed for generation of field types
- [rev:87893] Transaction stubs created
- [rev:87568] array data type now supported
- [rev:87567] array data type now supported
- [rev:87566] array data type now supported
- [rev:87565] array data type now supported
- [rev:87564] array data type now supported
- [rev:87563] array data type now supported
- [rev:87562] array data type now supported
- [rev:87561] array data type now supported
- [rev:87560] array data type now supported
- [rev:87559] array data type now supported
- [rev:87558] array data type now supported
- [rev:87557] array data type now supported
- [rev:87555] array data types now supported by dev/build
- [rev:87087] Added name argument to DB::getConn() and DB::setConn(), so that you can store multiple named connections.
- [rev:86006] Removed Permission->listcodes(), use custom code
- [rev:86002] Don't exempt 'index' controller actions from $allowed_actions check - they might still contain sensitive information (for example ImageEditor). This action has to explicitly allowed on controllers with $allowed_actions defined now.
- [rev:85789] Removed unnecessary WidgetFormProxy class and Widget->FormObjectLink(), broken functionality since the RequestHandler restructuring in 2.3. Use Widget_Controller instead.
- [rev:85073] Added DataObjectSet assertions to SapphireTest
- [rev:85028] Added comparison argument to SSLog::add_writer()
- [rev:84828] Added SSLogFileWriter to replace Debug::log_errors_to() and Debug::log_error_if_necessary() - the existing formatting for the Debug deprecation functions is now wrapped into SSLogErrorFileFormatter
- [rev:84774] Debug::send_errors_to() and Debug::send_warnings_to() are deprecated in favour of SSLog. See class documentation for SSLog on configuration of error email notifications
- [rev:84570] added onAfterSave in LeftAndMain
- [rev:84523] Refactor CMSMenu internals to not generate the menu item list until its actually needed, rather than from a CMSMenu::populate_menu() call in cms/_config.php. This lets an app/_config.php file actually manipulate the menu.
- [rev:84521] If you can't create a given dataobject type, then don't show an import form in modeladmin
- [rev:84161] Deprecated DataObject::databaseFields() in favour of the static DataObject::database_fields()
- [rev:84160] Extension no longer inherits from Object.
- [rev:84151] Make Object::uninherited_static() have a separate execution path to Object::get_static(), for more reliable operation. The intention is that for any given static, you either use Object::get_static() or you use Object::uninherited_static() - not both.
- [rev:84061] Database and Query no longer inherit from Object, since they shouldn't be extended with Extensions.
Bugfixes
- [rev:91209] Return correct error when 404 page doesn't exist and page is not found.
- [rev:91203] Fix concurrent editing message always being displayed on page version history.
- [rev:91156] Returning TRUE on Translatable->hasTranslation() if called on a record that is in the current locale (merged from r91032)
- [rev:91047] Don't failover to standard value in ViewableData_Customised if the customised value is defined but isn't set. $obj->customise(array('Content'=>'')) should set Content to ''
- [rev:91045] Session::destroy() should make use of setcookie() to remove the cookie from the user, unsetting the superglobal doesn't unset from the browser
- [rev:91036] Added setup/teardown methods to SiteTreeBrokenLinksTest? to make it work with Translatable enabled (merged from r91033)
- [rev:90964] use second argument only if its an array (from r90927)
- [rev:90936] Fixed pages not being manipulated properly in the CMS because of a PHP error in CMSBatchAction
- [rev:90934] MSSQL does not support double, using float instead (from r90928)
- [rev:90876] Added ContentController->successfullyinstalled() to $allowed_actions
- [rev:90857] applied patch from #4381. Observable doesnt play nice with jQuery (from r82094)
- [rev:90855] Added rewriteHashlinks = 'php' option to SSViewer so that static publisher can handle internal hashlinks properly. (from r89612)
- [rev:90854] Pass locale rather than language to spellchecker_languages (from r87869)
- [rev:90853] Fixed Links to Moderate Comments from the CMS and front end. MINOR: removed complextable functions which no longer get called, moved logic to the PageComment Class (from r86325)
- [rev:90852] Tied rollback action to edit, rather than publish, permission, since it only involves editing the draft site. (from r84957)
- [rev:90851] Fix Form.FieldMap, used when constructing forms that have the HTML explicitly specified.
- [rev:90850] Allow null default on MultiEnum fields
- [rev:90849] Fixing the comment's author website url being converted to lowercase: now case is not affected. (from r84380)
- [rev:90848] CMSMenuItem constructor now calls parent to respect inheritance (from r83586)
- [rev:90845] Fixed bugs in content differencer, and improved styling. BUGFIX: fixed notice when getting title of member which didnt exist. Merged from trunk r77661. (from r81942)
- [rev:90842] Added rewriteHashlinks = 'php' option to SSViewer so that static publisher can handle internal hashlinks properly. (from r89611)
- [rev:90834] was being passed to foreach without a check to see if it's an array or not. (from r86202)
- [rev:90833] Added required javascript files (behaviour, prototype, prototype_improvements) to the Field() method of TreeSelectorField.php (from r84320)
- [rev:90831] WidgetArea now works. Can have multiple areas on a page, and has unit tests
- [rev:90747] Fixed Text::scaffoldFormField() showing a "Is Null" checkbox, even if nullifyEmpty is true
- [rev:90644] Fixed "Class not found CMSBatchAction_Unpublish ..." in BatchActionHandler.php, since this class was removed in r90489
- [rev:90632] Make DataObject::dbObject('ClassName') work.
- [rev:90595] When deleting a WidgetArea, delete all the widgets it contains.
- [rev:90554] #4609: Fixed portoguese locales in common locales list.
- [rev:90553] #4617: Make delete formatted images case-insensitive.
- [rev:90552] #4642: Fixed creation of folders in non-english languages.
- [rev:90551] Fixed glitch in permission code formats.
- [rev:90550] Fixed glitch in permission code formats.
- [rev:90548] #2476: Rename lowercase tables to correct casing if they have been transferred from a windows box.
- [rev:90547] #4063: Corrected base tag for IE6
- [rev:90196] fixed typo
- [rev:90082] Don't skip flushCache() extension if $cache_get_one is empty on DataObject->flushCache()
- [rev:90056] UTF-8 byte order mark gets propagated from template files (#4357)
- [rev:90051] Remove blockquote from tinymce default plugin list - blockquote isnt a plugin in tinymce3.
- [rev:90047] Some places want tableList() to have lower case, some want native case - return both!
- [rev:90023] Security::$default_login_dest isn't used (#4179, simon_w)
- [rev:90020] Reenable setting size on HasManyComplexTableField popups (#3921, rjmackay)
- [rev:89911] Fixing regression in TranslatableTest due to outdated singleton caching.
- [rev:89893] Moved SINGLETON resetting for test runs from SiteTreeTest/ObjectTest into SapphireTest - there should be no caching between all test invocations to avoid side effects
- [rev:89881] Reset $_SINGLETONS cache in SiteTreeTest::tear_down() to avoid stale Translatable information. This broke SiteTreePermissionTest and SiteTreeTest when running in parallel with Translatable enabled.
- [rev:89864] Added setup/teardown methods to CMSMainTest to fix test breakages when used alongside cmsworkflow module (which unsets the public batch action)
- [rev:89863] Added setup/teardown methods to SiteTreeBacklinksTest to make it work with Translatable enabled
- [rev:89825] Fix comment feed on SQLServer (from r89641)
- [rev:89823] Made dragndropping possible for folders in ajax-expanded tree. Also fixed glitch in r82534 that made page drag and drop impossible (from r82571)
- [rev:89821] repaired dragndropping files into nested directories - now code refers to the updated object which is initially hidden and zero sized (from r82534)
- [rev:89812] If image does not exist in the file system, don't show a non-object error when viewing the Image/File record in AssetTableField (from r82390)
- [rev:89811] Paging of search results now works for AssetTableField by overloading the TableListField link methods (from r81190, r82188)
- [rev:89798] Removed double up of classes in TestRunner::coverage() (from r88463)
- [rev:89731] Fixed ModelAdmin_CollectionController->Link() return value
- [rev:89719] Folder::syncChildren() now uses far less memory - we do this by destroying the child object memory after use (from r82780)
- [rev:89718] Fixed array to string conversion error in Date::setValue() (from r82749)
- [rev:89716] disabling user ability to upload images into the CMS from their local computer (from r82573)
- [rev:89715] Ensure that FileIFrameField gets the proper class, this could be a subclass of File instead
- [rev:89714] Ensure that before creating default 404 error page, we don't have one already that exists with a record ID (from r81991)
- [rev:89460] Hard code the migration task to use Content instead of the no-longer-used FieldName. This should probably be improved to iterate over all HTMLText fields on the model.
- [rev:89444] Removed SiteTree::rewriteLink() method that is no longer necessary due to the use of shortcodes.
- [rev:89338] Respecting SiteTree->canDelete() in SiteTree->getCMSActions()
- [rev:89333] Removed 'name' attribute from HeaderField markup - its invalid HTML to put in
<h*>elements (#4623)
- [rev:89328] Fixed CMSSiteTreeFilter
- [rev:89236] Fixed SiteTree->validURLSegment() to perform a DataObject::get_one() instead of raw SQL, in order for decorated filtering (e.g. by the subsites module) to apply correctly.
- [rev:89215] Detect a brokenh link on an incompletely specified redirector page. (from r89043)
- [rev:89213] Fixed link generation in CTF default action (from r89026)
- [rev:89208] Fixed image link rewriting and added a test. (from r89011)
- [rev:89206] Fixed diabled image references for edit and delete links in CTF (from r88967)
- [rev:89204] If a CTF without a show action is made readonly, don't add the show action back. (from r88960)
- [rev:89203] Fixed resolution of amibiguous has_many foreign keys in ComplexTableField to use the same logic as DataObject (from r88945)
- [rev:89200] Fixed inversion of condition created in r88869 (from r88905)
- [rev:89199] AuthorID field for page version history wasn't being set. (from r88894)
- [rev:89183] Fixed generation of static cache files in subdirectories. (from r88569)
- [rev:89177] Fix image tracking not working cross subsite (from r88008)
- [rev:89175] Fix broken link tracking of linked files (from r87252)
- [rev:89172] Fix error when adding roles tab (from r86997)
- [rev:89170] Fix image tracking to take resized images into account (from r86198)
- [rev:89169] Fix items not deleting on tablefields (from r86099)
- [rev:89163] Fixed RequestHandler->allowedActions() lowercasing of actions - was applying the logic, but not writing it back to the $actions array.
- [rev:89161] Don't return empty value from ViewableData->XML_val() if the actual value is an uncasted 0 integeter (or anything else evaluating to untyped boolean false)
- [rev:89003] Added PageComments to ContentController::$allowed_actions so commenting works. From: Andrew Short
- [rev:88989] Reset broken file & link flags in HtmlEditorField->saveInto() before determining if a record contains broken links. From: Andrew Short
- [rev:88957] Fixed missing default english text for "Clear" and "Search" buttons in template CMSMain_left.ss
- [rev:88956] #4605 DataObject::newClassInstance() should repopulate it's defaults after changing to an instance of a different class, otherwise databases will complain of NULL values being written to columns that don't accept NULL values.
- [rev:88799] Updated ModelAsController->findOldPage() query to be cross-database compatible. From: Andrew Short
- [rev:88774] Stopped HtmlEditorField->saveInto() from dying when encountering a link that cannot be made relative. From: Andrew Short
- [rev:88773] Suppressed errors in SS_HTMLValue->setContent() so it can handle malformed HTML.
- [rev:88752] error messages suppressed as a temporary fix
- [rev:88664] Fixed bugs in ViewableData casting system. From: Sam Minnee
- [rev:88639] Set publication base_url on every URL, just in case it gets re-set by some rogue script (from r73510)
- [rev:88523] Fix regression in r88521 that prevented the index action from being explictly disabled by setting the * key in allowed_actions
- [rev:88522] Improved reliability of PhpSyntaxTest on build slave.
- [rev:88521] Ensure that the index action works even if allowed_actions is set.
- [rev:88513] #3858: Updated StaticExporter to handle nested pages. From: Andrew Short
- [rev:88512] #3724: Updated Link() methods to accept an action parameter. From: Andrew Short
- [rev:88508] Updated Controller->hasAction() to use RequestHandler->allowedActions() so that extension actions are recognised. From: Andrew Short
- [rev:88503] Fixed viewing a translatable page by URL without explicitly setting a Locale in ContentController->handleRequest(). From: Andrew Short
- [rev:88494] Fixed Controller::join_links() to properly handle multiple consecutive slashes. From: Andrew Short
- [rev:88493] Use Link() on the controller to generate to form action path. From: Andrew Short
- [rev:88473] #3862: Explicitly defined browsing and viewing actions on CodeViewer. From: Andrew Short
- [rev:88471] #2133: Removed UniqueTextField JavaScript that was causing URLSegments to be incorrectly rewritten if they had a number at the end. From: Andrew Short
- [rev:88469] Updated HTTP::findByTagAndAttribute() to be more versatile, especially when dealing with attributes containing special characters. From: Andrew Short
- [rev:88218] #3685: Fixed setting of character set by default when no content negotiator is used.
- [rev:88145] Added setup/teardown routines to SiteTreeActionsTest to avoid breaking with Translatable enabled on recent canTranslate()/canEdit() extensions
- [rev:88143] Added setup/teardown routines to SiteTreeTest and SiteTreePermissionsTest to avoid breaking with Translatable enabled on recent canTranslate()/canEdit() extensions
- [rev:88139] Changed CMSMain->LangSelector() to always return a DropdownField, which ensures the 'Locale' parameter is always available to be passed along with ajax queries
- [rev:88138] Filter both 'available' and 'new' languages in LanguageDropdownField for canTranslate() permissions
- [rev:88124] Added required permissions to TranslatableSearchFormTest
- [rev:88003] Fixed CSVBulkLoaderTest not to assume ID ordering in the assertions, which breaks with databases not ordering by PK automatically (e.g. Postgres)
- [rev:88000] Fixed SearchContextTest capitalization of string assertions
- [rev:87926] Fixed SearchFilterApplyRelationTest not to assume ID ordering in the assertions, which breaks with databases not ordering by PK automatically (e.g. Postgres)
- [rev:87925] Fixed SoapModelAccessTest not to assume ID ordering in the assertions, which breaks with databases not ordering by PK automatically (e.g. Postgres)
- [rev:87922] Fixed RestfulServerTest not to assume ID ordering in the assertions, which breaks with databases not ordering by PK automatically (e.g. Postgres)
- [rev:87913] Fixed ID associations in TableListFieldTest (was assuming numerically ascending IDs, which isnt necessarily true in Postgres)
- [rev:87897] tests which aren't supported by Postgres temporarily disabled
- [rev:87456] #4579: Translatable's call to new LanguageDropdownField() broked
- [rev:87234] Fix MemoryLimitTest not to fail on machines with <1G of memory and later versions of PHP 5.2.x that check available memory before setting memory_limit setting.
- [rev:87228] Fixed bug in recent changes to Hierarchy::liveChildren() to do with Translatable
- [rev:87210] Fixed Hierarchy::liveChildren() to work on PostgreSQL
- [rev:87131] Don't throw a notice-level error in DB::getConn() if connection hasn't been set yet, to mimic previous behaviour.
- [rev:86876] Fixed content-type for SapphireSoapServer->wsdl() (#4570, thanks Cristian)
- [rev:86556] missplaced quotes were ruining unit tests
- [rev:86532] $params variable removed
- [rev:86414] Fixed SearchFilterApplyRelationTest to match new SearchContext->addFilter() API: Needs the full name including relation instead of the ambiguous stripped name. This matches DataObject->scaffoldSearchFields() and getDefaultSearchContext()
- [rev:86218] Initializing controllers through init() in WidgetArea->WidgetControllers()
- [rev:86217] Return FALSE in SapphireTest->assertDOSEquals() if no valid DataObjectSet is passed in
- [rev:86170] ID column in delete function now quoted properly
- [rev:86085] Don't lowercase permission codes contained in $allowed_actions in RequestHandler->checkAccessAction(). Permission checks are case sensitive.
- [rev:86060] Made SecurityAdminTest more resilient against changes to localized strings, by inspecting the CSV line-by-line instead
- [rev:86008] Consistently returning from a Security::permissionFailure() to avoid ambiguous situations when controllers are in ajax mode
- [rev:85817] Fixed WidgetControllerTest by adding missing url routing to ContentController (see r85789)
- [rev:85758] Detecting DataObjectSet for readonly transformations in CheckboxSetField (thanks martijn, #4527)
- [rev:85713] moved $versionAuthor variable invocation into a check for the existence of the $record variable on which it depends (Ticket #4458)
- [rev:85696] Ticket #4220 - Copying of uploaded files from temp to assets folder fails on IIS installs; simple patch fixes it
- [rev:85514] More robust URL handling in SecurityTest to avoid failing on custom /admin redirects
- [rev:85513] More robust URL handling in CMSMainTest to avoid failing on custom /admin redirects
- [rev:85336] Fixed SiteTree::can_edit_multiple() and canEdit() to collect permissions from different Versioned tables, which fixes querying a SiteTree record which has been deleted from stage for its permissions (e.g. in SiteTreeActionsTest)
- [rev:85330] Disabled PHPUnit backup of global variables, which caused i18n::_t() calls in subsequent test cases to fail because of a cached empty global
- [rev:85310] Limiting i18n::include_by_locale() to scan directories only
- [rev:85281] Implementing TestOnly interface in ModelAdminTest to avoid having it included automatically in CMSMenu and hence breaking other tests like LeftAndMainTest.
- [rev:85157] #4423: Don't allow page duplication if canCreate is false.
- [rev:85136] #3713 Escape HTTP request URL properly in DebugView::writeError() using htmlentities()
- [rev:85130] merge r 85079 from branches/iss to fix Payment Validation of php side when submit a OrderForm
- [rev:85120] Fix the bug in buildSQL() by trying to join an table with non-exsiting composite db field like "Money"
- [rev:85086] #4463: Set AuthorID and PublisherID correctly
- [rev:85085] Use default File classname in Folder::syncChildren()
- [rev:85076] #3228 Fixed undefined offset error in Text::BigSummary() if trying to summarise text that is smaller than the requested word limit
- [rev:85039] SelectionGroup.js typo, prevAl()l change to nextAll()
- [rev:84980] Fixed issues with recent CMSMenu refactoring.
- [rev:84976] SelectionGroup should include jQuery and jQuery livequery plugin when it's used or it will fail
- [rev:84971] Fixed code for regenerating cached test manifest.
- [rev:84843] #4486 Make use of DataObject::get_by_id() in File::getRelativePath() instead of building ID query manually in a get_one()
- [rev:84796] Fixed querying of composite fields (broken due to inappropriate optimisation of hasField)
- [rev:84789] Reverted some changes from r84163 because they broke cases where you have two fields of the same name on different subclasses.
- [rev:84167] Performance improvement to Member::currentUserID()
- [rev:84166] Performance imporvement to i18n::include_by_locale
- [rev:84164] Removed deprecated (and slower) eregi_replace
- [rev:84162] Removed some code that needed Extension to extend from Object.
- [rev:84156] Ameneded r84151 so that the application order of extensions is the same as it was previously.
- [rev:84155] Ameneded r84151 so that the application order of extensions is the same as it was previously.
- [rev:84147] Added static resetting methods for more reliable test execution.
- [rev:84093] Fixed SQLQuery::filtersOnID() for cases where a ClassName filter is inserted before the ID filter.
- [rev:84092] Fixed filtering by archive date
- [rev:84086] an time field input between 12:00pm to 12:59pm can't save back to database or always save as 00:00:00.
- [rev:84079] VirtualPages won't call SiteTree_Controller anymore.
- [rev:84068] Restored SiteTree::canView() functionality.
- [rev:84066] Fixed some bugs in the performance fixes on Permission
- [rev:84065] Fixed manifest builder tests to not have fake data, and to test that classes can be in files with different names
- [rev:84064] Removed Requirements::combine_files() reference to non-existent cms/javascript/FilterSiteTree.js
- [rev:84063] Don't make the Director completely fail if there was output prior to session_start() being called.
- [rev:84000] prevent a nasty permissions situation where no one but an admin can edit a new page
- [rev:83999] prevent a nasty permissions situation where no one but an admin can edit a new page
- [rev:83970] Using standard SQL and SSDatetime::now() in SideReport_RecentlyEdited (see #4052)
- [rev:83969] Fixed SiteTreeActionsTest to use unconditional class defintion - was failing due to recent changes in ClassInfo and class_exists()
Enhancement
- [rev:91044] Added optional $sid parameter to Session::start() to start the session using an existing session ID
- [rev:90084] Changed Hierarchy->numChildren() caching to be instance specific and respect flushCache(). This increases the amount of queries on large sets, but decreases the time for a single instance call (implemented in r89999)
- [rev:89999] Only run a single query per class for Hierarchy::numChildren()
- [rev:89883] Improved CMSSiteTreeFilter API to make it easier to create custom filter.s (from r89071, from r88465)
- [rev:89820] Current search and current page of asset section are persistent. Fixes the open source ticket #4470 and also a part of #4256 (from r84091)
- [rev:89815] FilesystemSyncTask: If folderID GET parameter is available, only synchronise that folder ID - useful for only synchronising a specific folder and it's children (from r82841)
- [rev:89813] Return the results of the FilesystemSyncTask to the status message in the CMS instead of a generic success message (from r82618)
- [rev:89724] Filesystem::sync() now accepts a folderID argument, meaning you can specifically target a folder and it's children to sychronise, instead of everything (from r82840)
- [rev:89717] Filesystem::sync() will now return the number of added and deleted files and folders instead of null (from r82616, 82617 and 82724)
- [rev:89182] Fixed sapphire execution if you run the uninstalled sake from a foreigh directory. (from r88533)
- [rev:88635] Added Member::set_login_marker_cookie(), to let developers bypass static caching for logged-in users (from r73803)
- [rev:88633] Make base tag in 404 page dynamic (from r72282)
- [rev:88570] Improved performance of ViewableData casting by removing an object::get_static call From: Sam Minnee
- [rev:88518] #3729: Updated the link inserter to insert a shortcode rather than a plain HTML link. From: Andrew Short
- [rev:88505] Updated ContentController->handleRequest() to use Controller->hasAction() to check whether to fall over to a child page, rather than relying on an error response from Controller->handleRequest(). From: Andrew Short
- [rev:88504] Cached the value for RootURLController::get_homepage_link() between calls. From: ajshort
- [rev:88502] Updated the SiteTree URLSegment conflict resolver to work with nested URLs.
- [rev:88501] Updated SiteTree CMS fields to be in line with nested URL changes. From: Andrew Short
- [rev:88499] Refactored ModelAsController to only grab the first page of a request, then pass control on to it. From: Andrew Short
- [rev:88491] #3279: Updated the link inserter to insert a shortcode rather than a plain HTML link. From: Andrew Short
- [rev:88489] Updated the SiteTree link and section code to derive data from the current page, rather than relying on its own cache.
- [rev:88488] Added Hierachy->getAncestors() to return all the parent objects of the class in a DataObjectSet. From: Andrew Short
- [rev:88487] Update ContentController to manually set the current Director page in handleRequest().
- [rev:88482] Refactored TreeDropdownField to generate and manage the tree using Hierachy's ParentID data, rather than relying on the client. From: Andrew Short
- [rev:88480] Added ErrorPage::response_for() to get a response for a HTTP error code and request.
- [rev:88479] Added ModelAsController::controller_for() to link a SiteTree object to its controller. From: Andrew Short
- [rev:88478] Added HTTPRequest->isMedia() to check if a request is for a common media type. From: Andrew Short
- [rev:88477] Added Controller->hasActionTemplate() to check if a template exists for a specific action. From: Andrew Short
- [rev:88476] Updated SiteTree linking methods to generate nested URLs.
- [rev:88474] Added template and value methods to database fields.
- [rev:88139] Passing sitetree instance to CMSMain->LangSelector() in order to trigger canTranslate() filtering
- [rev:88125] Added Translatable->getAllowedLocalesForMember()
- [rev:88123] Added Translatable->providePermissions()
- [rev:87777] Added ComponentSet->getComponentInfo() (#4587, thanks rorschach)
- [rev:86506] Database specific version of RANDOM() created
- [rev:86402] Added SearchContext->getFullName() to preserve the original fieldname including the relationship
- [rev:86338] Added TableListField->paginationBaseLink
- [rev:86216] Supporting full parameter signature in Versioned->Versions(), allVersions()
- [rev:86027] Limiting "alc_enc" cookie (remember login token) to httpOnly to reduce risk of information exposure through XSS
- [rev:86026] Added full parameter signature of PHP's set_cookie() to Cookie::set(), including the new $httpOnly flag
- [rev:86021] Avoid information disclosure in Security/lostpassword form by returning the same message regardless wether a matching email address was found in the database.
- [rev:86017] Added Member->FailedLoginCount property to allow Member->registerFailedLogin() to persist across sessions by writing them to the database, and be less vulnerable to brute force attacks. This means failed logins will persist longer than before, but are still reset after a valid login.
- [rev:85823] Allowing Widget->Content() to render with any templates found in ancestry instead of requiring a template for the specific subclass
- [rev:85789] Added handleWidgets() to ContentController to support new Widget_Controller class
- [rev:85736] added tinymce valid element to allow style, ids and classes on any element to allow for styling hooks. Ticket: #4455
- [rev:85731] hide unmoderated page comments from the page comment RSS feed. Ticket #4477
- [rev:85716] Ticket #3910 - MySQL Time Zone support (alternative time zone to that of the website to which the server is set to)
- [rev:85709] added option to truncate (clear) database table before importing a new CSV file with CSVBulkerLoader and ModelAdmin.
- [rev:85700] Ticket #4297 - Use Director::baseFolder instead of relative links in sapphire/core/Image.php
- [rev:85281] Filtering out TestOnly classes in CMSMenu::get_cms_classes()
- [rev:84860] convert SelectionGroup.js from prototype version to jQuery version
- [rev:84816] Updated SSLogErrorEmailFormatter to support NOTICE priority level logging
- [rev:84774] Added SSLog, SSLogEmailWriter and SSLogErrorEmailFormatter for silverstripe message reporting
- [rev:84165] Improved performance of DataObject::hasField()
- [rev:84160] Object::__construct() performance improved slightly.
- [rev:84159] Improved performance of Object::uninherited_static()
- [rev:84158] Improved performance of Object::allMethodNames() and Object::addMethodsFrom()
- [rev:84149] add more assertion in SearchFilterAapplyRelationTest to test more cases for many_many relation.
- [rev:84117] add more assertion in SearchFilterAapplyRelationTest to test more cases for many_many relation.
- [rev:84113] add "InnerJoin" clause for an has_many component's ancestry classes for SearchFilter::applyRelation() so that an searchfliter could filter on that component's ancestry's field. add unit tests for this enhancement and r83500
- [rev:84073] added new permission, SITETREE_REORGANISE
- [rev:83798] #3638: There is no longer any need to have the class name match the PHP filename
- [rev:83789] Added ClassInfo::is_subclass_of() for better performance
- [rev:83674] sitetree filters now show up in a dropdown, and you can add your own by extending CMSSiteTreeFilter
Other
- [rev:90071] Merge branch 'master' of :sminnee/sapphire From: Sam Minnee
- [rev:89715] (from r82175)
- [rev:89702] Merge branch 'master' of :sminnee/silverstripe-cms From: Sam Minnee
- [rev:89224] slightly later, so FormResponses can be overridden if necessary. (from r85614)
- [rev:89220] ENHANCMENT side reports can now have parameters (from r85329)
- [rev:89207] ENHANCMENT improved reporting around broken links/files (from r88993)
- [rev:89186] #108 - Subsite Virtual Page ordering (from r84848)
- [rev:89178] as they are confusing. (from r88019)
- [rev:89174] #148 - Stable against restructures (from r87251)
- [rev:89157] pointing at. (from r85197)
- [rev:89155] #63 - Stable against restructures (from r84861)
- [rev:88638] Add support for configuring multiple static publisher on a single site (from r70203)
- [rev:88637] Basic authentication now (back) in configurefromenv.php (from r82551)
- [rev:88527] Added readme for github From: Sam Minnee
- [rev:88525] Added readme for GitHub copy of SilverStripe. From: Sam Minnee
- [rev:88474] * Removed ViewableData_ObjectCustomised - now just uses ViewableData_Customised.
- [rev:87896] Transaction test created
- [rev:86684] Merged in Requirements::combine_files() fix from branches/2.3 - r83048
- [rev:86679] Merged in Member::sendInfo() bug fixes from branches/2.3 - r85779
- [rev:86678] Merged in Email template codes change from branches/2.3 - r84594
- [rev:86676] Merged in parent::__construct() additions from branches/2.3 - r83580 and r83587
- [rev:86669] Merged Text::ContextSummary() changes from branches/2.3 - r82035 and r82036
- [rev:86655] Patched to allow id|class|style|title attributes in all elements and allow empty td cells (will pad with non-breaking space) in line with #4332 and 4497 in 2.3.x changes to cms/LeftAndMain.php
- [rev:84981] Ensure that DataObject->ClassName is set on object instantiation
- [rev:84970] Made timing code for test runner more accurate (includes initial db build):
- [rev:84814] ENHANCMENT: get svn merged revision 84806:84808 from branches/iss
- [rev:84163] ENHANCMENT: Low-level performance improvements in database access. | http://docs.silverstripe.org/en/2.4/changelogs/alpha/2.4.0-alpha1 | 2015-05-22T11:31:34 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.silverstripe.org |
C4DtoA is shipped with an API for 3rd party developers to write modules (basically custom translators).
The API is a static library located in the $C4D_INSTALL/plugins/C4DtoA/api folder. To add it to your project you have to add the following settings:
In the code just simply include the c4dtoa_api header.
If you want to use any C4DtoA node or parameter ids:
UI framework
C4DtoA has its own UI generator framework which utilizes Arnold's node structure and the Arnold Metadata API. This means widgets of Arnold node parameters are generated automatically by the type and metadata of the parameters and parameter values are automatically exported.
Each parameter has a unique id which is generated from the node entry name and parameter name. To generate an id use the paramid_generator tool of the API. For example, run the following command to get the id of the Kd_color parameter of the standard node:
$C4D_INSTALL/plugins/C4DtoA/api/bin/paramid_generator standard.Kd_color
In the .res file use the AIPARAM data type. This tells C4DtoA that this parameter belongs to the Arnold node so the framework will generate the required widget in this position. Each parameter type has its own widget, some custom widget can be defined in metadata files (.mtd), see Metadata file (.mtd).
AIPARAM C4DAIP_STANDARD_KD_COLOR {}
Parameters which are not listed in the resource file will be added to a tab called "Unsorted" until you did not hide them directly in the metadata file.
NOTE: 3rd party custom widgets cannot be added via the API at the moment.
Labels are generated from the name of the parameter by default. Sometimes we want custom labels. For example, we want to display Diffuse color instead of Kd color. In this case, we have to define the label in the .str file just like by a normal C4D parameter.
C4DAIP_STANDARD_KD_COLOR "Diffuse color";
Some special flags which used by the framework can be defined in the metadata file (.mtd).
Node flags:
Parameter flags:
An example definition:
[node standard] c4d.classification STRING "surface" c4d.menu STRING "shader/surface" c4d.command_id INT 1031853 [attr Kd] c4d.gui_type INT 6 [attr direct_diffuse] min FLOAT 0.0 softmax FLOAT 1.0 c4d.step FLOAT 0.01 | https://docs.arnoldrenderer.com/plugins/viewsource/viewpagesrc.action?pageId=119769113 | 2021-11-27T04:47:20 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.arnoldrenderer.com |
MCLivingEntityUseItemStartEvent
Link to mclivingentityuseitemstartevent
Fired process.StartEvent;
Extending MCLivingEntityUseItemEvent
Link to extending-mclivingentityuseitemevent
MCLivingEntityUseItemStartEvent extends MCLivingEntityUseItemEvent. That means all methods available in MCLivingEntityUseItemEvent are also available in MCLivingEntityUseItemStartEvent | https://docs.blamejared.com/1.16/zh/vanilla/api/event/entity/living/MCLivingEntityUseItemStartEvent | 2021-11-27T04:40:22 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.blamejared.com |
Domino 4.3¶
4.3.0 (August 2020)¶
New Features
Domino now generates short-lived JWT tokens that can be used to authenticate to third party resources, data sources, and the Domino API. Tokens are made available in Workspaces, Jobs, or Apps. Learn more about Domino’s JWT tokens here.
All components of Domino On-demand Spark clusters are now displayed with key infrastructure and configuration information in the administrator’s execution dashboard.
Changes
fleetcommand-agentv25 released to support 4.3.0 installation.
Domino now uses version 10.0.1 of Keycloak.
Domino now uses version 7.7.1 of Elasticsearch.
Fixed an issue that prevented hardware tiers from functioning as intended if the Cores Limit field was left blank. Execution specifications would still be produced with a limit equal to the CPU request. Now, leaving the Cores Limit field blank will correctly produce execution specifications with no CPU limit.
Fixed an issue that prevented GPU-based Spark clusters from functioning properly.
Fixed an issue where restarting a node would cause apps in Domino to fail.
Fixed an issue that prevented workspaces from properly detecting file changes when performing volume recovery.
Fixed an issue that prevented workspace recovery with a salvaged volume if a Domino session was expired.
Fixed an issue where the base image in an Environment changed when other non-image changes were made to the Environment.
Fixed an issue that prevented archived workspaces and jobs from appearing in their corresponding dashboard. | https://docs.dominodatalab.com/en/4.6.2/release_notes/4-3.html | 2021-11-27T05:05:28 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.dominodatalab.com |
Procore
Increase the value, usage, and adoption of Procore Documents Tool in your company
PROCORE WITH ODRIVE
Collaborating over documents with a diverse set of stakeholders plays a significant role in making your construction projects successful. With odrive, you can bring even more value to files in Procore Company Documents and Project Documents repositories by making them more accessible to you, your team, and any outsiders that you work with.
CORE BENEFITS FOR PROCORE USERS
- Sync for Mac, Windows, and Linux (command-line). Files and folders are presented as placeholder cloud files initially that don't take up any space on your hard drive. Content is only downloaded on demand when you access the file for the first time. Save disk space and bandwidth with odrive's progressive sync.
- File sharing with both insiders and outsiders that you need to work with. Share secure weblinks with an optional password and expiration date to conveniently transfer files to anyone. Or share storage collaboratively with other odrive users by inviting them to odrive Spaces backed by your Procore storage.
GETTING STARTED
Choose a sign-in provider
When you sign up with odrive, you can choose a sign-in provider such as Procore.
Choose a sign-in provider. Select Procore if you want to sign in to odrive using your Procore credentials.
Do I have to choose Procore as my sign-in provider?
No. Your authentication provider is separate from whatever storage you link to odrive. You can sign up using Facebook credentials, for example, and then link your Procore storage separately later on. However, if your Procore Admin is rolling out odrive to your company, they may require that you sign in using Procore, so it may be useful to check in with them first.
After signing up, you can link additional storage accounts. We automatically link any storage associated with your sign-in provider initially. So if you chose Procore as your sign-in provider, we'll link your Procore account's Company Documents and Project Documents storage when your odrive account is created.
Click on the + Link Storage option in your odrive home to add more storage accounts.
Can I link multiple Procore accounts?
Yes, you can link multiple accounts from the same provider. Many users like that they can connect their work and personal Google Drive accounts as well. There's no limit on how many storage accounts you can link, including Procore accounts.
Download and install the desktop sync app
Once you're finished adding storage, you can click on the links shown below in your odrive home to download the desktop client.
CONTENT ORGANIZATION
Once your Procore storage is connected to odrive, you'll be able to see your company documents and project documents inside of your odrive folder. You'll see your files under the following paths:
procore_storage_link_name/company/company documents/
procore_storage_link_name/company/projects/project_name/
See the screenshot below.
Shortened Company Names
Your company name folder may appear slightly differently because odrive automatically shortens company names to make the overall path shorter (a frequently requested feature). See the FAQ question below for more information.
Example:
Procore Company, Inc. gets shortened to ProcoreC (8 characters).
EXPLORE ODRIVE FEATURES
Here's a list of recommended features to try. There are links to the general usage guide within the list so you can get more information about each item if the brief explanation is not enough.
Take advantage of your free 7-day trial of Premium
When you sign up for odrive, you are automatically enrolled in a 7-day free trial of Premium features. We recommend that you start trying the features below since they represent capabilities that other Procore Procore Procore. Please contact [email protected] in order to have a discussion about your needs.
Business Subscriptions start at $15/month per user, but alternative discounted plans (such as site-wide licenses for all of your company's Procore:
- Information sheet on how to accelerate adoption of Documents tool with odrive.
- Procore storage?
No. You can link your Procore storage (and any other storage) to odrive with a free account. You only need Premium to utilize any of the Premium universal storage features that odrive unlocks for any linked storage.
Are there any reasons why I shouldn't pick Procore Procore storage separately later on.
Your Procore admin may wish for you to create your user account using your Procore credentials, however, especially if they are paying for Premium for you. So please talk to your Procore admin if you are unsure.
Note that if you lose the ability to log in to Procore, you won't be able to sign in to your odrive anymore. So if you left your company and your company disabled your Procore Procore account.
What Procore permissions are used when I access my own storage through odrive?
Each time you try to link Procore storage, you have to specify what Procore user you are using to connect with. The backend storage permissions (Procore storage permissions) of the user account used to link to Procore will dictate the permissions that are enforced by the Procore API when you make changes within that storage link in odrive.
This is independent of what sign-in provider you chose to use. Your sign-in provider and the storage accounts you have linked are separate and decoupled.
Example
Let's say you signed up with Procore User A as your sign-in credentials. We automatically link Procore User A's storage for convenience, but you can always delete this link if you don't want it.
Next, you choose to link another Procore storage using Procore User B's credentials. Even though you signed up for odrive using Procore User A, you always have Procore User B's permissions when you access your storage under Procore User B's storage folder in odrive.
It is the storage credentials that dictate the level of permission that is enforced by the backend storage provider, Procore.
What Procore permissions are obeyed when I share files from Procore?
Weblinks:
Files and folders shared using weblinks are by nature read-only. Weblinks to distribute content to anyone, but recipients cannot make changes to your files. read-only permissions to the Documents tool within Project A and I share an odrive Space to a folder within this project, the other space members will only gain read access to the files there. They will be unable to make changes since the user that shared access with them does not have permission to make changes to the folder either.
If I had standard access to the project's Documents, then those that I shared my space with would be able to upload changes to files.
The permission level of the Procure user account that was used to link the underlying storage is the permission level that is obeyed for all space members when creating a space against that storage.
Is the Documents tool version history and audit information preserved?
Yes, when files are uploaded through odrive to Procore, all of the Documents tool features such version history and audit information are maintained.
To recover a prior version, you'll need to use the Procore web client interface. Go to the corresponding company level or project level Documents tool page.
How are Company Names shortened?
Company names are automatically shortened as follows:
- Remove all spaces and any period at the end of the name.
- Shorten to 8 characters. If two or more company names would have conflicting short names, we skip this rule for those particular company names.
Examples
-
Simple case (no conflicts, apply rules 1 and 2):
Procore Company, Inc. gets shortened to ProcoreC (8 characters).
-
Conflicts (only apply rule 1):
Procore Company, Inc. and Procore Champions get shortened to:
ProcoreCompanyInc and ProcoreChampions
Updated 2 months ago | https://docs.odrive.com/docs/procore | 2021-11-27T05:16:22 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['https://files.readme.io/bb44333-procore-plus-odrive.png',
'procore-plus-odrive.png'], dtype=object)
array(['https://files.readme.io/bb44333-procore-plus-odrive.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/50cb888-sign-in-providers.png',
'sign-in-providers.png Choose a sign-in provider. Select Procore if you want to sign in to odrive using your Procore credentials.'],
dtype=object)
array(['https://files.readme.io/50cb888-sign-in-providers.png',
'Click to close... Choose a sign-in provider. Select Procore if you want to sign in to odrive using your Procore credentials.'],
dtype=object)
array(['https://files.readme.io/4cd01d3-link-more-storage.png',
'link-more-storage.png Click on the **+ Link Storage** option in your odrive home to add more storage accounts.'],
dtype=object)
array(['https://files.readme.io/4cd01d3-link-more-storage.png',
'Click to close... Click on the **+ Link Storage** option in your odrive home to add more storage accounts.'],
dtype=object)
array(['https://files.readme.io/2b1a93e-download-sync.png',
'download-sync.png'], dtype=object)
array(['https://files.readme.io/2b1a93e-download-sync.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/db73682-Procore-docs-shortened-company-name.png',
'Procore-docs-shortened-company-name.png'], dtype=object)
array(['https://files.readme.io/db73682-Procore-docs-shortened-company-name.png',
'Click to close...'], dtype=object) ] | docs.odrive.com |
oceanspy.OceanDataset.set_grid_coords
- OceanDataset.set_grid_coords(grid_coords, add_midp=False, overwrite=None)[source]
Set grid coordinates used by
xgcm.Grid.
- Parameters
- grid_coords: str
Grid coordinates used by
xgcm.Grid. Keys are axes, and values are dict with key=dim and value=c_grid_axis_shift. Available c_grid_axis_shift are {0.5, None, -0.5}. E.g., {‘Y’: {‘Y’: None, ‘Yp1’: 0.5}} See
oceanspy.OCEANSPY_AXESfor a list of axes
- add_midp: bool
If true, add inner dimension (mid points) to axes with outer dimension only. The new dimension will be named as the outer dimension + ‘_midp’
- overwrite: bool or None
If None, raises error if grid_coords has been previously set. If True, overwrite previous grid_coors. If False, combine with previous grid_coors.
References | https://oceanspy.readthedocs.io/en/latest/generated/oceanspy.OceanDataset.set_grid_coords.html | 2021-11-27T06:03:00 | CC-MAIN-2021-49 | 1637964358118.13 | [] | oceanspy.readthedocs.io |
A GroovyObject facade for an underlying MBean which acts like a normal groovy object but which is actually implemented via an underlying JMX MBean. Properties and normal method invocations delegate to the MBeanServer to the actual MBean.
Construct a simple key based on the method name and the number of parameters
operation- - the mbean operation name
params- - the number of parameters the operation supports
Description of the specified attribute name.
attr- - the attribute
Description of the specified attribute name.
attributeName- - stringified name of the attribute
Get the description of the specified operation. This returns a Collection since operations can be overloaded and one operationName can have multiple forms.
operationName- the name of the operation to describe
Description of the operation.
operation- the operation to describe
List of string representations of all of the attributes on the MBean.
List of the names of each of the attributes on the MBean
The values of each of the attributes on the MBean
Description of all of the operations available on the MBean.
Names of all the operations available on the MBean.
Return an end user readable representation of the underlying MBean | http://docs.groovy-lang.org/docs/groovy-2.5.4/html/gapi/groovy/util/GroovyMBean.html | 2021-11-27T04:57:56 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.groovy-lang.org |
Using Apache Spark with Domino¶
The below video is a recording of a webinar titled Using Apache Spark in Domino, held in May 2019. The video covers how Domino interacts with Spark clusters, essential Domino features for supporting Spark, and how to handle some common Spark use cases and workflows in Domino.
Click here to view or download the slides used in the presentation.
For additional information and guides on setting up Domino for use with Spark, read the Hadoop and Spark Overview. | https://docs.dominodatalab.com/en/4.4/reference/spark/external_spark/Using_Apache_Spark_with_Domino.html | 2021-11-27T05:13:32 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.dominodatalab.com |
Create a Pages website from a template
Introduced in GitLab 11.8.
GitLab provides templates for the most popular Static Site Generators (SSGs). You can create a new project from a template and run the CI/CD pipeline to generate a Pages website.
Use a template when you want to test GitLab Pages or start a new project that’s already configured to generate a Pages site.
- From the top navigation, click the + button and select New project.
- Select Create from Template.
Next to one of the templates starting with Pages, click Use template.
-. | https://docs.gitlab.com/14.3/ee/user/project/pages/getting_started/pages_new_project_template.html | 2021-11-27T06:24:10 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.gitlab.com |
Transparency Note for Azure Cognitive Service for language
What is a Transparency Note?
An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, its capabilities and limitations, and how to achieve the best performance. Microsoft's Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI principles into practice. To find out more, see Responsible AI principles from Microsoft.
General principles
This Transparency Note discusses Azure Cognitive Service for Language and the key considerations for making use of this technology responsibly. There are a number of things you need to consider when deciding how to use and implement AI-powered products and features:
- Will this product or feature perform well in my scenario? Before deploying AI.
Introduction to Azure Cognitive Service for language
Azure Cognitive Service for language is a cloud-based service that provides Natural Language Processing (NLP) features for text mining and text analysis, including:
- Named Entity Recognition (NER), Personally Identifying Information (PII)
- Health specific functions
- Key Phrase Extraction
- Language Detection
- Sentiment Analysis and opinion mining
- Question answering
- Summarization
- Custom Named Entity Recognition (Custom NER)
- Custom Text Classification
- Conversational Language Understanding
Read the overview to get an introduction to each feature and review the example use cases. See the How-to guides and the API reference to understand more details about what each feature does and what gets returned by the system.
This article contains basic guidelines for how to use Azure Cognitive Service for language features responsibly, and specific guidance for a few features. Read the general information first and then jump to the specific article if you're using once of the features below.
- Transparency note for Named Entity Recognition and Personally Identifying Information
- Transparency note for the health feature
- Transparency note for Key Phrase Extraction
- Transparency note for Language Detection
- Transparency note for Sentiment Analysis
- Transparency note for Question answering
- Transparency note for Summarization
General guidelines to understand and improve performance
Understand confidence scores
The sentiment, named entity recognition, language detection and health functions all return a confidence score as a part of the system response. This is an indicator of how confident the service is with the system's response. A higher value indicates that the service is more confident that the result is accurate. For example, the system recognizes entity of category U.S. Driver's License Number on the text 555 555 555 when given the text "My NY driver's license number is 555 555 555" with a score of .75 and might recognize category U.S. Driver's License Number on the text 555 555 555 with a score of .65 when given the text "My NY DL number is 555 555 555". Given the more specific context in the first example, the system is more confident in its response. In many cases, the system response can be used without examining the confidence score. In other cases, you can choose to use a response only if its confidence is above a specified confidence score threshold.
Understand and measuring performance
The performance of Azure Cognitive Service for language features is measured by examining how well the system recognizes the supported NLP concepts (at a given threshold value in comparison with a human judge.) For named entity extraction (NER), for example, one might count the true number of phone number entities in some text based on human judgement, and then compare with the output of the system from processing the same text. Comparing the human judgement with the system recognized entities would allow you to classify the events into two kinds of correct (or "true") events and two kinds of incorrect (or "false") events.
Azure Cognitive Service for language features will not always be correct. You'll likely experience both false negative and false positive errors. It's important to consider how each type of error will affect your system. Carefully think through scenarios where true events won't be recognized and where incorrect events will be recognized and what the downstream affects will be in your implementation. Make sure to build in ways to identify, report and respond to each type of error. Plan to periodically review the performance of your deployed system to ensure errors are being handled appropriately.
How to set confidence score thresholds
You can choose to make decisions in your system based on the confidence score the system returns. You can adjust the confidence score threshold your system uses to meet your needs. If it is more important to identify all potential instances of the NLP concepts you want, you can use a lower threshold. This means that you may get more false positives but fewer false negatives. If it is more important for your system to recognize only true instances of the feature you're calling, you can use a higher threshold. If you use a higher threshold, you may get fewer false positives but more false negatives. Different scenarios call for different approaches. In addition, threshold values may not have consistent behavior across individual features of Azure Cognitive Service for language and categories of entities. For example, do not make assumptions that using a certain threshold for NER category Phone Number would be sufficient for another NER category, or that a threshold you use in NER would work similarly for Sentiment Analysis. Therefore, it is critical that you test your system with any thresholds you want to experiment with using real data it will process in production to determine the effects of various threshold values.
The quality of the incoming text to the system will affect your results
Azure Cognitive Service for language features only processes text. The fidelity and formatting of the incoming text will affect the performance of the system. Make sure you consider the following:
- Speech transcription quality may affect the quality of the results. If your source data is voice, make sure you use the highest quality combination of automatic and human transcription to ensure the best performance. Consider using custom speech models for better quality results.
- Lack of standard punctuation or casing may affect the quality of your results. If you are using a speech system, like Cognitive Services Speech to Text, be sure to select the option to include punctuation.
- Optical character recognition (OCR) quality may affect the quality of the system. If your source data is images and you use OCR technology to generate the text, incorrect text generated may affect the performance of the system. Consider using custom OCR models to help improve the quality of results.
- If your data includes frequent misspellings, consider using Bing Spell Check to correct misspellings.
- Tabular data may not be identified correctly depending on how you send the table text to the system. Examine how you send text from tables in source documents to the service. For tables in documents, consider using Microsoft Form Recognizer or another similar service, which will allow you to get the appropriate keys and values to send to Azure Cognitive Service for language so contextual keys can be sent close enough to the values for the system to properly recognize the entities.
- Microsoft trained its Azure Cognitive Service for language feature models (with the exception of language detection) using natural language text data that is comprised primarily of fully formed sentences and paragraphs. Therefore, using this service for data that most closely resembles this type of text will yield the best performance. We recommend avoiding use of this service to evaluate incomplete sentences and phrases where possible, as the performance here may be reduced.
- The service only supports single language text. If your text includes multiple languages like "the sandwich was Bueno", the output may not be accurate.
- The language code must match the input text language to get accurate results. If you are not sure about the input language you can use the language detection feature.
Fairness
At Microsoft, we strive to empower every person on the planet to do more. An essential part of this goal is working to create technologies and products that are fair and inclusive. Fairness is a multi-dimensional, sociotechnical topic and impacts many different aspects of our product development. You can learn more about Microsoft’s approach to fairness here.
One dimension we need to consider is how well the system performs for different groups of people. This may include looking at the accuracy of the model as well as measuring the performance of the complete system. Research has shown that without conscious effort focused on improving performance for all groups, it is often possible for the performance of an AI system to vary across groups based on factors such as race, ethnicity, language, gender, and age.
Each service and feature is different, and our testing may not perfectly match your context or cover all scenarios required for your use case. We encourage developers to thoroughly evaluate error rates for the service with real-world data that reflects your use case, including testing with users from different demographic groups.
For Azure Cognitive Service for language, certain dialects and language varieties within our supported languages and text from some demographic groups may not yet have enough representation in our current training datasets. We encourage you to review our responsible use guidelines, and if you encounter performance differences, we encourage you to let us know.
Performance varies across features and languages
Various languages are supported for each Azure Cognitive Service for language feature. You may find that performance for a particular feature is not consistent with another feature. Also, you may find that for a particular feature that performance is not consistent across various languages.
Next steps
If you are using any of the features below, be sure to review the specific information for that feature.
See also
- Transparency note for Named Entity Recognition and Personally Identifying Information
- Transparency note for the health feature
- Transparency note for Key Phrase Extraction
- Transparency note for Language Detection
- Transparency note for Question answering
- Transparency note for Summarization
- Transparency note for Sentiment Analysis
Also, make sure to review: | https://docs.microsoft.com/fr-fr/legal/cognitive-services/language-service/transparency-note?context=%2Fazure%2Fcognitive-services%2Ftext-analytics%2Fcontext%2Fcontext | 2021-11-27T06:07:34 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.microsoft.com |
Testing a Single Directory of Files in a Project
Testing a Single Source File
Testing a Single Project Under a Solution Folder
Testing a Single Source File When No Solution is Provided
Because the name of the solution is unknown, the solution path should start with
/:
Use the
-reference switch if you receive an "Unable to find reference assembly" message. | https://docs.parasoft.com/pages/diffpagesbyversion.action?pageId=38643211&originalVersion=3&revisedVersion=7 | 2021-11-27T05:29:35 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.parasoft.com |
You are viewing the RapidMiner Server documentation for version 9.5 - Check here for latest version
Username and group filtering
If you want to restrict access of RapidMiner Server to only a limited set of users of the LDAP server, then you can define username or group filters.
- If a username filter is defined then it limits the access to those users only whose username matches the defined filter condition.
- If a group filter is defined then it limits the access to those users only who belong to at least one LDAP group whose name matches the defined filter condition.
- When both filters are defined then a user can access RapidMiner Server if any of the above conditions are met.
Defining username and group filtering
Username and group filtering can be defined in the Administration > System Settings window of the web interface.
Open Administration > System Settings and the System Settings tab.
Set the com.rapidanalytics.access.userfilter property to set the username filter and enable filtering based on username.
Set the com.rapidanalytics.access.groupfilter property to set the group filter and enable filtering based on LDAP group membership.
If any of the above properties is removed from the table or exists in the table but left blank or contains only whitespaces then the corresponding filter is considered as undefined.
Filtering conditions to use
Username and group filters are regular expressions that allows you to create flexible conditions to limit user access. | https://docs.rapidminer.com/9.5/server/configure/users-groups/ldap/username-and-group-filters.html | 2021-11-27T05:14:31 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rapidminer.com |
Ethernet Port Lines
Ethernet port of the EM120 is of 10BaseT type. Onboard electronics of the EM120 do not include Ethernet magnetics, so magnetic circuitry must be connected externally. You can use either a standalone magnetics part (such as YCL-20F001N, schematic diagram shown below) or RJ45 connector with integrated magnetics.
It is important to make the PCB wire connections between the Ethernet port pins of the EM120). | https://docs.tibbo.com/soism/em120_pin_ether | 2021-11-27T06:17:46 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['em120_ether_arrangement.jpg',
'em120_ether_arrangement em120_ether_arrangement'], dtype=object)] | docs.tibbo.com |
The RVG 6500 System features the latest innovations in digital radiography—while still delivering the highest image resolution (> 20 lp/mm)..
- Built To Last
Thanks to rigorous design and testing all RVG sensors provide maximum durability and flexibility. Completely waterproof, RVG sensors can be safely submerged in disinfectant. The shock-resistant cases and silicon padding offers protection from falls, bites, and other damage. In addition, a lead barrier.
- Highest Image Quality
Delivering the highest image resolution (20 lp/mm) in the industry, the RVG 6500 System is the result of decades of experience in digital radiography. Our advanced technology with optical fiber yields high image resolution, while greater exposure latitude helps ensure optimal image capture over a wide range of exposure—making it an ideal choice for even the most complex dental applications, including endodontic examinations.
- Mobility and Affordability
Versatile and easy to share, the RVG 6500 System is an affordable solution for any practice. Wi-Fi technology eliminates the need for a wired connection to your computer—so you can easily move the sensors from one examination room to another. The sensor can even be used in a variety of configurations, making it an ideal choice for both single- and multi-chair practices. These benefits extend the latest technology to your practice—making the RVG 6500 System the most affordable option for practices going digital. And, with no need for extensive rewiring to set up additional monitors in each operatory, the cost of the RVG 6500 System is an attractive solution for any practice. | https://tools4docs.com/products/Kodak-RVG-6500-Digital-Intra%252doral-Sensor-For-Dental-Radiography.html | 2021-11-27T05:51:54 | CC-MAIN-2021-49 | 1637964358118.13 | [] | tools4docs.com |
Remote Desktop¶
Introduction¶
Using a VNC server will enable you to access the GUI of Windows running on your LattePanda from a different PC on your local network. TightVNC is a free and easy way to set up this service.
VNC stands for “Virtual Network Computing”. It is a way of transmitting the keyboard and mouse events of one computer to another - in other words, it uses the information from one computer to remotely control aspects of other computers which are connected along the same network. This is useful, because you might not have extra monitors, keyboards, or mice for available use – using a VNC service enables you to access several computers on your local network using just one computer, monitor, keyboard and mouse. You might also have headless servers set up which might benefit more from a remote connection to the master server rather than a vast amount of peripherals attached to each server. Thus, setting up a VNC server on your headless server is an effective way to interface with its GUI whenever you should need to connect to it.
Let’s get started:
Step 1 - Installation¶
1.Download and install TightVNC for Windows on your LattePanda. Choose 32-bit or 64-bit depending on your system architecture. (LattePanda Standard is 32-bit, LattePanda Enhanced is 64-bit)
2.End-User Licence Agreement
Accept the licence agreement and click next
3.Choose Setup Type
Typical installation will install both TightVNC server and TightVNC viewer on your LattePanda
Custom installation allows you to select which elements to install. In this situation, all we need is the server itself (unless you would like to be able to view other PCs on your network through the LattePanda, in which case you will need to install the viewer, as well).
For this tutorial, we will perform the typical installation method.
4.Select Additional Tasks
Make sure all of the boxes are checked
5.Install TightVNC
Click Install to begin!
6.TightVNC Server: Set Passwords
Password Access
At this point it is advised to set a password for remote access. To do this, click on the radio button, “Require password-based authentication,” and then choose a password. Then, retype your password into the next textbox directly underneath.
In this example, the password has been set to “lattepan” (since the password cannot be longer than 8 characters)
7.Administrative Password
This is not strictly necessary. In this tutorial I will not set an administrative password, but you may if you wish. If you set a password for this portion, you will have to enter it each time before changing any configuration settings.
When you are happy with your settings, click “OK”. Click “Finish” to exit the setup wizard.
Step 2 - Configuration¶
You should now see a new icon in your system tray. (If you don’t, try logging out and logging back in to your PC).
Here you can see the IP address of your PC. (If you cannot hover your mouse over the icon and see the IP address, you can also view the IP address that your computer is using via the command prompt.)
Double click this icon to bring up the service configuration window. These default settings should be fine for our purposes of initially setting up the servers.
Next, you will need to go on to the computer that you would like to use
Step 3 - Testing¶
Open TightVNC Viewer. A window will appear for a new TightVNC Connection. At this point, you need to input the IP address of your LattePanda.
Tip: A quick way to find this IP address is to hover over the system tray TightVNC icon on your LattePanda. A hint will pop up with “TightVNC Service -
” You can also go into your router control interface and look for attached devices. The next step is to input this IP address into the New TightVNC Connection Window, followed by the port number you set in the service settings. The default port is 5900.
<ip address of LattePanda>:<port number>
e.g. 192.168.2.60:5900 Click connect. If all goes well you will be prompted to input a password. Please input the password that you created earlier. In this example, this password was “lattepan”. Then, press enter.
You will now see a window containing your LattePanda’s GUI! You can now control it remotely!
This concludes the LattePanda VNC tutorial. If you have any questions or comments, please let us know in the forums. We hope this has has helped. | http://docs.lattepanda.com/content/1st_edition/tools/ | 2021-11-27T04:53:29 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image001.png',
'vnc_image001'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image002.png',
'vnc_image002'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image003.png',
'vnc_image003'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image004.png',
'vnc_image004'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image005.png',
'vnc_image005'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image006.png',
'vnc_image005'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image007.png',
'vnc_image007'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image008.png',
'vnc_image008'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image009.png',
'vnc_image009'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image010.png',
'vnc_image010'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image011.png',
'vnc_image011'], dtype=object)
array(['http://www.lattepanda.com/wp-content/uploads/2016/02/vnc_image012.png',
'vnc_image012'], dtype=object) ] | docs.lattepanda.com |
SDK - Client ModuleSDK - Client Module
The 8base SDK provides a convient way of initializing an API client to start making GraphQL calls to a workspace.
This client library is used by the other 8base service packages to make requests to the 8base API. You can also use it independently to make custom requests to the API.
UsageUsage
The
Client module exposes a number of different methods that can be used to configure itself. Those functions are listed below with relevant descriptions.
In the most basic case, the client can be used to query public resources from a given workspace.
/* Import client module */ import { Client } from '@8base/api-client'; /* Instantiate new instance with workspace endpoint */ const client = new Client(''); /* Run a query with a callback handler */ client.request(` query { __schema { types { name } } } `).then(console.log);
Should an
idToken or
apiToken need to get set, the
client.setIdToken(tk) method can be used. Under the hood, this will set the supplied value as a Bearer token header on subsequent requests.
/* Set the Token */ client.setIdToken('MY_API_TOKEN') /* Run a query with a callback handler */ client.request(` query { privateResourceList { items { id } } } `).then(console.log);
Client MethodsClient Methods
setIdToken(token: String!)setIdToken(token: String!)
Update the id token.
setRefreshToken(token: String!)setRefreshToken(token: String!)
Update the refresh token.
setEmail(email: String!)setEmail(email: String!)
Update the user email.
setWorkspaceId(id: String!)setWorkspaceId(id: String!)
Update the workspace identifier.
request(query: GraphqlString!, variables: Object)request(query: GraphqlString!, variables: Object)
Send request to the API with variables that will be used when executing the query. . Returns promise to be resolved.
/* Set variables */ const variables = { search: "ste" } /* Set query */ const query = /* GraphQL */` query($search: String!) { resourceList(filter: { name: { contains: $search } }) { items { id } } } ` /* Run a query with a callback handler */ client.request(query, variables).then(console.log);
AlternativesAlternatives
There any a number of ways developers can connect to their workspace and begin executing queries. The
Client module is only one of them! If you're curious about alternatives for how you can create a client, check out the following video. However, remember that all GraphQL calls are only HTTP post requests – and connecting to your 8base workspace is no different! | https://docs.8base.com/docs/development-tools/sdk/api-client/ | 2021-11-27T05:38:35 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.8base.com |
Domino service filesystem¶
Overview¶
This article describes the filesystem structure you will find in Domino Runs.
The filesystem root (
/) contains the following directories.
/ ├── bin ├── boot ├── dev ├── domino # contains datasets directory ├── etc ├── home ├── lib ├── lib32 ├── lib64 ├── media ├── mnt # contains working directory ├── opt ├── proc ├── root ├── run ├── sbin ├── scripts ├── srv ├── sys ├── tmp ├── usr └── var
Domino working directory¶
When you start a Run from a Domino project, your project files and some additional special files and directories are loaded into the Domino working directory. There are two different paths where you may find this directory, depending on how your project is configured:
By default, your working directory will just be
/mnt. The folders and files from your project will be in that directory, along with the special files and folders described below.
If your project is set up to import another project, your working directory will instead be
/mnt/<project-owner-username>/<project-name>.
Note that Domino sets the
DOMINO_WORKING_DIR special environment
variable for all Runs, and it will always contain the path to your
working directory.
Inside your working directory you will find your project files. Additionally, the following folders and files have special significance in the working directory:
DOMINO_WORKING_DIR/ ├── ipynb_checkpoints # folder with your auto-saved Jupyter states ├── results # folder with your generated results └── stdout.txt # tail of the console output from your Run ├── requirements.txt # add this file to specify python package dependencies ├── .dominoresults # controls which files are rendered as results ├── .dominoignore # add file patterns here for Domino to ignore ├── .dominokeep # add this to an empty folder to make Domino keep it ├── dominostats.json # values written here are shown in the Jobs dashboard ├── email.html # used to format your own notification emails ├── .noLock # create this file to remediate "too many open files" ├── app.sh # put your app-launching code here for Domino Apps ├── domino.log # in local CLI projects only; contains CLI logs └── .domino.vmoptions # in local CLI projects only; contains proxy settings
Learn more about: | https://docs.dominodatalab.com/en/latest/reference/projects/Domino_service_filesystem.html | 2021-11-27T04:59:02 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.dominodatalab.com |
Date: Thu, 01 Feb 2007 23:35:35 -0800 From: Garrett Cooper <[email protected]> To: [email protected] Subject: Re: Easy USB-drive automounter and "filemanager" for nontechies? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
Jason Morgan wrote: > On Thu, Feb 01, 2007 at 09:29:39PM -0500, Rod Person wrote: >> On Thu, 01 Feb 2007 19:52:27 -0500 >> Chris Shenton <[email protected]> wrote: >> >>> I'm looking for something like she'd get on a Mac or PC: >>> >>> 1. a way to automount the USB 'drive' when she plugs in >>> 2. a visual filemanager or some other friendly way for her to see >>> files and copy them off so she can mail them or whatnot. >>> 3. a way to safely unmount the USB device when she's done >>> >>> I've got no idea about friendly GUI/filemanager with drag-n-drop or >>> other easy way to get files off. She's using simple olde FVWM2 now >>> and I'd prefer not to load up a massive GUI like KDE or Gnome. I just >>> don't know what's out there, being a command line dinosuar myself. >>> >>> Any recommendations? >> >> Thunar. I just started using this it's part of XFCE4, but you can >> install it separately, I use it with fluxbox. It uses hal-d but it very >> light. > > My wife (non-techie) and I use Thunar in XFCE4.4. Thunar comes > installed by default with XFCE4.4, I believe. It is plenty fast and > doesn't require all the Gnome and KDE bloat. XFCE4 is also newbie > friendly and fast enough for my purposes. > > Cheers, > > Jason Anything that uses HALd will work with automounting drives out of the box if you set it up properly. -Garrett
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1253940+0+archive/2007/freebsd-questions/20070204.freebsd-questions | 2021-11-27T05:57:41 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.freebsd.org |
Closed Fuel Transaction Report (FFLC)
An audit trail of all wash transactions will be posted to the Closed Fuel Transaction file.
The transaction report will show the:
Date/Time: Date/Time the transaction was entered in the Tricoder.
Vehicle: Unit Number.
Odometer: This report will display the vehicle's fuel meter at the time of the download, it does not display the meter reading entered into the Tricoder (This information can be acquired with a third party report writer, via the Vehicle Fueling History database table using the field FH-ALT-ODOM).
Charge#: TCVHWASH will be written in this field to identify these as Tricoder Vehicle Wash Transactions.
Part: Quantity of "1" will be written to this field for each wash.
PCOST $: Cost of the wash, pulled from the Class Code.
The Grand Totals at the bottom of the report will give you the total quantity and total cost of the washes. | https://docs.rtafleet.com/rta-manual/vehicle-wash-module/closed-fuel-transaction-report-(fflc)/ | 2021-11-27T04:47:38 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rtafleet.com |
The security of your data is very important to us, and we designed Segmind with multiple layers of protection across a distributed, reliable infrastructure.
For Enterprise plan, we deploy Segmind in your cloud and provide the same cluster management tools backing our own internal infrastructure. This means you can keep your data in your own S3 buckets, perform runs on your own machines, and deploy models within your own cloud infrastructure.
The data engineer doesn’t need to worry about many of the details — simply write the code and Segmind runs it. A key benefit of the hybrid PaaS model is that the vast majority of your actual data remains in systems under your control, such as your AWS account. While certain data, such as your notebooks, configurations, K8s logs, and user information, is present within the control plane, that information is encrypted at rest within the control plane, and communication to and from the control plane is encrypted in transit.
We create a dedicated VPC to provide complete network isolation between clients. This helps us secure and monitor connections, screen traffic, and restrict instance access inside your virtual network. All data stored with Segmind are encrypted at rest. The keys are managed and rotated automatically by our Cloud Service Provider.
All files are encrypted at rest, whether they are files you create or anything you upload. The keys are managed and rotated automatically by the Cloud Provider.
For sensitive information (such as database integrations or environment variables), we apply a layer of industry standard AES-256 encryption before storing them in our database. Decryption keys are stored separately.
All data transmitted between Segmind and our users is protected using Transport Layer Security (TLS), and our Strict-Transport-Security (HSTS) settings assure that your browser will never send an unencrypted request to us.
All the data is stored on tier 1 data centers with highest level of compliance including HIPAA.
We support only trusted SSO providers and do not support email/password based authentication. All SSOs support multi-factor authentication (MFA) and have a high level of security.
You own and control your data. Segmind is committed to keeping it private. Our privacy policy describes when we collect your information and why. We process your data with due care, in accordance with all applicable laws and regulations, including the regulation (EU) 2016/679 of the European Parliament and of the Council, the General Data Protection Regulation (GDPR).
Segmind access to your environment includes a cross-account IAM role. The cross-account IAM role allows the Segmind control plane to configure resources in your environment using the AWS APIs. It does not grant access to your data sets.
We follow the principle of least privilege in how we write design our cloud infrastructure and how we access it. We use Google account authentication with two-factor authentication enforced for all accesses to production systems.
Changes to source code destined for production systems are subject to code reviews by qualified engineering peers. We adhere to a secure development lifecycle and review the security implications of every change. Prior to updating production services, the contributors to the updated software version are required to verify that their changes are working as intended in the staging environment.
If you’re spot a vulnerability on our application (*.segmind.com), we’d love to know about it. | https://docs.segmind.com/security | 2021-11-27T05:55:37 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.segmind.com |
This page lists any changes in 2017 in the Editor
MonoBehaviour.OnValidate is called when a Scene loads, when GameObjects are duplicated or when a value changes in the Inspector..
Ejemplo:
.
Purely smooth materials that use the GGX version of the standard shader now receive specular highlights which increases the realism of such materials. | https://docs.unity3d.com/es/2018.4/Manual/UpgradeGuide20172.html | 2021-11-27T06:51:27 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.unity3d.com |
How to use affiliate referral URL campaign tracking
Affiliate referral URLs can include a campaign parameter to help your affiliates track and monitor the performance of their affiliate links.
Affiliates can name their campaigns in the Affiliate Area when generating an affiliate referral link, or manually append a campaign name to an affiliate referral link. When using the generator, the campaign name will be automatically appended to their affiliate referral link. Here are a couple of examples of what the affiliate referral link looks like with a campaign parameter added:
-
- you provide to your affiliates, or a sale or promotion you have scheduled.
We recommend letting your affiliates know that they use in their affiliate referral link, the less space they have to market your product or site.
Using this campaign parameter will allow your affiliates to identify where they should focus their marketing efforts for maximum sales and referrals. | https://docs.affiliatewp.com/article/1171-how-to-use-affiliate-referral-url-campaign-tracking | 2021-09-17T05:19:08 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.affiliatewp.com |
azure.azcollection.azure_rm_vmbackuppolicy_info – Fetch Backup Policy Details¶
Note
This plugin is part of the azure.azcollection collection (version 1.9.0).
To install it use:
ansible-galaxy collection install azure.azcollection.
To use it in a playbook, specify:
azure.azcollection.azure_rm_vmbackuppolicy_info.
New in version 1.1
azure_rm_backvmuppolicy_info: name: 'myBackupPolicy' vault_name: 'myVault' resource_group: 'myResourceGroup'
Return Values¶
Common return values are documented here, the following are the fields unique to this module: | https://docs.ansible.com/ansible/latest/collections/azure/azcollection/azure_rm_vmbackuppolicy_info_module.html | 2021-09-17T04:12:26 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.ansible.com |
If you are having issues, check the log output to make sure that there were no connection issues. Logs from Leanplum start with "Leanplum:".
You can also view the Debug tab in the Leanplum dashboard, which shows you the API calls made to our server in development mode.
Updated 6 months ago | https://docs.leanplum.com/docs/general-troubleshooting | 2021-09-17T04:02:30 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['https://files.readme.io/1327c2b-debugger.png', 'debugger.png'],
dtype=object)
array(['https://files.readme.io/1327c2b-debugger.png',
'Click to close...'], dtype=object) ] | docs.leanplum.com |
This element MUST be conveyed as the root element in any instance document based on this Schema expression ABIE Waybill. Details. Waybill Consignment Note A container for extensions foreign to the document. BBIE Waybill. UBL Version Identifier. Identifier Identifies the earliest version of the UBL 2 schema for this document type that defines all of the elements that might be encountered in the current instance. 0..1 Waybill UBL Version Identifier Identifier Identifier. Type 2.0.5 BBIE Waybill. Customization Identifier. Identifier Identifies a user-defined customization of UBL for a specific use. 0..1 Waybill Customization Identifier Identifier Identifier. Type NES BBIE Waybill. Profile Identifier. Identifier Identifies a user-defined profile of the customization of UBL being used. 0..1 Waybill Profile Identifier Identifier Identifier. Type BasicProcurementProcess BBIE Waybill. Profile Execution Identifier. Identifier Identifies an instance of executing a profile, to associate all transactions in a collaboration. 0..1 Waybill Profile Execution Identifier Identifier Identifier. Type BPP-1001 BBIE Waybill. Identifier An identifier for this document, assigned by the sender. 1 Waybill Identifier Identifier Identifier. Type Master Waybill Number BBIE Waybill. Carrier Assigned_ Identifier. Identifier An identifier (in the form of a reference number) assigned by a carrier or its agent to identify a specific shipment. 0..1 Waybill Carrier Assigned Identifier Identifier Identifier. Type BBIE Waybill. UUID. Identifier A universally unique identifier for an instance of this document. 0..1 Waybill UUID Identifier Identifier. Type BBIE Waybill. Issue Date. Date The date, assigned by the sender, on which this document was issued. 0..1 Waybill Issue Date Date Date. Type BBIE Waybill. Issue Time. Time The time, assigned by the sender, at which this document was issued. 0..1 Waybill Issue Time Time Time. Type BBIE Waybill. Name Text, assigned by the sender, that identifies this document to business users. 0..1 Waybill Name Name Name. Type Air Waybill , House Waybill BBIE Waybill. Description. Text Text describing the contents of the Waybill. 0..n Waybill Description Text Text. Type BBIE Waybill. Note. Text Free-form text pertinent to this document, conveying information that is not contained explicitly in other structures. 0..n Waybill Note Text Text. Type BBIE Waybill. Shipping Order Identifier. Identifier An identifier (in the form of a reference number) of the Shipping Order or Forwarding Instruction associated with this shipment. 0..1 Waybill Shipping Order Identifier Identifier Identifier. Type BBIE Waybill. Ad Valorem_ Indicator. Indicator A term used in commerce in reference to certain duties, called ad valorem duties, which are levied on commodities at certain rates per centum on their value. 0..1 Waybill Ad Valorem Indicator Indicator Indicator. Type BBIE Waybill. Declared Carriage_ Value. Amount Value declared by the shipper or his agent solely for the purpose of varying the carrier's level of liability from that provided in the contract of carriage in case of loss or damage to goods or delayed delivery. 0..1 Waybill Declared Carriage Value Amount Amount. Type BBIE Waybill. Other_ Instruction. Text Other free-text instructions related to the shipment to the forwarders or carriers. This should only be used where such information cannot be represented in other structured information entities within the document. 0..n Waybill Other Instruction Text Text. Type ASBIE Waybill. Consignor_ Party. Party The party consigning goods, as stipulated in the transport contract by the party ordering transport. 0..1 Waybill Consignor Party Party Party Consignor (WCO ID 71 and 72) ASBIE Waybill. Carrier_ Party. Party The party providing the transport of goods between named points. 0..1 Waybill Carrier Party Party Party Transport Company, Shipping Line, NVOCC, Airline, Haulier, Courier, Carrier (WCO ID 49 and 50) ASBIE Waybill. Waybill Freight Forwarder Party Party Party Consolidator (WCO ID 192 AND 193) ASBIE Waybill. Shipment A description of the shipment. 1 Waybill Shipment Shipment Shipment ASBIE Waybill. Document Reference A reference to another document associated with this document. 0..n Waybill Document Reference Document Reference Document Reference ASBIE Waybill. Exchange Rate Information about the rate of exchange (conversion) between two currencies. 0..n Waybill Exchange Rate Exchange Rate Exchange Rate ASBIE Waybill. Document Distribution A list of interested parties to whom this document is distributed. 0..n Waybill Document Distribution Document Distribution Document Distribution ASBIE Waybill. Signature A signature applied to this document. 0..n Waybill Signature Signature Signature | https://docs.oasis-open.org/ubl/cs02-UBL-2.3/xsd/maindoc/UBL-Waybill-2.3.xsd | 2021-09-17T03:20:02 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.oasis-open.org |
1 - Install the database locally
PostgreSQL or MySQL for the SQL version.
MongoDB for the MongoDB version.
2 - Open 'frontend/src/config/localhost.js'
2.1 - Set your custom configs
3 - Open 'backend/config/localhost.js'
3.1 - Set your custom configs
4 - Go to the 'frontend' folder
4.1 - Run npm run install
4.2 - Run npm run start
5 - Go to the 'backend' folder
5.1 - Run npm run install
5.2 - Run npm run start
6 - Create the database tables (SQL only)
npm run db:reset:localhost
This command will DROP ALL THE DATABASE TABLES. Make sure you are running it pointing to the correct database.
7 - The app will be available at.
1 - Setup and deploy first
You must first set up and deploy because you need the storage rules deployed at the Firebase.
2 - Make sure your database is running locally
3 - Configure the database password at '<project-folder>/backend/config/localhost.json'.
4 - Create the database tables:
npm run db:reset:localhost
This command will DROP ALL THE DATABASE TABLES. Make sure you are running it pointing to the correct database.
5 - Open the 'backend' folder and run:
npm run start
6 - Open the 'frontend' folder and run:
npm run start
1 - Install Docker locally ()
2 - Open 'frontend/src/config/production.js'
2.1 - Set your custom configs
3 - Open 'backend/config/production.js'
3.1 - Set your custom configs
4 - Go to the project root folder
4.1 - Run docker-compose build
4.2 - Run docker-compose up
5 - Run docker ps to find the Container ID. Select the one that's related to the app (not the database). (For SQL only)
5.1 - Run docker exec -w=/app/backend -ti [CONTAINER ID] npm run db:reset:production
This command will DROP ALL THE DATABASE TABLES. Make sure you are running it pointing to the correct database.
6 - The app will be available at.
1 - Generate the application and download the code
2 - Configure Firebase
2.1 - Create a Firebase project at the Firebase Console.
2.2 - Enable Storage
Go to Storage
Click at 'Enable'
2.3 - Enable Email/Password sign-in:
At the Firebase console, open the Auth section.
On the Sign-in method tab, enable the Email/password sign-in method and click Save.
2.4 - Add the Firebase project id to the '<project-folder>/.firebaserc'.
2.5 - Configure the client-side account's credentials:
Go to the Firebase Project.
Open 'Project settings'.
At 'Your apps', click at the web icon.
Copy only the content of the var 'config'.
Paste the content at the variable 'firebaseConfig' of the files:
'<project-folder>/frontend/src/config/localhost.js'
'<project-folder>/frontend/src/config/development.js'
'<project-folder>/frontend/src/config/production.js'
2.6 - Configure the server-side account credentials:
Go to the Firebase Project.
Click at the configuration icon, placed near 'Overview' at the left corner.
Click at 'Project settings'.
Open the tab 'Service accounts'.
Open 'Firebase Admin SDK'.
Click at 'GENERATE NEW PRIVATE KEY'.
Save and replicate the file as:
<project-folder>/backend/service-accounts/localhost.json
<project-folder>/backend/service-accounts/test.json
<project-folder>/backend/service-accounts/development.json
<project-folder>/backend/service-accounts/production.json
2.7 - Upgrade the Firebase project to the Blaze (Pay as you go) plan
3 - Create and configure a PostgreSQL or MySQL connection
3.1 - Go to the Google Cloud Console related to the Firebase project that you created.
The database instance doesn't need to be at the Google Cloud, but I recommend it because of the low latency.
3.2 - Go to SQL instances.
3.3 - Create a new instance.
3.4 - Go to 'Connections'.
Mark Public IP
Add '0.0.0.0/0' or your IP address to the new Authorized networks.
3.5 - Go to 'Overview'.
Open '<project-folder>/backend/config/<development|production>.json'.
Inform the database password at the database.password variable.
Copy the Public IP address to the database.migrationHost variable.
Copy the Instance connection name to the database.host as '/cloudsql/<instance connection name>'.
Ps.: If you experience problems connecting to the database, try using the Public IP address at the database.host.
3.5 - Go to 'Databases'.
Create a new database and name it 'development'.
4 - Configure email sender (Optional): In order to be able to send user invitation emails, you need the SMTP credentials configured. It's optional if you don't have it configured the app just won't send those emails.
Open '<project-folder>/backend/config/<environment>.json'.
Locate the 'email' section.
Configure the SMTP credentials. See for config options.
5 - Configure Internationalization (I18n)
You must setup the labels for the entities, fields and roles.5.1 - For each locale at 'frontend/src/i18n/<locale>.js':
Go to the 'entities' section.
Add the labels for the entity and its fields.
Go to the 'roles' section.
Replace the object name for the label.
6 - Setup project dependencies
6.1 - Download and install NodeJS
Go to, download, and install 10.20.1v of NodeJS.
6.2 - Open the console at the project folder.
6.3 - Update NPM:
npm install -g npm
6.3 - Install firebase-tools globally:
npm install -g firebase-tools
6.4 - Log in at firebase-cli and add the project:
firebase login
6.5 - Open the 'frontend' folder and run:
npm install
npm run deploy:development
6.6 - Open the 'backend' folder and run:
npm install
npm run deploy:development
7 - Create database tables:
npm run db:reset:development
This command will DROP ALL THE DATABASE TABLES. Make sure you are running it pointing to the correct database.
8 - Open the application at the URL informed after the first deploy | https://docs.scaffoldhub.io/legacy-scaffolds | 2021-09-17T04:19:08 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.scaffoldhub.io |
Deploy Alluxio Locally
This guide goes over how to run and test Alluxio on your local machine.
- Requirements
- Mount RAMFS file system
- Format Alluxio Filesystem
- Start Alluxio Filesystem Locally
- Verify Alluxio is running
Requirements
The prerequisite for this part is that you have a version of Java 8 installed.
Download the binary distribution of mount the RAMFS. If you do not want to type in the password every time, or you do not have sudo privileges, please read the alternative approaches in FAQ.
Verify Alluxio is running
To verify that Alluxio is running,, the Alluxio filesystem uses
RAMFS as its
in-memory data storage.
On MacOS, it is fine for Alluxio to mount a RAMFS without being a super user.
However, on Linux, it requires sudo privileges to perform
mount (and the user.
- Add the user who starts Alluxio to the sudoers file.
-
This allows Linux user “alluxio” to mount, umount, mkdir and chmod (assume they are in
/bin/) a
specific path
/mnt/ramdisk with sudo privileges without typing the password, but nothing else.
See more detailed explanation about Sudoer User
Specifications. | https://docs.alluxio.io/os/user/stable/en/deploy/Running-Alluxio-Locally.html | 2020-08-03T13:02:03 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.alluxio.io |
Instagram feed via a Guttenberg block
If you have updated and are using Gutenberg on your site, you will be able to select Flo Social via your Gutenberg blocks, as shown below:
Once you add the FloSocial block, you’ll see all the settings on your left side bar, just like with all other Gutenberg blocks. Choose your settings and needed Instagram account and click on Update. Done. The beauty of this option, is that it easily allows you to add your Instagram feed to both, pages and posts. | https://docs.flothemes.com/instagram-feed-via-a-guttenberg-block/ | 2020-08-03T11:42:42 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['https://docs.flothemes.com/app/uploads/2019/08/ezgif.com-video-to-gif-1.gif',
None], dtype=object) ] | docs.flothemes.com |
What are included in Model Fields?
This way you have more options to present your data without directly modifying the underlying table columns:
- Name: This is the true name of the field which analysts will see when managing modeling. Only lowercase alpha-numeric characters (a-z, 0 - 9) and underscores (_) are allowed. If the name starts with a numeric character, the name will be prepended with an underscore (
1abcxy→
_1abcxy)
- Label: This is what normal users see when exploring a Model/Dataset. You should make the Field Label as descriptive as possible.
- Field Description: This provides more context to understand the field, for example how the field is calculated, or what this field should be used for.
- Visibility: If an analyst chooses to hide a field or measure, it will not be visible to business users exploring a dataset containing that field, It is still visible in Data Modeling interface and accessible in SQL interfaces (when writing adhoc SQL, SQL report, or creating Calculated Fields / measures)
Base Dimensions
Base Dimensions are the "original" dimensions that you got when you first create your model.
- In Base Models, Dimensions map directly to the underlying data table's columns.
- In Transform Models, they map to the resulted fields of the SQL.
- In Import Models, they map to the fields that you included in your data import.
Base dimensions can only be hidden, and cannot be removed via the UI. These are the only fields that can be persisted into the database.
Custom Dimensions and Measures
To extend on the base dimensions, users can create Custom Dimensions and Measures. Properties of these custom fields:
- Custom Dimensions and Measures are only calculated when users explore a dataset, and cannot be persisted (in other words, pre-calculated) into the database.
- If you want to persist the result of Custom Dimensions and Measures, you will have to create a derived SQL model using those fields and persist this model.
From Data Model's Structure tab, simply clicking on Add→ Measure or Calculated Field.
The formula of your Custom Dimensions and Measures depends entirely on your database's SQL flavor. These are the code snippets that are inserted between the
SELECT ... FROM ... keywords in your final query.
Custom Dimensions
Custom Dimensions (or Calculated Fields) are created by using non-aggregate (scalar) functions and operations to transform one or multiple columns, for example:
CASE ... WHEN,
CONCAT(), or
field_a + field_b... In the model's structure view, Calculated Fields are represented with the Function Icon (fx) right next to Field Name.
When creating Custom Dimensions (Calculated Fields), you can reuse other dimensions that you created in the same model:
Aggregate functions and Measures are not allowed within Custom Dimensions.
Measures
Measures are created from SQL's aggregate functions and operations such as
COUNT(),
SUM(),
AVG(),
SUM(field_a) + SUM(field_b)... Measures have the Sigma (Σ) icon on the left of the Field Name and are colored blue.
In Measures' formula, you can refer to any base dimensions, custom dimensions and measures you created in the same model. For example, in the formula for Cancelled Value Ratio we used measure
cancelled_value and dimension
item_value:
Measures must be aggregated, therefore simply using scalar operations or referring only dimensions are not allowed:
Updated 4 months ago | https://docs.holistics.io/docs/model-fields | 2020-08-03T11:30:36 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['https://files.readme.io/a4cfa82-model_fields2.png',
'model_fields2.png'], dtype=object)
array(['https://files.readme.io/a4cfa82-model_fields2.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/a1ef4b4-fields_calcualtedfields_measures.png',
'fields_calcualtedfields_measures.png'], dtype=object)
array(['https://files.readme.io/a1ef4b4-fields_calcualtedfields_measures.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/04960b1-b6de25b-field_concept.png',
'b6de25b-field_concept.png'], dtype=object)
array(['https://files.readme.io/04960b1-b6de25b-field_concept.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/d15eadc-787df1e-add.png',
'787df1e-add.png'], dtype=object)
array(['https://files.readme.io/d15eadc-787df1e-add.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/5d601d9-create_cfield.png',
'create_cfield.png'], dtype=object)
array(['https://files.readme.io/5d601d9-create_cfield.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/d19bce0-reuse_cfield.png',
'reuse_cfield.png'], dtype=object)
array(['https://files.readme.io/d19bce0-reuse_cfield.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/4ac9ed9-invalid_cfield.png',
'invalid_cfield.png'], dtype=object)
array(['https://files.readme.io/4ac9ed9-invalid_cfield.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/74141ed-scalar_field.png',
'scalar_field.png'], dtype=object)
array(['https://files.readme.io/74141ed-scalar_field.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/d07886b-measure.png', 'measure.png'],
dtype=object)
array(['https://files.readme.io/d07886b-measure.png', 'Click to close...'],
dtype=object)
array(['https://files.readme.io/af87f15-measure2.png', 'measure2.png'],
dtype=object)
array(['https://files.readme.io/af87f15-measure2.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/3f62a19-invalid_measure.png',
'invalid_measure.png'], dtype=object)
array(['https://files.readme.io/3f62a19-invalid_measure.png',
'Click to close...'], dtype=object) ] | docs.holistics.io |
Logging
Logging is a critical part of any embedded development framework. Most devices can't display error or warning messages and don't have a human user monitoring them. Even when a device does have a display and a user watching it, the log messages often don't help the device's primary user. Displaying messages on a screen doesn't support remote troubleshooting; especially when the device is hidden from view inside a piece of equipment or located in remote geographic regions.
Legato creates log messages in LE_INFO() by default Only minimal info is reported, and only for the current app: essentially logs if the app is communicating. You need to modify default settings to enable monitoring for anything else.
There are two built-in features to control logging using the concepsLogs_tool or Logging API. There are also App Crash Logs.
Access Logs
Run
logread on the target to view the system log.
Run
logread -f to start monitoring the logs and display messages as they are logged.
The installed app's output LE_INFO() log message will appear in the target's system log (syslog).
Logging Tool
The target
log tool is the easiest way to set logging controls. You can control what's being logged, filter levels, trace keywords, and processes all through the command-line in a running system.
Run
log level INFO "processName/componentName" to set the log level to INFO for the specified component in a process.
Run
log trace "keyword" "processName/componentName" to use a keyword trace.
Default syslog
By default, app processes will have their
stdout and
stderr redirected to the
syslog. Each process’s stdout will be logged at INFO severity level; it’s stderr will be logged at “ERR” severity level.
See Standard Out and Standard Error in Syslog for more info.
Logging API
The Logging API provides a toolkit to set error, warning, info, and debugging messages with macros and condition support including default environment variable controls that can be output to different devices and formats.
See Logging API for details. | https://docs.legato.io/18_09/conceptsLogs.html | 2020-08-03T13:02:53 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.legato.io |
New-Temporary
File
Creates a temporary file.
Syntax
New-Temporary
File [-WhatIf] [-Confirm] [<CommonParameters>]
Description
This cmdlet creates temporary files that you can use in scripts.
The
New-TemporaryFile cmdlet creates an empty file that has the
.tmp file name extension.
This cmdlet names the file
tmp<NNNN>.tmp, where
<NNNN> is a random hexadecimal number.
The cmdlet creates the file in your TEMP folder.
This cmdlet uses the Path.GetTempPath() method to find your TEMP folder. This method checks for the existence of environment variables in the following order and uses the first path found:
On Windows platforms:
- The path specified by the TMP environment variable.
- The path specified by the TEMP environment variable.
- The path specified by the USERPROFILE environment variable.
- The Windows directory.
On non-Windows platforms: Uses the path specified by the TMPDIR environment variable.
Examples
Example 1: Create a temporary file
$TempFile = New-TemporaryFile
This command generates a
.tmp file in your temporary folder, and then stores a reference to the file
in the
$TempFile variable. You can use this file later in your script.
Parameters
Prompts you for confirmation before running the cmdlet.
Shows what would happen if the cmdlet runs. The cmdlet is not run.
Outputs
This cmdlet returns a FileInfo object that represents the temporary file. | https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/new-temporaryfile?view=powershell-7 | 2020-08-03T13:01:13 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Tigase ACS SM component is by default provided with Tigase XMPP Server release (@-dist-max@ flavour of archive) so it’s enough to enable it in the configuration. It can be also obtained from
tigase-acs distribution package.
After downloading the archive it’s simply matter of extracting it and copying contents of
jars/ directory of extracted archive to the
jars/ directory in
tigase-server/ installation directory, eg. under *nix systems (assuming the archive was downloaded to main Tigase Server directory):
tar -xf tigase-acs-${version}.tar.gz cp -R tigase-acs-${version}/jars/ tigase-server/jars/ | https://docs.tigase.net/tigase-server/master-snapshot/Administration_Guide/html_chunk/_tigase_acs_sm_installation.html | 2020-08-03T11:52:39 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.tigase.net |
Overview
The V‑Cloud API is a collection of REST web services that can be accessed by any programming language capable of performing HTTP POST and GET calls. The endpoint for this service is available at. Additional endpoints that isolate processing to particular regions (for example, the European Union) are available on request.
Available methods and mandatory parameters for the V‑Cloud API are discussed in Querying V‑Cloud Status and Transcribing Audio Files. V‑Cloud Transcription Parameters has a complete list of all the available V‑Cloud parameters.
Web Service Overview
The V‑Cloud API is a collection of REST web services that can be accessed by any programming language capable of performing HTTP
GET calls.
The V‑Cloud API utilizes web services to receive requests and return results. The following image illustrates some web service interactions between your system and the V‑Cloud API.
Your machine
POSTs a single audio file or a zip file containing one or more audio files that you want to transcribe. For each successful
POST you will receive a
requestid.
If you have a callback endpoint set up, you will receive results at that endpoint as soon as V‑Cloud processing of your audio file is finished. By default, you will always receive a zip file of the results if you uploaded a zip file.
Tip
The callback mechanism is the recommended way to receive transcription results, since it provides the shortest turnaround time. Because no subsequent request is required to retrieve results, using callbacks is the most efficient way to use V‑Cloud.
Uploading and downloading zip files is recommended to minimize network bandwidth consumption.
You may also use the
requestid in a
GET call which will return a secure URL for you to download the results. You will only receive a URL if the files from that request are done processing.
Tip
Uploading and downloading zip files is recommended to minimize network bandwidth consumption. V‑Cloud has a limit of 250MB per /transcribe POST, which allows for more than two hours of uncompressed 16bit 8KHz 2-channel PCM audio. Voci recommends compressing audio with zip to reduce file size and network bandwidth requirements. Using zip for PCM audio typically allows call lengths of four hours or more.
Other audio formats will have other practical maximum call lengths, typically much longer. Using zip for upload results in the ASR result being zip by default. If you'd like to use a simpler response format while still saving bandwidth, can use the 'outzip=false' request parameter to ensure non-compressed API output (for single files).
If not using callbacks, users should also consider imposing a limit on the number of
GET calls that are being made in parallel to the V‑Cloud API. A recommended limit is 10-20 simultaneous calls. This helps smooth network load by minimizing the number of network connections that must be open simultaneously. | https://docs.vocitec.com/en/overview-58449.html | 2020-08-03T12:20:40 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.vocitec.com |
FUSE-based POSIX API
- Requirements
- Usage
- Advanced configuration
- Assumptions and limitations
- Performance considerations
- Configuration Parameters For Alluxio POSIX API
- Acknowledgements
Alluxio POSIX API is a feature that allows mounting the distributed Alluxio File System as a standard
file system on most flavors of Unix. By using this feature, standard bash tools (for example,
ls,
cat or
mkdir) will have basic access to the distributed Alluxio data store. More importantly,
with is based on the project Filesystem in Userspace (FUSE), and most basic file system operations are supported. However, given the intrinsic characteristics of Alluxio, like its write-once/read-many-times file data model, the mounted file system will not have full POSIX semantics and will have specific limitations. Please read the section of limitations for details.
Requirements
- JDK 1.8 or newer
- libfuse 2.9.3 or newer (2.8.3 has been reported to also work - with some warnings) for Linux
- osxfuse 3.7.1 or newer for MacOS
Usage
Mount Alluxio-FUSE
After having properly configured and started the Alluxio cluster, and from the node where you wish
to mount Alluxio, point a shell to your
$ALLUXIO_HOME and run:
$ integration/fuse/bin/alluxio-fuse mount mount_point [alluxio_path]
This will spawn a background user-space java process (alluxio-fuse) that will mount the Alluxio path
specified at
alluxio_path to the local file system on the specified
mount_point.
For example, the following command will mount the Alluxio path
/people to the folder
/mnt/people
in the local file system.
$ . mounting status
To list the mounting points, on the node where the file system is mounted, point a shell to your
$ALLUXIO_HOME and run:
$ integration/fuse/bin/alluxio-fuse stat
This outputs the
pid, mount_point, alluxio_path of all the running Alluxio-FUSE processes.
For example, the output will be like:
pid mount_point alluxio_path 80846 /mnt/people /people 80847 /mnt/sales /sales
Advanced configuration
Configure Alluxio client options
Alluxio.
$ integration/fuse/bin/alluxio-fuse mount \ -o [comma separated mount options] mount_point [alluxio_path]
Note that
direct_io mount option is set by default so that writes and reads bypass the kernel page cache
and go directly to Alluxio.
Note that different versions of libfuse and osxfuse support different mount options.
Example: allow_other or allow_root
By default, Alluxio Fuse mount point can only be accessed by the user mounting the Alluxio namespace to the local filesystem. up, pass the
allow_other or
allow_root mount options when mounting Alluxio-Fuse:
# All users (including root) can access the files. $ integration/fuse/bin/alluxio-fuse mount -o allow_other mount_point [alluxio_path] # The user mounting the filesystem and root can access the files. $ integration/fuse/bin/alluxio-fuse mount -o allow_root mount_point [alluxio_path]
Note that only one of the
allow_other or
allow_root could be set.
Assumptions and limitations
Currently, most basic file system operations are supported. However, due to Alluxio implicit characteristics, please be aware that:
- Files can be written only once, only sequentially, and never be modified. That means overriding a file is not allowed, and an explicit combination of delete and then create is needed. For example,
cpcommand will fail when the destination file exists. is configured to use shell user group translation service, by setting
alluxio.fuse.user.group.translation.enabledto
truein
conf/alluxio-site.properties. Otherwise
chownand
chgrpare no-ops, and
llwill return the user and group of the user who started the Alluxio-FUSE process. The translation service does not change the actual file permission when running
ll.
Performance considerations
Due to the conjunct use of FUSE and JNR, the performance of the mounted file system is expected to be worse than what you would see by using the Alluxio Java client directly.
Most of the overheads (supported by libfuse 3.x userspace libs but not supported in jnr-fuse yet).
Configuration Parameters For Alluxio POSIX API
These are the configuration parameters for Alluxio POSIX API.
Acknowledgements
This project uses jnr-fuse for FUSE on Java. | https://docs.alluxio.io/os/user/2.0/en/api/POSIX-API.html | 2020-08-03T12:00:32 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['../../img/stack-posix.png', 'Alluxio stack with its POSIX API'],
dtype=object) ] | docs.alluxio.io |
Smart links¶
Smart links are a way to link documents without changing how they are organized in their respective indexes. Smart links are useful when two documents are related somehow but are of different type or different hierarchical units.
Example: A patient record can be related to a prescription drug information document, but they each belong to their own Document indexes.
Smart links are rule based, but don’t create any organizational structure. Smart links just show the documents that match the rules as evaluated against the metadata or properties of the currently displayed document.
Indexes are automatic hierarchical units used to group documents, smart links are automatic references between documents.
Example:
Document type:
Patient records
Metadata type:
Prescription, associated as an optional metadata for the document type
Patient records.
Document type:
Prescription information sheets
A smart link with the following condition, will automatically links patient records to the prescription information sheets based on the value of the metadata type of the patient record.
foreign label is equal to {{ document.metadata_value_of.prescription }} | https://docs.mayan-edms.com/chapters/smart_links.html | 2020-08-03T11:52:38 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.mayan-edms.com |
Table of Contents
Before you begin installing Tigase server onto your system, please make sure the minimum requirements are met first:
It may be possible to run Tigase XMPP Server with JDK v8 but it is not recommended as not all features will be available and you may encounter startup issues.
While it should be possible to use newer versions of the JDK, we don’t guarantee it and we recommend using the one mentioned above. | https://docs.tigase.net/tigase-server/master-snapshot/Administration_Guide/html_chunk/QuickStart.html | 2020-08-03T11:51:49 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.tigase.net |
Smart Camera
The smart camera can learn and recognize brightly colored objects as well as detecting bar codes and lines, which allows it to be used in various applications, such as garbage sorting, intelligent transportation, object tracking, and intelligent line following.
Connect to mBot or Halocode
Based on the wiring mode, the smart camera can be used as an RJ25 electronic module or mBuild electronic. After connecting it to mBot or Halocode, you can control it through mBot or Halocode.
Connect to mBot
After connecting the smart camera to mBot, you can power it with a 3.7V lithium battery or the mBuild power module.
Method 1 Power through the mBuild power block (recommended)
Method 2 Power through a 3.7V lithium battery
Connect to Halocode
Note: It is recommended to power the smart camera through the mBuild power module. Power only through the USB port may affect the proper use of the block.
Features
Color learning
The smart camera can learn brightly colored objects and identify the color blocks after learning, and then return their coordinates, length, and width.
Learn a colored object as follows:
- Long press the Learn button until the indicator turns red (orange, yellow, green, blue, or purple, different colors indicate learning different objects), and then release the button.
- Place the color block to be learned in front of the camera.
- Observe the indicator on the front or back of the smart camera, and slowly move the object to be learned until the color of the indicator matches the object.
- Short press the Learn button to record the current learned object. After the learning is successful, when the camera recognizes a learned object, the color of the indicator becomes the same as that of the learned object.
A maximum of seven objects can be learned and recorded in this mode.
After the learning is complete, you can use the following program to track the color block (color block 1).
Note
1. The more brighter color of the learned object, the more accurate the recognition.
Cautionary Example:learning a panda doll
2. The smart camera can learn and record seven objects, which should be easily distinguished in color.
Cautionary Example:learning a yellow ball and a yellow hat
Good example:learning an orange ball and a green ball
3. Switch the smart camera to the color block detection mode to detect the coordinates and size of the color block.
4. The smart camera can prevent light interference. However, changeable light can still affect recognition accuracy. The previous learning results may fail to be used in a new environment. Here are some solutions:
- Reset the white balance
- Turn on the LED before learning or recognizing
- Set the detail parameters in PixyMon
- Use a lampshade to cover the blocks
Bar codes and lines recognizing
The smart camera can detect bar codes, lines, and branch roads simultaneously without learning.
Switch the smart camera to the line/label tracking mode to detect and return the coordinate information of bar codes, lines, and branch roads.
Bar codes recognizing
Bar code cards and stickers are included in the package. Click to download the bar code cards.
You can put the cards on the line-follower map.
Lines and branch roads recognizing
The smart camera can detect the lines and the number of branch roads, and return the coordinates, directions, and the number of branch roads.
Note
1. You can set the line tracking mode to dark line on light or light line on dark in the program.
2. The smart camera will filter out the narrow lines by default. Check PixyMon's user guide to change the default settings.
PixyMon
The smart camera supports PixyMon software. You can view the camera screen through PixyMon, debug the smart camera function, and fine-tune some parameters to explore more complex features.
PixyMon downloads
PixyMon guide
If you can't open the PixyMon downloads page, click the following links to download the the software or firmware you need:
PixyMon for Windows
PixyMon for Mac
PixyMon firmware
Note
Considering the ease of use of the module, we haven't developed a lot of functions on the blocks. If you are interested in controlling the smart camera through the blocks to implement more features, use the mBlock Extension Builder.
mBlock Extension Builder documentation
Parameters
- Dimensions: 48 x 48mm
- Resolution: 640 x 480
- Field of view: 65.0 degrees
- Effective focal length: 4.65±5% mm
- Recognition speed: 60fps
- Recognition distance: 0.25–1.2m
- Fall resistance: 1m
- Power supply: 3.7V lithium battery or 5V mBuild power module
- Power consumption range: 0.9–1.3W
- Operating temperature: -10℃–55℃ | http://docs.makeblock.com/diy-platform/en/mbuild/hardware/sensors/smart-camera.html | 2020-08-03T12:46:44 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['images/vision-1.png', None], dtype=object)
array(['../../../../zh/mbuild/hardware/sensors/images/vision-5.png', None],
dtype=object)
array(['../../../../zh/mbuild/hardware/sensors/images/vision-6.png', None],
dtype=object)
array(['../../../../zh/mbuild/hardware/sensors/images/vision-7.png', None],
dtype=object)
array(['images/vision-2.png', None], dtype=object)
array(['images/vision-3.png', None], dtype=object)
array(['../../../../zh/mbuild/hardware/sensors/images/vision-9.png', None],
dtype=object)
array(['../../../../zh/mbuild/hardware/sensors/images/vision-10.png',
None], dtype=object)
array(['../../../../zh/mbuild/hardware/sensors/images/vision-11.png',
None], dtype=object)
array(['../../../../zh/mbuild/hardware/sensors/images/vision-3.png', None],
dtype=object)
array(['../../../../zh/mbuild/hardware/sensors/images/vision-12.png',
None], dtype=object)
array(['../../../../zh/mbuild/hardware/sensors/images/vision-4.png', None],
dtype=object) ] | docs.makeblock.com |
ListRepositoriesForApprovalRuleTemplate
Lists all repositories associated with the specified approval rule template.
Request Syntax
{ "approvalRuleTemplateName": "
string", "maxResults":
number, "nextToken": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- approvalRuleTemplateName
The name of the approval rule template for which you want to list repositories that are associated with that template.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 100.
Required: Yes
- maxResults
A non-zero, non-negative integer used to limit the number of returned results.
Type: Integer
Required: No
- nextToken
An enumeration token that, when provided in a request, returns the next batch of the results.
Type: String
Required: No
Response Syntax
{ "nextToken": "string", "repositoryNames": [ "string" ] }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- nextToken
An enumeration token that allows the operation to batch the next results of the operation.
Type: String
- repositoryNames
A list of repository names that are associated with the specified approval rule template.
Type: Array of strings
Length Constraints: Minimum length of 1. Maximum length of 100.
Pattern:
[\w\.-]+
Errors
For information about the errors that are common to all actions, see Common Errors.
- ApprovalRuleTemplateDoesNotExistException
The specified approval rule template does not exist. Verify that the name is correct and that you are signed in to the AWS Region where the template was created, and then try again.
HTTP Status Code: 400
- ApprovalRuleTemplateNameRequiredException
An approval rule templateApprovalRuleTemplateNameException
The name of the approval rule template is not valid. Template names must be between 1 and 100 valid characters in length. For more information about limits in AWS CodeCommit, see AWS CodeCommit User Guide.
HTTP Status Code: 400
- InvalidContinuationTokenException
The specified continuation token is not valid.
HTTP Status Code: 400
- InvalidMaxResultsException
The specified number of maximum results is not valid.
HTTP Status Code: 400
Example
Sample Request
POST / HTTP/1.1 Host: codecommit.us-east-1.amazonaws.com Accept-Encoding: identity Content-Length: 2 X-Amz-Target: CodeCommit_20150413.ListRepositoriesForApprovalRuleTemplate X-Amz-Date: 20191021T212036Z User-Agent: aws-cli/1.7.38 Python/2.7.9 Windows/10 { "approvalRuleTemplateName": "2-approver-rule-for-master" }
Sample Response
HTTP/1.1 200 OK x-amzn-RequestId: 0728aaa8-EXAMPLE Content-Type: application/x-amz-json-1.1 Content-Length: 721 Date: Mon, 21 Oct 2019 21:20:37 GMT { "repositoryNames": [ "MyDemoRepo", "MyClonedRepo" ] }
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/codecommit/latest/APIReference/API_ListRepositoriesForApprovalRuleTemplate.html | 2020-08-03T11:51:51 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.aws.amazon.com |
Enabling and configuring the DSE Backup and Restore Service (beta)
Get the DSE Backup and Restore Service up and running.
cassandra.yamlThe location of the cassandra.yaml file depends on the type of installation:
Important: The DSE Backup and Restore Service is currently in beta and is intended to be used by users provisioning DSE via the DataStax Cass Operator in a Kubernetes cluster.
To enable the DSE Backup and Restore Service:
- Edit the cassandra.yaml configuration file on each node in the cluster and uncomment:
backup_service: enabled: falseImportant: Retain the four spaces before
enabled.
- Set the value of
enabled: falseto
enabled: true.
- Optional: uncomment
staging_directory:and point to a directory of your choice. That directory will hold the backup staging files. The default staging directory for package installations is
/var/lib/cassandra/backups_staging. If the property is not set, the default staging directory is $CASSANDRA_HOME
/data/backups_staging.Important: The directory must exist. Package DSE installations will automatically create the default staging directory whereas tarball installations will not.
- Optional: uncomment
backups_max_retry_attempts:and set the number of times the backup process should retry after each failed attempt.
- Restart each node.
- Start cqlsh and verify that the DSE Backup and Restore Service is running:
LIST BACKUP STORES; name | class | settings ------+-------+---------- (0 rows)
Enabling the DSE Backup and Restore Service cluster-wide
If you have many nodes in your DSE cluster, logging in to each one to make cassandra.yaml changes can be a tedious process. To streamline modifications, you can use the parallel-ssh utility to issue modification commands on multiple nodes using a single command.
Important: Because of variations between operating systems, the commands in this section may not work using copy/paste. Be prepared to make environment-specific modifications as required.
Important: The instructions in this section assume you are using the DSE package installation method. If you have installed DSE using tarballs, adjust the commands as required.
Prerequisites
Before continuing, make sure you meet the following prerequisites:
- SSH access to every node in the cluster from an administration node
- A version of
parallel-sshand
sshpassinstalled on an administration node
- An account on each node that has read/write access to DSE configuration files, specifically cassandra.yaml
- The awk text processor installed on each node
Installing and configure parallel-ssh
Before continuing, configure a node you'll use to manage backup and restore operations:
- Install parallel-ssh and sshpass:
- Debian Linux:
sudo apt-get install pssh
sudo apt-get install sshpass
- RedHat Linux
sudo yum install pssh
sudo yum install sshpass
- Retrieve a list of DSE nodes and copy it to the parallel SSH hosts file:
nodetool status | awk '/^(U|D)(N|L|J|M)/{print $2}' > ~/.pssh_hosts_files
- Generate an SSH keypair:
ssh-keygen
- Copy your node password to a local file, password_file.Warning: Do not keep the password file any longer than it takes to complete the instructions in this section.
- Copy the key to each node in the cluster:
cat ~/.pssh_hosts_files | while read line; do sshpass -f password_file ssh_copy-id -i ~/.ssh/ssh-key username@"$line"; done
- Verify that parallel SSH is working as expected by executing the
datecommand on the node list:
pssh -i -h ~/.pssh_hosts_file date [1] 18:10:10 [SUCCESS] [email protected] Mon Mar 16 18:10:10 MST 2020 [2] 18:10:10 [SUCCESS] [email protected] Mon Mar 16 18:10:10 MST 2020 [3] 18:10:10 [SUCCESS] [email protected] Mon Mar 16 18:10:10 MST 2020 [4] 18:10:10 [SUCCESS] [email protected] Mon Mar 16 18:10:10 MST 2020
- Delete password_file:
rm password_file
Enabling the DSE Backup and Restore Service on all nodes
Enable the backup and restore service on each node:
- Update cassandra.yaml to enable the DSE Backup and Restore Service:
pssh -i -h ~/.pssh_hosts_file awk -i inplace \ '{ if (/backup_service:/) { \ getline; getline; print "backup_service:\n enabled: true" \ } \ else { print } \ }' cassandra.yaml
- Restart the DSE nodes:
pssh -i -h ~/.pssh_hosts_file sudo service dse restart
Assigning the backup role to a DSE user
You can run backups using the default cassandra superuser (or any other superuser); however, as a security best practice, you should assign the backup role to a non-superuser account.
To assign the backup role to a DSE user:
- Create a new user if required:
CREATE ROLE IF NOT EXISTS username;
- Assign the backup role:
GRANT dse_backup_operator TO username;
- Set a password:
ALTER ROLE username WITH PASSWORD = 'password';
What to do next
With the DSE Backup and Restore Service enabled and running, and a user with assigned backup role rights, continue to Creating and managing backup stores (beta). | https://docs.datastax.com/en/dse/6.8/dse-admin/datastax_enterprise/operations/opsBackupRestoreServiceEnable.html | 2020-08-03T12:59:59 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.datastax.com |
We released the FloLaunch plugin a couple of years back, to offer our clients an easy and safe way to build their new website, test various design ideas, new plugins or themes – without the fear of breaking something on their live site, or loosing clients due to maintenance mode. Since then, FloLaunch has been successfully used by thousands of our clients, helping them smoothly set up their new Flothemes sites in test mode.
During this time, we’ve observed how all of you used the plugin, collected feedback on what can be improved and which options you need. Today, we’re happy to release FloLaunch version 2.0! We’ve enhanced its interface, to offer a smoother and more automated user experience, that guides you through each step of the process. Also, we added options that allow you to easily switch between your Live site and Clone mode, and always easily identify which one you’re currently editing.
6 Reasons why you’ll love Flolaunch 2.0:
- A clear, guided process for each step, from back up and clone site creation, to successfully launching it.
- We’ve integrated a thorough compatibility check to ensure that your server configuration works well with your clone site, before you even create it.
- No more confusion about which mode are you in – live or clone. You’ll hardly miss the RED color scheme of your WordPress dashboard clone site + the multiple switchers indicating that you’re in test mode currently.
- Caching. We’ve integrated an option to clear cache before pushing your new site live, to make sure that you see it without any errors.
- An easy way to share a temporary link with your friend or mentor to get feedback on your progress (expires in 12 hours)
- You can always revert back to your old website, if after publishing it you realize that you’d like to improve and tweak a few more things. Note: your old website database will be kept up till the point you create a new clone website.
NOTE for existing Flothemes users: If you’re currently in the middle of updating your website, and are using the FloLaunch plugin version 1, you DO NOT NEED to reinstall it because you’ll lose all your current progress. FloLaunch 2.0 is not an update, it’s a fully rebuilt, new plugin. Just continue working as you are. You’ll be able to enjoy these new features next time you’ll want to revamp your site. At that stage, you’ll need to remove v.1, and install v.2.0 to your site. | https://docs.flothemes.com/flolaunch-2-0/ | 2020-08-03T12:45:03 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['https://docs.flothemes.com/app/uploads/2019/08/Flo-Launch-v2-launch.jpg',
None], dtype=object) ] | docs.flothemes.com |
Visualization is the representation of your raw data in graphical form.
By default, Holistics displays your report data in a table. While this gives you a full view of your query result, sometimes interesting patterns only show themselves when you give them shapes and colors.
Please visit the following pages to find out more about our supported visualizations.
- Table
- Pivot Table
- Metric/KPI
- Line Chart
- Area Chart
- Column Chart
- Bar Chart
- Combination Chart
- Pie Chart & Donut Chart
- Scatter Chart
- Bubble Chart
- Retention Heatmap
- Geo Heatmap
- Conversion Funnel
- Radar Chart
- Word Cloud
- Gauge Chart
Visualization Settings
Visualization select panel
This is where you choose your visualization (chart) type. Each type has unique requirements for the underlying data, as well as unique settings and styling options.
Please visit the individual docs of each visualization type for more details on their settings.
Settings tab
Settings tab is where you select the basic settings to build up your visualization:
- Fields: The fields required will vary between different char types. For example, in Bar Chart, X and Y-Axis are compulsory while Legend is optional, but in Pie Chart you need to choose Legend and Y-Axis is compulsory).
- Conditions: This is where you apply filter condition on the result set. Available operators will depend on the data type of the column. For example,
greater thanand
equal toare available for numeric columns, while
containsand
inare available for text columns.
- Sorts: This option allows you sort your displayed data in along a field that you already dragged in.
Styles tab
This is where you control the graphical styling of your charts. The options will vary between different chart types.
Format tab
This is where you choose display format for your data. This tab is the same for all chart types.
The visualization editor will default to variable character (varchar) type if it is unable to detect your column's data type.
Troubleshooting
Error: Too many columns, not showing all data
This error happens when you add a field (which contains more than 100 values) to the Column Field (of Pivot Table) or to Legend Field (of other charts) in the Viz Setting.
Currently, we only support showing a maximum of 100 columns. If the field has more than 100 values, Holistics will visualize a preview of the first 100 columns. It is recommended that you use a column with low cardinality in the Column field or Legend field to reduce visual clutter, which will improve the readability of your chart.
Updated 8 months ago | https://docs.holistics.io/docs/visualizations | 2020-08-03T12:03:51 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['https://files.readme.io/8ef9e45-viz.png', 'viz.png'], dtype=object)
array(['https://files.readme.io/8ef9e45-viz.png', 'Click to close...'],
dtype=object)
array(['https://files.readme.io/5977d63-vizselect.png', 'vizselect.png'],
dtype=object)
array(['https://files.readme.io/5977d63-vizselect.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/e1f5a56-settings.png', 'settings.png'],
dtype=object)
array(['https://files.readme.io/e1f5a56-settings.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/0406bee-sort.png', 'sort.png'],
dtype=object)
array(['https://files.readme.io/0406bee-sort.png', 'Click to close...'],
dtype=object)
array(['https://files.readme.io/1938c56-styles.png', 'styles.png'],
dtype=object)
array(['https://files.readme.io/1938c56-styles.png', 'Click to close...'],
dtype=object)
array(['https://files.readme.io/228b8e5-Selection_517.png',
'Selection_517.png'], dtype=object)
array(['https://files.readme.io/228b8e5-Selection_517.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/05b7812-8baade9-too_many_columns.png',
'8baade9-too_many_columns.png'], dtype=object)
array(['https://files.readme.io/05b7812-8baade9-too_many_columns.png',
'Click to close...'], dtype=object) ] | docs.holistics.io |
Resource
Manager Class
Definition
Represents a resource manager that provides convenient access to culture-specific resources at run time.
public ref class ResourceManager
public class ResourceManager
[System.Serializable] public class ResourceManager
[System.Serializable] [System.Runtime.InteropServices.ComVisible(true)] public class ResourceManager
type ResourceManager = class
[<System.Serializable>] type ResourceManager = class
[<System.Serializable>] [<System.Runtime.InteropServices.ComVisible(true)>] type ResourceManager = class
Public Class ResourceManager
- Inheritance
-
- Derived
-
- Attributes
-
Examples:
day=Friday year=2006 holiday="Cinco de Mayo"
Use the Resource File Generator to generate the rmc.resources resource file from the rmc.txt input file as follows:
resgen rmc.txt:
day=Viernes year=2006 holiday="Cinco de Mayo"
Use the Resource File Generator to generate the rmc.es-MX.resources resource file from the rmc.es-MX.txt input file as follows:
resgen rmc.es-MX.txt
Assume that the filename for this example is rmc.vb or rmc.cs. Copy the following source code into a file. Then compile it and embed the main assembly resource file, rmc.resources, in the executable assembly. If you are using the Visual Basic compiler, the syntax is:
vbc rmc.vb /resource:rmc.resources
The corresponding syntax for the C# compiler is:
csc /resource:rmc.resources rmc.cs:
al /embed:rmc.es-MX.resources /c:es-MX /out:rmc.resources.dll. */
Imports System.Resources Imports System.Reflection Imports System.Threading Imports System.Globalization Class Example Public Shared Sub Main() Dim day As String Dim year As String Dim holiday As String Dim celebrate As String = "{0} will occur on {1} in {2}." & vbCrLf ' Create a resource manager. Dim rm As New ResourceManager("rmc", GetType. Dim ci As New CultureInfo("es-MX") Console.WriteLine("Obtain resources using the es-MX culture.") ' Get the resource strings for the day, year, and holiday '. ' day = rm.GetString("day") ' year = rm.GetString("year") ' holiday = rm.GetString("holiday") ' --------------------------------------------------------------- ' Regardless of the alternative that you choose, display a message ' using the retrieved resource strings. Console.WriteLine(celebrate, holiday, day, year) End Sub End Class ' This example displays the following output: 'Obtain resources using the current UI culture. '"5th of May" will occur on Friday in 2006. ' 'Obtain resources using the es-MX culture. '"Cinco de Mayo" will occur on Viernes in 2006.
Remarks
Important
Calling methods from this class with untrusted data is a security risk. Call the methods from this class only with trusted data. For more information, see Data Validation. 8.x calling the CreateFileBasedResourceManager method.
Caution
Using standalone .resources files in an ASP.NET app will break XCOPY deployment, because the resources remain locked until they are explicitly released by the ReleaseAllResources method. If you want to deploy resources with your ASP.NET apps, you should compile your .resources files into satellite assemblies..
Creating Resources (AI, Creating Satellite Assemblies, and Packaging and Deploying Resources.
Instantiating a ResourceManager Object rm = new ResourceManager("MyCompany.StringResources", typeof(Example).Assembly);
Dim rm As New ResourceManager("MyCompany.StringResources", GetType(Example).Assembly):
ResourceManager rm = new ResourceManager(typeof(MyCompany.StringResources));
Dim rm As New ResourceManager(GetType(MyCompany.StringResources)):
TimeHeader=The current time is
You can use a batch file to generate the resource file and embed it into the executable. Here's the batch file to generate an executable by using the C# compiler:
resgen strings.txt csc ShowTime.cs /resource:strings.resources
For the Visual Basic compiler, you can use the following batch file:
resgen strings.txt vbc ShowTime.vb /resource:strings.resources
Imports System.Resources Module Example Public Sub Main() Dim rm As New ResourceManager("Strings", GetType(Example).Assembly) Dim timeString As String = rm.GetString("TimeHeader") Console.WriteLine("{0} {1:T}", timeString, Date.Now) End Sub End Module ' The example displays output similar to the following: ' The current time is 2:03:14 PM
ResourceManager and Culture-Specific Resources
A localized app requires resources to be deployed, as discussed in the article Packaging and Deploying Resources.function.:
HelloString=Hello world!
The fr-FR resources are contained in a text file named Greetings.fr-FR.txt:
HelloString=Salut tout le monde!
The ru-RU resources are contained in a text file named Greetings.ru-RU.txt:
HelloString=Всем привет!!
Imports System.Globalization Imports System.Resources Imports System.Threading Module Example Sub Main() ' Create array of supported cultures Dim cultures() As String = {"en-CA", "en-US", "fr-FR", "ru-RU" } Dim rnd As New Random() Dim cultureNdx As Integer = rnd.Next(0, cultures.Length) Dim originalCulture As CultureInfo = Thread.CurrentThread.CurrentCulture Dim rm As New ResourceManager("Greetings", GetType(Example).Assembly) Try Dim newCulture As New CultureInfo(cultures(cultureNdx)) Thread.CurrentThread.CurrentCulture = newCulture Thread.CurrentThread.CurrentUICulture = newCulture Dim greeting As String = String.Format("The current culture is {0}.{1}{2}", Thread.CurrentThread.CurrentUICulture.Name, vbCrLf, rm.GetString("HelloString")) Console.WriteLine(greeting) Catch e As CultureNotFoundException Console.WriteLine("Unable to instantiate culture {0}", e.InvalidCultureName) Finally Thread.CurrentThread.CurrentCulture = originalCulture Thread.CurrentThread.CurrentUICulture = originalCulture End Try End Sub End Module '
Retrieving Resources.
Note
If the .resources file specified in the ResourceManager class constructor cannot be found, the attempt to retrieve a resource throws a MissingManifestResourceException or MissingSatelliteAssemblyException exception. For information about dealing with the exception, see the Handling MissingManifestResourceException and MissingSatelliteAssemblyException Exceptions section later in this topic..
Here's the source code for the example (ShowDate.vb for the Visual Basic version or ShowDate.cs for the C# version of the code).); } } } //.
Imports System.Globalization Imports System.Resources Imports System.Threading <Assembly:NeutralResourcesLanguage("en")> Module Example Public Sub Main() Dim cultureNames() As String = { "en-US", "fr-FR", "ru-RU", "sv-SE" } Dim rm As New ResourceManager("DateStrings", GetType(Example).Assembly) For Each cultureName In cultureNames Dim culture As CultureInfo = CultureInfo.CreateSpecificCulture(cultureName) Thread.CurrentThread.CurrentCulture = culture Thread.CurrentThread.CurrentUICulture = culture Console.WriteLine("Current UI Culture: {0}", CultureInfo.CurrentUICulture.Name) Dim dateString As String = rm.GetString("DateStart") Console.WriteLine("{0} {1:M}.", dateString, Date.Now) Console.WriteLine() Next End Sub End Module '.
To compile this example, create a batch file that contains the following commands and run it from the command prompt. If you're using C#, specify
csc instead of
vbc and
showdate.cs instead of
showdate.vb.
resgen DateStrings.txt vbc.
Handling MissingManifestResourceException and MissingSatelliteAssemblyException Exceptionsparameterparameter.
Resource Versioning.
<satelliteassemblies> Configuration File Node.
Note
The preferred alternative to creating a
<satelliteassemblies> node is to use the ClickOnce Deployment Manifest feature. specifies a fully qualified assembly name. Specify the name of your main assembly in place of MainAssemblyName, and specify the
Version,
PublicKeyToken, and
Cultureattribute values that correspond to your main assembly.
For the
Versionattribute, specify the version number of your assembly. For example, the first release of your assembly might be version number 1.0.0.0.
For the
PublicKeyTokenattribute, specify the keyword
nullif you have not signed your assembly with a strong name, or specify your public key token if you have signed your assembly.
For the
Cultureattribute, specify the keyword
neutralto designate the main assembly and cause the ResourceManager class to probe only for the cultures listed in the
<culture>nodes.
For more information about fully qualified assembly names, see the article Assembly Names. For more information about strong-named assemblies, see the article Create and use.
Windows 8.x Store Apps
Important
Although the ResourceManager class is supported in Windows 8.x Store apps, we do not recommend its use. Use this class only when you develop Portable Class Library projects that can be used with Windows 8.x Store apps. To retrieve resources from Windows 8.x Store apps, use the Windows.ApplicationModel.Resources.ResourceLoader class instead.
For Windows 8.x 8.x.
Constructors
Fields
Properties
Methods
Applies to
Thread Safety
This type is thread safe. | https://docs.microsoft.com/en-us/dotnet/api/system.resources.resourcemanager?view=netframework-4.7.2 | 2020-08-03T12:10:48 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
<Max. Ticks of Light> defines the maximal value of <Ticks of Light>.
This is normally used from refilling light sources.
If set and the gate is triggered, the gate will stay in the new condition for the specified amount of time, by which it will reset back to the previous state.
A value of 10 is around 5 seconds. | https://server.docs.atrinik.org/page_field_maxhp.html | 2020-08-03T11:31:22 | CC-MAIN-2020-34 | 1596439735810.18 | [] | server.docs.atrinik.org |
Setup with unique user IDs for every Signer
Signing with unique user IDs - having the documents signed within your website or app by signers with unique user IDs.
Every signing session in this scenario is set up individually for every Signer and every document. You can identify every signing operation by the unique document ID and unique user ID.
Steps
1. One-time setup
1.1. Create a Sender account
1.2. Get an access token for the Sender
You can use your own token and account credentials as Sender’s.
2. Setup for every document you’d like to embed
2.1. Create a document as Sender (3 methods)
2.2. Create a Signer account
The number of Signers must be equal to the number of roles in one document.
2.3. Create the invite to sign that document
2.4. Generate restricted scope access token for a signing link
2.5. Generate the signing link to the document
2.6. (optional) Adjust your application to showing the signing link according to the signing order.
▶ Create a Sender account (1.1)
The Sender’s email address and password are used as
sender_email and
sender_password values below.
▶ Get an access token for the Sender (1.2)
Use your own bearer token as Sender’s access token, or generate a new access token if you created a new user for the Sender.
Use the Sender’s token as
sender_access_token value in the next steps.
▶ Create a document as Sender (2.1)
Method 1: Use a template that has been previously created in your account.
Generate a document from a template with one role.
curl -X POST \{{sender_template_id}}/copy \ -H 'Authorization: Bearer {{sender_access_token}}' \ -H 'Content-Type: application/json' \ -d '{"document_name":"doc_from_template"}'
curl -X POST \ \ -H 'Authorization: Bearer {{USER_1_ACCESS_TOKEN}}' \ -H 'Content-Type: multipart/form-data' \ -F 'file=@/path/to /your/file/pdf-test.pdf' \ -F 'Tags=[ { "tag_name": "email", "role": "Signer 1", "label": "email", "required": true, "type": "text", "prefilled_text": "[email protected]", "validator_id": "7cd795fd64ce63b670b52b2e83457d59ac796a39", "height": 15, "width": 100 }, { "tag_name": "name", "role": "Signer 1", "label": "name", "required": true, "type": "text", "height": 15, "width": 100 } ]'
curl -X POST \ \ -H 'Authorization: Bearer {{USER_1_ACCESS_TOKEN}}' \ -H 'content-type: multipart/form-data' \ -F 'file=@/path/to/your/document/pdf-test.pdf'
curl -X PUT \{{document_id}} \ -H 'Authorization: Bearer {{USER_1_ACCESS_TOKEN}}'\ -H 'Content-Type: application/json' \ -d '{ "fields": [ { "x": 305, "y": 18, "width": 122, "height": 10, "page_number": 0, "label": "first_name", "role": "Signer 1", "required": true, "type": "text", "prefilled_text": "John" }, { "x": 305, "y": 38, "width": 122, "height": 10, "page_number": 0, "label": "last_name", "role": "Signer 1", "required": true, "type": "text", "prefilled_text": "Doe" }, { "x": 305, "y": 67, "width": 100, "height": 34, "page_number": 0, "label": "a sample label", "role": "Signer 1", "required": true, "type": "signature" } ] }'
▶ Create a Signer account (2.2)
Only one Signer account is needed and it will be re-used for every signing event. Keep this email address and password, you’ll use it as
signer_email and
signer_password values below.
The number of Signers must be equal to the number of roles in one document.
▶ Create the invite to sign that document (2.3)
Make a
POST /document/{{document_id}}/invite?email=disable call, with email=disable to prevent the delivery of an email invite:
curl -X POST \{{document_id}}/invite?email=disable \ -H 'Authorization: Bearer {{sender_access_token}}' \ -d '{ "to": [ { "email": "{{signer_email}}", "role": "Signer 1", "order": 1, "role_id": "" } ], "from": "{{sender_email}}", "cc": [], "subject": "You received a signature request", "message": "Please, fill in and sign the attached document" }'
▶ Generate restricted scope access token for a signing link (2.4)
Get an access_token with a scope specific to that document by making a
POST /oauth2/token request for
signer.
This returns
signer_limited_scope_token.
curl -X POST \ \ -H 'Authorization: Basic {{basic_authorization_token}' \ -H 'content-type: multipart/form-data;' \ -F 'username={{signer_email}}' \ -F 'password={{signer_password}}' \ -F 'grant_type=password' \ -F 'scope=signer_limited_scope_token document/{{document_id}}'
▶ Generate the signing link to the document (2.5)
The base URL for the link is:
The link requires the query parameters
document_id and
access_token.{{DOCUMENT_ID}}&access_token={{signer_restricted_token}}
The signing link accepts optional query parameters:
redirect_uri: displays a ‘Thank you’ message after the document was signed
theme: displays neutral light-gray standard SignNow theme for the document signing session
disable_email: prevents sending the signed copy via email
Example query string:
redirect_uri=%2Fdocument-saved-successfully&theme=neutral&disable_email=true
The full link will then be:
Alternative Signing Link
This signing link automatically provides a mobile app signing link for iOS and Android device users:{{DOCUMENT_ID}}&access_token={{signer_limited_scope_token}}
Optional Parameters:
redirect_uri=%2Fdocument-saved-successfully
If a user is signing from a mobile device, add
mobileweb=mobileweb_only query parameter to direct the user to the mobile web. If a user is on a desktop system, they are automatically directed to desktop signing.
To allow users to sign using the mobile web signature panel, add
&use_signature_panel=1.{{document_id}}&access_token={{signer_limited_scope_token}}&redirect_uri=%2Fdocument-saved-successfully&theme=neutral&disable_email=true | https://docs.signnow.com/code-examples/embedded-signing/embedded-signing-2 | 2020-08-03T12:26:02 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.signnow.com |
Structure
The RadSplitter control defines a region that can be divided into resizable frame-like regions called panes, as shown below:
Panes
Each pane is implemented by the RadPane control. Panes can contain any kind of content, including HTML elements, content from external sources, or other ASP.NET controls: even other RadSplitter controls, to subdivide the region the pane defines into additional resizable regions.
Depending on the orientation of the splitter, panes are laid out from left to right or top to bottom.
Split Bars
Between the panes appear split bars, which are implemented by the RadSplitBar control. Split bars represent the borders that can be dragged in order to resize the panes. Each split bar can have up to two collapse buttons, used to collapse and expand the adjacent panes.
Sliding Zones
Sliding zones, which are implemented by the RadSlidingZone control, hold panes (called sliding panes) that slide in and out, like the user interface of Visual Studio. RadSlidingZone must be placed inside a RadPane control.
Sliding zones represent the sliding panes they hold as sliding zone tabs. The user uses these tabs to collapse and expand the sliding panes. You can configure the direction in which the sliding zone expands its tabs. This direction determines the way the sliding zone lays out its sliding zone tabs.
Sliding Panes
Sliding panes are placed inside sliding zones to define their content. Each sliding pane is implemented by a RadSlidingPane control. RadSlidingPane must be placed inside a RadSlidingZone control.
A sliding pane, like a regular pane, can contain any HTML elements, including ASP.NET controls (but not externally loaded content). Sliding panes, unlike regular panes, have a title bar that displays the title of the pane (which also appears on its tab). The title bar optionally contains control buttons as well: a dock/undock button, which pins the sliding pane in place, and a collapse button that collapses the sliding pane. The edge of the sliding pane farthest from the sliding zone tab optionally has a resizable border, that the user can drag to expand or contract the size of the sliding pane.
Example
The ASP.NET declaration for the RadSplitter shown above is as follows:
<telerik:RadSplitter <telerik:RadPane Left Pane </telerik:RadPane> <telerik:RadSplitBar <telerik:RadPane Middle Pane <telerik:RadSplitter <telerik:RadPane Top Pane </telerik:RadPane> <telerik:RadSplitBar <telerik:RadPane Bottom Pane </telerik:RadPane> </telerik:RadSplitter> </telerik:RadPane> <telerik:RadSplitBar <telerik:RadPane Right Pane <telerik:RadSlidingZone <telerik:RadSlidingPane Sliding Pane1 </telerik:RadSlidingPane> <telerik:RadSlidingPane Sliding Pane 2 </telerik:RadSlidingPane> </telerik:RadSlidingZone> </telerik:RadPane> </telerik:RadSplitter> | https://docs.telerik.com/devtools/aspnet-ajax/controls/splitter/getting-started/structure | 2020-08-03T12:20:36 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['images/splitter-structure.png', None], dtype=object)] | docs.telerik.com |
...
When creating a new API or editing an existing API,
Go to the “Manage” Manage tab in the UI.. Insttead Instead of the usual "Authorization" header, you can will see the "Token" header which was defined in the previous steps in the , as shown below screenshot.
If you want to invoke the API with curl, following is the command.
Configuring the authorization.. | https://docs.wso2.com/pages/diffpages.action?originalId=97566509&pageId=97566513 | 2020-08-03T12:06:45 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.wso2.com |
@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class AbstractAWSApplicationAutoScaling extends Object implements AWSApplicationAutoScaling
AWSApplicationAutoScaling.: "application-autoscaling
AWSApplicationAutoScaling
endpoint- The endpoint (ex: "application-autoscaling.us-east-1.amazonaws.com") or a full URL, including the protocol (ex: "") of the region specific AWS endpoint this client will communicate with.
public void setRegion(Region region)
AWSApplicationAutoScalingApplicationAutoScaling DeleteScalingPolicyResult deleteScalingPolicy(DeleteScalingPolicyRequest request)
Deletes the specified scaling policy for an Application Auto Scaling scalable target.
Deleting a step scaling policy deletes the underlying alarm action, but does not delete the CloudWatch alarm associated with the scaling policy, even if it no longer has an associated action.
For more information, see Delete a Step Scaling Policy and Delete a Target Tracking Scaling Policy in the Application Auto Scaling User Guide.
deleteScalingPolicyin interface
AWSApplicationAutoScaling
public DeleteScheduledActionResult deleteScheduledAction(DeleteScheduledActionRequest request)
Deletes the specified scheduled action for an Application Auto Scaling scalable target.
For more information, see Delete a Scheduled Action in the Application Auto Scaling User Guide.
deleteScheduledActionin interface
AWSApplicationAutoScaling
public DeregisterScalableTargetResult deregisterScalableTarget(DeregisterScalableTargetRequest request)
Deregisters an Application Auto Scaling scalable target when you have finished using it. To see which resources have been registered, use DescribeScalableTargets.
Deregistering a scalable target deletes the scaling policies and the scheduled actions that are associated with it.
deregisterScalableTargetin interface
AWSApplicationAutoScaling
public DescribeScalableTargetsResult describeScalableTargets(DescribeScalableTargetsRequest request)
Gets information about the scalable targets in the specified namespace.
You can filter the results using
ResourceIds and
ScalableDimension.
describeScalableTargetsin interface
AWSApplicationAutoScaling
public DescribeScalingActivitiesResult describeScalingActivities(DescribeScalingActivitiesRequest request)
Provides descriptive information about the scaling activities in the specified namespace from the previous six weeks.
You can filter the results using
ResourceId and
ScalableDimension.
describeScalingActivitiesin interface
AWSApplicationAutoScaling
public DescribeScalingPoliciesResult describeScalingPolicies(DescribeScalingPoliciesRequest request)
Describes the Application Auto Scaling scaling policies for the specified service namespace.
You can filter the results using
ResourceId,
ScalableDimension, and
PolicyNames.
For more information, see Target Tracking Scaling Policies and Step Scaling Policies in the Application Auto Scaling User Guide.
describeScalingPoliciesin interface
AWSApplicationAutoScaling
public DescribeScheduledActionsResult describeScheduledActions(DescribeScheduledActionsRequest request)
Describes the Application Auto Scaling scheduled actions for the specified service namespace.
You can filter the results using the
ResourceId,
ScalableDimension, and
ScheduledActionNames parameters.
For more information, see Scheduled Scaling in the Application Auto Scaling User Guide.
describeScheduledActionsin interface
AWSApplicationAutoScaling
public PutScalingPolicyResult putScalingPolicy(PutScalingPolicyRequest request)
Creates or updates a scaling policy for an Application Auto Scaling scalable target.
Each scalable target is identified by a service namespace, resource ID, and scalable dimension. A scaling policy applies to the scalable target identified by those three attributes. You cannot create a scaling policy until you have registered the resource as a scalable target.
Multiple scaling policies can be in force at the same time for the same scalable target. You can have one or more target tracking scaling policies, one or more step scaling policies, or both. However, there is a chance that multiple policies could conflict, instructing the scalable target to scale out or in at the same time. Application Auto Scaling gives precedence to the policy that provides the largest capacity for both scale out and scale in. For example, if one policy increases capacity by 3, another policy increases capacity by 200 percent, and the current capacity is 10, Application Auto Scaling uses the policy with the highest calculated capacity (200% of 10 = 20) and scales out to 30. scalable target to scale out again.
For more information, see Target Tracking Scaling Policies and Step Scaling Policies in the Application Auto Scaling User Guide.
If a scalable target is deregistered, the scalable target is no longer available to execute scaling policies. Any scaling policies that were specified for the scalable target are deleted.
putScalingPolicyin interface
AWSApplicationAutoScaling
public PutScheduledActionResult putScheduledAction(PutScheduledActionRequest request)
Creates or updates a scheduled action for an Application Auto Scaling scalable target.
Each scalable target is identified by a service namespace, resource ID, and scalable dimension. A scheduled action applies to the scalable target identified by those three attributes. You cannot create a scheduled action until you have registered the resource as a scalable target.
When start and end times are specified with a recurring schedule using a cron expression or rates, they form the boundaries of when the recurring action starts and stops.
To update a scheduled action, specify the parameters that you want to change. If you don't specify start and end times, the old values are deleted.
For more information, see Scheduled Scaling in the Application Auto Scaling User Guide.
If a scalable target is deregistered, the scalable target is no longer available to run scheduled actions. Any scheduled actions that were specified for the scalable target are deleted.
putScheduledActionin interface
AWSApplicationAutoScaling
public RegisterScalableTargetResult registerScalableTarget(RegisterScalableTargetRequest request)
Registers or updates a scalable target.
A scalable target is a resource that Application Auto Scaling can scale out and scale in. Scalable targets are uniquely identified by the combination of resource ID, scalable dimension, and namespace.
When you register a new scalable target, you must specify values for minimum and maximum capacity. Application Auto Scaling scaling policies will not scale capacity to values that are outside of this range.
After you register a scalable target, you do not need to register it again to use other Application Auto Scaling operations. To see which resources have been registered, use DescribeScalableTargets. You can also view the scaling policies for a service namespace by using DescribeScalableTargets. If you no longer need a scalable target, you can deregister it by using DeregisterScalableTarget.
To update a scalable target, specify the parameters that you want to change. Include the parameters that identify the scalable target: resource ID, scalable dimension, and namespace. Any parameters that you don't specify are not changed by this update request.
registerScalableTargetin interface
AWSApplicationAutoScaling
public void shutdown()
shutdownin interface
AWSApplicationAutoScalingApplicationAutoScaling
request- The originally executed request. | https://docs.amazonaws.cn/AWSJavaSDK/latest/javadoc/com/amazonaws/services/applicationautoscaling/AbstractAWSApplicationAutoScaling.html | 2020-08-03T12:07:38 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.amazonaws.cn |
If:
Below is a sample of a TNS connection string:
Add the values highlighted in bold in the TNS connection string above into the uptime.conf file as follows: | http://docs.uptimesoftware.com/plugins/viewsource/viewpagesrc.action?pageId=4555000 | 2020-08-03T12:26:41 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.uptimesoftware.com |
List of Forums
The main content area of the forum site home page contains a list of all forums in the site. When a new forum is created in the site, the list is automatically updated. To view a forum, click on its name in the list.
The following information is displayed for each forum in the list:
- The name of the forum. Click the name to read the forum.
- A link to subscribe to the RSS feed for the forum.
- The number of topics in the forum.
- The number of posts in all topics.
Forum Sites List
When a new forum site is created in Community Central, a link to the new site it is automatically added to the Forum Sites list. Click a forum site name to navigate directly to the site page and begin reading posts. For information about customizing this list, see Managing the Blog and Forum Site Lists.
Forum Site List
TOP
Most Replies List
The Most Replies list in the forum site page lists the topics from all forums in the site the most reply posts. Click a topic name to read it.
Most Replies
Note: The Forums Home page also include a Most Replies list. In the Forum Home page, the list includes topics from all forum sites in the community.
TOP
Most Viewed List
The Most Viewed list in the forum site page lists the topics from all forums in the site with the most views. Click a topic name to read it.
Most Viewed
Note: The Forums Home page also includes a Most Viewed list. In the Forum Home page, the list includes topics from all forum sites in the community.
TOP
Most Active Users List
The Most Active Users list in the forum site page contains users who have earned the most points for forum-related community activities. Points are assigned according to the scoring rules defined by Administrators in the Community Central Control Panel. The Most Active Users list is filtered for the current forum, so a user may be listed as “Most Active” in one forum but not in another.
TOP
Top Experts List
The Top Experts list contains users with the most verified answers. Verified Answers are posts that forum users or Moderators have marked as the answer to a forum topic. Top Experts can be viewed are those users who consistently provide the most valuable content for Community Central forums. The Top Experts list is filtered for the current forum, so a user may be a “Top Expert” in one forum but not in another.
TOP
Forum Links List
The Forum Links list contains links to other pages and sites. This list is configurable by Community Central Moderators or Administrators. To add, edit, or remove links from the Forum Links list, go to Site Actions > View All Site Content and click the Forum Links list. Edit items in the list like you would any SharePoint Links list.
TOP
Forum Statistics
Forum statistics are constantly updated to help you track forum participation. Statistics displayed on the forum site home page include:
The total number of forum topics and the number of topics added today.
The total number of forum posts and the number of posts added today.
The number of topics in the forum site that have replies.
The number of topics in the forum site without replies.
Forum Statistics
Other community statistics are displayed on the Community Central Home, Forums Home, and Blogs Home pages and on individual blog sites. See About Community Statistics for more information.
TOP | https://docs.bamboosolutions.com/document/list_of_forum_site_lists/ | 2020-08-03T12:42:53 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.bamboosolutions.com |
Customizing the Configuration¶
Each Bareos component (Director, Client, Storage, Console) has its own configuration containing a set of resource definitions. These resources are very similar from one service to another, but may contain different directives (records) depending on the component. For example, in the Director configuration, the Director Resource defines the name of the Director, a number of global Director parameters and his password. In the File daemon configuration, the Director Resource specifies which Directors are permitted to use the File daemon.
If you install all Bareos daemons (Director, Storage and File Daemon) onto one system, the Bareos package tries its best to generate a working configuration as a basis for your individual configuration.
The details of each resource and the directives permitted therein are described in the following chapters.
The following configuration files must be present:
- Director Configuration – to define the resources necessary for the Director. You define all the Clients and Storage daemons that you use in this configuration file.
- Storage Daemon Configuration – to define the resources to be used by each Storage daemon. Normally, you will have a single Storage daemon that controls your disk storage or tape drives. However, if you have tape drives on several machines, you will have at least one Storage daemon per machine.
- Client/File Daemon Configuration – to define the resources for each client to be backed up. That is, you will have a separate Client resource file on each machine that runs a File daemon.
- Console Configuration – to define the resources for the Console program (user interface to the Director). It defines which Directors are available so that you may interact with them.
Configuration Path Layout¶
When a Bareos component starts, it reads its configuration. In Bareos < 16.2.2 only configuration files (which optionally can include other files) are supported. Since Bareos Version >= 16.2.2 also configuration subdirectories are supported.
Naming¶
In this section, the following naming is used:
CONFIGDIRrefers to the base configuration directory. Bareos Linux packages use
/etc/bareos/.
- A component is one of the following Bareos programs:
- bareos-dir
- bareos-sd
- bareos-fd
- bareos-traymonitor
- bconsole
- bat (only legacy config file: bat.conf)
- Bareos tools, like Volume Utility Commands and others.
COMPONENTrefers to one of the listed components.
What configuration will be used?¶
When starting a Bareos component, it will look for its configuration. Bareos components allow the configuration file/directory to be specified as a command line parameter
-c PATH.
- configuration path parameter is not given (default)
CONFIGDIR/COMPONENT.confis a file
- the configuration is read from the file
CONFIGDIR/COMPONENT.conf
CONFIGDIR/COMPONENT.d/is a directory
- the configuration is read from
CONFIGDIR/COMPONENT.d/*/*.conf(subdirectory configuration)
- configuration path parameter is given (
-c PATH)
PATHis a file
- the configuration is read from the file specified in
PATH
PATHis a directory
- the configuration is read from
PATH/COMPONENT.d/*/*.conf(subdirectory configuration)
As the
CONFIGDIR differs between platforms or is overwritten by the path parameter, the documentation will often refer to the configuration without the leading path (e.g.
COMPONENT.d/*/*.conf instead of
CONFIGDIR/COMPONENT.d/*/*.conf).
When subdirectory configuration is used, all files matching
PATH/COMPONENT.d/*/*.conf will be read, see Subdirectory Configuration Scheme.
Subdirectory Configuration Scheme¶
If the subdirectory configuration is used, instead of a single configuration file, all files matching
COMPONENT.d/*/*.conf are read as a configuration, see What configuration will be used?.
Reason for the Subdirectory Configuration Scheme¶
In Bareos < 16.2.2, Bareos uses one configuration file per component.
Most larger Bareos environments split their configuration into separate files, making it easier to manage the configuration.
Also some extra packages (bareos-webui, plugins, …) require a configuration, which must be included into the Bareos Director or Bareos Storage Daemon configuration. The subdirectory approach makes it easier to add or modify the configuration resources of different Bareos packages.
The Bareos configure command requires a configuration directory structure, as provided by the subdirectory approach.
From Bareos Version >= 16.2.4 on, new installations will use configuration subdirectories by default.
Resource file conventions¶
- Each configuration resource has to use its own configuration file.
- The path of a resource file is
COMPONENT.d/RESOURCETYPE/RESOURCENAME.conf.
- The name of the configuration file is identical with the resource name:
- e.g.
bareos-dir.d/director/bareos-dir.conf
bareos-dir.d/pool/Full.conf
- Exceptions:
- The resource file
bareos-fd.d/client/myself.confalways has the file name
myself.conf, while the name is normally set to the hostname of the system.
- Example resource files:
- Additional packages can contain configuration files that are automatically included. However, most additional configuration resources require configuration. When a resource file requires configuration, it has to be included as an example file:
CONFIGDIR/COMPONENT.d/RESOURCE/NAME.conf.example
- For example, the Bareos WebUI entails one config resource and one config resource example for the Bareos Director:
CONFIGDIR/bareos-director.d/profile/webui-admin.conf
CONFIGDIR/bareos-director.d/console/admin.conf.example
- extbfsection-deleteConfigurationResourceFilesDisable/remove configuration resource files:
- Normally you should not remove resources that are already in use (jobs, clients, …). Instead you should disable them by adding the directive
Enable = no. Otherwise you take the risk that orphaned entries are kept in the Bareos catalog. However, if a resource has not been used or all references have been cleared from the database, they can also be removed from the configuration.
Warning
- If you want to remove a configuration resource that is part of a Bareos package,
- replace the resource configuration file by an empty file. This prevents the resource from reappearing in the course of a package update.
Using Subdirectories Configuration Scheme¶
New installation¶
- The Subdirectories Configuration Scheme is used by default from Bareos Version >= 16.2.4 onwards.
- They will be usable immediately after installing a Bareos component.
- If additional packages entail example configuration files (
NAME.conf.example), copy them to
NAME.conf, modify it as required and reload or restart the component.
Updates from Bareos < 16.2.4¶
When updating to a Bareos version containing the Subdirectories Configuration, the existing configuration will not be touched and is still the default configuration.
Warning
- Problems can occur if you have implemented an own wildcard mechanism to load your configuration
from the same subdirectories as used by the new packages (
CONFIGDIR/COMPONENT.d/*/*.conf). In this case, newly installed configuration resource files can alter your current configuration by adding resources.
Best create a copy of your configuration directory before updating Bareos and modify your existing configuration file to use that other directory.
As long as the old configuration file (
CONFIGDIR/COMPONENT.conf) exists, it will be used.
The correct way of migrating to the new configuration scheme would be to split the configuration file into resources, store them in the resource directories and then remove the original configuration file.
For migrating the Bareos Director configuration, the script bareos-migrate-config.sh exists. Being called, it connects via bconsole to a running Bareos Director and creates subdirectories with the resource configuration files.
# prepare temporary directory mkdir /tmp/bareos-dir.d cd /tmp/bareos-dir.d # download migration script wget # execute the script bash bareos-migrate-config.sh # backup old configuration mv /etc/bareos/bareos-dir.conf /etc/bareos/bareos-dir.conf.bak mv /etc/bareos/bareos-dir.d /etc/bareos/bareos-dir.d.bak # make sure, that all packaged configuration resources exists, # otherwise they will be added when updating Bareos. for i in `find /etc/bareos/bareos-dir.d.bak/ -name *.conf -type f -printf "%P\n"`; do touch "$i"; done # install newly generated configuration cp -a /tmp/bareos-dir.d /etc/bareos/
Restart the Bareos Director and verify your configuration. Also make sure, that all resource configuration files coming from Bareos packages exists, in doubt as empty files, see remove configuration resource files.
Another way, without splitting the configuration into resource files is:
-
Resources defined in both, the new configuration directory scheme and the old configuration file, must be removed from one of the places, best from the old configuration file, after verifying that the settings are identical with the new settings.
Configuration File Format¶
A configuration file consists of one or more resources (see Resource).
Bareos programs can work with
- all resources defined in one configuration file
- configuration files that include other configuration files (see Including other Configuration Files)
- Subdirectory Configuration Scheme, where each configuration file contains exactly one resource definition
Character Sets¶
Bareos Bareos.
To ensure that Bareos configuration files can be correctly read including foreign characters, the LANG environment variable must end in .UTF-8. A full example is en_US.UTF-8. The exact syntax may vary a bit from OS to OS, so that the way you have to define it will differ from the example. On most newer Win32 machines you can use notepad to edit the conf files, then choose output encoding UTF-8.
Bareos assumes that all filenames are in UTF-8 format on Linux and Unix machines. On Win32 they are in Unicode (UTF-16) and will hence be automatically converted to UTF-8 format.
Semicolons¶.
Including other Configuration Files¶
If you wish to break your configuration file into smaller pieces, you can do so by including other files using the syntax @filename where
filename is the full path and filename of another file. The @filename specification can be given anywhere a primitive token would appear.
Since Bareos Version >= 16.2.1 wildcards in pathes are supported:
By using @|command it is also possible to include the output of a script as a configuration:
or if a parameter should be used:
@|"sh -c '/etc/bareos/generate_client_configuration_to_stdout.sh clientname=client1.example.com'"
The scripts are called at the start of the daemon. You should use this with care.
Resource¶
A resource is defined as the resource type, followed by an open brace (
{), a number of Resource Directive, and ended by a closing brace (
})
Each resource definition MUST contain a Name directive. It can contain a Description directive. The Name directive is used to uniquely identify the resource. The Description directive can be used during the display of the Resource to provide easier human recognition. For example:
Director { Name = "bareos-dir" Description = "Main Bareos Director" Query File = "/usr/lib/bareos/scripts/query.sql" }
defines the Director resource with the name bareos-dir and a query file
/usr/lib/bareos/scripts/query.sql.
When naming resources, for some resource types naming conventions should be applied:
- Client
- names should be postfixed with -fd
- Storage
- names should be postfixed with -sd
- Director
- names should be postfixed with -dir
These conventions helps a lot when reading log messages.
Resource Directive¶
Each directive contained within the resource (within the curly braces
{}) is composed of a Resource Directive Keyword followed by an equal sign (=) followed by a Resource Directive Value. The keywords must be one of the known Bareos resource record keywords.
Resource Directive Keyword¶
A resource directive keyword is the part before the equal sign (
=) in a Resource Directive. The following sections will list all available directives by Bareos component resources.
Resource Directive Value¶
A resource directive value is the part after the equal sign (
=) in a Resource Directive.
Spaces¶
Spaces after the equal sign and before the first character of the value are ignored. Other spaces within a value may be significant (not ignored) and may require quoting.
Quotes¶
In general, if you want spaces in a name to the right of the first equal sign (=), you must enclose that name within double quotes. Otherwise quotes are not generally necessary because once defined, quoted strings and unquoted strings are all equal.
Within a quoted string, any character following a backslash () is taken as itself (handy for inserting backslashes and double quotes (“)).
Warning
If the configure directive is used to define a number, the number is never to be put between surrounding quotes. This is even true if the number is specified together with its unit, like 365 days.
Numbers¶
Numbers are not to be quoted, see Quotes. Also do not prepend numbers by zeros (0), as these are not parsed in the expected manner (write 1 instead of 01).
Data Types¶
When parsing the resource directives, Bareos classifies the data according to the types listed below.
- acl
This directive defines what is permitted to be accessed. It does this by using a list of regular expressions, separated by commas (,) or using multiple directives. If ! is prepended, the expression is negated. The special keyword *all* allows unrestricted access.
Depending on the type of the ACL, the regular expressions can be either resource names, paths or console commands.
Since Bareos Version >= 16.2.4 regular expression are handled more strictly. Before also substring matches has been accepted.
For clarification, we demonstrate the usage of ACLs by some examples for
Command ACL (Dir->Console):
Command ACL = !sqlquery, !u.*, *all*
Same:
Command ACL = !sqlquery, !u.* Command ACL = *all*
Command ACL = !sqlquery Command ACL = !u.* Comamnd ACL = !set(ip|debug) Comamnd ACL = *all*
- auth-type
Specifies the authentication type that must be supplied when connecting to a backup protocol that uses a specific authentication type. Currently only used for NDMP Resource.
The following values are allowed:
- None
- Use no password
- Clear
- Use clear text password
- MD5
- Use MD5 hashing
- integer
A 32 bit integer value. It may be positive or negative.
Don’t use quotes around the number, see Quotes.
- long integer
A 64 bit integer value. Typically these are values such as bytes that can exceed 4 billion and thus require a 64 bit value.
Don’t use quotes around the number, see Quotes.
- job protocol
The protocol to run a the job. Following protocols are available:
- Native
- Native Bareos job protocol.
- NDMP
- Deprecated. Alias for NDMP_BAREOS.
- NDMP_BAREOS
- Since Bareos Version >= 17.2.3. See NDMP_BAREOS.
- NDMP_NATIVE
- Since Bareos Version >= 17.2.3. See NDMP_NATIVE.
- name
A keyword or name consisting of alphanumeric characters, including the hyphen, underscore, and dollar characters. The first character of a name must be a letter. A name has a maximum length currently set to 127 bytes.
Please note that Bareos resource names as well as certain other names (e.g. Volume names) must contain only letters (including ISO accented letters), numbers, and a few special characters (space, underscore, …). All other characters and punctuation are invalid.
This is a Bareos password and it is stored internally in MD5 hashed format.
- path
A path is either a quoted or non-quoted string. A path will be passed to your standard shell for expansion when it is scanned. Thus constructs such as $HOME are interpreted to be their correct values. The path can either reference to a file or a directory.
- speed
The speed parameter can be specified as k/s, kb/s, m/s or mb/s.
Don’t use quotes around the parameter, see Quotes.
- string
A quoted string containing virtually any character including spaces, or a non-quoted string. A string may be of any length. Strings are typically values that correspond to filenames, directories, or system command names. A backslash () turns the next character into itself, so to include a double quote in a string, you precede the double quote with a backslash. Likewise to include a backslash.
- string-list
Multiple strings, specified in multiple directives, or in a single directive, separated by commas (,).
- strname
is similar to a Name, except that the name may be quoted and can thus contain additional characters including spaces.
- net-address
is either a domain name or an IP address specified as a dotted quadruple in string or quoted string format. This directive only permits a single address to be specified. The NetPort must be specificly separated. If multiple net-addresses are needed, please assess if it is also possible to specify NetAddresses (plural).
- net-addresses
Specify a set of net-addresses and net-ports. Probably the simplest way to explain this is to show an example:
Addresses = { = server.example.com } }
where ip, ip4, ip6, addr, and port are all keywords. Note, that the address can be specified as either a dotted quadruple, or in IPv6 colon notation, or as a symbolic name (only in the ip specification). Also, the port can be specified as a number or as the mnemonic value from the
/etc/servicesfile. If a port is not specified, the default one will be used. If an ip section is specified, the resolution can be made either by IPv4 or IPv6. If ip4 is specified, then only IPv4 resolutions will be permitted, and likewise with ip6.
- net-port
Specify a network port (a positive integer).
Don’t use quotes around the parameter, see Quotes.
- resource
A resource defines a relation to the name of another resource.
- size
A size specified as bytes. Typically, this is a floating point scientific input format followed by an optional modifier. The floating point input is stored as a 64 bit integer value. If a modifier is present, it must immediately follow the value with no intervening spaces. The following modifiers are permitted:
- k
- 1,024 (kilobytes)
- kb
- 1,000 (kilobytes)
- m
- 1,048,576 (megabytes)
- mb
- 1,000,000 (megabytes)
- g
- 1,073,741,824 (gigabytes)
- gb
- 1,000,000,000 (gigabytes)
Don’t use quotes around the parameter, see Quotes.
- time
A time or duration specified in seconds. The time is stored internally as a 64 bit integer value, but it is specified in two parts: a number part and a modifier part. The number can be an integer or a floating point number. If it is entered in floating point notation, it will be rounded to the nearest integer. The modifier is mandatory and follows the number part, either with or without intervening spaces. The following modifiers are permitted:
- seconds
-
- minutes
- (60 seconds)
- hours
- (3600 seconds)
- days
- (3600*24 seconds)
- weeks
- (3600*24*7 seconds)
- months
- (3600*24*30 seconds)
- quarters
- (3600*24*91 seconds)
- years
- (3600*24*365 seconds).
Don’t use quotes around the parameter, see Quotes.
- audit-command-list
Specifies the commands that should be logged on execution (audited). E.g.
Audit Events = label Audit Events = restore
Based on the type string-list.
- yes|no
Either a yes or a no (or true or false).
Variable Expansion¶
Depending on the directive, Bareos will expand to the following variables:
Variable Expansion on Volume Labels¶
When labeling a new volume (see
Label Format (Dir->Pool)), following Bareos internal variables can be used:
Additional, normal environment variables can be used, e.g. $HOME oder $HOSTNAME.
With the exception of Job specific variables, you can trigger the variable expansion by using the var command.
Variable Expansion in Autochanger Commands¶
At the configuration of autochanger commands the following variables can be used:
Variable Expansion in Mount Commands¶
At the configuration of mount commands the following variables can be used:
Variable Expansion on RunScripts¶
Variable Expansion on RunScripts is described at
Run Script (Dir->Job).
When reading a configuration, blank lines are ignored and everything after a hash sign (#) until the end of the line is taken to be a comment. | https://docs.bareos.org/master/Configuration/CustomizingTheConfiguration.html | 2020-08-03T12:53:46 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.bareos.org |
.
Note
Skype For Business will slowly be replaced by Microsoft Teams as the primary communication method in Microsoft 365 and Office 365. See A new vision for intelligent communications in Office 365 for more information.
To get the latest updates and most up-to-date information on supported devices, see the Microsoft Teams devices for intelligent communications.
Supported phones
Microsoft is partnering and working closely with Polycom, Yealink, and AudioCodes to develop and certify a wide variety of devices through the Partner IP Phone Program (PIP) for the Phone System. Poly Documentation Library.
For more details on Yealink phones, see Skype for Business IP Phones.
For more details on AudioCodes phones, see Skype for Business IP Phones.
Note
Lync Phone Edition is supported with Skype for Business Online, but not with Microsoft Teams. Mainstream support for the LPE platform ended by April/10/2014, with extended support until April/11/2023 to align with the product support lifecycle of Lync Server 2013. See Microsoft Product Lifecycle for details on the LPE lifecycle. LPE CAP models aren't supported with Skype for Business Online.
Later this year, Office 365 will not support any version of TLS older than 1.2. Since the underlying operating system of LPE does not support TLS 1.2, LPE will not be supported to connect to Office 365 anymore. See Preparing for the mandatory use of TLS 1.2 in Office 365 for more information.
Supported firmware
This is the minimum software release required for supported phones to work with Phone System:
For more details on current certified firmware versions, see Skype for Business IP Phones.
Country and region availability for Audio Conferencing and Calling Plans | https://docs.microsoft.com/en-us/skypeforbusiness/what-is-phone-system-in-office-365/getting-phones-for-skype-for-business-online/getting-phones-for-skype-for-business-online | 2020-08-03T13:35:20 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Running a Local API
The
stackery local start-api command allows you to run a local HTTP server. You can invoke functions triggered by your API, and can start any function in your stack.
Only follow this guide once you have set up your local serverless development environment using our local development guide.
Running a Local APIRunning a Local API
Set-upSet-up
To use the
stackery local start-api command, your stack needs to include an API resource and be deployed in order to interact with additional cloud resources. In this example, we will be using the
local-demo stack from the local serverless development doc mentioned above:
- In your terminal or shell,
cdto the root directory of your
local-demostack
- Run the following command to start your local api:
stackery local start-api --stack-name local-demo --env-name test --aws-profile <your aws profile name>
Note that unlike the
stackery local invoke command, we do not need to designate the function ID, even if there are multiple functions in the stack. There is also no need for the
--watch flag, as the HTTP server keeps running until you stop it.
You should see a message like this that gives you your HTTP address:
- To stop running the local server, enter
CTRL + C
OptionsOptions
PortPort
The
stackery local api command uses port 3000 by default. You can change the port by appending a
--port flag followed by your designated port number.
HostHost
The default host is
127.0.0.1. If you would like to change the host, append a
--host flag to the command.
Template pathTemplate path
If your
template.yaml does not reside in the repo's root directory, you can designate its location relative to the root using the
--template-path flag.
DebuggingDebugging
If you are using a debugger, you can pass debugging values as flags and they are passed through directly to the SAM CLI. The following flags are supported:
--debug
--debug-args
--debug-port
--debugger-path
Additional resourcesAdditional resources
Read the AWS docs on sam local start-api for more information about local api configuration. | https://docs.stackery.io/docs/3.12.2/workflow/local-api/ | 2020-08-03T11:35:51 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['/docs/assets/local/dashboard-deploy.png', 'Deployed stack'],
dtype=object)
array(['/docs/assets/local/api-server.png', 'local api server'],
dtype=object) ] | docs.stackery.io |
International Phone Numbers
When formatting international phone numbers, SecureAuth IdP requires the string to begin with the country code followed by a space – the country code format begins with a plus (+) symbol followed by the country code
For example, the SecureAuth sales phone number would be entered as +1 949 777 6959
The remainder of the numbers in the dial string can use any combination of the following separators: ' space', 'dash' or 'period'
Or the use of separators can be eliminated from the numeric string – e.g. +1 9497776959
National Phone Numbers
A national phone number can use any combination of the following separators: 'space', 'dash' or 'period'
Or the use of separators can be eliminated from the numeric string – e.g. 9497776959
An email address should follow the format local-part@domain in which
Recommended Phone Number and Email Address Formats
SecureAuth recommends formatting phone numbers and email addresses per the International Telecommunication Union E.123: Notation for National and International Telephone numbers, e-mail addresses and web addresses specification
This internationally accepted standard will help ensure interoperability with telephone and email systems worldwide
Here are examples of a national number, international number and email address in the E.123 format:
ITU E.164 Support
SecureAuth IdP supports the ITU E.164 Format for international phone numbers | https://docs.secureauth.com/display/KBA/Phone+Number+and+Email+Formatting+Best+Practices | 2020-08-03T12:28:45 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.secureauth.com |
Design Time
The Smart Tag of RadSocialShare lets you easily select the available buttons, change the skin for your control or quickly get help. You can display the Smart Tag by right clicking on a RadSocialShare control and choosing "Show Smart Tag", or by clicking the small rightward-pointing arrow located in the upper right corner of the control.
Configurator For The Button Lists
The built-in visual designer allows you to easily add the buttons you wish to the RadSocialShare control and it will create the needed markup for you.
The left column lets you choose which of the Button Collections you will modify. By default the MainButtons collection is selected.
In the middle pane you see a list with the already added buttons and the name corresponds to the type of the button - the SocialNetType property for the Styled Buttons and the name for the Standard Buttons.
You can add a Styled Button by pressing the first button, the next three are respectively the Standard Buttons for Facebook, Twitter and GooglePlusOne. The fifth button adds the RadCompactButton and the sixth removes the selected RadSocialButton.
You can choose which network the button connects to by directly typing the Standard Buttons's name (or SocialNetType property for the Styled Button) in the list, or you can select this from the dropdown in the right pane where you can choose all other options.
If you type in a name that does not exist as a possible value for these properties the input will not be taken and the button will be reset to its previous state. Note that the names are case-sensitive. By default the GoogleBookmarks Styled Button is added as it is the first one in the alphabetical order.
If the button type is changed via the properties pane this change is automatically reflected in its name in the list and vice versa.
You can reorder the buttons in the collection by using the two arrows on the right of the list - each click moves the selected button one position up or down the list.
All other properties can be controlled via the right pane which is the standard Properties pane of the Visual Studio. By default only the SocialNetType and the ToolTip are set for each Styled Button and are rendered in the markup. For the Facebook Standard Button only the ButtonType property is selected by default and the Twitter and GooglePlusOne buttons do not need any additional properties initially. You can leave this as-is, or modify the properties as needed.
When working with the CompactButtons collection you can only choose from the Styled Button as they are the only ones that are acceptable for it. Therefore if a name for a Standard Button is entered it will not be taken by the Configurator. built-in skins that you can apply, along with an example of what the control will look like for each skin. Assign a skin by selecting from the list.
Learning Center
Links navigate you directly to examples, help, and code library.
You can navigate directly to the Telerik Support Center. | https://docs.telerik.com/devtools/aspnet-ajax/controls/socialshare/design-time | 2020-08-03T12:18:53 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['images/socialshare-smart-tag.png', 'socialshare-smart-tag'],
dtype=object)
array(['images/radsocialshare-smart-tag-designer.png',
'radsocialshare-smart-tag-designer'], dtype=object)] | docs.telerik.com |
The Editor Page provides access to the property groups and properties used to alter how the scene is drawn in the viewport.
When the drawstyle is based on a specific technology, and the active Render Engine is based on the same technology, the DrawStyle can be considered a “preview” of the final render for that engine; this is true of the opengl-based options as well as the NVIDIA Iray option. When the chosen DrawStyle is not based on the same underlying technology as the active render engine—for example choosing the nvidia_iray DrawStyle and the 3Delight render engine—the style drawn in the viewport cannot be considered a “preview” of the final render. Instead the viewport will be drawn in the style of the NVIDIA Iray engine and the final image will be rendered in the style of the 3Delight engine.
The DrawStyle Selector is displayed at the top of the Draw Settings pane and causes the DrawStyle of the selected Viewport to change. Clicking the selector and choosing another DrawStyle causes the viewport to be drawn differently. The available DrawStyles include the following:
Like the Parameters (WIP) pane, there are two primary views displayed within the Editor page. Selecting a property group in the left-hand Property Groups View determines which properties are displayed in the right-hand Properties View. | http://docs.daz3d.com/doku.php/public/software/dazstudio/4/referenceguide/interface/panes/draw_settings/editor_page/start | 2020-08-03T13:20:26 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.daz3d.com |
Amazon Redshift snapshots
Topics
- Overview
- Automated snapshots
- Automated snapshot schedules
- Snapshot schedule format
- Manual snapshots
- Managing snapshot storage
- Excluding tables from snapshots
- Copying snapshots to another AWS Region
- Restoring a cluster from a snapshot
- Restoring a table from a snapshot
- Sharing snapshots
- Managing snapshots using the console
- Managing snapshots using the AWS SDK for Java
- Managing snapshots using the Amazon Redshift CLI and API
Overview
Snapshots are point-in-time backups of a cluster. There are two types of snapshots: automated and manual. Amazon Redshift stores these snapshots internally in Amazon S3 by using an encrypted Secure Sockets Layer (SSL) connection.
Amazon Redshift automatically takes incremental snapshots that track changes to the cluster since the previous automated snapshot. Automated snapshots retain all of the data required to restore a cluster from a snapshot. You can create a snapshot schedule to control when automated snapshots are taken, or you can take a manual snapshot any time..
When you launch a cluster, you can set the retention period for automated and manual snapshots. You can change the default retention period for automated and manual snapshots by modifying the cluster. You can change the retention period for a manual snapshot when you create the snapshot or by modifying.
To ensure that your backups are always available to your cluster, Amazon Redshift
stores snapshots
in an internally managed Amazon S3 bucket that is managed by Amazon Redshift.
To manage storage charges, evaluate how many
days you need to keep automated snapshots and configure their retention period accordingly.
Delete any manual snapshots that you no longer need.
For more information about the cost of backup storage,
see the Amazon Redshift pricing.
Snapshot schedule format
On the Amazon Redshift console, you can create a snapshot schedule. Then, you can attach a schedule to a cluster to trigger the creation of a system snapshot. A schedule can be attached to multiple clusters, and you can create multiple cron definitions in a schedule to trigger a snapshot.
You can define a schedule for your snapshots using a cron syntax. The definition
of these schedules uses a modified Unix-like
cron
Amazon Redshift modified cron expressions have 3 required fields, which are separated by white space.
Syntax
cron(
Minutes
Hours
Day-of-week)
Wildcards
The , (comma) wildcard includes additional values. In the
Day-of-weekfield,
MON,WED,FRIwould include Monday, Wednesday, and Friday. Total values are limited to 24 per field.
The - (dash) wildcard specifies ranges. In the
Hourfield, 1–15 would include hours 1 through 15 of the specified day.
The * (asterisk) wildcard includes all values in the field. In the
Hoursfield, * would include every hour.
The / (forward slash) wildcard specifies increments. In the
Hoursfield, you could enter
1/10to specify every 10th hour, starting from the first hour of the day (for example, the 01:00, 11:00, and 21:00).
Limits
Snapshot schedules that lead to backup frequencies less than 1 hour or greater than 24 hours are not supported. If you have overlapping schedules that result in scheduling snapshots within a 1 hour window, a validation error results.
When creating a schedule, you can use the following sample cron strings.
For example to run on a schedule on an every 2 hour increment starting at 15:15 each day. This resolves to [15:15, 17:15, 19:15, 21:15, 23:15] , specify:
cron(15 15/2 *)
You can create multiple cron schedule definitions within as schedule. For example the following AWS CLI command contains two cron schedules in one schedule.
create-snapshot-schedule --schedule-identifier "my-test" --schedule-definition "cron(0 17 SAT,SUN)" "cron(0 9,17 MON-FRI)"
Manual snapshots
You can take a manual snapshot any time. By default, manual snapshots are retained indefinitely, even after you delete your cluster. You can specify the retention period when you create a manual snapshot, or you can change the retention period by modifying the snapshot. For more information about changing the retention period, see Changing the manual snapshot retention period.
If a snapshot is deleted, you can't AWS Region. The default quota is listed at Quotas and limits in Amazon Redshift.
Managing snapshot storage
Because snapshots accrue storage charges, it's important that you delete them when you no longer need them. Amazon Redshift deletes automated and manual snapshots at the end of their respective snapshot retention periods. You can also delete manual snapshots using the AWS Management Console or with the batch-delete-cluster-snapshots CLI command.
You can change the retention period for a manual snapshot by modifying the manual snapshot settings.
You can get information about how much storage your snapshots are consuming using the Amazon Redshift Console or using the describe-storage CLI command. AWS Region
You can configure Amazon Redshift to automatically copy snapshots (automated or manual) for a cluster to another AWS Region. When a snapshot is created in the cluster's primary AWS Region, it's copied to a secondary AWS Region. The two AWS Regions are known respectively as the source AWS Region and destination AWS Region. If you store a copy of your snapshots in another AWS Region, you can restore your cluster from recent data if anything affects the primary AWS Region. You can configure your cluster to copy snapshots to only one destination AWS Region at a time. For a list of Amazon Redshift Regions, see Regions and endpoints in the Amazon Web Services General Reference.
When you enable Amazon Redshift to automatically copy snapshots to another AWS Region, you specify the destination AWS Region to copy the snapshots to. For automated snapshots, you can also specify the retention period to keep them in the destination AWS Region. After an automated snapshot is copied to the destination AWS Region and it reaches the retention time period there, it's deleted from the destination AWS Region. Doing this keeps your snapshot usage low. To keep the automated snapshots for a shorter or longer time in the destination AWS Region, change this retention period.
The retention period that you set for automated snapshots that are copied to the destination AWS Region is separate from the retention period for automated snapshots in the source AWS Region. The default retention period for copied snapshots is seven days. That seven-day period applies only to automated snapshots. In both the source and destination AWS Regions, manual snapshots are deleted at the end of the snapshot retention period or when you manually delete them.
You can disable automatic snapshot copy for a cluster at any time. When you disable this feature, snapshots are no longer copied from the source AWS Region to the destination AWS Region. Any automated snapshots copied to the destination AWS Region are deleted as they reach the retention period limit, unless you create manual snapshot copies of them. These manual snapshots, and any manual snapshots that were copied from the destination AWS Region, are kept in the destination AWS Region until you manually delete them.
To change the destination AWS Region that you copy snapshots to, first disable the automatic copy feature. Then re-enable it, specifying the new destination AWS Region.
After a snapshot is copied to the destination AWS Region, it becomes active and available for restoration purposes.
To copy snapshots for AWS KMS–encrypted clusters to another AWS Region, create a grant for Amazon Redshift to use a KMS customer master key (CMK) in the destination AWS Region. Then choose that grant when you enable copying of snapshots in the source AWS Region. For more information about configuring snapshot copy grants, see Copying AWS KMS–encrypted snapshots to another AWS Region.
Restoring a cluster from a snapshot
A snapshot contains data from any databases that are running on your cluster. It also contains information about your cluster, including the number of nodes, node type, and master user name. If you restore your cluster from a snapshot, Amazon Redshift uses the cluster information to create a new cluster. Then it restores all the databases from the snapshot data.
For the new cluster created from the original snapshot, you can choose the configuration, such as node type and number of nodes. The cluster is restored in the same AWS Region and a random, system-chosen Availability Zone, unless you specify another Availability Zone in your request. When you restore a cluster from a snapshot, you can optionally choose a compatible maintenance track for the new cluster.
When you restore a snapshot to a cluster with a different configuration, the snapshot must have been taken on a cluster with cluster version 1.0.10013, or later.
When a restore is in progress, events are typically emitted in the following order:
RESTORE_STARTED – REDSHIFT-EVENT-2008 sent when the restore process begins.
RESTORE_SUCCEEDED – REDSHIFT-EVENT-3003 sent when the new cluster has been created.
The cluster is available for queries.
DATA_TRANSFER_COMPLETED – REDSHIFT-EVENT-3537 sent when data transfer complete.
RA3 clusters only emit RESTORE_STARTED and RESTORE_SUCCEEDED events. There is no explicit data transfer to be done after a RESTORE succeeds because RA3 node types store data in Amazon Redshift managed storage. With RA3 nodes, data is continuously transferred between RA3 nodes and Amazon Redshift managed storage as part of normal query processing. RA3 nodes cache hot data locally and keep less frequently queried blocks in Amazon Redshift managed storage automatically.
You can monitor the progress of a restore by either calling the DescribeClusters API operation, or viewing the cluster details in the AWS Management Console. For an in-progress restore, these display information such as the size of the snapshot data, the transfer rate, the elapsed time, and the estimated time remaining. For a description of these metrics, see RestoreStatus.
You can't use a snapshot to revert an active cluster to a previous state.
When you restore a snapshot into a new cluster, the default security group and parameter group are used unless you specify different values.
You might want to restore a snapshot to a cluster with a different configuration for these reasons:
When a cluster is made up of smaller node types and you want to consolidate it into a larger node type with fewer nodes.
When you have monitored your workload and determined the need to move to a node type with more CPU and storage.
When you want to measure performance of test workloads with different node types.
Restore has the following constraints:
The new node configuration must have enough storage for existing data. Even when you add nodes, your new configuration might not have enough storage because of the way that data is redistributed.
The restore operation checks if the snapshot was created on a cluster version that is compatible with the cluster version of the new cluster. If the new cluster has a version level that is too early, then the restore operation fails and reports more information in an error message.
The possible configurations (number of nodes and node type) you can restore to is determined by the number of nodes in the original cluster and the target node type of the new cluster. To determine the possible configurations available, you can use the Amazon Redshift console or the
describe-node-configuration-optionsAWS CLI command with
action-type restore-cluster. For more information about the restoring using the Amazon Redshift console, see Restoring a cluster from a snapshot.
The following steps take a cluster with many nodes and consolidate it into a bigger
node
type with a smaller number of nodes using the AWS CLI. For this example, we start
with a
source cluster of 24
ds2.xlarge nodes. In this case, suppose that we already
created a snapshot of this cluster and want to restore it into a bigger node type.
Run the following command to get the details of our 24-node
ds2.xlargecluster.
aws redshift describe-clusters --region eu-west-1 -—cluster-identifier mycluster-123456789012
Run the following command to get the details of the snapshot.
aws redshift describe-cluster-snapshots --region eu-west-1 -—snapshot-identifier mycluster-snapshot
Run the following command to describe the options available for this snapshot.
aws redshift describe-node-configuration-options --snapshot-identifier mycluster-snapshot --region eu-west-1 -—action-type restore-cluster
This command returns an option list with recommended node types, number of nodes, and disk utilization for each option. For this example, the preceding command lists the following possible node configurations. We choose to restore into a three-node
ds2.8xlargecluster.
{ "NodeConfigurationOptionList": [ { "EstimatedDiskUtilizationPercent": 65.26134808858235, "NodeType": "ds2.xlarge", "NumberOfNodes": 24 }, { "EstimatedDiskUtilizationPercent": 32.630674044291176, "NodeType": "ds2.xlarge", "NumberOfNodes": 48 }, { "EstimatedDiskUtilizationPercent": 65.26134808858235, "NodeType": "ds2.8xlarge", "NumberOfNodes": 3 }, { "EstimatedDiskUtilizationPercent": 48.94601106643677, "NodeType": "ds2.8xlarge", "NumberOfNodes": 4 }, { "EstimatedDiskUtilizationPercent": 39.156808853149414, "NodeType": "ds2.8xlarge", "NumberOfNodes": 5 }, { "EstimatedDiskUtilizationPercent": 32.630674044291176, "NodeType": "ds2.8xlarge", "NumberOfNodes": 6 } ] }
Run the following command to restore the snapshot into the cluster configuration that we chose. After this cluster is restored, we have the same content as the source cluster, but the data has been consolidated into three
ds2.8xlargenodes.
aws redshift restore-from-cluster-snapshot --region eu-west-1 --snapshot-identifier mycluster-snapshot -—cluster-identifier mycluster-123456789012-x --node-type ds2.8xlarge --number-of-nodes 3
Restoring a table from a snapshot the Amazon Redshift.
A new console is available for Amazon Redshift. Choose either the New console or the Original console instructions based on the console that you are using. The New console instructions are open by default.
To restore a table from a snapshot
Sign in to the AWS Management Console and open the Amazon Redshift console at
.
On the navigation menu, choose CLUSTERS, then choose the cluster that you want to use to restore a table.
For Actions, choose Restore table to display the Restore table page.
Enter the information about which snapshot, source table, and target table to use, and then choose Restore table.
To restore a table from a snapshot using the Amazon Redshift console
Sign in to the AWS Management Console and open the Amazon Redshift console at
.
Choose Clusters and choose a cluster.
Choose the Table restore tab.
Choose Restore table.
In the Table restore panel, select a date range that contains the cluster snapshot that you want to restore from. For example, you might select
Last 1 Weekfor cluster snapshots taken in the previous week.
Add the following information:.
Choose Copy restore request." ] } ] }
After access to a snapshot has been revoked from an AWS account, no users in that account can access the snapshot. This is so even if they have IAM policies that allow actions on the previously shared snapshot resource. | https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html | 2020-10-19T22:21:09 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.aws.amazon.com |
Save signed Spreadsheet with different output file type
Signature class supports saving of Spreadsheet signed documents with different output file types. Each document type has list of compatible saving type. These values are listed in enum SpreadsheetSaveFileFormat.
Here are the steps to save signed Spreadsheet document to different output type with GroupDocs.Signature:
Create new instance of Signature class and pass source document path or stream as a constructor parameter.
Instantiate required signature options.
Instantiate the SpreadsheetSaveOptions object according to your requirements and specify FileFormat as one of predefined values from SpreadsheetSaveFileFormat.
Call Sign method of Signature class instance and pass signatureoptions and SpreadsheetSaveOptions object to it.
Following example demonstrates how to save signed Spreadsheet document with different output type
using (Signature signature = new Signature("sample.xlsx")) { // }; SpreadsheetSaveOptions saveOptions = new SpreadsheetSaveOptions() { FileFormat = SpreadsheetSaveFileFormat.Pdf, OverwriteExistingFiles = true }; // sign document to file signature.Sign("SignedXlsx.pdf", signOptions, saveOptions); }. | https://docs.groupdocs.com/signature/net/save-signed-spreadsheet-with-different-output-file-type/ | 2020-10-19T21:29:46 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.groupdocs.com |
Knowi enables data plumbing and visualizations from MarkLogic to go from data to visual interactive insights quickly.
Overview
Connect, extract and transform data from your MarkLogic, using one of the following options:
- Through our UI to connect directly, if your MarkLogic servers are accessible from the cloud.
- Using our Cloud9Agent for datasources inside your network. See agent configuration for more details.
Query, Visualize and track all your key metrics instantly.
If you are not a Knowi user, check out our MarkLogic Instant Reporting page to get started.
Getting Started
The following GIF image shows how to connect to MarkLogic.
Login to Knowi and select the Settings icon from left-hand menu pane.
Click on MarkLogic. Either follow the prompts to set up connectivity to your own MarkLogic database, or, use the pre-configured settings into Cloud9 Chart's own demo MarkLogic database to see how it works.
When connecting from the UI directly to your MarkLogic database, please follow the connectivity instructions to allow Knowi to access your database.
Alternatively, if you connecting through an agent, check Internal Datasource to assign it to your agent. The agent (running inside your network) will synchronize it automatically.
For SSL enabled Marklogic instances, set the ssl=true into database properties.
Save the Connection. Click on the "Configure Queries" link on the success bar.
Queries & Reports
Set up Query to execute.
Report Name: Specify a name for the report.
XQuery: Enter XQuery. For details, see XQuery Docs and MarkLogic XQuery docs.
Query Generator: Where direct connectivity to MarkLogic is enabled, queries can be auto-generated using our Data Discovery & Query Generator feature.
Cloud9QL: Optional SQL-Like post processor for the data returned by XQuery. CloudQL enables aggregations, data workflows, predictions and range of other features. See Cloud9QL Docs for more details.. 'Back to Dashboards' to access dashboards. You can drag and drop the newly created report from the widget list into the dashboard.
Semantics SPARQL with XQuery
Semantic SPARQL can be executed using XQuery as following.
xquery version "1.0-ml"; import module namespace sem = "" at "/MarkLogic/semantics.xqy"; sem:sparql(' <SPARQL QUERY> ')
Example:
xquery version "1.0-ml"; import module namespace sem = "" at "/MarkLogic/semantics.xqy"; sem:sparql(' SELECT ?person WHERE { ?person <> "London" } ')
For more details on Semantics and SPARQL, see MarkLogic Semantics Documentation.
Cloud9Agent (StandAlone) Configuration
As an alternative to the UI based connectivity above, you can configure Cloud9Agent directly within your network (instead of the UI) to query MarkLogic. See Cloud9Agent to download and run your agent.
Highlights:
- Pull data using XQuery and optionally manipulate the results further with Cloud9QL.
- Execute queries on a schedule, or, one time.
The agent contains a datasource_example_markLogic.json and query_example_markLogicarkLogic", "host": "54.205.52.22", "port": "8010", "dbName": "claimsdemo", "userId": "user", "password": "pass", "datasource": "marklogic" } ]
Query Examples:
[ { "entityName": "Total Claims", "queryStr": "let $sorted-claims :=\n for $claim in collection(\"claimscsv\")/root\n where $claim/id > 10190 and $claim/id < 10590\n order by $claim/id\n return $claim\nfor $claim at $count in subsequence($sorted-claims, 1, 10)\nreturn $claim", "c9QLFilter": "SELECT service_month, NET_PAID_AMT, BILL_AMT, MBR_AGE", "queryType": "XQuery", "dsName": "demoMarkLogic", "overrideVals": { "replaceAll": true }, "frequencyType":"minute", "frequency":10 } ]
The first query is run every 10 minutes at the top of the hour and replaces all data for that dataset in Knowi. | https://docs.knowi.com/hc/en-us/articles/115006385608-MarkLogic | 2020-10-19T21:54:02 | CC-MAIN-2020-45 | 1603107866404.1 | [array(['https://knowi.com/images/docs/marklogic-connect.gif',
'MarkLogic Connect'], dtype=object) ] | docs.knowi.com |
SC Categories widget The SC Categories widget displays Service Catalog categories. You can use this base system widget as-is in your Service Portal or clone it to suit your own business needs. The system renders the categories available in this widget from the Categories table in Service Catalog [sc_category]. Figure 1. SC Categories widget If you associate your portal with multiple catalogs, then the SC Categories widget also includes a menu to select which catalog to browse. For more information on associating your portal with catalogs, see Configure a catalog in Service Portal. Instance options Table 1. Categories widget instance options fields Field Description Data Page Defines what page opens when a user clicks a category. By default, this option redirects to the page for the selected category. Number of categories to load Specifies the number of categories displayed in the Categories pane. By default, ten categories are displayed. If there are additional categories, the Show All option is available. Presentation Bootstrap color Color scheme for the widget. The default colors are defined by the portal theme, but if you want the instance to have a specific color, select the option from the list. Category Layout Select a flat or nested layout. A flat layout shows all of the available categories. A nested layout shows only the parent categories. Use a nested layout if you have a large number of categories to prevent an unnecessarily long list. Click that appears next to a category with nested topics to expand the sub-categories.The widget only supports three levels of nesting. After level four, categories appear in the flat view. Behavior. Note: If a category contains more than 200 items, update the following instance options on this widget for better performance: Category Layout: Flat Omit badges: True Check canView per item: False Related conceptsCatalog Content widgetCatalog Homepage Search widgetRecent & Popular Items widgetRequest Fields widgetRequested Items widgetRequests and Approvals widgetSC Catalog Item widgetSC Category Page widgetSC Order Guide widgetSC Popular Items widgetSC Save Bundles widgetSC Saved Carts widgetSC Scroll to top widgetSC Shopping Cart widgetSP Variable Editor widgetSC Wish List Cart widgetRelated topicsAccess Service Catalog categories in Service Portal | https://docs.servicenow.com/bundle/madrid-servicenow-platform/page/build/service-portal/concept/sc-categories-widget.html | 2020-10-19T21:06:27 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.servicenow.com |
Installation of the Gen3 SDK¶
This demonstrates how easy it is to get started with the Gen3 API.
$ pip install gen3¶
To install the Gen3 SDK, simply run this command:
$ pip install gen3
If you don’t have pip installed, the Python install guide has instructions on getting started with pip. | https://gen3sdk-python.readthedocs.io/en/latest/install.html | 2020-10-19T22:00:09 | CC-MAIN-2020-45 | 1603107866404.1 | [] | gen3sdk-python.readthedocs.io |
Creating an IP Resource
- Create a virtual IP resource. The IP resource address must be outside the CIDR block managed by the VPC.
Creating an EC2 Resource
- Create EC2 resources. For the IP resource requested when creating resources, specify the resource created in “Create IP Resource” above. Specify the Route Table (Backend Cluster) as the EC2 resource type required when creating resources.
Creating Resources for Protected Services
- Create resources for the services you want to protect. If an IP resource is required for resource creation, specify the resource created in “IP Resource Creation” above. Configure resource dependencies so that the resources of the protected service are the parent resources and the EC2 resources are the child resources.
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.us.sios.com/spslinux/9.5.0/en/topic/creating-direct-connect-resources | 2020-10-19T21:46:20 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.us.sios.com |
Masscan
- 1 Minute To Read
-
- DarkLight
Masscan is a free internet port scanner utility.
The Mascan adapter is able to import Masscan JSON files with information about devices.
The adapter parameters are as same as the CSV adapter parameters, except for the File contains users information and the File contains installed software information parameters. These fields are not part of the Masscan adapter configuration, as the adapter provides devices data only, without any information on the installed software.
The functionality of this adapter is as same as the CSV adapter.
Was This Article Helpful? | https://docs.axonius.com/docs/masscan | 2020-10-19T21:53:19 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.axonius.com |
produces conversion statistics, which you can view in the Frosmo Control Panel.
For more information about conversions, conversion attribution, and conversion definitions, see the conversions user guide.
Transaction tracking is a subset of conversion tracking. If you want to track conversions that involve the purchase of one or more products, use transaction tracking. For more information, see Tracking transactions.
Tracking conversions with the data layer
Tracking conversions with the data layer means triggering a conversion event whenever a visitor successfully completes an action that qualifies as a conversion. The data you pass in the conversion event defines the conversion.
Figure: Tracking conversions by triggering a conversion event from shared code (click to enlarge)
You can trigger conversion events from modifications, from shared code, or directly from your page code. If you use a modification, you trigger the events either from custom content or, if you're using a template, from the template content.
To use the data layer on a site, the data layer module must be enabled for the site.
Triggering a conversion event
To trigger a conversion event, call the
dataLayer.push() function with a conversion object containing the conversion data:
dataLayer.push({ conversionId: 'string', conversionType: 'string', conversionValue: 0, frosmoConversionName: 'string' });
Conversion object
The conversion object contains the data of a conversion event.
Table: Conversion object properties
Conversion examples
dataLayer.push({ conversionId: 'ebook-download-personalization', conversionType: 'Ebook download', conversionValue: 0, frosmoConversionName: 'Personalization ebook download' });
// Set the product data variables... dataLayer.push({ conversionId: productId, conversionType: productType, conversionValue: 0, frosmoConversionName: productName });:
If you want more details on a data layer call, select the Network tab in the developer tools, and check the
setProductDataand
buyProductrequests to the Optimizer API. If the status is
200, the request completed successfully.
Triggering a conversion does not trigger a product view nor a transaction event. Frosmo Core merely uses the
setProductDataand
buyProductevents to pass the conversion data to the Frosmo back end.
To show only Optimizer API requests, filter the requests list by "optimizer". | https://docs.frosmo.com/pages/viewpage.action?pageId=42799360 | 2020-10-19T21:16:01 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.frosmo.com |
Basic Usage
This page is designed to demonstrate some basic principles about the xPDO/MODX cache manager and how it can be used to help you write more effective Snippets.
Create Our Snippets
Snippet One: Write to Cache
Here's our first Snippet, named cacheWrite:
$cacheManager = $modx->getCacheManager(); $x = date('H:i:s'); $cacheManager->set('my_cache_key',$x); return $x;
Remember that we need to use the $x variable as an intermediary because the cacheManager relies on variable references. You can't simply pass it a static value.
This snippet simple stores the current timestamp to a cache key named "my_cache_key". Put this Snippet on a page in your site (CACHED), e.g. on "Page One":
[[writeCache]]
Snippet Two: Read from Cache
Next, we will create simple snippet that will read values from the cache, named readCache:
$cacheManager = $modx->getCacheManager(); return $cacheManager->get('my_cache_key');
And put this Snippet onto a different page on your site (UNCACHED), e.g. on "Page Two":
[[!readCache]]
Observing our Snippets
1. First, navigate to "Page One" (or just preview that page in your site). You should see a simple timestamp, e.g. '11:44:55'. 2. Next, navigate to "Page Two" on your site in a separate browser tab. You should see the same timestamp, e.g. '11:44:55'. Even if you wait 5 minutes, the timestamp should not change.
- Clear the Site cache (Site /-> Clear Cache), then repeat this process. What do you see?
You should notice that the timestamp gets set only when you first visit Page One.
Try this:
1. Clear your Site Cache 2. Visit Page Two. What happens?
You should notice that Page Two and the
readCache Snippet returns nothing when the cache is empty and the
writeCache snippet hasn't written anything to the cache.
Next, try this:
1. Edit "Page One" so that it calls
writeCache uncached:
[[!writeCache]]
2. Visit "Page One" in a browser. Notice the timestamp. 3. Refresh "Page One". Notice that the timestamp updates. 4. Visit "Page Two" in a browser. What timestamp does it show?
You'll see that "Page Two" in this scenario will always show the timestamp from the last time "Page One" was accessed.
Summary
This page demonstrated some basic principles of the cache manager, but even with these basic functions, you can do a lot more with your Snippets. You can write custom data to the MODX cache and have that data get cleared out when the Site's cache is cleared. This is useful when you need some extra control over your Snippet output and you don't want to go through the hassle of creating your own caching partition. Cached data in these examples has a lifetime that is the same as the other cached data for resources: it gets updated when you manually update it in your snippets, or when you clear your site's cache using the "Site -> Clear Cache" menu. | https://docs.modx.org/current/en/extending-modx/caching/example | 2020-10-19T21:29:19 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.modx.org |
Assign a knowledge base manager You can assign users as managers of a knowledge base. Before you beginRole required: knowledge_admin, or adminAdd a knowledge article to featured contentCreate a user criteria record in Knowledge ManagementSelect user criteria for a knowledge baseEnable user criteria system property to override role read accessSelect user criteria for an articleDefine a knowledge article category | https://docs.servicenow.com/bundle/madrid-servicenow-platform/page/product/knowledge-management/task/t_AssignAKnowledgeBaseManager.html | 2020-10-19T21:37:21 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.servicenow.com |
Moderation¶
Like many Discord bots, ZBot allows you to moderate your server in different ways. You will find the classic commands to delete messages, mute, kick out or ban a member, as well as being able to slow down or freeze a chat completely.
Among the features in preparation you will find the members’ records as well as the possibility of sending warnings, or a section on automatic moderation.
Note
Like most of the features of this bot, the Moderation section is constantly being developed. Feel free to help us by offering suggestions, voting for the best ideas or reporting bugs at our Discord server!
Warning
Most of these commands are reserved for certain roles only. To allow roles to use a command, see the config command
Warn¶
Syntax:
warn <user> <message>:
This command allows you to warn a member, without really sanctioning him. This member will receive this warning by personal message (if they have not disabled them), and the warning will be stored in his logs.
Mute/Unmute¶
Syntax:
mute <user> [duration] [reason]
This command mutes a member, preventing them from typing.
The principle is to assign the muted role to the member, in order to distinguish him from the others. Simply configure the permissions to have the “send messages” option disabled in your channels. And if configuring the role is too much work for you, you can ask the bot to try to setup it automatically with the
mute-config command (see below).
The duration of the tempmute is quite flexible: use
XXd for days,
XXh for hours and
XXm for minutes (replacing XX by the corresponding number, of course!)
Warning
The muted role must be placed below the bot role, and the bot must have “Manage roles” (to give the role) permission.
Syntax:
unmute <user>
This command unmutes a member, when they already have the muted role. Not necessary when you had specified a duration during the mute, unless you want to stop it prematurely.
Syntax:
mute-config
With this command, Zbot will try to configure automatically the muted role (and create it if needed) with the correct permissions, both in your server and in your channels/categories. Basically, in Discord, the rule is “if a member has any role allowing them to do X, then they will be able to do X, no matter what other roles they have”. So Zbot will at first make the muted role disallowing members to send messages in the channels (with the red cross permission), then check every other roles and make sure they don’t allow muted members to send messages (so any green check will become a gray tick in the channels permissions).
Slowmode¶
Syntax:
slowmode <seconds> or
slowmode off
Slowmode keeps your text channel quiet when excited people have decided to talk a little too fast. More precisely, it prevents members from posting messages too often. The frequency between two consecutive messages from the same member is indicated in the command.
Note
The system uses a brand new feature released on September 8th in Discord beta. It therefore is a completely new as in very few bots have it) feature and can be highly integrated into your applications. It is even better than just deleting messages.
Clear¶
Syntax:
clear <number> [parameters]
This command allows you to efficiently delete messages, with a list of possible parameters for more accuracy. You can thus specify a list of members to check by mentioning them, +i to delete all messages containing files/images, +l for those containing links or Discord invitations, +p for pinned messages. By default, the bot will not delete pinned messages.
Be careful, all specified settings must be validated for the message to be deleted. For example, if you enter
clear 10 @Z_runner#7515 +i, the bot will check in the last ten messages if the message comes from Z_runner#7515 AND if the message contains an image.
If you enter
clear 25 -p +l, the bot will clear the last 25 messages if they contains a link AND if they’re not pinned, no matter the author.
If you enter
clear 13 -p -i @Z_runner#7515, the bot will clear the last 13 messages if they are not pinned AND if they does not contain any file/image AND if the author is Z_runner#7515.
If you enter
clear 1000 @Z_runner#7515 @ZBot beta#4940, the bot will delete all messages contained in the last 1000 messages of the channel AND written by Z_runner#7515 OR ZBot beta#4940
Warning
The permissions “Manage messages” and “Read messages history” are required.
Syntax:
destop <message>
If you don’t know how many messages you want to delete, but instead want to delete all of them until a certain message, you can use this command. The “message” argument can be either a message ID (from the same channel) or a message url (from any channel of your server). Permissions needed for users and bot are the same as the clear command.
Kick¶
Syntax:
kick <user> [reason]
The kick allows you to eject a member from your server. This member will receive a personal message from the bot to alert him of his expulsion, with the reason for the kick if it’s specified. It is not possible to cancel a kick. The only way to get a member back is to send him an invitation (see the invite command) via another server.
Warning
For the command to succeed, the bot must have “Kick members” permissions and be placed higher than the highest role of that member.
Softban¶
Syntax:
softban <user> [reason]
This command allows you to expel a member from your server, such as kick. But in addition, it will delete all messages posted by this member during the last 7 days. This is what explains its name: the bot bans a member by asking Discord to delete the messages (which is not possible with a kick), then unban immediately the member.
Warning
For this command, the bot needs “Ban members” permission, and you need to have a role to use the “kick” command
Ban/Unban¶
Syntax:
ban <user> [duration] [days_to_delete] [reason]
The ban allows you to instantly ban a member from your server. This means that the member will be ejected, and will not be able to return before being unbanned by a moderator. The ‘days_to_delete’ option represents the number of days worth of messages to delete from the user in the guild, bewteen 0 and 7 (0 by default)
The duration of the tempban is the same as for the tempmute: use
XXd for days,
XXh for hours and
XXm for minutes (replacing XX by the corresponding number, of course!)
To cancel this action, use the Discord interface or the unban command. The member will nevertheless have to decide for himself if he wishes to return to your server.
Syntax:
unban <user> [reason]
This command allows you to revoke a ban, whether it was made via this bot or not. Just fill in the exact name or the identifier of the member you wish to be unbanned so that the bot can find the member you choose in the list of banned members for the member in question.
The persons authorized to use this command are the same as for the ban command(see the
config command).
Warning
For both commands to succeed, the bot must have “Ban members” permissions (as well as be placed higher than the highest role of the member to ban).
Banlist¶
Syntax:
banlist
If you ban so many people that you don’t remember the exact list, and you have the laziness to look in your server options, this command will be happy to refresh your memory without too much effort.
The ‘reasons’ argument allows you to display or not the reasons for the bans.
Note
Note that this command will be deleted after 15 minutes, because privacy is private, and because we like privacy, it is only available for your server administrators. Ah, and Discord also likes privacy, so the bot can’t read this list if he doesn’t have permission to “ban people”.
Handling cases¶
View list¶
Syntax:
cases list <user>
If you want to know the list of cases/logs that a member has in this server, you can use this command. Note that to select a member, you must either notify him/her, retrieve his/her ID or write his/her full name.
The persons authorized to use this command are the same as for the warn command.
Warning
The list of cases is returned in an embed, which means that the bot must have “Embed Links” permission.
Syntax:
cases search <case ID>
This command allows you to search for a case from its identifier. The identifiers are unique for the whole bot, so you can’t see them all. However, the ZBot support team has access to all the cases (without being able to modify them)
Warning
The case is returned in an embed, which means that the bot must have “Embed Links” permission to send it correctly.
Edit Reason¶
Syntax:
cases reason <case ID> <new reason>
If you want to edit the reason for a case after creating it, you will need to use this command. Simply retrieve the case ID and enter the new reason. There is no way to go back, so be sure to make no mistake!
The persons authorized to use this command are the same as for the warn command.
Remove case¶
Syntax:
cases (remove|clear|delete) <case ID>
This is the only way to delete a case from the logs for a user. Just to make sure you don’t forget the command name, there are three aliases for the same command.
The locker will be deleted forever, and forever can be very, very long. So be sure you’re not mistaken, there’s no backup!
The persons authorized to use this command are the same as for the warn command.
Anti-raid¶
Not a command, but a server option.
This option allows you to moderate the entry of your server, with several levels of security. Here is the list of levels:
0 (None): no filter
1 (Smooth): kick members with invitations in their nickname
2 (Careful): kick accounts created less than 5min before
3 (High): ban members with invitations in their nickname, and kick accounts created less than 30min before
4 ((╯°□°)╯︵ ┻━┻): ban members created less than 30min before, and kick those created less than 2h before
Note
Note that the levels are cumulative: level 3 will also have the specificities of levels 1 and 2
Warning
The bot must have access to “Kick members” and “Ban members” permissions
Anti-bot verification¶
How does it work?
The verification system works with a simple command and a role, and filters most of the selfbots that attack your servers.
Zbot uses a list of random questions he asks the user to test it, and if the answer is correct, the user is removed from the defined role (if he has it). The command to type to “verify” is
verify, and to define which role to remove, it is the configuration option verification_role, configurable using the command
config change verification_role <role>.
It is recommended to give this role to all new members via the welcome_roles option, then block access to the server for this role, in order to force the new members to check themselves.
List of commands:
verify: ask a question to check the member
config change verification_role <role>> configures the role to be removed from the verified members
Warning
For this system, the bot must have “Manage Roles” permission. The roles to be removed must also be lower than the role of Zbot in your server hierarchy (Server Settings > Roles tab).
Miscellaneaous¶
Emoji Manager¶
With this command, you can become the undisputed master of the Emojis and handle them all as you please. You can even do something that no one has ever done before, a beta exclusivity straight out of the Discord labs: restrict the use of certain emojis to certain roles! YES! It’s possible! Come on, let’s not waste any time, here’s the list of commands currently available :
emoji rename <emoji> <new name>: renames your emoji, without going through the Discord interface. No more complicated thing.
emoji restrict <emoji> <roles>: restrict the use of an emoji to certain roles. Members who do not have this role will simply not see the emoji in the list. Note that there is no need to mention, just put the identifier or the name.
emoji clear <message ID>: instantly removes reactions from a message. This message must be indicated via its identifier, and belong to the same chat as the one where the command is used. The bot must have “Manage Messages” and “Read Message History” permissions.
emoji list: lists all the server’s emojis, in an embed, and indicates if some of them are restricted to certain roles. The bot must have “Embed Links” permission.
Warning
The bot needs the Manage Emojis permission to edit these pretty little pictures. And you, you need Administrator permission to use these commands.
Role Manager¶
Nice command that allows you to do different things with the server roles (other subcommands will be created later). The permissions required to execute them depend on the subcommands, ranging from anyone to the administrator. If you have any ideas or other suggestions, feel free to contact us via our Discord server, or in PM at the bot!
role color <role> <colour>(alias role colour): Changes the color of the given role. The color must be in hexadecimal form, although some common names are accepted (red, blue, gold…). To remove the color, use the name default. Please check notes 1. and 2.
role give <role> <user(s) | role(s)>: Give a role to a list of people. You can target as many users or roles as you want, so for example to target your friends Joe and Jack, plus the Admin role, use
role give superRole Joe Jack Admin. Please check note 2.
role remove <role> <user(s) | role(s)>: Same as above, but instead of giving them, it takes them away. Please check note 2.
role list <role>: List every members who are in a specific role, if this number is under 200. The bot must have “Embed Links” permission to display the result. Please check note 2.
role server-list: Liste every role of your server, with the members count. The bot must have “Embed Links” permission to display the result. Please check note 2.
Warning
The bot need the “Manage roles” permission, also his highest role need to be higher than the role he’s trying to edit.
You need to have the “Manage roles” permission (or be an administrator) to use this command. Else, Zbot won’t react.
Unhoist members¶
People like to put strange characters in their nicknames to appear at the top of the membership list. With this command you will be able to put an end to this habit. Simply type the command without argument to remove all non-alphabetic characters (a-z A-Z 0-9) at the beginning of the nickname, and you can give your own characters via an argument. Easy, isn’t it?
Syntax:
unhoist [characters]
Warning
It is necessary that the bot has “Manage nicknames” permission, and that its role is above the roles of the members to be renamed. | https://zbot.readthedocs.io/en/latest/moderator.html | 2020-10-19T21:39:15 | CC-MAIN-2020-45 | 1603107866404.1 | [] | zbot.readthedocs.io |
]
Creates a project.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
create-project --name <value> [--default-job-timeout-minutes <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--name (string)
The project's name.
--default-job-timeout-minutes (integer)
Sets the execution timeout value (in minutes) for a project. All test runs in this project use the specified execution timeout value unless overridden when scheduling a project
The following command creates a new project named my-project:
aws devicefarm create-project --name my-project
Output:
{ "project": { "name": "myproject", "arn": "arn:aws:devicefarm:us-west-2:123456789012:project:070fc3ca-7ec1-4741-9c1f-d3e044efc506", "created": 1503612890.057 } }
project -> (structure)
The newly created project.. | https://docs.aws.amazon.com/cli/latest/reference/devicefarm/create-project.html | 2020-10-19T21:48:09 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.aws.amazon.com |
People in your organization can sign in to a Surface Hub without a password using the Microsoft Authenticator app, available on Android and iOS.
Organization prerequisites
To let people in your organization sign in to Surface Hub with their phones and other devices instead of a password, you’ll need to make sure that your organization meets these prerequisites:
Your organization must be a hybrid or cloud-only organization, backed by Azure Active Directory (Azure AD). For more information, see What is Azure Active Directory?
Make sure you have at minimum an Office 365 E3 subscription.
Configure Multi-Factor Authentication. Make sure Notification through mobile app is selected.
Enable content hosting on Azure AD services such as Office, SharePoint, etc.
Surface Hub must be running Windows 10, version 1703 or later.
Surface Hub is set up with either a local or domain-joined account.
Currently, you cannot use Microsoft Authenticator to sign in to Surface Hubs that are joined to Azure AD.
Individual prerequisites
An Android phone running 6.0 or later, or an iPhone or iPad running iOS9 or later
The most recent version of the Microsoft Authenticator app from the appropriate app store
Note
On iOS, the app version must be 5.4.0 or higher.
The Microsoft Authenticator app on phones running a Windows operating system can't be used to sign in to Surface Hub.
Passcode or screen lock on your device is enabled
A standard SMTP email address (example: [email protected]). Non-standard or vanity SMTP email addresses (example: [email protected]) currently don’t work.
How to set up the Microsoft Authenticator app
Note
If Company Portal is installed on your Android device, uninstall it before you set up Microsoft Authenticator. After you set up the app, you can reinstall Company Portal.
If you have already set up Microsoft Authenticator on your phone and registered your device, go to the sign-in instructions.
- Add your work or school account to Microsoft Authenticator for Multi-Factor Authentication. You will need a QR code provided by your IT department. For help, see Get started with the Microsoft Authenticator app.
- Go to Settings and register your device.
- Return to the accounts page and choose Enable phone sign-in from the account dropdown menu.
How to sign in to Surface Hub during a meeting
After you’ve set up a meeting, go to the Surface Hub and select Sign in to see your meetings and files.
Note
If you’re not sure how to schedule a meeting on a Surface Hub, see Schedule a meeting on Surface Hub.
You’ll see a list of the people invited to the meeting. Select yourself (or the person who wants to sign in – make sure this person has gone through the steps to set up their device before your meeting), and then select Continue.
You'll see a code on the Surface Hub.
To approve the sign-in, open the Authenticator app, enter the four-digit code that’s displayed on the Surface Hub, and select Approve. You will then be asked to enter the PIN or use your fingerprint to complete the sign in.
You can now access all files through the OneDrive app. | https://docs.microsoft.com/en-us/surface-hub/surface-hub-authenticator-app | 2020-10-19T23:34:13 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.microsoft.com |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.