content: string (0–557k) · url: string (16–1.78k) · timestamp: timestamp[ms] · dump: string (9–15) · segment: string (13–17) · image_urls: string (2–55.5k) · netloc: string (7–77)
CephFS best practices¶ This guide provides recommendations for best results when deploying CephFS. For the actual configuration guide for CephFS, please see the instructions at Ceph Filesystem. Which Ceph version?¶ Use at least the Jewel (v10.2.0) release of Ceph. This is the first release to include stable CephFS code and fsck/repair tools. Make sure you are using the latest point release to get bug fixes. Note that Ceph releases do not include a kernel; the kernel is versioned and released separately. See below for guidance on choosing an appropriate kernel version if you are using the kernel client for CephFS. Most stable configuration¶ Some features in CephFS are still experimental. See Experimental Features for guidance on these. For the best chance of a happy, healthy filesystem, use a single active MDS and do not use snapshots. Both of these are the default. Note that creating multiple MDS daemons is fine, as these will simply be used as standbys. However, for best stability you should avoid adjusting max_mds upwards, as this would cause multiple MDS daemons to be active at once. Which client?¶ The FUSE client is the most accessible and the easiest to upgrade to the version of Ceph used by the storage cluster, while the kernel client will often give better performance. The clients do not always provide equivalent functionality; for example, the FUSE client supports client-enforced quotas while the kernel client does not. When encountering bugs or performance issues, it is often instructive to try using the other client, in order to find out whether the bug was client-specific or not (and then to let the developers know). Which kernel version?¶ Because the kernel client is distributed as part of the Linux kernel (not as part of packaged Ceph releases), you will need to consider which kernel version to use on your client nodes. Older kernels are known to include buggy Ceph clients, and may not support features that more recent Ceph clusters support. Remember that the “latest” kernel in a stable Linux distribution is likely to be years behind the latest upstream Linux kernel where Ceph development takes place (including bug fixes). As a rough guide, as of Ceph 10.x (Jewel), you should be using at least a 4.x kernel. If you absolutely have to use an older kernel, you should use the FUSE client instead of the kernel client. This advice does not apply if you are using a Linux distribution that includes CephFS support, as in this case the distributor will be responsible for backporting fixes to their stable kernel: check with your vendor. Reporting issues¶ If you have identified a specific issue, please report it with as much information as possible. Especially important information: - Ceph versions installed on client and server - Whether you are using the kernel or FUSE client - If you are using the kernel client, what kernel version? - How many clients are in play, doing what kind of workload? - If a system is ‘stuck’, is that affecting all clients or just one? - Any ceph health messages - Any backtraces in the ceph logs from crashes If you are satisfied that you have found a bug, please file it on the tracker. For more general queries please write to the ceph-users mailing list.
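To make the kernel-version guidance above concrete, here is a small, hedged Python sketch (not part of the Ceph documentation) that checks whether the local kernel meets the ≥4.x recommendation for Jewel-era kernel clients; the threshold constant comes from the text above and is an assumption you may want to adjust for newer releases.

import platform

MIN_MAJOR = 4  # "at least a 4.x kernel" per the guidance above (assumption: adjust for newer Ceph releases)

def kernel_ok(release: str = platform.release()) -> bool:
    # platform.release() returns e.g. "4.15.0-45-generic"; take the leading major version number
    major = int(release.split(".")[0])
    return major >= MIN_MAJOR

if __name__ == "__main__":
    if kernel_ok():
        print("Kernel looks recent enough for the CephFS kernel client.")
    else:
        print("Old kernel detected; consider the FUSE client (ceph-fuse) instead.")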
http://docs.ceph.com/docs/master/cephfs/best-practices/
2018-02-18T05:00:33
CC-MAIN-2018-09
1518891811655.65
[]
docs.ceph.com
Time Support in GeoServer WMS¶ GeoServer supports a TIME attribute in GetMap requests for layers that are properly configured with a time dimension. This is used to specify a temporal subset for rendering. For example, you might have a single dataset with weather observations collected over time and choose to plot a single day’s worth of observations. The attribute to be used in TIME requests can be set up through the GeoServer web interface by navigating to . Note Read more about how to use the web interface to configure an attribute for TIME requests. Specifying a time¶ The format used for specifying a time in the WMS TIME parameter is based on ISO-8601. Times may be specified up to a precision of 1 millisecond; GeoServer does not represent time queries with more precision than this. The parameter is: TIME=<timestring> Times follow the general format: yyyy-MM-ddThh:mm:ss.SSSZ where: yyyy: 4-digit year MM: 2-digit month dd: 2-digit day hh: 2-digit hour mm: 2-digit minute ss: 2-digit second SSS: 3-digit millisecond The day and intraday values are separated with a capital T, and the entire thing is suffixed with a Z, indicating UTC for the time zone. (The WMS specification does not provide for other time zones.) GeoServer will apply the TIME value to all temporally enabled layers in the LAYERS parameter of the GetMap request. Layers without a temporal component will be served normally, allowing clients to include reference information like political boundaries along with temporal data. Specifying an absolute interval¶ A client may request information over a continuous interval instead of a single instant by specifying a start and end time, separated by a / character. In this scenario the start and end are inclusive; that is, samples from exactly the endpoints of the specified range will be included in the rendered tile. Specifying a relative interval¶ A client may request information over a relative time interval instead of a set time range by specifying a start or end time with an associated duration, separated by a / character. One end of the interval must be a time value, but the other may be a duration value as defined by the ISO 8601 standard. The special keyword PRESENT may be used to specify a time relative to the present server time. Note The final example could be paired with the KML service to provide a Google Earth network link which is always updated with the last 36 hours of data. Reduced accuracy times¶ The WMS specification also allows time specifications to be truncated by omitting some of the time string. In this case, GeoServer treats the time as a range whose length is equal to the most precise unit specified in the time string. For example, if the time specification omits all fields except year, it identifies a range one year long starting at the beginning of that year. Note GeoServer implements this by adding the appropriate unit, then subtracting 1 millisecond. This avoids surprising results when using an interval that aligns with the actual sampling frequency of the data - for example, if yearly data is natively stored with dates like 2001-01-01T00:00:00.0Z, 2002-01-01T00:00:00Z, etc. then a request for 2001 would include the samples for both 2001 and 2002, which wouldn’t be desired. Reduced accuracy times with ranges¶ Reduced accuracy times are also allowed when specifying ranges. 
In this case, GeoServer effectively expands the start and end times as described above, and then includes any samples from after the beginning of the start interval and before the end of the end interval. Note Again, the ranges are inclusive. Note In the last example, note that the result may not be intuitive, as it includes all times from 6PM to 6:59PM. Specifying a list of times¶ GeoServer can also accept a list of discrete time values. This is useful for some applications such as animations, where one time is equal to one frame. The elements of a list are separated by commas. Note GeoServer currently does not support lists of ranges, so all list queries effectively have a resolution of 1 millisecond. If you use reduced accuracy notation when specifying a range, each range will be automatically converted to the instant at the beginning of the range. If the list is evenly spaced (for example, daily or hourly samples) then the list may be specified as a range, using a start time, end time, and period separated by slashes. Specifying a periodicity¶ The periodicity is also specified in ISO-8601 format: a capital P followed by one or more interval lengths, each consisting of a number and a letter identifying a time unit: The Year/Month/Day group of values must be separated from the Hours/Minutes/Seconds group by a T character. The T itself may be omitted if hours, minutes, and seconds are all omitted. Additionally, fields which contain a 0 may be omitted entirely. Fractional values are permitted, but only for the most specific value that is included. Note The period must divide evenly into the interval defined by the start/end times. So if the start/end times denote 12 hours, a period of 1 hour would be allowed, but a period of 5 hours would not. For example, the multiple representations listed below are all equivalent. One hour: P0Y0M0DT1H0M0S PT1H0M0S PT1H 90 minutes: P0Y0M0DT1H30M0S PT1H30M PT90M 18 months: P1Y6M0DT0H0M0S P1Y6M0D P0Y18M0DT0H0M0S P18M Note P1.25Y3M would not be acceptable, because fractional values are only permitted in the most specific value given, which in this case would be months.
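As a worked illustration of the TIME formats described above, the short Python sketch below assembles GetMap query strings for a single instant, an absolute interval, a relative interval ending at PRESENT, and a periodic start/end/period list; the service URL, layer name, and bounding box are placeholders, not values taken from this page.

from urllib.parse import urlencode

WMS = "http://example.com/geoserver/wms"      # placeholder endpoint
LAYER = "workspace:temperature"               # placeholder temporally enabled layer

base = {
    "service": "WMS", "version": "1.1.1", "request": "GetMap",
    "layers": LAYER, "styles": "", "srs": "EPSG:4326",
    "bbox": "-180,-90,180,90", "width": "512", "height": "256", "format": "image/png",
}

time_values = [
    "2001-06-01T12:00:00.000Z",                           # single instant
    "2001-06-01T00:00:00.000Z/2001-06-30T23:59:59.999Z",  # absolute interval (inclusive)
    "PT36H/PRESENT",                                      # relative interval: the last 36 hours
    "2001-06-01/2001-06-10/P1D",                          # start/end/period list, daily samples
]

for t in time_values:
    print(WMS + "?" + urlencode({**base, "time": t}))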
http://docs.geoserver.org/stable/en/user/services/wms/time.html
2018-02-18T05:06:32
CC-MAIN-2018-09
1518891811655.65
[]
docs.geoserver.org
MaxDiff Mixture Models From Displayr (Redirected from Max-Diff Mixture Models) The main mixture models used to analyze the MaxDiff experiments are: - Latent class logit models, which assume that the population contains a number of segments (e.g., a segment wanting low priced phones with few features and another segment willing to pay a premium for more features) and identifies the segments automatically. - Random parameters logit models, which assume that the distribution of the parameters in the population is described by a multivariate normal distribution. This model is sometimes referred to in market research as Hierarchical Bayes, although this is a misnomer. See Tricked Random Parameters Logit Model for an example. - C-Factor models, which can be either latent class or random parameters logit models, but additionally allow for heterogeneity in scale factors.
https://docs.displayr.com/wiki/Max-Diff_Mixture_Models
2018-02-18T04:45:02
CC-MAIN-2018-09
1518891811655.65
[]
docs.displayr.com
Overview of the Greenplum-Spark Connector Pivotal Greenplum Database is a massively parallel processing database server specially designed to manage large scale analytic data warehouses and business intelligence workloads. Apache Spark is a fast, general-purpose computing system for distributed, large-scale data processing. The Pivotal Greenplum-Spark Connector provides high speed, parallel data transfer between Greenplum Database and Apache Spark clusters to support: - Interactive data analysis - In-memory analytics processing - Batch ETL - Continuous ETL pipeline (streaming) Architecture A Spark application consists of a driver program and executor processes running on worker nodes in your Spark cluster. Figure: Greenplum-Spark Connector Architecture
https://greenplum-spark.docs.pivotal.io/110/overview.html
2018-02-18T05:09:16
CC-MAIN-2018-09
1518891811655.65
[array(['graphics/gscarch.png', None], dtype=object)]
greenplum-spark.docs.pivotal.io
osdmaptool – ceph osd cluster map manipulation tool¶ Description¶ osdmaptool is a utility that lets you create, view, and manipulate OSD cluster maps from the Ceph distributed storage system. Notably, it lets you extract the embedded CRUSH map or import a new CRUSH map. Options¶ --print¶ will simply make the tool print a plaintext dump of the map, after any modifications are made. --createsimple numosd [--pgbits bitsperosd]¶ will create a relatively generic OSD map with the numosd devices. If --pgbits is specified, the initial placement group counts will be set with bitsperosd bits per OSD. That is, the pg_num map attribute will be set to numosd shifted by bitsperosd. Example¶ To create a simple map with 16 devices: osdmaptool --createsimple 16 osdmap --clobber To view the result: osdmaptool --print osdmap To view the mappings of placement groups for pool 0: osdmaptool --test-map-pgs-dump rbd --pool 0 pool 0 pg_num 8 0.0 [0,2,1] 0 0.1 [2,0,1] 2 0.2 [0,1,2] 0 0.3 [2,0,1] 2 0.4 [0,2,1] 0 0.5 [0,2,1] 0 0.6 [0,1,2] 0 0.7 [1,0,2] 1 #osd count first primary c wt wt osd.0 8 5 5 1 1 osd.1 8 1 1 1 1 osd.2 8 2 2 1 1 in 3 avg 8 stddev 0 (0x) (expected 2.3094 0.288675x)) min osd.0 8 max osd.0 8 size 0 0 size 1 0 size 2 0 size 3 8 - In this output, - pool 0 has 8 placement groups. And two tables follow: - A table for placement groups. Each row presents a placement group. With columns of: - placement group id, - acting set, and - primary OSD. - A table for all OSDs. Each row presents an OSD. With columns of: - count of placement groups being mapped to this OSD, - count of placement groups where this OSD is the first one in their acting sets, - count of placement groups where this OSD is the primary of them, - the CRUSH weight of this OSD, and - the weight of this OSD. - Looking at the number of placement groups held by the 3 OSDs, we have - average, stddev, stddev/average, expected stddev, expected stddev / average - min and max - The number of placement groups mapping to n OSDs. In this case, all 8 placement groups are mapping to 3 different OSDs. In a less-balanced cluster, we could have the following output for the statistics of placement group distribution, whose standard deviation is 1.41421: #osd count first primary c wt wt osd.0 33 9 9 0.0145874 1 osd.1 34 14 14 0.0145874 1 osd.2 31 7 7 0.0145874 1 osd.3 31 13 13 0.0145874 1 osd.4 30 14 14 0.0145874 1 osd.5 33 7 7 0.0145874 1 in 6 avg 32 stddev 1.41421 (0.0441942x) (expected 5.16398 0.161374x)) min osd.4 30 max osd.1 34 size 0 0 size 1 0 size 2 0 size 3 64 Availability¶ osdmaptool is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation for more information.
http://docs.ceph.com/docs/master/man/8/osdmaptool/
2018-02-18T04:37:00
CC-MAIN-2018-09
1518891811655.65
[]
docs.ceph.com
Risk homepage The risk homepage provides an executive view into risk management, allowing risk managers to quickly identify areas of concern by pinpointing profiles with known high risk. The overview displays warnings for profiles in non-compliance that increase risk. It also displays up-to-the-minute information in gauges that contain valuable visuals such as the new Heatmap. A Heatmap is a graphical representation of the number of records which meet a certain condition. The more records that meet a certain condition, the darker the color (the heat) is for that particular condition. In the case of Risk, the Heatmaps plot the values of likelihood vs. the values of significance. All Risks that meet a certain condition, for example Likelihood is 3 and Significance is 2, will aggregate into a score on the Heatmap. In the example Heatmap in the following image, the number of records that meet this condition is 8. Thus, using these Heatmaps, it is easy to identify the impact all Risks pose to your organization. In the example below, heat in the bottom-right of the map means greater risk to the organization, showing risks that are both highly likely and highly significant, while heat in the upper-left of the map shows risk that is less likely and less significant. Figure 1. Example Heatmap The following gauges are provided out-of-box. Table 1. Out-of-box gauges Name Visual Description Highest Risk Profiles List Display of any Profiles with a calculated score of moderate, high, and very high. Risk Warnings List Display of any Profiles with a calculated score that is different (that is, greater) than the residual score, which allows users to quickly identify areas of non-compliance. Risks by Response Semi-donut Chart of the number of risks that an organization has chosen to mitigate, avoid, accept, or transfer. User is able to drill-in to a list of all risks that meet a particular response. Risks by Category Semi-donut Chart of the number of risks that apply to a particular category whether it be IT, Operational, Legal, Reputational, or Financial. Inherent Likelihood vs. Significance Heatmap Plots the count of the number of risks by inherent likelihood vs. the inherent significance. Residual Likelihood vs. Significance Heatmap Plots the count of the number of risks by residual likelihood vs. the residual significance. Ideally, all Risks would be in the upper-left corner of the plot (Likelihood = 1, Significance = 1).
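The Heatmap described above is essentially a count of risk records per (likelihood, significance) cell; the hedged Python sketch below shows that aggregation on made-up records. The field names and 1–3 scales are illustrative assumptions, not the actual GRC risk table schema.

from collections import Counter

# Hypothetical risk records; real data would come from the GRC risk table.
risks = [
    {"likelihood": 3, "significance": 2},
    {"likelihood": 3, "significance": 2},
    {"likelihood": 1, "significance": 1},
    {"likelihood": 2, "significance": 3},
]

heatmap = Counter((r["likelihood"], r["significance"]) for r in risks)

for (likelihood, significance), count in sorted(heatmap.items()):
    # Each cell's count drives the "heat" (color intensity) plotted for that condition.
    print(f"Likelihood {likelihood} x Significance {significance}: {count} record(s)")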
https://docs.servicenow.com/bundle/geneva-governance-risk-compliance/page/product/grc_risk/concept/c_RiskHomepage.html
2018-02-18T05:22:48
CC-MAIN-2018-09
1518891811655.65
[]
docs.servicenow.com
Modify the UI Action You can modify the behavior of the Communicate Workaround, Post Knowledge, and Post News related links. Before you begin: Role required: admin Procedure Navigate to System Definition > UI Actions. Locate and open the UI action with the same name as the related link to modify. Edit the UI action and click Update. Related Tasks: Access an external knowledge article, Communicate a workaround, Create knowledge from an incident, Create knowledge manually, Make an attachment visible, Post news, Post knowledge, Use the knowledge check box. Related Reference: Options for creating knowledge from a problem
https://docs.servicenow.com/bundle/geneva-it-service-management/page/product/problem_management/task/t_ModifyTheUIAction.html
2018-02-18T05:22:38
CC-MAIN-2018-09
1518891811655.65
[]
docs.servicenow.com
HR ticket page You can create requests or view assigned or open tasks and cases with the HR ticket page. You can also communicate with HR from this page. The HR ticket page displays when you select a case or task from My To-dos or Open Cases and Requests from the HR Service Portal. Figure 1. HR ticket page: Request Onboarding From the HR ticket page, you can: Create a request from the HR Service Catalog. View assigned open tasks and cases, with the ability to complete them. View the details and status of tasks and cases. View attachments related to tasks and cases. Sign or provide credentials indicating documents were read. View and acknowledge watching a video. View activity and status related to the tasks and cases. Send a message about the activity. Sign and complete tasks from the Case page. Upload requested documents. View and change the priority of your case. View and add to the Watch list. Approve or reject a task or case. HR managers can view a running total of all cases or only assigned cases. Create an onboarding request with the HR ticket page: From the HR Service Portal, you can make an onboarding request for a new hire using the HR ticket page. View the status of an onboarding request: From the HR Service Portal, you can view the status of an onboarding request using the HR ticket page. Reserve an office space with the HR ticket page: As part of the onboarding process, you can find and reserve an office space for a new hire using the HR ticket page. Request new-hire provisioning with the HR ticket page: As part of the onboarding process, you can make selections from the order guide to provide new hires with account access, software, and equipment, using the HR ticket page.
https://docs.servicenow.com/bundle/jakarta-hr-service-delivery/page/product/human-resources/concept/c_HRTicketPage.html
2018-02-18T05:23:27
CC-MAIN-2018-09
1518891811655.65
[]
docs.servicenow.com
WinJS.Promise.as function Returns a promise. If the object is already a Promise it is returned; otherwise the object is wrapped in a Promise. You can use this function when you need to treat a non-Promise object like a Promise, for example when you are calling a function that expects a promise, but already have the value needed rather than needing to get it asynchronously. Syntax var promise = WinJS.Promise.as(nonPromise); Parameters nonPromise: the value to be treated as a promise (returned unchanged if it is already a Promise). Return value The promise.
https://docs.microsoft.com/en-us/previous-versions/windows/apps/br211664(v=win.10)
2018-02-18T05:41:44
CC-MAIN-2018-09
1518891811655.65
[array(['images/hh464906.windows_and_phone%28en-us%2cwin.10%29.png', None], dtype=object) ]
docs.microsoft.com
Show DHCP Class ID Information at a Client Computer Applies To: Windows Server 2008 You can use the ipconfig /showclassid command to show DHCP class ID information at a client computer. Membership in the Domain Admins group, or equivalent, is the minimum required to complete this procedure. To show DHCP class ID information at a client computer At a DHCP-enabled client computer running Windows XP or Windows Vista, open a command prompt. Use the Ipconfig command-line tool to show the DHCP class ID that the client uses when obtaining its lease from the DHCP server. The following example shows Default BOOTP Class for Local Area Connection is currently set as the DHCP class ID at the client computer: C:\>ipconfig /showclassid "Local Area Connection" Windows IP Configuration DHCP Class ID for Adapter "Local Area Connection": DHCP ClassID Name . . . . . . . . : Default BOOTP Class DHCP ClassID Description . . . . : User class for BOOTP clients Additional considerations.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd183669(v=ws.10)
2018-02-18T05:47:55
CC-MAIN-2018-09
1518891811655.65
[]
docs.microsoft.com
Semidbm¶ Semidbm is a fast, pure python implementation of a dbm, which is a persistent key value store. It allows you to get and set keys through a dict interface: import semidbm db = semidbm.open('testdb', 'c') db['foo'] = 'bar' print db['foo'] db.close() These values are persisted to disk, and you can later retrieve these key/value pairs: # Then at a later time: db = semidbm.open('testdb', 'r') # prints "bar" print db['foo'] It was written with these things in mind: - Pure python, supporting python 2.6, 2.7, 3.3, and 3.4. - Cross platform, works on Windows, Linux, Mac OS X. - Supports CPython, pypy, and jython (versions 2.7-b3 and higher). - Simple and Fast (See Benchmarking Semidbm). Post feedback and issues on github issues, or check out the latest changes at the github repo.
http://semidbm.readthedocs.io/en/latest/
2018-02-18T04:57:21
CC-MAIN-2018-09
1518891811655.65
[]
semidbm.readthedocs.io
Monitor chat queues Chat queues can yield useful Key Performance Indicators (KPI) for evaluating support effectiveness. Queue Wait Time: amount of time a user waits in the queue before a help desk agent accepts the request. Percentage of Chats Abandoned: users that exit the queue before an agent responds (user stopped waiting). Percentage of Chats Accepted: requests that are answered by an agent. Note: This information is not calculated automatically. Administrators may calculate these values based on data collected by chat queues. Monitor help desk chat tasks: Help Desk Chat requests are tracked in the Chat queue entries table, which appears as a related list on the associated chat queue record.
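Since the note above says these KPIs are not calculated automatically, here is a hedged Python sketch of how an administrator might derive them from exported chat queue entry data; the field names (created, accepted, abandoned) are illustrative assumptions, not the actual Chat queue entries columns.

from datetime import datetime

# Hypothetical export of chat queue entries; timestamps are ISO strings, None means the event never happened.
entries = [
    {"created": "2018-02-18T05:00:00", "accepted": "2018-02-18T05:02:30", "abandoned": None},
    {"created": "2018-02-18T05:05:00", "accepted": None, "abandoned": "2018-02-18T05:06:10"},
    {"created": "2018-02-18T05:10:00", "accepted": "2018-02-18T05:10:45", "abandoned": None},
]

def parse(ts):
    return datetime.fromisoformat(ts) if ts else None

accepted = [e for e in entries if e["accepted"]]
abandoned = [e for e in entries if e["abandoned"]]

waits = [(parse(e["accepted"]) - parse(e["created"])).total_seconds() for e in accepted]

print("Average queue wait time (s):", sum(waits) / len(waits) if waits else 0)
print("Percentage of chats accepted:", 100 * len(accepted) / len(entries))
print("Percentage of chats abandoned:", 100 * len(abandoned) / len(entries))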
https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/use/using_social_it/concept/c_MonitorChatQueues.html
2018-02-18T05:24:02
CC-MAIN-2018-09
1518891811655.65
[]
docs.servicenow.com
TheAppBuilder platform leverages Google Analytics to provide very comprehensive analysis of how apps are used. Google Analytics is a comprehensive platform tracking a vast array of different data, including new vs returning users, individual page views, flow through the app, what documents/videos etc have been accessed, spread of use across various devices, mobile vs desktop and a host of other information. There are a variety of ways of consuming the data from Google Analytics. The options include creating dashboards, exporting data as csv files or visually as pdfs. You can opt for high-level graphical representations or granular, clickable tables. There are also mobile apps that you can use to track your stats on the move. Learn How to set up Google Analytics for your app here.
https://docs.theappbuilder.com/reporting/
2018-02-18T04:31:18
CC-MAIN-2018-09
1518891811655.65
[]
docs.theappbuilder.com
IAM JSON Policy Elements: Version Disambiguation Note This topic is about the JSON policy element named "Version". If you were searching for information about the multiple version support available for managed policies, see Versioning IAM Policies. The Version element specifies the language syntax rules that are to be used to process this policy. If you include features that are not available in the specified version, then your policy will generate errors or not work the way you intend. As a general rule, you should specify the most recent version available, unless you depend on a feature that was deprecated in later versions. If the Version element is missing or set to an older version such as 2008-10-17, newer features such as policy variables (for example, ${aws:username}) aren't recognized as variables and are instead treated as literal strings in the policy. "Version": "2012-10-17"
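For context, the snippet below is a minimal, hedged Python sketch that builds an illustrative policy document with the Version element set to 2012-10-17; the action, resource, and bucket name in the Statement are placeholders, not recommendations from this page.

import json

# Illustrative policy; only the Version element is the point of this example.
policy = {
    "Version": "2012-10-17",  # latest policy language version, required for features like policy variables
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-bucket",
        }
    ],
}

print(json.dumps(policy, indent=2))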
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_version.html
2018-02-18T05:34:48
CC-MAIN-2018-09
1518891811655.65
[]
docs.aws.amazon.com
Mobile Devices¶ One of the key features of gthnk is that it runs on your computer - not in the cloud - so you can keep your thoughts private. However, sometimes you’re on the road and you really want to write something down. This scenario is no problem for gthnk - just use a service like Dropbox or Seafile to sync your journal buffers back to your computer. This permits you to take non-sensitive notes even when you’re not by your computer. Journal buffers via Dropbox¶ gthnk can import any number of journal buffers every day. Edit INPUT_FILES in the configuration file ~/Library/Gthnk/gthnk.conf and add any Dropbox files there. Since Dropbox is usually ~/Dropbox, you might create a file called ~/Dropbox/journal-phone.txt for capturing notes that come from your phone. To accomplish this, you would edit the configuration like this: INPUT_FILES = "/Users/me/Dropbox/journal-phone.txt,/Users/me/Desktop/journal.txt" Now gthnk will import entries from the file on your Desktop and from your phone. It doesn’t matter if these files are empty most of the time; that doesn’t bother gthnk. Mobile Text Editors¶ - Jota+ for Android supports Dropbox and supports macros. Have you had success with other mobile text editors? Please create an issue that describes your experience and we can add it to this document.
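As a rough illustration of the Dropbox workflow above, the Python sketch below appends a quick note to a journal buffer file so gthnk can pick it up on its next import; the buffer path mirrors the example configuration, and the plain-text, timestamped entry format is an assumption rather than something specified on this page.

import os
from datetime import datetime

# Path matching the example INPUT_FILES entry above; adjust to your own Dropbox layout.
BUFFER = os.path.expanduser("~/Dropbox/journal-phone.txt")

def jot(note: str) -> None:
    # Append a timestamped plain-text line; gthnk imports the buffer on its own schedule.
    with open(BUFFER, "a", encoding="utf-8") as f:
        f.write(f"{datetime.now():%Y-%m-%d %H:%M} {note}\n")

jot("Idea captured from my phone while away from my computer.")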
http://gthnk.readthedocs.io/en/latest/user/mobile-devices.html
2018-02-17T21:28:33
CC-MAIN-2018-09
1518891807825.38
[]
gthnk.readthedocs.io
What Is Amazon Translate? Amazon Translate translates documents from the following six languages into English, and from English into these languages: Arabic Chinese French German Portuguese Spanish Amazon Translate uses advanced machine learning technologies to provide high-quality translation on demand. Use it to translate unstructured text documents or to build applications that work in multiple languages. For example, you can: Integrate Amazon Translate into your applications to enable multilingual user experiences. Translate user-authored content, such as chats, forum posts, and search queries. Translate company-authored content, such as product data and metadata. Use Amazon Translate as part of your company's workflow for incoming data. Process text in many languages. Present text to customers and team members in their native language. Integrate Amazon Translate with other AWS services to enable language-independent processing. Use it with Amazon Polly to speak translated content. Use it with Amazon S3 to translate document repositories of any kind. Use it with Amazon Comprehend to extract named entities, sentiment, and key phrases from unstructured text such as social media streams. Use it with Amazon DynamoDB, Amazon Aurora, and Amazon Redshift to translate structured and unstructured text. Use it with AWS Lambda or AWS Glue for seamless workflow integration. Are You a First-time User of Amazon Translate? If you are a first-time user, we recommend that you read the following sections in order: How It Works—Introduces Amazon Translate. Getting Started with Amazon Translate—Explains how to set up your AWS account and test Amazon Translate. Examples—Provides code examples in Java and Python. Use them to explore how Amazon Translate works. API Reference—Contains reference documentation for Amazon Translate operations.
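As a quick illustration of calling the service from Python (the documentation's Examples section covers Java and Python), here is a hedged sketch using boto3's TranslateText operation; the region and the assumption that AWS credentials are already configured are mine, not part of this page.

import boto3

# Assumes AWS credentials are configured and Amazon Translate is available in the chosen region.
translate = boto3.client("translate", region_name="us-east-1")

response = translate.translate_text(
    Text="Hello, world",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)

print(response["TranslatedText"])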
https://docs.aws.amazon.com/translate/latest/dg/what-is.html
2018-02-17T21:53:13
CC-MAIN-2018-09
1518891807825.38
[]
docs.aws.amazon.com
Confluent Proactive Support¶ This document is a high-level summary. Please refer to the Confluent Privacy Policy as the authoritative source of information. What is Proactive Support?¶ Proactive Support is a component of the Confluent Platform. It collects and reports certain metadata (“Metadata”) about your Confluent Platform 3.0 deployment (including without limitation, your remote internet protocol address) to Confluent, Inc. (“Confluent”) or its parent, subsidiaries, affiliates or service providers. This Metadata may be transferred to any country in which Confluent maintains facilities. By proceeding with Metrics enabled, you agree to all such collection, transfer, storage and use of Metadata by Confluent. You can turn Metrics off at any time by following the instructions described below. Please refer to the Confluent Privacy Policy for an in-depth description of how Confluent processes such information. How it works¶ With the Metrics feature enabled, a Kafka broker will collect and report certain broker and cluster metadata every 24 hours to the following two destinations: - to a special-purpose Kafka topic within the same cluster (named __confluent.support.metrics by default) - to Confluent via either HTTPS or HTTP over the Internet (HTTPS preferred). The main reason for reporting to a Kafka topic is that there are certain situations when reporting the metadata via the Internet is not possible. For example, a company’s security policy may mandate that computer infrastructure in production environments must not be able to access the Internet directly. The drawback of this approach is that the collected metadata is not being shared automatically and requires manual operator intervention as described in section Sharing Proactive Support Metadata with Confluent manually. Reporting data to Confluent via the Internet is the most convenient option for most customers. The agent that collects and reports the metadata is collocated with the broker process and runs within the same JVM. The volume of the metadata collected by this agent (see Which metadata is collected?) is small and the default report interval is once every 24 hours (see Proactive Support configuration settings). The following sections describe in more detail which metadata is being collected, how to enable or disable the Metrics feature, how to configure the feature if you are a licensed Confluent customer, and how to tune its configuration settings when needed. Which metadata is collected?¶ Proactive Support has two versions, one for open-source users (called Version Collector) and another for licensed Confluent customers (called Confluent Support Metrics). The former is included in the Confluent Kafka package while the latter needs to be installed as a separate package called Confluent Support Metrics. If you installed the Confluent Platform package (and not just Confluent Platform Open Source), the full Confluent Support Metrics collector is included. Version Collector (default)¶ The Version Collector package collects the Kafka and Confluent Platform versions and reports the following pieces of information to Confluent: - Confluent Platform version - The Confluent Platform version that the broker is running. - Kafka version - The Kafka version that the broker is running. - Broker token - A dynamically generated unique identifier that is valid for the runtime of a broker. The token is generated at broker startup and lost at shutdown. - Timestamp - Time when the metadata was collected on the broker.
Confluent Support Metrics (add-on package)¶ The Confluent Support Metrics add-on package collects and reports additional metadata that helps Confluent to proactively identify issues in the field. This additional metadata includes but is not limited to information about the Java runtime environment of the Kafka broker and metrics relevant to help to comply with Confluent support contracts. Please reach out to our customer support or refer to the Confluent Privacy Policy for more information. You will need a Confluent customer ID to set in the broker configuration ( confluent.support.customer.id setting) as well as the Confluent Support Metrics package. Which metadata and data is not being collected?¶ We understand that Confluent Platform users often publish private or proprietary data to Kafka clusters, and often run Kafka in sensitive environments. We have done our best to avoid collecting any proprietary information about customers. In particular: - We do not inspect any messages sent through Kafka, and collect no data about what is inside these messages. - We do not collect any proprietary network information such as internal host names or IP addresses. - We do not collect information about the name of topics. Installing Support Metrics¶ In order to collect and report the additional support metadata, you need to have the Support Metrics package installed. If you installed confluent-3.0.0-2.11.8 or confluent-3.0.0-2.10.6 (through ZIP, TAR, DEB or RPM), then Support Metrics is already installed and the next step is to obtain your Confluent customer ID from our customer support and to update it in Kafka configuration (see Recommended Proactive Support configuration settings for licensed Confluent customers). If you installed confluent-oss-3.0.0-2.11.8 or confluent-oss-3.0.0-2.10.6, or if you chose to install individual packages, you will need to install confluent-support-metrics_3.0.0 packages first. DEB Packages via apt¶ We’ll assume you already followed instructions here DEB packages via apt to install Confluent’s public key and add the repository. Run apt-get update and install Support Metrics package: $ sudo apt-get update && sudo apt-get install confluent-support-metrics_3.0.0 The next step is to obtain your Confluent customer ID from our customer support and to update it in Kafka configuration (see Recommended Proactive Support configuration settings for licensed Confluent customers). RPM Packages via yum¶ We’ll assume you already followed instructions here RPM packages via yum to install Confluent’s public key and add the repository. It is recommended to clear the yum caches before proceeding: $ sudo yum clean all The repository is now ready for use. You can install Support Metrics with: $ sudo yum install confluent-platform-2.11.8 The next step is to obtain your Confluent customer ID from our customer support and to update it in Kafka configuration (see Recommended Proactive Support configuration settings for licensed Confluent customers). Enabling or disabling the Metrics feature¶ The Metrics feature can be enabled or disabled at any time by modifying the broker configuration as needed, followed by a restart of the broker. The relevant setting for the broker configuration (typically at /etc/kafka/server.properties) is described below: Recommended Proactive Support configuration settings for licensed Confluent customers¶ Confluent customers must change the confluent.support.customer.id setting and provide their respective Confluent customer ID. 
Please reach out to our customer support if you have any questions. Proactive Support configuration settings¶ This section documents all available Proactive Support settings that can be defined in the broker configuration (typically at /etc/kafka/server.properties), including their default values. Most users will not need to change these settings. In fact, we recommend leaving these settings at their default values; the exception is Confluent customers, who should change a few settings as described in the previous section.

##################### Confluent Proactive Support: ######################
##################### broker configuration settings ######################

# If set to true, then the feature to collect and report support metrics
# ("Metrics") is enabled. If set to false, the feature is disabled.
#
confluent.support.metrics.enable=true

# The customer ID under which support metrics will be collected and
# reported.
#
# When the customer ID is set to "anonymous" (the default), then only a
# reduced set of metrics is being collected and reported.
#
# Confluent customers
# -------------------
# If you are a Confluent customer, then you should replace the default
# value with your actual Confluent customer ID. Doing so will ensure
# that additional support metrics will be collected and reported.
#
confluent.support.customer.id=anonymous

# The Kafka topic (within the same cluster as this broker) to which support
# metrics will be submitted.
#
# To specifically disable reporting metrics to an internal Kafka topic when
# `confluent.support.metrics.enable=true` set this variable to an empty value.
#
confluent.support.metrics.topic=__confluent.support.metrics

# The interval at which support metrics will be collected from and reported
# by this broker.
#
confluent.support.metrics.report.interval.hours=24

# To selectively disable the reporting of support metrics to Confluent
# over the Internet when `confluent.support.metrics.enable=true`,
# set these variables to false as needed.
#
# Tip: If you want to enforce that reporting over the Internet
# will only ever use an encrypted channel, enable the secure
# endpoint but disable the insecure one.
#
confluent.support.metrics.endpoint.insecure.enable=true
confluent.support.metrics.endpoint.secure.enable=true

Network ports used by Proactive Support¶ When the Metrics feature is enabled (default), brokers will attempt to report metadata via the Internet to Confluent. The metadata will be sent via HTTPS (preferred) or HTTP, which means you need to ensure that the brokers are allowed to talk to the Internet via destination ports 443 (HTTPS) and/or 80 (HTTP) if you want to benefit from this functionality.
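In the on-cluster reporting mode described above, the collected records sit in the __confluent.support.metrics topic. As a hedged sketch (not from the Confluent docs), the Python snippet below uses the kafka-python package to count and size those records before sharing them manually; it does not attempt to decode their serialization format, which is Confluent-internal, and the broker address is a placeholder.

from kafka import KafkaConsumer

# Assumes kafka-python is installed and the bootstrap server below is replaced with a real broker address.
consumer = KafkaConsumer(
    "__confluent.support.metrics",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    enable_auto_commit=False,
    consumer_timeout_ms=5000,   # stop iterating once the topic has been drained
)

count = 0
total_bytes = 0
for record in consumer:
    count += 1
    total_bytes += len(record.value or b"")

print(f"{count} support-metrics records, {total_bytes} bytes total")
consumer.close()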
https://docs.confluent.io/3.0.0/proactive-support/docs/proactive-support.html
2018-02-17T21:35:13
CC-MAIN-2018-09
1518891807825.38
[]
docs.confluent.io
Binding to Folder You can bind RadImageGallery to a virtual folder path and display all images contained in it. All that you need to do is set the ImagesFolderPath property as shown in the code snippet. The control will automatically generate thumbnail images. If necessary the thumbnails will be cropped to fit the defined thumbnail width and height. The images in the Image Area have relative paths and the user only needs to wait for the image to be loaded by the browser. Figure 1 shows how the rendered control will look on the web page. <telerik:RadImageGallery <ThumbnailsAreaSettings Position="Left" ScrollOrientation="Vertical" ScrollButtonsTrigger="Click" /> <ImageAreaSettings ShowNextPrevImageButtons="true" NavigationMode="Button" /> </telerik:RadImageGallery> Figure 1 - RadImageGallery bound to a Folder
https://docs.telerik.com/devtools/aspnet-ajax/controls/imagegallery/data-binding/server-side/binding-to-folder
2018-02-17T21:31:52
CC-MAIN-2018-09
1518891807825.38
[array(['images/image-gallery-binding-to-folder.jpg', 'image-gallery-binding-to-folder'], dtype=object)]
docs.telerik.com
Using OpenWrt SDK to Build C/C++ Programs Building C/C++ binaries for LinkIt Smart 7688 development platform requires cross-compilation. Follow these steps to build an example C/C++ ipk file that can be installed with an opkg command. Environment Currently only Ubuntu Linux and OS X are supported. Windows with Cygwin is not supported. The following steps assume an Ubuntu Linux environment. Steps Download and unzip the SDK package content from the Downloads page. The name is quite long and we'll use SDK to denote it instead. sudo tar -xvjf SDK.tar.bz2 Note that sudo is mandatory – without it the file won’t unpack properly. cd SDK - Download and unzip the example package file. Copy the example helloworld directory to the SDK/package folder. The folder structure looks like: SDK/package +helloworld # Name of the package -Makefile # This Makefile describes the package +src -Makefile # This Makefile builds the binary -helloworld.c # C/C++ source code - In the SDK directory, enter make package/helloworld/compile to build the package. Once it's built: - Navigate to SDK/bin/ramips/packages/base. - Find a package file named helloworld_1.0.0-1_ramips_24kec.ipk. - Copy the .ipk file to the LinkIt Smart 7688 development board. - In the system console of the board, navigate to the location of the .ipk file and type opkg install helloworld_1.0.0-1_ramips_24kec.ipk. - After the installation is complete, type helloworld and you'll see the string Hello world. Why won't my code compile? To cross-compile existing programs or libraries, apply the following in the SDK directory: ./scripts/feeds update ./scripts/feeds list # This gives you all the available packages ./scripts/feeds install curl # for example we want to build curl make package/curl/compile
https://docs.labs.mediatek.com/resource/linkit-smart-7688/en/tutorials/c-c++-programming/using-openwrt-sdk-to-build-c-c++-programs
2018-02-17T21:26:30
CC-MAIN-2018-09
1518891807825.38
[]
docs.labs.mediatek.com
User interface development home page This topic contains links to topics about developing user interface elements. The user interface for Microsoft Dynamics 365 for Finance and Operations, Enterprise edition differs significantly from the interface for Microsoft Dynamics AX 2012. The client in Dynamics AX 2012 is a Microsoft Win32 application that has extensions that use ActiveX, WinForm, or WPF controls. The X++ application logic runs on the client for the form and table methods, and some logic occurs on the server. For controls, both the X++ logic application programming interface (API) and the physical Win32 control are tightly connected on the client. The client is an HTML web client that runs in all major browsers. These browsers include Microsoft Edge, Internet Explorer 11, Chrome, and Safari (see System requirements). The move to a web client has produced the following changes to client forms and controls: - The physical presentation of forms and controls is now HTML, JavaScript, and CSS within the browser. - Form controls are split into logical and physical parts. The X++ logical API and related state run on the server. - The logical and physical parts are kept in sync through service calls that communicate changes from each side. For example, a user action on the client creates a service call to the server that is either sent immediately or queued so that it can be sent later. - The server tier keeps the form state in memory while the form is open. The form metamodel continues to be used to define controls and application logic. This approach supports almost all the existing Form, Form DataSource, and Form Control metamodel and X++ override methods. However, some control types, properties, and override methods have been removed, either because of incompatibility with the new platform or for performance reasons. For example, ActiveX and ManagedHost controls can no longer be used to add custom controls, because they are incompatible with the HTML platform. Instead, a new extensible control framework has been added that lets you add additional controls. Tutorials Forms Controls - Action controls - Sizing for input controls and grid columns - Check box support in tree controls - Filtering - Display pages side-by-side using the Open in New Window icon - Code migration - context menus - Code migration - mouse double click - Contextual data entry for lookups - HierarchyViewer control - Lookup controls - File upload control - How to: system-defined buttons - Using images - Specify the font and background colors for input, table, and grid controls - Support for right-to-left languages: A primer on bidirectional text - Creating icons for workspace tiles - Keyboard shortcuts for extensible controls - Extensible controls – public JavaScript APIs Messaging - Slider and MessageBox - Messaging API: Message center, Message bar, Message details - Messaging the user
https://docs.microsoft.com/ca-es/dynamics365/unified-operations/dev-itpro/user-interface/user-interface-development-home-page
2018-02-17T21:52:10
CC-MAIN-2018-09
1518891807825.38
[]
docs.microsoft.com
Overview Custom Vision Service brings the power of machine learning to your apps Custom Vision Service is a tool for building custom image classifiers. It makes it easy and fast to build, deploy, and improve an image classifier. We provide a REST API and a web interface to upload your images and train. What can Custom Vision Service do well? Custom Vision Service is a tool for building custom image classifiers, and for making them better over time. For example, if you want a tool that could identify images of "Daisies", "Daffodils", and "Dahlias", you could train a classifier to do that. You do so by providing Custom Vision Service with images for each tag you want to recognize. Custom Vision Service works best when the item you are trying to classify is prominent in your image. Custom Vision Service does "image classification" but not yet "object detection." This means that Custom Vision Service identifies whether an image is of a particular object, but not where that object is within the image. Very few images are required to create a classifier -- 30 images per class is enough to start your prototype. The methods Custom Vision Service uses are robust to differences, which allows you to start prototyping with so little data. However, this means Custom Vision Service is not well suited to scenarios where you want to detect very subtle differences (for example, minor cracks or dents in quality assurance scenarios.) Custom Vision Service is designed to make it easy to start building your classifier, and to help you improve the quality of your classifier over time. Release Notes Dec 19, 2017 - Export to Android (TensorFlow) added, in addition to previously released export to iOS (CoreML.) This allows export of a trained compact model to be run offline in an application. - Added Retail and Landmark "compact" domains to enable model export for these domains. - Released version 1.2 Training API and 1.1 Prediction API. Updated APIs support model export, new Prediction operation that does not save images to "Predictions," and introduced batch operations to the Training API. - UX tweaks, including the ability to see which domain was used to train an iteration. - Updated C# SDK and sample. Known issues - 1/3/2018: The new "Retail - compact" domain model export to iOS (CoreML) generates a faulty model which will not run and generates a validation error. The cloud service and Android export should work. A fix is on the way.
https://docs.microsoft.com/cs-cz/azure/cognitive-services/Custom-Vision-Service/home
2018-02-17T21:52:16
CC-MAIN-2018-09
1518891807825.38
[]
docs.microsoft.com
Copy data from Xero using Azure Data Factory (Beta) This article outlines how to use the Copy Activity in Azure Data Factory to copy data from Xero. provide feedback. Do not use it in production environments. Supported capabilities You can copy data from Xero to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the Supported data stores table. Specifically, this Xero connector supports: - Xero private application but not public application. - All Xero tables (API endpoints) except "Reports". Azure Data Factory provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector. Xero connector. Linked service properties The following properties are supported for Xero linked service: Example: { "name": "XeroLinkedService", "properties": { "type": "Xero", "typeProperties": { "host" : "api.xero.com", "consumerKey": { "type": "SecureString", "value": "<consumerKey>" }, "privateKey": { "type": "SecureString", "value": "<privateKey>" } } } } Sample private key value: Include all the text from the .pem file including the Unix line endings(\n). "-----BEGIN RSA PRIVATE KEY-----\nMII***************************************************P\nbu****************************************************s\nU/****************************************************B\nA*****************************************************W\njH****************************************************e\nsx*****************************************************l\nq******************************************************X\nh*****************************************************i\nd*****************************************************s\nA*****************************************************dsfb\nN*****************************************************M\np*****************************************************Ly\nK*****************************************************Y=\n-----END RSA PRIVATE KEY-----" Dataset properties For a full list of sections and properties available for defining datasets, see the datasets article. This section provides a list of properties supported by Xero dataset. To copy data from Xero, set the type property of the dataset to XeroObject. There is no additional type-specific property in this type of dataset. Example { "name": "XeroDataset", "properties": { "type": "XeroObject", "linkedServiceName": { "referenceName": "<Xero linked service name>", "type": "LinkedServiceReference" } } } Copy activity properties For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by Xero source. Xero as source To copy data from Xero, set the source type in the copy activity to XeroSource. The following properties are supported in the copy activity source section: Example: "activities":[ { "name": "CopyFromXero", "type": "Copy", "inputs": [ { "referenceName": "<Xero input dataset name>", "type": "DatasetReference" } ], "outputs": [ { "referenceName": "<output dataset name>", "type": "DatasetReference" } ], "typeProperties": { "source": { "type": "XeroSource", "query": "SELECT * FROM Contacts" }, "sink": { "type": "<sink type>" } } } ] Note the following when specifying the Xero query: Tables with complex items will be split to multiple tables. 
For example, Bank transactions has a complex data structure "LineItems", so data of a bank transaction is mapped to table Bank_Transaction and Bank_Transaction_Line_Items, with Bank_Transaction_ID as foreign key to link them together. Xero data is available through two schemas: Minimal (default) and Complete. The Complete schema contains prerequisite call tables which require additional data (e.g. ID column) before making the desired query. The following tables have the same information in the Minimal and Complete schema. To reduce the number of API calls, use Minimal schema (default). - Bank_Transactions - Contact_Groups - Contacts - Contacts_Sales_Tracking_Categories - Contacts_Phones - Contacts_Addresses - Contacts_Purchases_Tracking_Categories - Credit_Notes - Credit_Notes_Allocations - Expense_Claims - Expense_Claim_Validation_Errors - Invoices - Invoices_Credit_Notes - Invoices_Prepayments - Invoices_Overpayments - Manual_Journals - Overpayments - Overpayments_Allocations - Prepayments - Prepayments_Allocations - Receipts - Receipt_Validation_Errors - Tracking_Categories The following tables can only be queried with complete schema: - Complete.Bank_Transaction_Line_Items - Complete.Bank_Transaction_Line_Item_Tracking - Complete.Contact_Group_Contacts - Complete.Contacts_Contact_Persons - Complete.Credit_Note_Line_Items - Complete.Credit_Notes_Line_Items_Tracking - Complete.Expense_Claim_Payments - Complete.Expense_Claim_Receipts - Complete.Invoice_Line_Items - Complete.Invoices_Line_Items_Tracking - Complete.Manual_Journal_Lines - Complete.Manual_Journal_Line_Tracking - Complete.Overpayment_Line_Items - Complete.Overpayment_Line_Items_Tracking - Complete.Prepayment_Line_Items - Complete.Prepayment_Line_Item_Tracking - Complete.Receipt_Line_Items - Complete.Receipt_Line_Item_Tracking - Complete.Tracking_Category_Options Next steps For a list of supported data stores by the copy activity, see supported data stores.
https://docs.microsoft.com/en-us/azure/data-factory/connector-xero
2018-02-17T21:52:12
CC-MAIN-2018-09
1518891807825.38
[]
docs.microsoft.com
Download SQL Server Management Studio (SSMS) SSMS is an integrated environment for managing any SQL infrastructure, from SQL Server to SQL Database. SSMS provides tools to configure, monitor, and administer instances of SQL. Use SSMS to deploy, monitor, and upgrade the data-tier components used by your applications, as well as build queries and scripts. Use SQL Server Management Studio (SSMS) to query, design, and manage your databases and data warehouses, wherever they are - on your local computer, or in the cloud. SSMS is free! SSMS 17.x is the latest generation of SQL Server Management Studio and provides support for SQL Server 2017. Download SQL Server Management Studio 17.5 Download SQL Server Management Studio 17.5 Upgrade Package (upgrades 17.x to 17.5) Microsoft SQL Server Management Studio 17, and has a new icon: Available Languages Note Non-English localized releases of SSMS require the KB 2862966 security update package if installing on: Windows 8, Windows 7, Windows Server 2012, and Windows Server 2008 R2. This release of SSMS can be installed in the following languages: SQL Server Management Studio 17.5: Chinese (People's Republic of China) | Chinese (Taiwan) | English (United States) | French | German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian | Spanish SQL Server Management Studio 17.5 Upgrade Package (upgrades 17.x to 17.5): Chinese (People's Republic of China) | Chinese (Taiwan) | English (United States) | French | German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian | Spanish Note The SQL Server PowerShell module is now a separate install through the PowerShell Gallery. For more information, see Download SQL Server PowerShell Module. SQL Server Management Studio Version Information Release number: 17.5 Build number for this release: 14.0.17224.0 Release date: February 15, 2018 New in this Release SSMS 17.5 is the latest version of SQL Server Management Studio. The 17.x generation of SSMS provides support for almost all feature areas on SQL Server 2008 through SQL Server 2017. Version 17.x also supports SQL Analysis Service PaaS. Version 17.5 includes:. Supported SQL offerings - This version of SSMS works with all supported versions of SQL Server 2008 - SQL Server 2017 and provides the greatest level of support for working with the latest cloud features in Azure SQL Database and Azure SQL Data Warehouse. - There is no explicit block for SQL Server 2000 or SQL Server 2005, but some features may not work properly. - Additionally, SSMS 17.x can be installed side by side with SSMS 16.x or SQL Server 2014 SSMS and earlier. Supported Operating systems This release of SSMS supports the following. SSMS installation tips and issues Minimize Installation Reboots - Take the following actions to reduce the chances of SSMS setup requiring a reboot at the end of installation: - Make sure you are running an up-to-date version of the Visual C++ 2013 Redistributable Package. Version 12.00.40649.5 (or greater) is required. Only the x64 version is needed. - Verify the version of .NET Framework on the computer is 4.6.1 (or greater). - Close any other instances of Visual Studio that are open on the computer. - Make sure all the latest OS updates are installed on the computer. - The noted actions are typically required only once. There are few cases where a reboot is required during additional upgrades to the same major version of SSMS. For minor upgrades, all the prerequirements for SSMS are already be installed on the computer. 
Release Notes The following are issues and limitations with this 17.5 release:. Previous releases Previous SQL Server Management Studio Releases Feedback Get Help - UserVoice - Suggestion to improve SQL Server? - Stack Overflow (tag sql-server) - ask SQL development questions - Setup and Upgrade - MSDN Forum - SQL Server Data Tools - MSDN forum - Reddit - general discussion about SQL Server - Microsoft SQL Server License Terms and Information - Support options for business users - Contact Microsoft
https://docs.microsoft.com/cs-cz/sql/ssms/download-sql-server-management-studio-ssms
2018-02-17T21:50:13
CC-MAIN-2018-09
1518891807825.38
[array(['../includes/media/yes.png', 'yes'], dtype=object) array(['../includes/media/yes.png', 'yes'], dtype=object) array(['../includes/media/yes.png', 'yes'], dtype=object) array(['../includes/media/no.png', 'no'], dtype=object) array(['media/download-sql-server-management-studio-ssms/version-icons.png', 'SSMS 17.x'], dtype=object) ]
docs.microsoft.com
uap:LaunchAction (in AppointmentsProviderLaunchActions)

Describes an uap:AppointmentsProviderLaunchActions content action.

Element hierarchy
- <Package>

Syntax

<LaunchAction Verb = "addAppointment" | "removeAppointment" | "replaceAppointment" | "showTimeFrame" | "showAppointmentDetails"
  DesiredView? = "default" | "useLess" | "useHalf" | "useMore" | "useMinimum"
  EntryPoint? = The task handling the extension. This is normally the fully namespace-qualified name of a Windows Runtime type. If EntryPoint is not specified, the EntryPoint defined for the app is used instead.
  RuntimeType? = A string between 1 and 255 characters in length that cannot start or end with a period or contain these characters: <, >, :, ", /, \, |, ?, or *.
  StartPage? = A string between 1 and 256 characters in length that cannot contain these characters: <, >, :, ", |, ?, or *.
  ResourceGroup? = An alphanumeric string between 1 and 255 characters in length. Must begin with an alphabetic character. />

Key
? optional (zero or one)

Attributes and Elements

Attributes

Child Elements
None.

Parent Elements

Related elements
The following elements have the same name as this one, but different content or attributes:

Remarks

For more info about launch actions that an appointments provider takes, see AppointmentsProviderLaunchActionVerbs.

LaunchAction (in AppointmentsProviderLaunchActions) has these semantic validations:

Extension base attributes must follow these rules:
- If the StartPage attribute is specified, fail if the EntryPoint, Executable, or RuntimeType attribute is specified.
- Otherwise, fail if the Executable or RuntimeType attribute is specified without an EntryPoint specified.

If LaunchAction (in AppointmentsProviderLaunchActions) defines the EntryPoint attribute, either this LaunchAction (in AppointmentsProviderLaunchActions) or the parent uap:Extension or Application element must specify an Executable attribute.
https://docs.microsoft.com/en-us/uwp/schemas/appxpackage/uapmanifestschema/element-2-uap-launchaction
2018-02-17T21:50:08
CC-MAIN-2018-09
1518891807825.38
[]
docs.microsoft.com
Objective-C and Objective-C++ Coding Standards

Synopsis

This document is for the Objective-C and Objective-C++ coding standards at Appcelerator. As with other coding standards documents, the primary goal is clean, readable code, which is comparable to common existing conventions.

Basis for this document

You are expected to follow the C/C++ coding guidelines when writing Objective-C except where explicitly specified. These standards take precedence over any generic rules listed in the style guidelines above, although we have our own exceptions. However, for consistency, any pure-C functions you write in Objective-C source files are to follow the Objective-C rules with C exceptions.

Standards

The following are the standard set of spacing, formatting, and naming conventions we expect for Objective-C(++) code.

import vs. include

Always @import (not #import) Objective-C headers, and #include C (or C++) headers.

Class Naming

Objective-C classes are to be named with:
The prefix Ti, or another project-appropriate prefix
Camelcase

Example
@interface TiExampleClass : NSObject {
  // ivars
}
// properties
// methods
@end

The @interface directive should not be indented, and neither should @property or method declarations.

Protocols

Protocols follow the same naming conventions as classes, with the following exceptions:
Protocols which reference a behavior type should end with a gerund (-ing).
Protocols which describe a set of actions should describe the functional property of these collective actions.
Protocols which are a delegate should end with the word Delegate.

Example
@protocol TiScrolling; // Gerund; behavior type is "this object scrolls"
@protocol TiFocusable; // Action set; describes actions related to "focusing", and "TiFocusing" seems inappropriate ("this object focuses" vs. "this object performs actions related to focusing")
@protocol TiScrollViewDelegate; // Delegate

Protocols must always include the @required directive explicitly.

Category naming

Header files which define an interface for a category only should be named <base class>+<category>.h. Categories on existing classes should be named appropriately, with the category describing the set of extensions. Categories which are intended to describe a private API within an implementation file should be the empty category ().

ivars

Prefer private properties to ivars. If you do have a valid use case for an ivar, then declare it in the @implementation block (not the @interface block). Instance variables for a class should be indented one tabstop. Instance variables should be named in camelcase, and are not required to follow any other specific naming convention.

@public, @protected, and @private

Use of access specifiers is discouraged (use publicly-declared and private-category @property instead).

@property and @synthesize

Use the default synthesis property of ivars. You should rarely need @synthesize.

Methods

Methods should be named in camelcase, with the first character lowercase. Method names must never begin with an underscore. The leading method specifier (+ or -) should not be followed by a space, and neither should the return type. Selector (and argument) names should not have a space after their : character, or the type.
If method declarations, definitions, or calls are spread across multiple lines, their : characters should be aligned rather than spaced on tabstops. The opening brace of a method should be on its own line for implementations.

Example
+(void)x:(int)y
{
}

-(void)veryLongMethodName:(NSObject*)veryLongArgumentName
                     arg2:(NSObject*)anotherArg
                     arg3:(NSObject*)moreArg
{
}

init

Every class must have one, and only one, designated initializer that is identified as such in a comment. The following is an example of a well-written designated initializer:

Example
// Designated initializer.
-(instancetype)init
{
  self = [super init];
  if (self) {
    // initialization code goes here...
  }
  return self;
}

Note the single braces. You may wish to turn off the "initializer not fully bracketed" clang warning in Xcode as a result.

Blocks

Block variables should never be a raw type; they should always have a typedef associated with them and that name used as the variable type. EXCEPTION: The void (^varname)(void) block type does not require a typedef, although there are plenty of existing convenience typedefs for this block type which should be used when appropriate.

Blocks should have their opening brace on the same line as their ^, and their closing brace on its own line, indented with the surrounding scope. Blocks have their contents indented one tabstop from the surrounding scope. The void ^(void) block type should always be written as ^{ ... }.

__block storage specifier objects should be used with care. Remember that if a __block variable goes out of scope when a block tries to access it, there can be unpredictable and bad results.

Example
typedef int (^intBlock)(int);
intBlock doubler = ^(int value) {
  return 2 * value;
};

Fast enumeration (for x in y)

Prefer fast enumeration loops to other looping constructs where possible. Note that if y is a method call, the result of it should be pre-cached. Do not write fast enumeration loops which would modify y (whether an object or a method call) as a side-effect of the loop contents.

File names

The following file names are acceptable for Objective-C:
.h (headers)
.m (implementation files)
.mm (Objective-C++ - use with care, see below)
.pch (precompiled header)

@implementation ordering

Methods should be ordered in @implementation in the following way:
@synthesize directives
Designated initializer(s), ending with init
#pragma mark Private - Only required for implementations with a private category
Methods declared in private category
#pragma mark Public - Only required for implementations with a private category
Methods declared in @interface
#pragma mark Protocol @protocol-name - Only required for classes which implement a protocol
Methods for @protocol, @required first, then @optional
The protocol implementation sections may be repeated as necessary.

nil and NULL

Do not mix nil and NULL. NULL should only be used for C-style pointers, and nil for all Objective-C object (and id) types. It is illegal to use a statement such as if (objcObject) { ... }. Instead directly compare to nil, only where required. Remember that it is actually faster to send a message to nil than to perform the cmp/jmp instructions from an if and make a method call. This is especially true on RISC architectures like ARM.

BOOL types

BOOL types should only be assigned to from YES, NO, or an explicit boolean operation. Do not mix BOOL types with the C++ bool type or the C macros TRUE and FALSE - doing so may lead to subtle comparator errors for truth.
Exceptions to the C standard There are a number of exceptions to the C standard, to make our Objective-C code more compatible with existing source and follow standard conventions. Any C code which is written within an Objective-C source file (.m) must also follow these conventions, for readability purposes. Comments Classes, methods, and properties are to be documented as part of their @interface, not @implementation. Anything intended to be accessible through a public API of any kind should be tagged with comments suitable for appledoc generation; see appledoc for format info. You may wish to brew install appledoc as well. Order of declarations Rather than namespace-contents, the basic block for an Objective-C header is objc-contents: @interface #pragma mark "@interface-name Properties" (or "Properties" for headers with one @interface) @property #pragma mark "@interface-name Methods" (or "Methods" for headers with one @interface) Class methods Instance methods The following is the order of declarations for an Objective-C or Objective-C++ header: Copyright notice #import headers (system, 3rd party, project) #include headers (system, 3rd party, project) macros const variables enum typedef @protocol declarations C-style function declarations objc-contents namespace-contents (for declared namespaces in Objective-C++ headers only) Braces Rather than spacing a brace on a newline in C, in Objective-C there are some cases in which an opening brace is placed on the same line as the preceding statement, with a space before it: Blocks (see above) Flow control (if/while/for/do...while/switch...case) Variables All variables are named in camel-case and should not contain punctuation. Exceptions to the C++ standard There are no exceptions to the C++ standard at this time. Other Rules 3rd party libraries As with all other source, the style in 3rd party libraries should be consistent with the style there rather than any Appcelerator coding standards. This holds true even for extensions we write to them. Deprecated classes and methods Avoid the usage of deprecated methods from standard frameworks where an alternative is available, unless it breaks backwards compatibility with a version of software that we support. @compatibility_alias Do not use the @compatibility_alias directive unless it is explicitly required due to a conflict between external libraries to a project, or multiple internal versions required by different 3rd party libraries. pragma mark Use #pragma mark liberally to annotate sections of your code where necessary, in addition to the rules spelled out above. Related Links
https://docs.axway.com/bundle/Titanium_SDK_allOS_en/page/objective-c_and_objective-c___coding_standards.html
2018-10-15T12:28:55
CC-MAIN-2018-43
1539583509196.33
[]
docs.axway.com
Hevo lets you load your Facebook Ads Insights data into your data warehouse using Facebook's Marketing API. Let's walk through the steps for adding Facebook Ads as a source.

1. Create Pipeline
Click on the PIPELINES option in the left navigation bar and click on Create New Pipeline.

2. Select Source Type
Select Facebook Ads from the list on the Select Source Type screen.

3. Authorise Hevo to Access
You need to add your Facebook Account first. Click on Add New Account. You will be taken to Facebook to log in and asked to authorise us. The permissions we are looking for include:
- Access to Facebook ads and related stats.
- Access to your Page and App Insights.

4. Configure Facebook Ads
You need to configure some basic details like:
- Source Name: A unique name for your Facebook Ads Source.
- Ads Account: It represents a business, person or other entity who creates and manages ads on Facebook. Select the Ads account associated with your Facebook account you'd like to pull data from.
- Ads Action Report Time: Determines the report time of action stats. For example, if a person saw the ad on Jan 1st but converted on Jan 2nd: if you select `impression`, you will see a conversion on Jan 1st, and if you select `conversion`, you will see a conversion on Jan 2nd.

Note: Not all combinations of Ads Breakdown and Fields would work. For more information on this, please refer to the Facebook documentation here.

5. Select the Destination
Select the Destination where you want to replicate Facebook Ads.
https://docs.hevodata.com/hc/en-us/articles/360005444753-Facebook-Ads
2018-10-15T14:14:35
CC-MAIN-2018-43
1539583509196.33
[]
docs.hevodata.com
The AirWatch Tunnel application for iOS lets end users access internal corporate Web resources and sites through managed public and internal applications. The AirWatch Tunnel App for iOS is a legacy app. For the most up-to-date functionality, use the VMware Tunnel app for iOS. For more information, see Access the VMware Tunnel App for iOS. AirWatch Tunnel for iOS does not currently support UDP traffic. Requirements - iOS 8.0+ - Ensure you are on the latest AirWatch version for optimal functionality. Using the App Your end users must download and install the AirWatch Tunnel application from the iOS App Store. After installing it, end users have to run it at least once and accept the User Permission prompt. The AirWatch Tunnel displays as Connected whenever an end user opens a managed app that you configured to use the App Tunnel profile or a Safari domain that you set to connect automatically.
https://docs.vmware.com/en/VMware-AirWatch/9.2/vmware-airwatch-guides-92/GUID-AW92-Accessing_Tunnel_App_iOS.html
2018-10-15T13:08:15
CC-MAIN-2018-43
1539583509196.33
[]
docs.vmware.com
If you don't want to use an S3 Endpoint to access an S3 bucket, you can access it using the internet gateway. For example, you might do this Procedure - Ensure that the access permissions for the S3 bucket permit access from your cloud SDDC from the internet. See Managing Access Permissions to Your Amazon S3 Resources for more information. - Enable access to S3 through the internet gateway. By default, S3 access goes through the S3 endpoint of your connected Amazon VPC. You must enable access to S3 over the internet before you can use it. - Log in to the VMC Console at. - View Details - Network - Click Connected Amazon VPCs, and then click Disable next to S3 Endpoint. - From the VMC Console, create a compute gateway firewall rule to allow https access to the internet. - Under Compute Gateway, click Firewall Rules. - Add a compute gateway firewall rule with the following parameters. Results VMs in your SDDC can now access files on the S3 bucket using their https paths.
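Once the S3 Endpoint is disabled and the firewall rule is in place, a quick sanity check from a VM in the SDDC can confirm that the bucket is reachable over the internet. The sketch below uses only the Python standard library; the bucket and object names are placeholders, and the object is assumed to be readable under the bucket permissions configured above.

from urllib.request import urlopen

# Placeholder https path of an object in the bucket; replace with a real one.
url = "https://example-bucket.s3.amazonaws.com/test-object.txt"

with urlopen(url, timeout=10) as response:
    print(response.status)      # 200 indicates the object was fetched via the internet gateway
    print(response.read(100))   # first 100 bytes of the object

If the request times out, re-check the compute gateway firewall rule and the bucket's access permissions rather than the VM itself.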
https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-60DD527A-A7E0-4E35-AB98-43943AE140AF.html
2018-10-15T12:30:54
CC-MAIN-2018-43
1539583509196.33
[]
docs.vmware.com
Brainstorm: fast, flexible and fun neural networks. Brainstorm is under active development, so the documentation may be spotty (though we are working on it!). The API documentation is fairly complete, and help may be found on the mailing list. Contents:¶ - Installation - Walkthrough - Data Format - Network - Layers - Trainer - Hooks - API Documentation - Internals - Contributing - Credits - History Feedback¶ If you have any suggestions or questions about brainstorm feel free to email us at [email protected]. If you encounter any errors or problems with brainstorm, please let us know! Open an Issue at the GitHub main repository.
https://brainstorm.readthedocs.io/en/stable/
2018-10-15T13:31:46
CC-MAIN-2018-43
1539583509196.33
[]
brainstorm.readthedocs.io
Watch how to produce a valid CSV file for data parsing, and use the corpus explorer script:

This video demonstrates how to build a corpus from txt files, enrich it with proper time steps and use the distant reading script:

See how to use advanced options in the Network Mapping script with this video:

Other advanced options (regarding term extraction and network mapping) are shown in this other video showing how to work with a Twitter dataset.

This last video demonstrates the capacity offered by Word Embedding methods (word2vec):

Be aware that new options may be added to the existing scripts, such that the forms may be organized slightly differently in the actual interface. Also note that the community blog https://cortextscripts.wordpress.com is not entirely accessible yet.

All videos were produced by Gabriel Varela
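For readers who want a starting point before watching the first video, here is a minimal sketch of how such a CSV could be produced with plain Python. The column names ("year" and "text") are assumptions made only for illustration; check the data parsing video and documentation for the exact fields your CorText project expects.

import csv

# Two toy records standing in for a real corpus; the "year" column plays
# the role of the time steps mentioned above.
records = [
    {"year": "2001", "text": "First document of the corpus."},
    {"year": "2002", "text": "Second document, one year later."},
]

with open("corpus.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["year", "text"])
    writer.writeheader()
    writer.writerows(records)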
https://docs.cortext.net/video/
2018-10-15T13:46:03
CC-MAIN-2018-43
1539583509196.33
[]
docs.cortext.net
You can invite team members to your Hevo account or remove them.

Invite a team member
Follow the steps below to invite a team member.

1. Select Settings
Go to Settings > Settings from the left navigation menu.

2. Add team member
Select the Team tab, enter the member's email ID in the Invite Members text box and hit the Invite button. You will be able to see the email ID added to the list. The invited user will receive an email with the steps to create the account and join your team.

Search a Team Member
You can also search for an email ID in the list. Enter the email you want to search for in the search box above the members' list.

Remove a Team Member
You can remove a team member from the list using the delete icon in front of the member's email/name in the members' list.
https://docs.hevodata.com/hc/en-us/articles/360000958013-How-to-invite-and-moderate-team-members
2018-10-15T14:12:36
CC-MAIN-2018-43
1539583509196.33
[]
docs.hevodata.com
The Face Shop

Case study summary

The Face Shop is a cosmetics brand that started in Korea, featuring a wide range of all-natural products at affordable prices. The Face Shop has been expanding globally, with franchises in places like Japan, Australia, and Hong Kong. This expansion has now hit British Columbia, as The Face Shop currently has four stores in the Lower Mainland. Our overall goal in this marketing plan is to help The Face Shop succeed in Vancouver by adapting its Korean-dominated marketing strategies and tactics to better suit the Canadian market. We have proposed a set of strategies and tactics that are very different from what The Face Shop has done in the past. It is imperative that The Face Shop drastically change from the status quo, as the current marketing strategies are greatly restricting the growth of the company to new segments in Vancouver. Currently, The Face Shop customers are predominantly of Asian descent. This is because people who shop at The Face Shop are those who know of the brand from places like Korea or Hong Kong, where The Face Shop has a much larger market presence than in Vancouver. For these customers, the current marketing tactics may work well. However, in order to develop the company, The Face Shop needs to increase its target market to include a wider range of customers. The Face Shop must expand its target market to include people of all ethnicities and backgrounds, and should better target the male segment of the population. In order to do this, we have proposed a set of five strategies with a variety of tactics to make these strategies successful. The first strategy is to practice Total Customer Care. Employees should be aware of The Face Shop products in more detail, to the point where they would be able to provide accurate consultations to anyone who walks in the stores. The second strategy is to add a comprehensive online catalogue to the Canadian website, along with the ability to shop online. We want The Face Shop's products to be as accessible as possible, and by increasing the distribution channels, we hope to achieve this goal. Increasing brand recognition is vital for The Face Shop's success in Vancouver, which leads to our third strategy. The tactics for this strategy include using personal marketing, strategic placement of The Face Shop products, and sponsoring events that associate well with The Face Shop image. As the product is visible in the market, people will start to know about The Face Shop and will have more experience with the cosmetics brand. Another strategy for The Face Shop would be to target the male segment of the population. This can be done by co-packaging female products with male trial-size versions. The final strategy is to better promote the "Natural Story" image that The Face Shop tries to convey. The all-natural aspect of The Face Shop brand should be a major selling feature, as it provides customers with a benefit that most other cosmetics do not have. This will help distinguish our product in the marketplace and promote the worldwide company vision.

Case study outline
- Executive summary
- Situation analysis
- Company analysis
- Industry analysis
- Customer analysis
- Competitive analysis
- Distribution analysis
- SWOT analysis
- Goal and marketing objectives
- Target market
- Positioning
- Marketing strategies
- Monitors and controls
- Contingency

Excerpts from the case study

[...] by capturing a larger target market segment and satisfying customer needs.
One objective of this plan is to increase profits by 20% in one year. The other objective is to add one more distribution channel in one year.

Target market
TheFaceShop should focus on three target market segments. These three target market segments are regular female purchasers, grasshoppers, and forward looking customers. The regular female purchasers have potential in increasing profits, especially given the fact that they are already familiar with TheFaceShop products and its attributes. [...]

[...] For example, "Cosmetic and fragrances" retail store sales in Canada increased from $1852.9 million in year 2001 to $2132.1 million in year 2006 [xii]. In 2001, its sale was almost at the bottom among 27 of selected commodities, but in 2006, it rose to the 5th largest sales. As well, according to the survey on "spending patterns in Canada", conducted by Statistics Canada, consumption on personal care products from 2004 to 2005 increased by 22%, particularly in B.C. [xiii]. This clearly shows strong performance and potential growth in the cosmetic industry in Canada. [...]

[...] Males who try these products will then recognize the brand of TheFaceShop. This tactic is inexpensive, innovative, and quite easy to implement, as it just requires trial samples to be included with full size products. However, this sample co-packaging strategy is quite reliant on a strong female customer base. Therefore, it is very important to build relationships with female purchasers first by appreciating their loyalty, recognizing their needs, offering them high quality products, while giving them the benefits of younger, smoother, more radiant and beautiful skin. [...]

[...] Also, having so many ideas will be hard to manage efficiently, which could lead to budget overruns. TheFaceShop must closely monitor the progress of implementing this plan, and if it proves to be too aggressive or difficult, spreading the tactics over a longer time period may be necessary. The internal changes, like Total Customer Care, are more immediately vital than some of the external changes, like person marketing. It is imperative that TheFaceShop first improves the shopping experience before attempting to expand the customer base. [...]

[...] The main motivation that originally led them to TheFaceShop is the attractive prices. Grasshoppers have the potential to become regular purchasers and advocates. Freeloaders take samples, put on products labeled "Sample", and leave the store to come back some other time; not to purchase any products, but to continue to siphon stock. One-Timers are the female and male customers that are looking for new products to sample or are somehow convinced to buy a TheFaceShop product by the ever changing and product oriented staff of TheFaceShop. [...]

About the author
Arthur G., student in marketing and distribution
- Level: advanced
- Course of study: sciences...
- School/university: IEP de Paris

Case study details
- Publication date: 2008-02-11
- Last updated: 2008-02-11
- Language: English
- Format: Word
- Type: case study
- Number of pages: 17 pages
- Level: advanced
- Downloaded: 4 times
- Validated by: the reading committee
https://docs.school/marketing/marketing-de-la-distribution/etude-de-cas/face-shop-44841.html
2018-10-15T14:00:08
CC-MAIN-2018-43
1539583509196.33
[array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-MA.png', None], dtype=object) array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-MA.png', None], dtype=object) array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-MA.png', None], dtype=object) ]
docs.school
Zerynth RTTTL¶

Ring Tone Text Transfer Language (RTTTL) was developed by Nokia to be used to transfer ringtones of mobile phones. The RTTTL format is a string divided into three sections: name, default value, and data.

The name section consists of a string describing the name of the ringtone. It can be no longer than 10 characters, and cannot contain a colon ":" character (however, since the Smart Messaging specification allows names up to 15 characters in length, some applications processing RTTTL also do so).

The default value section is a set of values separated by commas, where each entry contains a key and a value separated by an "=" character, which describes certain defaults which should be adhered to during the execution of the ringtone. Possible names are:
- d - duration
- o - octave
- b - beat, tempo

The data section consists of a set of character strings separated by commas, where each string contains a duration, pitch, octave and optional dotting (which increases the duration of the note by one half).

Below you can find the Zerynth library for the RTTTL melody player and some examples to better understand how to use it.
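To make the three-section layout concrete, here is a small self-contained sketch in plain Python. It does not use the Zerynth RTTTL module's own API; the tune, its name, and the parsing steps are made up purely to illustrate how the name, default value, and data sections fit together.

# "demo" is the name section, "d=4,o=5,b=120" the defaults, the rest the data.
melody = "demo:d=4,o=5,b=120:8c,8e,8g,4c6,4p,8g,8e,2c"

name, defaults, data = melody.split(":")

# Read the default value section (d = duration, o = octave, b = beat/tempo).
settings = {}
for pair in defaults.split(","):
    key, value = pair.split("=")
    settings[key] = int(value)

notes = data.split(",")

print(name)      # demo
print(settings)  # {'d': 4, 'o': 5, 'b': 120}
print(notes)     # ['8c', '8e', '8g', '4c6', '4p', '8g', '8e', '2c']

Each note token follows the duration-pitch-octave pattern described above: for example, 4c6 is a quarter-note C in octave 6, and 4p a quarter-note pause.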
https://docs.zerynth.com/latest/official/lib.zerynth.rtttl/docs/index.html
2018-10-15T12:32:47
CC-MAIN-2018-43
1539583509196.33
[]
docs.zerynth.com
Change Log This is the history of version updates. Version 1.6.12 - FIXED: Invalid Geometry Crash - UPDATED: DTLoupe to 1.5.8 Version 1.6.11 - FIXED: Problems with dictation placeholder leading to crash or double text insertion - CHANGED: Updated DTLoupe to 1.5.7 - to support bundle in iOS framework Version 1.6.10 - FIXED: Crash in DTLoupe - FIXED: Crash on iOS <= 5.1.1 - ADDED: iOS Framework - ADDED: Support for CocoaPods frameworks and Modules Version 1.6.9 - FIXED: Dictation failure would lead to subsequent crash - FIXED: Loss of attachment attributes causes retina attachments to double in size - FIXED: Scroll indicator inset should be zero for left and right sides - FIXED: Static Framework missing DTCoreTextMacros.h - FIXED: Rotation of editor view controller causes too big bottom content inset - FIXED: Content inset was incorrect with hidden keyboard but input view still showing (i.e. hardware keyboard) - CHANGED: Updated DTCoreText to 1.6.15 Version 1.6.8 - CHANGED: Updated DTLoupe to 1.5.5 Version 1.6.7 - ADDED: Support for tintColor on >= iOS 7 - FIXED: iOS 8 crash - FIXED: Image text attachments missing from HTML output - CHANGED: Updated DTCoreText to 1.6.13 Version 1.6.6 - CHANGED: Updated DTLoupe to 1.5.4 - CHANGED: Updated DTCoreText to 1.6.10 - CHANGED: Updated DTFoundation to 1.6.1 Version Version 1.6.4 - FIXED: Backwards deleting of list prefixes broken due to a change in iOS 7 - FIXED: A crash might occur if editing text while drawing of tiles was still going on - ADDED: Adjust bottom contentInset to avoid cutting of autocorrect prompt - CHANGED: Updated DTCoreText to 1.6.8 Version 1.6.3 - FIXED: Pasting from Google Drive might yield empty content - CHANGED: Updated DTCoreText to 1.6.6 Version 1.6.2 - FIXED: Removed unnecessary test that would prevent redrawing for “empty” contents - FIXED: Tapping on editor would cause incorrect scrolling on long documents if the editor was not first reponder - CHANGED: Updated DTCoreText to 1.6.3 Version 1.6.1 - FIXED: Pasted plain text missing typing attributes - ADDED: DTProcessCustomHTMLAttributes in Demo App parsing options - CHANGED: Processing of Custom HTML Attributes is now optional and defaults to off. - CHANGED: Updated DTCoreText to 1.6.1 Version 1.6.0 - FIXED: Multi-stage text input had issues with input delegate messaging - ADDED: Support for custom HTML attributes - ADDED: Delegate method for finer control over pasted content. - ADDED: More formatting options in Demo app - CHANGED: Updated DTCoreText to 1.6.0 Version 1.5.1 - Version Version 1.4.1 - FIXED: Editor delegate set an out of bounds range when deleting backwards with a selection which starts from position 0. - UPDATED: DTCoreText to 1.4.3 - FIXED: Synthesizing italics for fonts that don’t have italic face. e.g. American Typewriter Version 1.4 - ADDED: A delegation protocol that gives it feature parity with UITextView. - FIXED: override typing attributes (like setting bold with no selection) would be reset on a new line - FIXED: Autocorrection was broken due to removal of input delegate notification - FIXED: Some problems with Undo - FIXED: In some circumstances Editor view would scroll horizontally - FIXED: Apps using multiple instances of Editor would have Undo problems - UPDATED: DTCoreText to 1.4 Version 1.3 - NO CHANGES: This is an interim version since we want to have the version number catch up to DTCoreText. Also this is the first tagged version on our git server. 
Version 1.2.3 - FIXED: inParagraph being ignored in replaceRange:withAttachment:inParagraph: Version 1.2.2 - CHANGED: Scaling for ranged selection loupe is now being calculated from the caret size - FIXED: This removed the synchronization by queue and replaced it with @synchronized as this was causing display problems - FIXED: Encoding of Emoji multi-byte unicode sequences - CHANGED: Updated to DTCoreText 1.3.2 Version 1.2.1 - FIXED: Too large documents would cause editor to go black. Changed content layer to tiled for fix. - FIXED: Making incremental changes right after setting string would cause incorrect content view height (e.g. contents in loupe be invisible) - FIXED: Use default font size from textDefaults if typingAttributes are missing a font attribute - REMOVED: defunct debug crosshairs functionality which stopped working when loupe was change to be layer-based Version 1.2 - ADDED: dictation placeholder - ADDED: setFont convenience method to set fontFamily and pointSize for default font - FIXED: Loupe contents where not adjusted for Retina - FIXED: Problems when Editor View being initialized with CGRectZero - FIXED: Selection problem in readonly mode, words at line ends cannot be selected - FIXED: Drag handles showing during readonly dragging - FIXED: Default for shouldDrawLinks was defaulting to NO which would cause links to be invisible if not drawin in custom subview - FIXED: Parser did not add font attribute to empty paragraph, causing smaller carets for these lines - CHANGED: Refactored selectionRectsForRange: for RTL support and better performance - CHANGED: DTMutableCoreTextLayoutFrame now caches selection rects for latest requested range - CHANGED: Margin around edited text is now set via contentInset instead of content view’s edgeInsets - CHANGED: Adopted resizing contentSize through content view notification instead of KVO, since content views no longer resize themselves - CHANGED: Prevent unnecessary re-layouting in several places (e.g. 
changing orientation) - CHANGED: selectedTextRange now set to nil in resignFirstResponder - CHANGED: textDefault and individual properties now set each other Version 1.1.4 - FIXED: horizontal flickering when moving round loupe over text - FIXED: cursor does not stop blinking during selection - FIXED: see-through mode of loupe used when touch leaves visible content area - FIXED: content size problem caused by DTCoreText change - FIXED: avoid redrawing of loupe if in see-through mode - CHANGED: restrict loupe towards bottom so that it does not go under keyboard - CHANGED: loupe no goes into see-through mode if touch point goes outside of visible area - CHANGED: renamed contentView to attributedTextContentView to avoid possible conflict with internal ivar of UIScrollView - CHANGED: replaced semaphore-based sync with dispatch_queue - CHANGED: improved performance on re-drawing so that only the area affected by the re-layouted lines is actually redrawn - CHANGED: dragging a selection handle now also scrolls the view if the touch point moves outside of the visible area - UPDATED: DTCoreText to 1.2.1 Version 1.1.3 - FIXED: Cursor would not show when becoming first responder - FIXED: Loupe would flash in top left corner when being presented the first time - ADDED: Known Issues file with warning not to use lists (incomplete) - CHANGED: refactored and made public boundsOfCurrentSelection method Version 1.1.2 - FIXED: Crash when dragging beyond end of document with keyboard hidden - FIXED: Selection rectangles did not get correctly extended for RTL text - UPDATED: DTCoreText Fixes Version 1.1.1 - UPDATED: DTCoreText to Version 1.1 - FIXED: Hopefully a certain crash involving Undo Version 1.1.0 - ADDED: Support for Undo/Redo - ADDED: Setting and changing font family and size for ranges - ADDED: Support for indenting - CHANGED: Lots of documentation added, refactoring and cleanup - FIXED: text scaling bug when pasting - UPDATED: DTCoreText + DTFoundation submodule Version 1.0.7 - FIXED: Crash when selection goes beyond a paragraph, somehow left over from 1.0.6 Version 1.0.6 - FIXED: Crash when selection ends at beginning of line (introduced in 1.0.5) - CHANGED: Removed unnecessary logging of pasteboard types on paste - FIXED: Hitting backspace with cursor right of an Emoji would only delete half of the composed character sequence resulting in an extra glyph showing up Version 1.0.5 - FIXED: Crash when dismissing modal view controller after loupe was shown once - CHANGED: Loupe is now a singleton with a dedicated UIWindow - FIXED: background-color attribute is no longer inherited from non-inline parent tag Version 1.0.4 - FIXED: toggleHighlightInRange has Yellow hard-coded. - ADDED: passing nil to toggleHighlightInRange does not try to add a color - FIXED: hasText reports YES even if only content is the default \n - ADDED: implemented delete: for sake of completeness - CHANGED: moved isEditable from category to main implementation to quench warning - CHANGED: implemented DTTextSelectionRect and cleaned up selection handling to fix warning - FIXED: crash when pasting web content from Safari into the editor Version 1.0.3 - FIXED: Without Keyboard drag handles should still appear to allow modifying selection for e.g. copy - ADDED: methods to toggle highlighting on NSAttributedString, option “H” in Demo demonstrating this - CHANGED: textDefaults is now a writeable property. 
For possible values see DTHTMLAttributedStringBuilder Version 1.0.2 - FIXED: copying multiple local attachments would cause them to all turn into the last image on pasting. A local attachment is one that has contents, but no contentURL so this is represented as HTML DATA URL and does not require an additional DTWebResource in the pastboard. - FIXED: issue #127, position of inserted autocorrection text when dismissing keyboard. - FIXED: Changed DTLoupeView to using 4 layers instead of drawRect. This fixes a display bug when moving the loupe to far to the right or down. - FIXED: Issue #464 through the internal change to DTLoupeView - FIXED: Cursor not showing up on programmatic becomeFirstResponder (knock on effect from fixing #127)
https://docs.cocoanetics.com/DTRichTextEditor/docs/Change%20Log.html
2017-07-20T20:27:34
CC-MAIN-2017-30
1500549423486.26
[]
docs.cocoanetics.com
NOTE: This node is optional since it is based on the cartridge packaging model. If this server is not available in your wizard, please contact your hosting provider for its activation.

JBoss 7 is a flexible, lightweight, managed application runtime environment. It is written in Java and implements the Java Platform, Enterprise Edition (Java EE) specification. This software is completely free and open source and can be run on multiple platforms.

The main features are:
- fast concurrent deployment and the ability to edit static resources without redeployment;
- each service can be started and stopped in isolation;
- lightweight through efficient memory management;
- modular approach.

Simply follow the next steps to learn how to get your JBoss server at Jelastic PaaS in just a minute.
https://docs.jelastic.com/jboss
2017-12-11T09:17:43
CC-MAIN-2017-51
1512948513330.14
[]
docs.jelastic.com
Document Type
Article

Abstract
Among critical readers of Mark Twain's Adventures of Huckleberry Finn, Jim's decision not to escape from slavery by merely crossing the Mississippi River to the Illinois shore provokes active discussion. This article examines Jim's decision in light of the racial climates of Ohio and Illinois during the 1840s, the setting of Twain's novel, and provides more evidence that Jim's plan to steer clear of Illinois and head toward Ohio was quite sound.

Recommended Citation
Tackach, James. 2004. "Why Jim does not escape to Illinois in Mark Twain's Adventures of Huckleberry Finn." Journal of the Illinois State Historical Society 97 (3).

Included in
Arts and Humanities Commons

In: Journal of the Illinois State Historical Society, Autumn 2004, v. 97, no. 3.
https://docs.rwu.edu/fcas_fp/5/
2017-12-11T09:19:47
CC-MAIN-2017-51
1512948513330.14
[]
docs.rwu.edu
Title
Resilience as the ability to bounce back from stress: A neglected personal resource?

Document Type
Article

Abstract
The purpose of this study was to examine resilience, as the ability to bounce back from stress, in predicting health-related measures when controlling for other positive characteristics and resources. We assessed resilience, optimism, social support, mood clarity, spirituality, purpose in life, and health-related measures in two large undergraduate samples. In Study 1, resilience was related to both health-related measures (less negative affect and more positive affect) when controlling for demographics and other positive characteristics. In Study 2, resilience was related to all four health-related measures (less negative affect, more positive affect, less physical symptoms, and less perceived stress) when controlling for the other variables. None of the other positive characteristics were related to more than three of the six possible health-related measures when controlling for the other variables. Resilience, as the ability to bounce back, may be an important personal resource to examine in future studies and target in interventions.

Recommended Citation
Smith, B.W., Erin M. Tooley, Paulette Christopher and Virginia S. Kay. 2010. "Resilience as the ability to bounce back from stress: A neglected personal resource?" Journal of Positive Psychology 5 (3): 166-176.

Published in: Journal of Positive Psychology, vol. 5, no. 3, 2010.
https://docs.rwu.edu/fcas_fp/268/
2017-12-11T09:20:46
CC-MAIN-2017-51
1512948513330.14
[]
docs.rwu.edu
public interface ExecutionContext

The security context provides information about the user context in which this query is being run. As of 4.2, the SecurityContext is a sub-interface of ExecutionContext such that both interfaces contain all of the methods from the prior independent interfaces. Thus, these interfaces can now be used interchangeably.

ConnectorIdentity getConnectorIdentity()
Returns the ConnectorIdentity created by the Connector's ConnectorIdentityFactory, or SingleIdentity if the Connector does not implement ConnectorIdentityFactory.

java.lang.String getConnectorIdentifier()

java.lang.String getRequestIdentifier()

java.lang.String getPartIdentifier()

java.lang.String getExecutionCountIdentifier()

java.lang.String getVirtualDatabaseName()

java.lang.String getVirtualDatabaseVersion()

java.lang.String getUser()

java.io.Serializable getTrustedPayload()

java.io.Serializable getExecutionPayload()
The execution payload differs from the Trusted Payload in that it is set on the Statement and so may not be constant over the Connection lifecycle and may be changed upon each statement execution. The Execution Payload is not authenticated or validated by the MetaMatrix system. Given that the Execution Payload is not authenticated by the MetaMatrix system, connector writers are responsible for ensuring its validity. This can possibly be accomplished by comparing it against the Trusted Payload.

java.lang.String getConnectionIdentifier()

void keepExecutionAlive(boolean alive)
alive -

int getBatchSize()

void addWarning(java.lang.Exception ex)
ex -

boolean isTransactional()
http://docs.jboss.org/teiid/6.0/apidocs/org/teiid/connector/api/ExecutionContext.html
2016-02-06T00:30:48
CC-MAIN-2016-07
1454701145578.23
[]
docs.jboss.org
Table of Contents Official Slackware HOWTO Slackware Linux CD-ROM Installation HOWTO Patrick Volkerding <volkerdi AT slackware.com> v13.1, 2010-05-18 This document covers installation of the Slackware(R) distribution of the Linux operating system from the Slackware CD-ROM. 1. Introduction Linux is a multiuser, multitasking operating system that was developed by Linus Torvalds and hundreds of volunteers around the world working over the Internet. The Linux operating system now runs on several machine architectures, including ARMs, Intel 80×86, Sparc, 68K, PowerPC, DEC Alpha, MIPS, and others. The x86 Slackware distribution of Linux runs on most PC processors compatible with the Intel 486 or better, including (but not limited to) the Intel 486, Celeron, Pentium I, MMX, Pro, II, III, Xeon, 4, M, D, Core, Core 2, Core i7, and Atom; AMD 486, K5, K6, K6-II, K6-III, Duron, Athlon, Athlon XP, Athlon MP, Athlon 64, Sempron, Phenom, Phenom II, and Neo; Cyrix 486, 5×86, 6×86, M-II; Via Cyrix III, Via C3, Via Nano; Transmeta Crusoe and Efficeon. Essentially anything that's x86 and 32-bit (with at least i486 opcodes) will do for the 32-bit x86 edition of Slackware, or 64-bit and supporting x86_64 extensions (also known as AMD64, EM64T, or Intel 64) for the x86_64 edition of Slackware. Linux is modeled after the UNIX(R) operating system. The Slackware distribution contains a full program development system with support for C, C++, Fortran-77, LISP, and other languages, full TCP/IP networking with NFS, PPP, CIFS/SMB (Samba), a full implementation of the X Window System, and much more. 1.1. Sources of Documentation If you're new to Slackware, you'll be happy to know there is a *lot* of documentation and help available both on the Internet and on the CD-ROM itself. A great source of general documentation about Linux is the Linux Documentation Project, online at: Here you will find a collection of documents known as the “Linux HOWTOs” as well as other useful guides. For additional help with Slackware, check out the Slackware forum at linuxquestions.org. 2. Hardware Requirements Most PC hardware will work fine with Slackware, but some Plug-and-Play devices can be tricky to set up. In some cases you can work around this by letting DOS initialize the card and then starting Slackware with the Loadlin utility. Setting the computer's BIOS to configure Plug-and-Play cards also may help – to do this, change the “Plug and Play OS” option to “no”. Here's a basic list of what you'll need to install Slackware: 128 megabytes (128MB) or more of RAM. If you have less RAM than this, you might still be able to install, but if so don't expect the best possible experience. You also will need some disk space to install Slackware. For a complete installation, you'll probably want to devote a 10GB *or larger* partition completely to Slackware (you'll need almost 6GB for a full default installation, and then you'll want extra space when you're done). If you haven't installed Slackware before, you may have to experiment. If you've got the drive space, more is going to be better than not enough. Also, you can always install only the first software set (the A series containing only the basic system utilities) and then install more software later once your system is running. If you use SCSI, Slackware supports most SCSI controllers. 
The “huge” kernels support as much of the boot hardware as possible, including several hardware RAID controllers, Fiber Channel controllers, software RAID in linear and RAID 0 through 6 and RAID 10, LVM (Logical Volume Manager), and kernel support required to have fully encrypted systems. To install from the DVD or CD-ROM, you'll need a supported drive. These days, the chances that your drive is supported by the install kernels is excellent. But, if not, you can always use a USB stick and install via the network. Or, use a floppy disk to install using PXE and the network. See the docs in usb-and-pxe-installers and the etherboot directory within for instructions. 3. Slackware Space Requirements Slackware divides the installable software into categories. (in the old days when people installed Linux from floppy disks, these were often referred to as “disk sets”) Only the A series category (containing the base Linux OS) is mandatory, but you can't do very much on a system that only has the A series installed. Here's an overview of the software categories available for installation, along with the (approximate) amount of drive space needed to install the entire set: - A The base Slackware system. (310 MB) - AP Linux applications. (290 MB) - D Program development tools. (600 MB) - E GNU Emacs. (100 MB) - F FAQs and HOWTOs for common tasks. (35 MB) - K Linux 2.6.33.4 kernel source. (445 MB) - KDE The KDE desktop environment and applications. (925 MB) - KDEI Language support for KDE. (800 MB) - L System libraries. (950 MB) - N Networking applications and utilities. (325 MB) - T TeX typesetting language. (285 MB) - TCL Tcl/Tk/TclX scripting languages and tools. (15 MB) - X X Window System graphical user interface. (300 MB) - XAP Applications for the X Window System. (490 MB) - Y Classic text-based BSD games. (6 MB) If you have the disk space, we encourage you to do a full installation for best results. Otherwise, remember that you must install the A set. You probably also want to install the AP, D, L, and N series, as well as the KDE, X, and XAP sets if you wish to run the X Window System. The Y series is fun, but not required. 3.1 Preparing a Partition for Slackware If you plan to install Slackware onto its own hard drive partition (this offers optimal performance), then you'll need to prepare one or more partitions for it. A partition is a section of a hard drive that has been set aside for use by an operating system. You can have up to four primary partitions on a single hard drive. If you need more than that, you can make what is called an extended partition. This is actually a way to make one of the primary partitions contain several sub-partitions. Usually there won't be any free space on your hard drive. Instead, you will have already partitioned it for the use of other operating systems, such as MS-DOS or Windows. Before you can make your Linux partitions, you'll need to remove one or more of your existing drive partitions to make room for it. Removing a partition destroys the data on it, so you'll want to back it up first. If you've got a large partition that you'd like to shrink to make space for Slackware you might consider using GParted, a partition editor that allows resizing and moving of existing partitions. They have a Live CD and USB image that allows running the program on a minimal OS, as well as versions to boot from PXE or the hard drive. 
Bootable images with GParted may be found here: There's also the regular version of GNU parted that does the same thing from the command line. It is included in the installer, and as a package in the L series. If you plan to repartition your system manually, you'll need to back up the data on any partitions you plan to change. The usual tool for deleting/creating partitions is the fdisk program. Most PC operating systems have a version of this tool, and if you're running DOS or Windows it's probably best to use the repartitioning tool from that OS. Usually DOS uses the entire drive. Use DOS fdisk to delete the partition. Then create a smaller primary DOS partition, leaving enough space to install Linux. Preferably this should be more than 6GB. If your machine doesn't have a lot of RAM, you'll want another partition for swap space. The swap partition should be equal to the amount of RAM your machine has, but should in any case be at least 128MB. If you don't have that much drive space to spare, the more the better to avoid running out of virtual RAM (especially if you plan on using a graphical desktop). You'll then need to reinstall DOS or Windows on your new DOS partition, and then restore your backup. We'll go into more detail about partitioning later, and you don't need to create any new partitions yet – just make sure you have enough free space on the drive to do an installation (more than 6GB is ideal), or that you have some idea about which existing partition you can use for to install on. 3.2 Booting the Slackware CD-ROM If your machine has a bootable CD-ROM drive (you may need to configure this in the system's BIOS settings) then you'll be able to directly boot the first CD-ROM. If not, then see the files in the usb-and-pxe-installers directory for information about alternative methods of booting the installer. Also, don't neglect to read the CHANGES_AND_HINTS.TXT file, which is probably the most accurate piece of documentation to ship with Slackware (thanks Robby!). Now it's time to boot the disc. Put the Slackware installation CD-ROM in your machine's CD-ROM drive and reboot to load the disc. You'll get an initial information screen and a prompt (called the “boot:” prompt) at the bottom of the screen. This is where you'll enter the name of the kernel that you want to boot with. With most systems you'll want to use the default kernel, called hugesmp.s. Even on a machine with only a single one-core processor, it is recommended to use this kernel if your machine can run it. Otherwise use the huge.s kernel, which should support any 486 or better. To boot the hugesmp.s kernel, just enter hugesmp.s on the boot prompt: boot: hugesmp.s (actually, since the hugesmp.s kernel is the default, you could have just hit ENTER and the machine would go ahead and load the hugesmp.s kernel for you) If you've got some non-standard hardware in your machine (or if hugesmp.s doesn't work, and you're beginning to suspect you need a different kernel), then you'll have to try huge.s. If, for some reason, that still will not boot and you know that your hardware should be supported by the 2.6.33.4 kernel, contact volkerdi at slackware dot com and I will see what I can do. These are the kernels shipped in Slackware: hugesmp.s This is the default installation kernel. If possible, you can save a bit of RAM later (and some ugly warnings at boot time or when trying to load modules when the driver is already built-in) by switching to a generic kernel. 
In this case that would be gensmp.s, which is a similar kernel but without filesystems and many of the less common drive controllers built in. To support these (at the very least your root filesystem), an initrd (actually an initramfs) is required when a generic kernel is used. Previous versions of Slackware used an ext2 filesystem for this, but now a filesystem-less dynamic kernel-based directory structure is used. A big advantage of this is that the size usable by the initrd is only limited by the amount of RAM in the machine. A disadvantage is that the generic kernels no longer include *any* filesystems besides romfs, so old initrd.gz files are not usable (they would have needed new modules anyway), and it is trickier to get a custom binaries or modules or whatever into the installer for guru-install purposes. It's not impossible though -- think tar to/from a device such as a USB stick, or leveraging ROMFS. gensmp.s The trimmed down, more modular version of hugesmp.s. This can be switched to, after setting up an initrd and reinstalling LILO. It is packaged as a .txz, and can be found on the installed system as: /boot/vmlinuz-generic-smp-2.6.33.4-smp huge.s This is the 486-compatible single processor version of the hugesmp.s kernel. Try this if hugesmp.s does not work on your machine. generic.s The trimmed down, more modular version of huge.s. Found on the system as: /boot/vmlinuz-generic-2.6.33.4 This also requires using an initrd. speakup.s This is like the huge.s (486 compatible loaded kernel), but has support for Speakup and all the SCSI, RAID, LVM, and other features of huge.s. There is no corresponding generic kernel for speakup.s, but the vanilla linux sources may be patched with the speakup sources in source/k (this will probably work on any recent kernel). After that, whatever customizations are needed should be easily adjusted. The speakup.s kernel is used to support hardware speech synthesizers as well as software one like festival (though these require additional programs that are not yet shipped with Slackware). For more information about speakup and its drivers check out:. To use this, you'll need to specify one of the supported synthesizers on the kernel's boot prompt: speakup.s speakup.synth=synth where 'synth' is one of the supported speech synthesizers: acntpc, acntsa, apollo, audptr, bns, decext, decpc, dectlk, dtlk, dummy, keypc, ltlk, soft, spkout, txprt. A serial port may be specified with an option like this: speakup.s speakup.synth=decext speakup.ser=1 Note that speakup serial ports are numbered starting with one (1, 2, 3) rather than the more typical 0, 1, 2 numbering usually seen on Linux. Note that if you use the huge (non-SMP kernel) and plan to compile any third party kernel modules, you may need to apply the kernel patch in /extra/linux-2.6… or, you could just cd to the kernel sources, run “make menuconfig”, make sure that SMP (and the -smp suffix) are turned off, and recompile the kernel with “make”. But, that's for later – after the install. Once you've entered your kernel choice and hit ENTER, the kernel and install program will load from the DVD or CD-ROM, and you'll arrive at the Linux login prompt. (You're running Linux now. Congratulations! To log into the system, enter the name of the superuser account and hit Enter: root Since there is no password on the install CD, you will be logged in right away. 
3.3 Using Linux fdisk to create Linux partitions At this point, you should have a large chunk of unpartitioned space on your hard drive that you'll be making into partitions for Slackware. Now you're ready to create your root Linux partition. To do this, you'll use the Linux version of fdisk. To need to partition a hard drive, you need to specify the name of the device when you start fdisk. For example: fdisk /dev/sda (Repartition the first hard drive) fdisk /dev/sdb (Repartition the second hard drive) NOTE: If you prefer, you may also try a newer menu-driven version of Linux fdisk called 'cfdisk'. Rumor has it that MOST people do prefer cfdisk, and “newer” has to be taken in context. cfdisk has many years of testing behind it. Once you've started fdisk, it will display a command prompt. First look at your existing partition table with the 'p' command:) Here we can see that there is one DOS partition on the drive already, starting on the first cylinder and extending to cylinder 2423. Since the drive has 4865 cylinders, the range 2424 - 4865 is free to accept a Linux installation. If the FAT32 partition were using the entire drive, you would have no choice but to delete it entirely (this destroys the partition), or go back and use some kind of partition resizing tool like GNU parted or Partition Magic to create some free space for the installation. If you need to delete a partition, use the 'd' command. You'll be asked which partition number you want to delete – check the partition size to make sure it's the right one. Next, you'll want to use the 'n' command to create a primary partition. This will be your root Linux partition. Command (m for help): n Command action e extended p primary partition (1-4) You'll want to enter 'p' to make a primary partition. Partition number (1-4): 2 Here, you enter “2” since DOS is already using the first primary partition. Fdisk will first ask you which cylinder the partition should start on. Fdisk knows where your last partition left off and will suggest the first available cylinder on the drive as the starting point for the new partition. Go ahead and accept this value. Then, fdisk will want to know what size to make the partition. You can specify this in a couple of ways, either by entering the ending cylinder number directly, or by entering a size. In this case, we'll enter the last cylinder. Here's what the screen looks like as these figures are entered: First cylinder (2424-4865): 2424 Last cylinder or +size or +sizeM or +sizeK (2424-4865): 4700 You have now created your primary Linux partition with a size of 18.7 GB. Next, you'll want to make a Linux swap partition. You do this the same way. First, enter another “n” to make a primary partition: Command (m for help): n Command action e extended p primary partition (1-4) Enter “p” to select a primary partition. Partition 1 is in use by DOS, and you've already used partition 2 for Linux, so you'll want to enter “3” for the new partition number: Partition number (1-4): 3 Since this is the last partition we plan to make on this hard drive, we'll use the end cylinder this time. Here are the entries for this: First cylinder (4701-4865): 4701 Last cylinder or +size or +sizeM or +sizeK (4701-4865): 4865 Now we need to set the type of partition to 82, used for Linux swap. The reason we didn't need to set a partition type the last time is that unless otherwise specified Linux fdisk automatically sets the type of all new partitions to 83 (Linux). 
To set the partition type, use the “t” command: Command (m for help): t Partition number (1-4): 3 Hex code (type L to list codes): 82 Now you're ready to save the updated partition table information onto your hard drive. Use the “p” command again to check the results and be sure you're satisfied with them. This looks good, so we'll use the “w” command to write the data out to the drive's partition table. If you want to exit without updating the partition table (if you've made a mistake), then you can exit without changing anything by using the “q” command instead. When you exit fdisk using the “w” command, fdisk recommends that you reboot the machine to be sure that the changes you've made take effect. Unless you've created extended partitions, you can go ahead and run setup without rebooting. Note: Sometimes fdisk will give you a message like “This drive has more than 1024 cylinders” and warn about possible problems using partitions with DOS. This is because MS-DOS suffers from a limitation that only allows access to the first 1024 cylinders on a hard drive. At one time, LILO used the standard BIOS routines to read sectors, so this was a limitation of LILO, too. Luckily modern versions of LILO use the LBA32 method of accessing sectors, so this limitation no longer applies. If you see the warning from fdisk, you can safely ignore it. 4.0 Installing the Slackware distribution Now that you have one or more Linux partitions, you are ready to begin installing software onto your hard drive. To start the Slackware install program, enter the command “setup” and hit enter: # setup The installer will start up with a full-color menu on your screen with the various options needed to install Slackware. In general, you'll want to start with the ADDSWAP option. Even if you've already created and activated a swap partition manually, you'll need to run this so Slackware adds the swap partition to your /etc/fstab file. If you don't add it, your system won't use the swap space when you reboot. Installing a typical system involves running the following options from the setup menu in this order: ADDSWAP, TARGET, SOURCE, SELECT, INSTALL, and CONFIGURE. You may also start with KEYMAP if you have a non-US keyboard layout, or with TARGET if you don't want to use a swap partition. For the rest of this section, we'll walk through a typical installation process. 4.1 The ADDSWAP option: First, we select the ADDSWAP option. The system will scan for partitions marked as type “Linux swap” and will ask if you want to use them for swap space. Answer YES, and the system will format the partition and then make it active for swapping. Once it's finished, setup will display a message showing the line it will add to /etc/fstab to configure the swap partition at boot time. Hit enter to continue, and setup will go on to the TARGET section of the install. NOTE: If you created a partition to use for swap space, but setup doesn't see it when it scans your drives, it's possible that the partition type hasn't been set in the partition table. Use the Linux “fdisk” program to list your partitions like this: # fdisk -l In this case, if /dev/sda3 is meant to be a Linux swap partition, you'll need to start fdisk on drive /dev/sda: # fdisk /dev/sda Command (m for help): t Partition number (1-4): 3 Hex code (type L to list codes): 82 Command (m for help): w This will change the third partition to type 82 (Linux swap) and write the partition table out to /dev/sda.
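If you prefer to prepare and activate the swap space by hand before running setup, as mentioned above, the commands look roughly like this (the device name is only an example and must be your own swap partition):

mkswap /dev/sda3
swapon /dev/sda3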
When you run setup again, the ADDSWAP option should detect the Linux swap partition. 4.2 The TARGET option: The next option on the setup menu is TARGET. This lets you select which partition(s) you'd like to install Slackware on, and will format them using a Linux filesystem. Depending on which kernel you chose to boot with, your filesystem choices may include ext2 (the traditional Linux filesystem), ext3 (a journaling version of ext2), and Reiserfs (the first journaling filesystem written for Linux; it stores files in a balanced tree). When you select the TARGET option, the system will scan for “Linux” partitions on your hard drives. If it doesn't find any, you'll need to make sure that you've created partitions using the fdisk program, and that the partitions are labeled as type 83 (Linux). This is the same process shown above. If you've created one or more partitions for Slackware using Linux's fdisk program then you shouldn't have any problems, since Linux fdisk (and cfdisk) sets all new partitions to type 83 (Linux) by default. You will see a menu listing all the Linux partitions. Use the arrow keys to select the partition you'd like to use for your root (or primary) Linux partition and hit enter. The setup program will then ask if you'd like to format the partition, and what type of filesystem to use. If this is a new installation of Slackware, you'll need to do this. Otherwise, if you are installing software onto an existing Linux system, you don't need to format the partition. For example, the partition might be used as your /home and contains home directories that you want to keep. If you choose not to format a partition, you'll see “partition will not be reformatted” on the top of the screen as you confirm your choice, so that there can be no question about it. There are a few options you need to know about when you format Linux partitions. First, you'll need to decide whether or not you'd like to check the partition for bad blocks when you do the format. This is usually not necessary unless you know the drive in question has problems. Checking takes quite a while longer than a normal format (and most IDE drives do self-checking anyway), so you'll probably want to just go ahead and use the “Format” menu option to format the drive without checking. If you have drive problems later on (and can't just replace the hard drive with a better one), then you might want to go back and try again using the “Check” option to map out the bad sectors on the drive. You'll notice that the partition you just formatted is now listed as “in use.” If you made some other partitions for Slackware, you'll need to go through the same process of formatting them, selecting whether or not to check for bad blocks, and setting a reasonable inode density. With these partitions there will be an additional step – you'll need to select where you'd like to put the partition in your directory tree. MS-DOS/Windows assigns a letter such as A:, B:, C:, etc, to each device. Unlike DOS, Linux makes your devices visible somewhere under the root directory (/). You might have /dev/sda1 for your root partition (/) and put /dev/sda2 somewhere underneath it, such as under your /home directory. When prompted for a mount location, just enter a directory such as /home, and hit enter. As you format each additional partition and place it in the filesystem tree, you'll be returned to the partition selection menu. When you've prepared all of your Linux partitions, you'll go on to the SOURCE option. 
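As a rough sketch of the end result (every device name, filesystem type, and mount point below is only an example and will differ on your system), the lines that setup writes to /etc/fstab for the swap partition and the Linux partitions you mounted look something like this:

/dev/sda3        swap             swap        defaults         0   0
/dev/sda2        /                ext3        defaults         1   1
/dev/sda4        /home            ext3        defaults         1   2

Each additional partition you placed in the directory tree gets a similar line with the mount point you chose for it.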
4.3 The SOURCE option: The next menu option is SOURCE, where you select the source from which to install Slackware. SOURCE displays a menu offering the choice of installation from CD-ROM, a hard drive partition, NFS, HTTP/FTP, or a directory (mounted manually). You'll want to make sure your Slackware CD-ROM is in your drive, and select the first option: "Install from a Slackware CD-ROM" Next, the system will ask you if you'd like to scan for your CD-ROM drive or pick manually from a list. (Unless you're trying to show off to your friends, go ahead and let setup scan for the CD-ROM drive automatically.) Setup will then try to access the Slackware CD-ROM. If this is successful, setup will tell you that it found and mounted a CD-ROM on a Linux device such as /dev/sr0. If the CD-ROM was successfully found, you may skip ahead to the SELECT section below, otherwise read on for some CD-ROM troubleshooting tips. If setup is not successful in accessing the CD-ROM drive, you'll need to figure out why before you can go on. The most common reason for this is that you used a kernel that doesn't support the CD-ROM drive. If that's the case, you need to restart the installation from the CD-ROM and specify a kernel that contains a driver to support your CD-ROM drive (if the drive is connected to a SCSI card, for example, you'll need to use a kernel with support for that card). You can also try switching to a different console with Alt-F2 and mounting the CD-ROM drive manually and then installing from a pre-mounted directory (if you prefer a hands-on approach). If you have no idea which device an IDE CD-ROM drive is connected to, you should have the system scan for it. You also can look at the messages generated by the system as it boots – you should see a message that Slackware detected your CD-ROM drive along with information about what type of drive it is. You can look at these messages by using the right shift key together with the PageUp and PageDown keys to scroll the screen up and down. 4.4 The SELECT option: The SELECT option lets you select software to install. When you start the SELECT option, you'll see a menu where you can choose which categories of software you're interested in installing. The first series (called the A series) contains the base filesystem structure and binaries that are crucial for your system to boot and run properly. You must install the A series. Make sure that at least the selection for series A has an [X] next to it. Most of the other choices will also have an [X] next to them, and while you can use the cursor keys and the space bar to unselect items to save space (see the space requirements above for details), you're better off with a complete installation if you have the space for it. Once you've selected the general categories of software you wish to install, hit enter and you'll go on to the INSTALL option. 4.5 The INSTALL option: This option actually installs the selected packages to the hard drive. The first question the INSTALL option will ask is what type of prompting you'd like to use during the installation process. A menu will show several options, including “full”, “newbie”, “menu”, “expert”, “custom”, “tagpath”, and “help”. The help option gives detailed information on each of the choices. Most people will want to use “full”. Others might want “menu”, “expert” or “newbie” mode. We'll cover each of these in detail now. The first option to consider is “full”.
If you select this mode, then setup assumes you want to install all the packages in each selected series and installs them all without further prompting. This is fast and easy. Of course, depending on which software categories you've chosen, this can use a lot of drive space. If you use this option, you should be installing to a partition with at least 6GB free (and hopefully more like 20GB or so) to ensure that you don't run out of drive space during the installation process. Because Linux allows you to split your installation across multiple partitions, the installer cannot know ahead of time whether the packages you've chosen to install will fit your partitioning scheme. Therefore, it is up to you to make sure that there is enough room. The “newbie” mode (which was formerly known as “normal” mode) installs all of the required packages in each series. For each of the non-required packages (one by one) you'll get a menu where you can answer YES (install the package), NO (do not install the package), or SKIP (skip ahead to the next series). You'll also see a description of what the package does and how much space it will require to help you decide whether you need it or not. The “newbie” mode is verbose, requires input after each package, and is VERY tedious. It certainly takes a lot longer to install using newbie mode, and (in spite of the name), it is easier to make mistakes in newbie mode than by simply doing a full installation. Still, using it is a good way to get a basic education about what software goes into the system since you actually get a chance to read the package descriptions. With a full installation most of the package descriptions will fly by too quickly to read. If you can decide which packages you want from less information, the “menu” or “expert” options are a good choice, and go much faster than a “newbie” mode installation. These options display a menu before installing each series and let you toggle items on or off with the spacebar. In this Slackware release, the “menu” and “expert” install modes act the same, and both options are kept only for consistency. The “expert” mode lets you toggle packages individually, allowing the user to make good or bad decisions, like turning off crucial packages or installing a package that's part of a larger set of software without installing the other parts. If you know exactly what you need, the “expert” mode offers the maximum amount of flexibility. If you don't know what you need, using the “full” mode is strongly suggested. The “custom” and “tagpath” options are only used if you've created “tagfiles” for installation. In the first directory of each disk set is a file called “tagfile” containing a list of all the packages in that series, as well as a flag marking whether the package should be installed automatically, skipped, or the user should be prompted to decide. This is useful for situations where you need to install large numbers of machines (such as in a computer lab), but most users will not need to create tagfiles. If you are interested in using them, look at one of the tagfiles with an editor. If you're new to Slackware, and you have enough drive space, you'll probably want to select the “full” option as the easiest way to install. Otherwise, the “menu” option is another good choice for most beginners. If you think you need (or would just like to see) the extra information offered by the “newbie” mode, go ahead and use that.
Don't say you weren't warned about the extra time it requires, though, especially when installing the fragments that make up modular X. Trust us, you'll be better off selecting “full”. Once you have selected a prompting mode, the system begins the installation process. If you've chosen “menu” or “expert” mode, you'll see a menu of software to choose from right away – use the arrow keys and spacebar to pick what you need, and then hit enter to install it. If you've chosen the “newbie” mode, the installation will begin immediately, continuing until it finds optional packages. You'll get a selection menu for each of these. If you selected “full”, now it's time to sit back and watch the packages install. If you've selected too much software, it's possible that your hard drive may run out of space during installation. If this happens, you'll know it because you'll see error messages on the screen as setup tries to install the packages. In such a case, your only choice is to reinstall, selecting less software. You can avoid this problem by choosing a reasonable amount of software to begin with, and installing more software later once your system is running. Installing software on a running Slackware system is as easy as it is during the initial installation – just type the following command to mount the Slackware CD-ROM: mount /dev/cdrom /mnt/cdrom Then go to the directory with the packages you want to install, and use the install-packages script: cd /mnt/cdrom/slackware/xap sh install-packages Other options for installing packages later on include “installpkg” and “pkgtool”. For more information about these, see the man pages (“man installpkg”, “man pkgtool”). Once you have installed the software on your system, you'll go on to the CONFIGURE option. 4.6 The CONFIGURE option: Setup's CONFIGURE option does the basic configuration your system needs, such as setting up your mouse, setting your timezone, and more. The CONFIGURE option will first ensure that you've installed a usable Linux kernel on your hard drive. The installation program should automatically install the kernel used to do the initial installation. If you installed using the speakup.s kernel from CD-ROM, the menu will prompt you to re-insert your installation disc and hit enter, and then setup will copy the kernel from the disc to your hard drive. NOTE: If you install a kernel on your system that doesn't boot correctly, you can still boot your system with the CD-ROM. To do this, you need to enter some information on the boot prompt. For example, if your root partition is on /dev/hda1, you'd enter this to boot your system: huge.s root=/dev/hda1 initrd= ro The “initrd=” option tells the kernel not to run the /init script on the installer image in RAM, and the “ro” option makes the root partition initially load as read-only so Linux can safely check the filesystem. Once you've installed a kernel, you'll be asked if you want to make a USB bootstick for your new system. This is a very good idea if you happen to have a spare USB flash stick that you don't mind having COMPLETELY ERASED, so if you wish to make one, insert a USB flash memory stick when prompted and use the “Create” option to create a USB bootstick for your system. Next you'll be asked what type of mouse you have. Pick the mouse type from the menu (or hit cancel if you don't have a mouse), and setup will create a /dev/mouse link. Most computers use a PS/2 mouse, which is the first choice.
After this, other installation scripts will run depending on which packages you've installed. For instance, if you installed the network-* packages you'll be asked if you want to configure your network. 4.7 LILO LILO is the Linux Loader, a program that allows you to boot Linux (and other operating systems) directly from your hard drive. If you installed the LILO package, you now have an opportunity to set it up. Installing LILO can be dangerous. If you make a mistake it's possible to make your hard drive unbootable. If you're new to Linux, it might be a good idea to skip LILO installation and use the bootdisk to start your system at first. You can install LILO later using the 'liloconfig' command after you've had a chance to read the information about it in /usr/doc/lilo-*. If you do decide to go ahead and install LILO, be sure you have a way to boot all the operating systems on your machine in case something goes wrong. If you can't boot Windows again, use the DOS command “FDISK /MBR” to remove LILO from your master boot record. (You can use a Windows Startup Disk for this) The easiest way to set your machine up with LILO is to pick the “simple” choice on the LILO installation menu. This will examine your system and try to set up LILO to be able to boot Windows (DOS) and Linux partitions that it finds. If it locates the OS/2 Boot Manager, it will ask if you'd like to configure the Linux partition so that you can add it to the Boot Manager menu. (NOTE: If you use a disk overlay program for large IDE hard drives such as EZ-DRIVE, please see the warning below before installing LILO) The “expert” option gives you much more control over the configuration of LILO. If you decide to use the “expert” option, here's how you do it. LILO uses a configuration file called /etc/lilo.conf to hold the information about your bootable partitions – the “expert” LILO installation lets you direct the construction of this file. To create the file, first select BEGIN to enter the basic information about where to install LILO. The first menu will ask if you have extra parameters you'd like passed to the Linux kernel at boot time. If you need any extra parameters enter them here. Then you'll be asked if you wish to use the framebuffer console. The 1024x768x256 console setting is a nice one to use in most cases, but you may need to experiment to find the nicest setting for your card. Some look terrible at modes larger than 800×600 because of the default refresh rates, but at least ATI cards are known to look great at 1024x768x256. If you want to use the framebuffer console, select a mode here. Next, decide where you want LILO installed. Usually you'll want to install LILO on the boot drive's MBR (master boot record). If you use a different boot manager (like the one that comes with OS/2) then you'll want to install LILO on your root Linux partition and then add that partition to the boot manager menu using its configuration tool. Under OS/2, this is the fdisk program. NOTE: If you use the EZ-DRIVE utility (a diskmanager program supplied with some large IDE drives to make them usable with DOS) then do not install LILO to the MBR. If you do, you may disable EZ-DRIVE and render your disk unusable with DOS. Instead, install LILO to the superblock of your root Linux partition, and use fdisk to make the partition bootable. (With MS-DOS fdisk, this is called setting the “active” partition) The next menu lets you set a delay before the system boots into the default operating system. 
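Before going on with the delay and the individual boot entries, it may help to see roughly what a finished /etc/lilo.conf can look like. Every value below is only an example -- drive and partition names, labels, and the vga mode all depend on your own system -- so treat this as a sketch rather than something to copy verbatim:

boot = /dev/hda          # install LILO to the MBR of the first drive
vga = 773                # 1024x768x256 framebuffer console
prompt
timeout = 50             # delay, in tenths of a second
image = /boot/vmlinuz    # the Linux entry
  root = /dev/hda2
  label = Linux
  read-only
other = /dev/hda1        # the Windows entry
  label = Windows
  table = /dev/hda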
If you're using LILO to boot more than one operating system (such as DOS and Linux) then you'll need to set a delay so you can pick which OS you'd like to boot. If you press the SHIFT key during the delay, LILO will display a prompt where you can type a label (typically Windows or Linux) to select which OS to boot. If you set the delay to 'Forever', the system will display a prompt at boot time and wait for you to enter a choice. Next, you need to add entries for each operating system that LILO can boot. The first entry you make will be the machine's default operating system. You can add either a DOS, Linux, or Windows partition first. For example, let's say you select “Linux”. The system will display your Linux partitions and ask which one of them you'd like to boot. Enter the name (like /dev/hda1) of your root Linux partition. Then, you'll be prompted to enter a label. This is the name you will enter at the boot time LILO prompt to select which partition you want to boot. A good choice for this is “Linux”. Adding a DOS or Windows partition is similar. To add a Windows partition to the LILO configuration file, select the Windows option. The system will display your FAT/NTFS partitions and ask which one of them you'd like to boot with LILO. Enter the name of your primary Windows partition. Then enter a label for the partition, like “Windows”. Once you've added all of your bootable partitions, install LILO by selecting the “Install” option. 4.8 Networking Another configuration menu allows you to configure your machine's networking setup. First, enter a hostname for your machine. The default hostname after installation is “darkstar,” but you can enter any name you like. Next, you'll be asked to provide a domain name. If you're running a stand-alone machine (possibly using a dialup link to an Internet Service Provider) then you can pick any name you like. The default domain name is “example.net”. If you are going to add the machine to a local network, you'll need to use the same domain name as the rest of the machines on your network. If you're not sure what this is, contact your network administrator for help. Once you've specified the hostname and domain name, you'll be asked which type of setup you would like: “static IP”, “DHCP”, or “loopback”. Loopback This is the simplest type of setup, defining only a mechanism for the machine to contact itself. If you do not have an Ethernet card, use this selection. This is also the correct selection if you'll be using a PCMCIA (laptop) Ethernet card and want to set up your networking in /etc/pcmcia/network.opts. (you could also configure a PCMCIA card using the “static IP” or “DHCP” options, but in that case will not be able to “hotplug” the card) Finally, this is the right option to use if you have a modem, and will be connecting via dialout and PPP. You'll select loopback now, and then set up your phone connection later using pppsetup or kppp. Static IP If your machine has an Ethernet card with a static IP address assigned to it, you can use this option to set it up. You'll be prompted to enter your machine's IP address, netmask, the gateway IP address, and the nameserver IP address. If you don't know what numbers you should be using, ask the person in charge of the network to help. After entering your information, you'll be asked if you want to probe for your network card. This is a good idea, so say yes. Confirm that the settings are correct, and your networking will be configured to use a static IP address. 
DHCP DHCP stands for Dynamic Host Configuration Protocol, and is a system where your machine contacts a server to obtain its IP and DNS information. This is the usual way to get an IP address with broadband connections like cable modems (although some more expensive business-class broadband connections may assign static IP addresses). It is very easy to set up a DHCP connection – just select the option. Some providers will give you a DHCP hostname (Cox is one that does) that you'll also need to enter in order to identify yourself to the network. If you don't have a DHCP hostname, just leave it blank and hit ENTER. After entering your information, you'll be asked if you want to probe for your network card. This is a good idea, so say yes. Confirm that the settings are correct, and your networking will be configured to use DHCP. Once you've completed all the configuration menus, you can exit setup and reboot your machine. Simply press ctrl-alt-delete and the kernel will kill any programs that are running, unmount your filesystems, and restart the machine. 5. Booting the installed Slackware system If you've installed LILO, make sure you don't have a disk in your floppy drive – when your machine reboots it should start LILO. Otherwise, insert the bootdisk made for your system during the configuration process and use it to boot. Also, make sure to remove the CD-ROM to avoid booting it, or disable your machine's CD-ROM booting feature in the BIOS settings. The kernel will go through the startup process, detecting your hardware, checking your partitions and starting various processes. Eventually you'll be given a login prompt: darkstar login: Log into the new system as “root”. Welcome to Linux 2.6.33.4. darkstar login: root Last login: Tue May 18 15:36:23 2010 on tty3. Linux 2.6.33.4. You have new mail. darkstar: ~# 6. Post-installation configuration Once the system is running, most of the work is complete. However, there are still a few programs you'll need to configure. We'll cover the most important of these in this section. 6.1 /etc/rc.d/rc.modules This file contains a list of Linux kernel modules. A kernel module is like a device driver under DOS. You can think of the /etc/rc.d/rc.modules file as similar to DOS's CONFIG.SYS. The file specifies which modules the system needs to load to support the machine's hardware. After booting your machine, you may find that some of your hardware isn't detected (usually an Ethernet card). To provide the support, you'll need to load the correct kernel module. Note that modern Linux kernels include a feature that allows the kernel to load its own modules, called udev. This will load many modules automatically without any need to edit rc.modules, and when using udev it might be better to tell it how to load the modules you want automatically rather than loading them at boot time with rc.modules. This is an advanced topic, and outside the scope of this document. If you're interested in this, “man udev” is a good place to start reading. In any case, it's best to not edit rc.modules unless you find that the modules you want to use are not being loaded automatically by udev. You can see a list of the modules that were loaded with the “lsmod” command. Likewise, in the majority of cases “alsaconf” is not required to configure sound. Rather, the “alsamixer” tool is used to unmute the Master and PCM channels and turn up the volume, and the “alsactl store” is used to save the sound defaults. 
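As a small illustration of the points above, an uncommented line in /etc/rc.d/rc.modules is simply a modprobe command that runs at boot time. The module name here is only an example -- use whichever module matches your hardware:

/sbin/modprobe e100

The sound commands just mentioned are run as root on the installed system:

alsamixer
alsactl store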
There's a lot more information out there about kernel modules, including lists of module names and the cards they support, as well as extra options you can add to the module lines to configure the hardware in different ways. The kernel's documentation in /usr/src/linux/Documentation has a lot of good information, as does the information shipped with udev (found under /usr/doc/udev-*). 6.2 Configuring the X Window System Configuring X can be a complex task. The reason for this is the vast numbers of video cards available for the PC architecture, most of which use different programming interfaces. Luckily, X has come a long way since the early days of X386, where monitor modelines had to be tediously calculated. With most hardware, X can now be run with NO configuration file or additional driver! But you still might want to make a configuration file if you'll be using a third party video driver (the installer for that may offer to make it for you), or if you just want to have greater control over the details of the X configuration. To try X without a configuration file, just type “startx” at a command line. If you're satisfied with the result, then you're done. If you would like X to start automatically at boot, see the /etc/inittab file once you've tested “startx” to be sure that X is working. Xorg -configure Modern versions of X provide a simple way to create an initial xorg.conf file that often will work without any additional configuration, or, at the very least, provide a good base from which to customize the file. To run this command, enter the following in a root terminal: # Xorg -configure The X server probes for available hardware and creates an initial xorg.conf.new file located in the /root directory. You can then use this initial file to test the configuration by entering the following: # Xorg -config /root/xorg.conf.new This will load the initial xorg.conf.new file and run the X server. If you see the default black and gray checkered background with a mouse cursor appear, then the configuration was successful. To exit the X server, just press Ctrl+Alt+Backspace simultaneously. Once back at the command line, you can copy this xorg.conf.new file to /etc/X11/xorg.conf and begin making any manual edits necessary to customize your setup. 6.3 Hardware acceleration with X If you've used xorgsetup or X -configure to configure X for your card, and it's one that can take advantage of X's direct rendering support, you'll certainly want to enable this. Check your /etc/X11/xorg.conf and make sure that the glx module is loaded: Load "glx" This line will probably already be in place. 6.4 User Accounts You should make a user account for yourself. Using “root” as your everyday account is dangerous, and is considered bad form (at the very least) since you can accidentally damage your system if you mistype a command. If you're logged in as a normal user, the effects of bad commands will be much more limited. Normally you'll only log in as root to perform system administration tasks, such as setting or changing the root password, installing, configuring, or removing system software, and creating or deleting user accounts. To make an account for yourself, use the 'adduser' program. To start it, type 'adduser' at a prompt and follow the instructions. Going with the default selections for user ID, group ID, and shell should be just fine for most users.
You'll want to add your user to the cdrom, audio, video plugdev (plugable devices like USB cameras and flash memory) and scanner groups if you have a computer with multimedia peripherals and want to be able to access these. Add these group names, comma separated, at the following prompt: Additional groups (comma separated) []: Passwords and security When choosing passwords for a Linux system that is connected to a network you should pick a strong password. However, passwords only help protect a system from remote trespassing. It's easy to gain access to a system if someone has physical access to the console. If you forget the root password, you can use the install disc to mount your root partition and edit the files containing the password information. If you have a bootable optical drive, you can use the first installation CD-ROM or the DVD as a rescue disk. At the prompt, you can manually mount the root Linux partition from your hard drive (“fdisk -l” will give you a list) and remove the root password. For example, if your root linux partition is /dev/hda2, here are the commands to use after logging into the install disc as “root”: mount /dev/hda2 /mnt cd /mnt/etc Next, you'll need to edit the “shadow” file to remove root's password. Editors which might be available include “vi”, “emacs”, “pico”, and “nano”. “vi” and “emacs” might be more of an adventure than you need unless you've used them before. The “pico” and “nano” editors are easy for beginners to use. pico shadow At the top of the file, you'll see a line starting with root. Right after root, you'll notice the encrypted password information between two colons. Here's how root's line in /etc/shadow might look: root:EnCl6vi6y2KjU:10266:0::::: To remove root's password, you use the editor to erase the scrambled text between the two colons, leaving a line that looks like this: root::10266:0::::: Save the file and reboot the machine, and you'll be able to log in as root without a password. The first thing you should do is set a new password for root, especially if your machine is connected to a network. Here are some pointers on avoiding weak passwords: 1. Never use your name (or anyone's name), birthdate, license plate, or anything relating to yourself as a password. Someone trying to break into your machine might be able to look these things up. 2. Don't use a password that is any variation of your login name. 3. Do not use words from the dictionary (especially not “password” :) or syllables of two different words concatenated together as your password. There are automated programs floating around on the net that can try them all in a short time. 4. Do not use a number (like 123456) or a password shorter than six characters. The strongest passwords are a mix of letters, numbers, and symbols. Here are some examples of strong passwords (but don't use these : - ^5g!:1? ()lsp@@9 i8#6#1*x ++c$!jke *2zt/mn1 In practice, any password containing one or two words, a number (or two), and a symbol (or two) should be quite secure. 7. For more information For more information, visit our web site at To shop for fine Slackware products (and help keep the project funded), please visit. Email: [email protected] (Information or general inquiries) FTP: (Updates) WWW: (News) Security issues: [email protected] General Hotline: [email protected] 8. Trademarks Slackware is a registered trademark of Slackware Linux, Inc. Linux is a registered trademark of Linus Torvalds. All trademarks are property of their respective owners.
http://docs.slackware.com/playground:howto
2016-02-05T23:53:52
CC-MAIN-2016-07
1454701145578.23
[array(['/lib/images/smileys/icon_smile.gif', ':-)'], dtype=object) array(['/lib/images/smileys/icon_smile.gif', ':-)'], dtype=object) array(['/lib/images/smileys/icon_smile.gif', ':-)'], dtype=object)]
docs.slackware.com
Difference between revisions of "Marketing Working Group" From Joomla! Documentation Revision as of 04:26, 29 May 2014
https://docs.joomla.org/index.php?title=Marketing_Working_Group&diff=119619&oldid=107219
2016-02-06T00:22:43
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Pages that link to "What can you do with a template?" The following pages link to What can you do with a template?: - JDOC:Joomla! 1.5 Template Tutorials Project/Outline (← links) - J2.5:Getting Started with Templates (transclusion) (← links)
https://docs.joomla.org/index.php?title=Special:WhatLinksHere/What_can_you_do_with_a_template%3F&limit=50
2016-02-06T00:37:10
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Difference between revisions of "Customising the JA Purity template/customisations/Modifying the horizontal menu colour" From Joomla! Documentation < J1.5:Customising the JA Purity template | customisations Revision as of 11:50, 12 June
https://docs.joomla.org/index.php?title=Customising_the_JA_Purity_template/customisations/Modifying_the_horizontal_menu_colour&diff=14541&oldid=14538
2016-02-06T01:06:12
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Difference between revisions of "Security Checklist/Where can you learn more about file permissions?" From Joomla! Documentation < Security Checklist Revision as of 17:40, 8 October 2012 (view source) Phild (Talk | contribs) (added local link to Using phpSuExec replacing external link) ← Older edit Revision as of 18:40, 8 October 2012 (view source) Phild (Talk | contribs) (added local link to windows permission primer replacing external link) Newer edit → Line 1: Line 1: −{{underconstruction}} * [[How do UNIX file permissions work?|Unix Permissions Primer]] * [[How do UNIX file permissions work?|Unix Permissions Primer]] * [[Using phpSuExec]] * [[Using phpSuExec]] −* Windows Permissions Primer +* [[Windows Permissions Primer]] <noinclude>[[Category:FAQ]] <noinclude>[[Category:FAQ]] Revision as of 18:40, 8 October 2012 Unix Permissions Primer Using phpSuExec Windows Permissions Primer Categories: FAQ, Administration FAQ, Getting Started FAQ, Installation FAQ, Version 1.5 FAQ
https://docs.joomla.org/index.php?title=Security_Checklist/Where_can_you_learn_more_about_file_permissions%3F&diff=prev&oldid=76271
2016-02-06T01:34:48
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
How to respond to public comments automatically. How to respond to private messages automatically. Under Campaign on the left panel, select Auto Reply. On the bottom right corner, select +Auto reply to enter the “Settings” page. Enter a title for the campaign. Click on the “Select a Post” column to choose the post you want to use. You can select “Posted” or “Scheduled” posts. If the customer didn’t trigger any keyword rules, the bot will reply with the default reply. Click on Public Reply under Default Reply > Auto Reply Message and enter the prompted response in the textbox. Click on Private Reply under Auto Reply Message to edit the default response for private conversations. In addition, you can use the Random Text component to set up multiple texts. Click on “+ Random Text” to create your responses, and the bot will randomly pick one to reply with. Click on + Auto Reply Message on the top right corner of your window. Give the message a name. Under “Keywords”, enter the keywords you want. (In order to have a more flexible interaction for the bot, add similar words into the “Keywords” section.) Select “Include any keywords” or “Include every keyword”. Edit the auto reply messages for Public Reply and Private Reply. 1. After publishing, if you want to test out your bot, remember to switch to your personal account, since with a Facebook Page identity you cannot trigger the bot to respond. 2. The Private Reply function for Facebook only allows text messages. If you want to send pictures or other components, you can send private messages to customers to trigger the particular messages. For example: “Seems like you want to know more. Send “Tell me the secret!” word by word to receive the surprise.”
https://docs-en.yoctol.ai/campaign-promotion-guide/auto-reply-for-campaign
2020-09-18T17:01:03
CC-MAIN-2020-40
1600400188049.8
[]
docs-en.yoctol.ai
Multitenancy deployment considerations This topic provides an overview of multitenancy and describes how you can implement multitenancy. Multitenancy enables you to run a single instance of BMC Decision Support for Server Automation for multiple customers. You can implement multitenancy by using the CUSTOMER property of a server in BMC Server Automation and a corresponding CUSTOMER server property available in all domains in BMC Decision Support for Server Automation. For information about configuring multitenancy, see the related configuration topics in this documentation.
https://docs.bmc.com/docs/decisionsupportserverautomation/88/multitenancy-deployment-considerations-629184568.html
2020-09-18T15:59:03
CC-MAIN-2020-40
1600400188049.8
[]
docs.bmc.com
Use this area to provide general information about tax offices you add to the system. Field Used for Name Enter a name of the tax office. Registration number Enter a 3-digit tax office code. Default currency Company type: Specify what kind of business partner you want to add (select the business partner type) - whether it is a customer/vendor or other. The type determines what transactions you can perform with this company. While creating tax offices, Institution is selected by default. Institution types of companies Besides dealing with vendors and customers, your company has relationships with other types of partners, such as: While creating tax offices, Tax agency is selected by default. Serial number (#) Serial number (#) is assigned automatically when you save the tax office. Active Turn on the Inactive option if you want to deactivate a previously active tax office. If a tax office is inactive, it will not be available for selection or use and will be accessible only on the listing page of the Tax office directory or of the Company directory. By default, this option is turned off and all newly created tax offices are active. You can select active tax offices from the lists for the Tax office/VAT office fields in various VAT documents.
https://docs.codejig.com/uk/entity2305843015656313960/view/4611686018427395996
2020-09-18T16:04:32
CC-MAIN-2020-40
1600400188049.8
[]
docs.codejig.com
- Created by Unknown User (paulb), last modified by Umut Uyurkulak on Jun 26, 2020 Overview CRYENGINE's Asset Browser provides an overview of your project by allowing you to browse its contents, search for particular assets, as well as import new assets in FBX format. The goal is to provide users with the freedom to interact with their project's contents on disk solely through the Editor, to the point where a regular File Explorer should no longer be necessary when working on a CRYENGINE project. In the future all asset management functionality will be added to the Asset Browser. More on the engine's Asset System can be found on Asset System - Generating Metadata. Asset Browser Overview Certain default assets that arrive with your CRYENGINE build are separated from your project's assets; these default assets are situated under the Engine folder, while all project assets are stored under Assets. For Programmers Engine assets can be addressed by prefixing %ENGINE% prefix to file paths, with ICryPak handling all subsequent path resolution. Every hard-coded reference to Engine assets meanwhile must make use of this prefix. Paths with no prefix still do work for reasons of backward compatibility, as CryPak defaults to the Engine directory when the Assets folder contains no file matching the specified path. Opening the Asset Browser To be able to use and view the contents of an existing project within the Asset Browser, .cryasset files will need to be generated as explained in the Asset System - Generating Metadata documentation. The Asset Browser can be accessed via Tools -> Asset Browser. 1. Menu Accessed via the icon situated at the top-right corner of the Asset Browser, the Menu contains the following options: File Edit View Toolbars When a tool has a toolbar, whether this is a default one or a custom one, the options above are also available when right-clicking in the toolbar area (only when a toolbar is already displayed). 2. Navigation Bar Indicates the path of the currently selected folder from the folder tree, also permitting quick switching between previous/successive folders along its path. 3. Folder Tree Lists a directory of folders and their sub-folders included within your project, allowing for quick navigation between these. The Search Folders bar at its top further helps searching for specific folders/sub-folders on the basis of their names. Context Menu Right clicking anywhere within this pane yields a context menu with the following items. 4. Search Results Pane The contents of folders selected within the Folder Tree, assets retrieved by searches and their corresponding search filters are displayed within this Search Results pane. The thumbnails of assets are assigned colors based on their type within the Search Results Pane, and these are visible only when View → Shows Thumbnails/Split Horizontally/Split Vertically is active as seen in the image below. Thumbnails The colors and the types of assets they indicate are as follows: Context menus are generated by right-clicking virtually anywhere on the Search Results pane and depending on where these right-clicks occur, the menus may differ in their listed options as follows. Column Headers Context Menu With View → Shows Details/Split Horizontally/Split Vertically active, asset listings are displayed in column view within the Search Results pane such that each column header corresponds to specific properties of the displayed assets. 
Right-clicking on the column headers within this pane allows the inclusion of additional property columns. Search Results Pane Context Menu With View → Recursive View inactive, right-clicking on empty space within the Search Results pane without selecting any particular asset yields: Asset Context Menu Generated by right-clicking a specific asset within the Search Results pane, its options include: Assets contained in the GameSDK Sample Project are compressed and stored within .pak files on disk. Since archived files cannot be directly modified, users working with GameSDK may find that the Rename and Open in File Explorer options of the Asset Context Menu are unavailable for assets located within the GameSDK assets directory. Version Control Context Menu With a Version Control System set up, the Folder Tree, Search Result Pane and Asset Context Menus might include additional options. Please refer to the Associating Work Files to Assets section on this page for more information about adding work files to assets. Functionality Additional functionality provided by the Asset Browser includes: Associating Work Files to Assets Work files are those used in the design and development of an asset. For instance, a material (.mtl) file might have multiple Photoshop Document (.psd) files, TIFF or PNG files, and/or JPEG reference images as its work files. The work files associated with any asset in the Asset Browser can be viewed from the Editor, provided these have been linked to the asset from the Asset Browser. Adding Work Files To add a work file, right-click an asset within the Asset Browser and select the Work Files → Manage Work Files... option from the context menu. This opens the Manage Work Files window through which work files located within a project's asset directory can be linked to the selected asset. Manage Work Files Manage Work Files window Clicking the Add Files option brings up a file browser to locate and select the desired work files, while clicking Save confirms the selection. The different columns within the Manage Work Files window are as follows. If using Version Control, an asset must be checked out before work files can be associated with it. Viewing Work Files Once linked, the work files associated with an asset can be viewed/located by right-clicking that asset's listing within the Asset Browser and selecting Work Files. Work Files Deleting Work File Associations To delete an association between an asset and a specific work file, right-click the asset in the Asset Browser and select the Work Files → Manage Work Files... option to open the Manage Work Files window. Hover over the desired work file with the mouse to reveal an icon against its listing, and click this icon to cancel the association between the file and the currently selected asset. While files of virtually any file type may be associated with an asset as its work files, these files must be located within the current project's Assets directory. Copying and Duplicating Pressing Ctrl + C with one or more assets selected within the Asset Browser copies those assets to the clipboard; these copies may be pasted within any folder of the project's Assets directory using Ctrl + V. Similarly, pressing Ctrl + D with one or more assets selected creates duplicates of those assets within the same folder. Alternatively, copies and duplicates of assets can also be made using the Copy and Duplicate options of the context menu generated by right-clicking an asset in the Asset Browser.
- Although an asset located within the Engine directory can be copied and pasted within a folder of a project's Assets directory, assets cannot be copied to or duplicated in locations within the Engine folder. The Engine folder is immutable by default, meaning its contents cannot be changed from within the Asset Browser. - When activated, the View → Recursive View option in the Asset Browser displays assets located within both the selected folder and the sub-folders it contains in the Search Results pane. Since the destination folder might be unclear or ambiguous in this view, copies of assets cannot be pasted with Recursive View active. - Copying and duplicating Level, Schematyc Entity (.schematyc_ent) and Schematyc Library (.schematyc_lib) asset file types is not supported. Favorites The Favorites feature gives users the ability to bookmark and add to a list the most commonly used/liked assets. An asset may be added to/removed from the Favorites by toggling the star icon against its listing when View → Shows Details/Split Horizontally/Split Vertically is active. Star Icons of Assets Once added to the list of Favorites, the star icon against an asset's listing appears filled-in when Shows Details/Split Horizontally/Split Vertically is active, and at the top-left corner of its thumbnail when Shows Thumbnails/Split Horizontally/Split Vertically is active as illustrated here. Star Icons of Asset Thumbnails Either way, an exclusive list of Favorites may be accessed by clicking the star button situated on the left of the Search Assets bar. List of Favorites Rather than display a complete list of starred assets all at once, the list of Favorites only displays those included within the currently selected folder/sub-folder of the Folder Tree. Smart and Advanced Search The Search Assets bar searches for assets included within the Engine/Assets directories of a project, by checking the entered search string against the Name and Type values of every asset. It hence yields in its results those assets whose names or types match the entered string. For example, assume the project's current repository of assets contains materials named my_leaf, my_trunk and grass. A search query containing the word material would yield results inclusive of the my_leaf, my_trunk and grass material assets; similarly a search query containing the words my material would include the my_leaf and my_trunk assets in its results. By default, the Search Assets bar searches for assets within all sub-folders included under the folder currently selected in the Asset Browser's Folder Tree. Users wishing to confine their search results to only the selected folder, however, can do so by deactivating the View -> Recursive View option with the search string entered in the Search Assets bar. Filter The Filter button situated to the right of the Search bar enables users to describe specific criteria by which the Search Results pane must list a project's various assets. Clicking the button yields options to Add Criterion, Clear Criteria and Save/Load custom filters as required. Clicking the Add Criterion button presents a dropdown menu from which asset properties may be picked to specify your search criteria; the field adjacent to this dropdown helps specify the property value by which the Asset Browser must filter the list of objects.
Search Assets Filter Depending on the selected property, these values may need to be entered manually, picked from a separate dropdown or, in the case of properties such as Dependencies, may require users to physically locate and specify assets with which dependency relationships might exist. By default, filtering only yields search results from within the folder currently selected on the Folder Tree. Activating the View → Recursive View option, however, allows filtering to yield results from all sub-folders included within the current folder. Clicking the icon inverts the specified search criteria, while the icon deletes the filter. Save/Load Filter Right-clicking the filter button opens a menu containing the Clear Criteria option, and a list of previously saved filters. With multiple folders selected within the Folder Tree, searching for assets using the Search Bar will yield results from every selected folder. Manipulating Assets Assets may be included within a level by simply dragging and dropping them from the Asset Browser's Search Results pane into the Viewport. This allows for the quick placement of mesh and particle type assets, while even permitting material assets to be assigned to existing objects within a level. In addition to this, assets may be: Edited Most assets may be edited by double-clicking upon their respective listings within the Asset Browser's Search Results pane; doing so upon Environment (.env), Material (.mtl), GeometryCache (.cax), Particles (.pfx), CSharpScript (.cs), SchematycEntity (.schematyc_ent) and SchematycLibrary (.schematyc_lib) files brings up the relevant editor tool. On the other hand, double-clicking a texture (.dds) that has a TIFF source file opens the Texture compiler settings dialog, where settings such as the compression scheme, Mip map generation parameters and normal/alpha map combinations of the TIFF file can be configured. Please note that an imported asset can be edited by this method only if its source files are available. Files of the Mesh (.cgf), Skeleton (.chr), SkinnedMesh (.skin), AnimatedMesh (.cga), MeshAnimation (.anm), Character (.cdf) and Animation (.caf) types however may only be edited when imported via the FBX importer. Eventual engine releases will gradually allow for all kinds of assets to be edited by double-clicking. Assets with unsaved changes are marked by an asterisk icon at the bottom-right of their thumbnails and names. Deleted By highlighting an asset's listing and hitting the Delete key. If dependencies exist between the asset and others within your project, a warning prompt will appear. Moved A similar prompt appears when an attempt is made to move assets by dragging and dropping their listings to the target folder in the Folder Tree. Imported Assets can be imported via the File -> Import option of the Asset Browser, which automatically triggers the following dialog to choose between the asset's various elements that need importing. Importing Assets dialog All assets are imported with their default settings, which may be changed later in the relevant Editor as explained previously. Alternatively, dragging and dropping an asset from your system's File Explorer to the destination folder within the Asset Browser automatically imports that asset's comprising elements as well. Holding Ctrl while doing so will generate the above dialog. Either way, the following table lists all file types supported by the importer. Multiple assets can easily be dragged and dropped into the Asset Browser by holding down either the Shift or Ctrl key.
Tool-specific Asset Browsers and Instant Editing The Environment Editor, Material Editor and Particle Editor tools have their own Asset Browser panels, which only display assets relevant to them. These tool-specific Asset Browsers can be added to the windows of the Environment, Material or Particle Editor by selecting the Window → Panels → Asset Browser option from their menu. Once opened, the tool-specific Asset Browser can be docked within the tool window as desired. Only assets that are relevant to the tool can be created within the tool-specific Asset Browser. Moreover, while the functionality of these tool-specific Asset Browsers is the same as the standalone Asset Browser, they have an additional Sync Selection feature which can be activated by clicking the icon. Sync Selection in the Material Editor's Asset Browser When Sync Selection is enabled, selecting an asset in the Asset Browser of the Environment, Material or Particle Editor immediately opens that asset in the Editor. This makes it very easy to cycle through different assets and edit them on the fly. Live Updates for Asset Resource Selectors When an object within a level is selected from the Level Explorer or the Viewport, the Properties tool displays a multitude of options by which an object's parameters may be tweaked. Certain options within this Properties tool allow for custom files (such as that of Mesh, Material or Animation types) to be selected in relation with the asset; this is done by clicking upon the Asset Resource Selector which appears as a folder button against option fields as demonstrated below. Live Updates Clicking upon an Asset Resource Selector automatically opens the Select Asset window, which allows for the desired asset files to be located in the format of an Asset Browser dialog. Any changes/selections made via this dialog is automatically applied to the selected object upon the Viewport. The Asset Browser styled dialog can alternatively be replaced by a standard Open File window by setting the value of ed_enableAssetPickers, a console variable to 0 via the Console. Texture Tool-tips Given a texture asset's listing within the Search Results pane, hovering the mouse over the texture generates an overview of its basic properties as illustrated below. Texture Tooltip Additionally holding CTRL during a mouse hover provides a larger image preview of the texture asset. Texture Tooltip (Large)
https://docs.cryengine.com/display/CEMANUAL/Asset+Browser
2020-09-18T17:40:52
CC-MAIN-2020-40
1600400188049.8
[array(['/download/attachments/35260066/AssetBrowser.jpg?version=2&modificationDate=1559310845000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Colored%20Thumbnails%20Overview%20%285.6%29.jpg?version=2&modificationDate=1559133462000&api=v2', None], dtype=object) array(['/download/attachments/35260066/WorksFiles_VCSMenu%285.6%29%20%281%29.png?version=1&modificationDate=1559312157000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Manage%20Work%20Files%20%285.6%29.jpg?version=5&modificationDate=1593177354000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Manage%20Work%20Files%20Window%20%285.6%29.jpg?version=6&modificationDate=1593177361000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Work%20Files%285.6%29.jpg?version=4&modificationDate=1593177405000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Star%20Icons%20of%20Assets%285.6%29.jpg?version=2&modificationDate=1566399419000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Favorite%20Thumbnails%20%285.6%29.jpg?version=3&modificationDate=1558949319000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Favorites_Icon%285.6%29.jpg?version=1&modificationDate=1566398977000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Search%20Assets%20Filter%20%28For%205.6%29.jpg?version=7&modificationDate=1593177506000&api=v2', None], dtype=object) array(['/download/attachments/35260066/SaveLoadFilter%20%285.6%29.jpg?version=2&modificationDate=1559136749000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Modified_Icon%20%28For%205.6%29.jpg?version=5&modificationDate=1558950090000&api=v2', None], dtype=object) array(['/download/attachments/35260066/AssetBrowserDragDropImportMenu.jpg?version=2&modificationDate=1559138756000&api=v2', None], dtype=object) array(['/download/attachments/35260066/SyncSelection.gif?version=1&modificationDate=1562829473000&api=v2', None], dtype=object) array(['/download/attachments/35260066/LiveUpdates_5_6.gif?version=1&modificationDate=1558951330000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Texture%20Tooltip%285.6%29.jpg?version=1&modificationDate=1566400146000&api=v2', None], dtype=object) array(['/download/attachments/35260066/Texture%20Tooltip%20Large%285.6%29.jpg?version=1&modificationDate=1566400480000&api=v2', None], dtype=object) ]
docs.cryengine.com
Note: UNet is deprecated, and will be removed in the future. A new system is under development. For more information and next steps see this blog post and the FAQ.

There are several types of DownloadHandlers:

- DownloadHandlerBuffer is used for simple data storage.
- DownloadHandlerFile is used for downloading and saving a file to disk with a low memory footprint.
- DownloadHandlerTexture is used for downloading images.
- DownloadHandlerAssetBundle is used for fetching AssetBundles.
- DownloadHandlerAudioClip is used for downloading audio files.
- DownloadHandlerMovieTexture is used for downloading video files. It is recommended that you use VideoPlayer for video download and movie playback since MovieTexture is deprecated.
- DownloadHandlerScript is a special class. On its own, it does nothing. However, this class can be inherited by a user-defined class. This class receives callbacks from the UnityWebRequest system, which can then be used to perform completely custom handling of data as it arrives from the network.

The APIs are similar to DownloadHandlerTexture's interface.

UnityWebRequest has a property disposeDownloadHandlerOnDispose, which defaults to true. If this property is true, then when the UnityWebRequest object is disposed, Dispose() will also be called on the attached download handler, rendering it useless. If you keep a reference to the download handler longer than the reference to the UnityWebRequest, you should set disposeDownloadHandlerOnDispose to false.

This Download Handler is the simplest, and handles the majority of use cases. It stores received data in a native code buffer. When the download is complete, you can access the buffered data either as an array of bytes or as a text string.

using UnityEngine;
using UnityEngine.Networking;
using System.Collections;

public class MyBehaviour : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(GetText());
    }

    IEnumerator GetText()
    {
        UnityWebRequest www = new UnityWebRequest("");
        www.downloadHandler = new DownloadHandlerBuffer();
        yield return www.SendWebRequest();

        if (www.result != UnityWebRequest.Result.Success)
        {
            Debug.Log(www.error);
        }
        else
        {
            // Show results as text
            Debug.Log(www.downloadHandler.text);

            // Or retrieve results as binary data
            byte[] results = www.downloadHandler.data;
        }
    }
}

This is a special download handler for large files. It writes downloaded bytes directly to file, so the memory usage is low regardless of the size of the file being downloaded. The distinction from other download handlers is that you cannot get data out of this one; all data is saved to a file.

using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

public class FileDownloader : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(DownloadFile());
    }

    IEnumerator DownloadFile()
    {
        var uwr = new UnityWebRequest("", UnityWebRequest.kHttpVerbGET);
        string path = Path.Combine(Application.persistentDataPath, "unity3d.html");
        uwr.downloadHandler = new DownloadHandlerFile(path);
        yield return uwr.SendWebRequest();
        if (uwr.result != UnityWebRequest.Result.Success)
            Debug.LogError(uwr.error);
        else
            Debug.Log("File successfully downloaded and saved to " + path);
    }
}

Instead of using a DownloadHandlerBuffer to download an image file and then creating a texture from the raw bytes using Texture.LoadImage, it's more efficient to use DownloadHandlerTexture.

This Download Handler stores received data in a UnityEngine.Texture. On download completion, it decodes JPEGs and PNGs into valid UnityEngine.Texture objects. Only one copy of the UnityEngine.Texture is created per DownloadHandlerTexture object. This reduces performance hits from garbage collection.
The handler performs buffering, decompression and texture creation in native code. Additionally, decompression and texture creation are performed on a worker thread instead of the main thread, which can improve frame time when loading large textures. Finally, DownloadHandlerTexture only allocates managed memory when finally creating the Texture itself, which eliminates the garbage collection overhead associated with performing the byte-to-texture conversion in script.

The following example downloads a PNG file from the internet, converts it to a Sprite, and assigns it to an image:

using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Networking;
using System.Collections;

[RequireComponent(typeof(Image))]
public class ImageDownloader : MonoBehaviour
{
    Image _img;

    void Start()
    {
        _img = GetComponent<UnityEngine.UI.Image>();
        Download("");
    }

    public void Download(string url)
    {
        StartCoroutine(LoadFromWeb(url));
    }

    IEnumerator LoadFromWeb(string url)
    {
        UnityWebRequest wr = new UnityWebRequest(url);
        DownloadHandlerTexture texDl = new DownloadHandlerTexture(true);
        wr.downloadHandler = texDl;
        yield return wr.SendWebRequest();
        if (wr.result == UnityWebRequest.Result.Success)
        {
            Texture2D t = texDl.texture;
            Sprite s = Sprite.Create(t, new Rect(0, 0, t.width, t.height), Vector2.zero, 1f);
            _img.sprite = s;
        }
    }
}

The advantage to this specialized Download Handler is that it is capable of streaming data to Unity's AssetBundle system. Once the AssetBundle system has received enough data, the AssetBundle is available as a UnityEngine.AssetBundle object. Only one copy of the UnityEngine.AssetBundle object is created. This considerably reduces run-time memory allocation as well as the memory impact of loading your AssetBundle. It also allows AssetBundles to be partially used while not fully downloaded, so you can stream Assets.

All downloading and decompression occurs on worker threads.

AssetBundles are downloaded via a DownloadHandlerAssetBundle object, which has a special assetBundle property to retrieve the AssetBundle.

Due to the way the AssetBundle system works, all AssetBundles must have an address associated with them. Generally, this is the nominal URL at which they're located (meaning the URL before any redirects). In almost all cases, you should pass in the same URL as you passed to the UnityWebRequest. When using the High Level API (HLAPI), this is done for you.

using UnityEngine;
using UnityEngine.Networking;
using System.Collections;

public class MyBehaviour : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(GetAssetBundle());
    }

    IEnumerator GetAssetBundle()
    {
        UnityWebRequest www = new UnityWebRequest("");
        DownloadHandlerAssetBundle handler = new DownloadHandlerAssetBundle(www.url, uint.MaxValue);
        www.downloadHandler = handler;
        yield return www.SendWebRequest();

        if (www.result != UnityWebRequest.Result.Success)
        {
            Debug.Log(www.error);
        }
        else
        {
            // Extracts AssetBundle
            AssetBundle bundle = handler.assetBundle;
        }
    }
}

This download handler is optimized for downloading audio files. Instead of downloading raw bytes using DownloadHandlerBuffer and then creating an AudioClip out of them, you can use this download handler to do it in a more convenient way.
using System.Collections; using UnityEngine; using UnityEngine.Networking; public class AudioDownloader : MonoBehaviour { void Start () { StartCoroutine(GetAudioClip()); } IEnumerator GetAudioClip() { using (var uwr = UnityWebRequestMultimedia.GetAudioClip("", AudioType.OGGVORBIS)) { yield return uwr.SendWebRequest(); if (uwr.result != UnityWebRequest.Result.Success) { Debug.LogError(uwr.error); yield break; } AudioClip clip = DownloadHandlerAudioClip.GetContent(uwr); // use audio clip } } } For users who require full control over the processing of downloaded data, Unity provides the DownloadHandlerScript class. By default, instances of this class do nothing. However, if you derive your own classes from DownloadHandlerScript, you may override certain functions and use them to receive callbacks as data arrives from the network. Note: The actual downloads occur on a worker thread, but all DownloadHandlerScript callbacks operate on the main thread. Avoid performing computationally heavy operations during these callbacks. protected void ReceiveContentLength(long contentLength); This function is called when the Content-Length header is received. Note that this callback may occur multiple times if your server sends one or more redirect responses over the course of processing your UnityWebRequest. protected void OnContentComplete(); This function is called when the UnityWebRequest has fully downloaded all data from the server, and has forwarded all received data to the ReceiveData callback. protected bool ReceiveData(byte[] data, long dataLength); This function is called after data has arrived from the remote server, and is called once per frame. The data argument contains the raw bytes received from the remote server, and dataLength indicates the length of new data in the data array. When not using pre-allocated data buffers, the system creates a new byte array each time it calls this callback, and dataLength is always equal to data.Length. When using pre-allocated data buffers, the data buffer is reused, and dataLength must be used to find the number of updated bytes. This function requires a return value of either true or false. If you return false, the system immediately aborts the UnityWebRequest. If you return true, processing continues normally. Many of Unity’s more advanced users are concerned with reducing CPU spikes due to garbage collection. For these users, the UnityWebRequest system permits the pre-allocation of a managed-code byte array, which is used to deliver downloaded data to DownloadHandlerScript’s ReceiveData callback. Using this function completely eliminates managed-code memory allocation when using DownloadHandlerScript-derived classes to capture downloaded data. To make a DownloadHandlerScript operate with a pre-allocated managed buffer, supply a byte array to the constructor of DownloadHandlerScript. Note: The size of the byte array limits the amount of data delivered to the ReceiveData callback each frame. If your data arrives slowly, over many frames, you may have provided too small of a byte array. using UnityEngine; using UnityEngine.Networking; public class LoggingDownloadHandler : DownloadHandlerScript { // Standard scripted download handler - allocates memory on each ReceiveData callback public LoggingDownloadHandler(): base() { } // Pre-allocated scripted download handler // reuses the supplied byte array to deliver data. // Eliminates memory allocation. public LoggingDownloadHandler(byte[] buffer): base(buffer) { } // Required by DownloadHandler base class. 
Called when you address the 'bytes' property. protected override byte[] GetData() { return null; } // Called once per frame when data has been received from the network. protected override bool ReceiveData(byte[] data, int dataLength) { if(data == null || data.Length < 1) { Debug.Log("LoggingDownloadHandler :: ReceiveData - received a null/empty buffer"); return false; } Debug.Log(string.Format("LoggingDownloadHandler :: ReceiveData - received {0} bytes", dataLength)); return true; } // Called when all data has been received from the server and delivered via ReceiveData. protected override void CompleteContent() { Debug.Log("LoggingDownloadHandler :: CompleteContent - DOWNLOAD COMPLETE!"); } // Called when a Content-Length header is received from the server. protected override void ReceiveContentLengthHeader(ulong contentLength) { Debug.Log(string.Format("LoggingDownloadHandler :: ReceiveContentLength - length {0}", contentLength)); } }
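To tie the scripted handler example above together, the following sketch shows one way the LoggingDownloadHandler could be attached to a UnityWebRequest with a pre-allocated buffer; the URL and buffer size are placeholders rather than values from the original example.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class LoggingDownloadExample : MonoBehaviour
{
    IEnumerator Start()
    {
        // Pre-allocated buffer; its length caps how many bytes ReceiveData sees per callback.
        byte[] buffer = new byte[16 * 1024]; // placeholder size

        UnityWebRequest request = new UnityWebRequest("https://example.com/stream"); // placeholder URL
        request.downloadHandler = new LoggingDownloadHandler(buffer);

        // Keep the handler usable after the request is disposed, as described earlier.
        request.disposeDownloadHandlerOnDispose = false;

        yield return request.SendWebRequest();

        if (request.result != UnityWebRequest.Result.Success)
            Debug.LogError(request.error);
    }
}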
https://docs.unity3d.com/ru/2020.1/Manual/UnityWebRequest-CreatingDownloadHandlers.html
2020-09-18T18:29:37
CC-MAIN-2020-40
1600400188049.8
[]
docs.unity3d.com
1. Introduction Content as a Service provides the editorial data of FirstSpirit via a uniform JSON-based data format that can be consumed by any endpoint. The data is updated transparently when it is changed or released in the FirstSpirit editorial system. With a cloud installation of the Content as a Service platform the data can be efficiently accessed worldwide. In FirstSpirit, only the form configurations and the data maintained by the editorial system are accessed. Templating in FirstSpirit is completely eliminated and is shifted to the front end or front end services (for more information, see FSXA documentation). The FirstSpirit developer can thus concentrate completely on modeling the domain-oriented objects and creating the corresponding input components. The following section describes the conventions for data format and URLs, as well as the optimal delivery of content for different consumers. 2. Introduction FirstSpirit module The CaaS Connect module is the link between FirstSpirit editorial environment and the CaaS platform that acts as the content delivery. This connection is largely invisible to the editorial team and only requires attention in the event of a technical problem. The editors can always view the current state of data directly via the CaaS platform. As soon as a change has been made in FirstSpirit, the synchronization is triggered in the background and the database immediately reflects the change. 2.1. CaaS URLs The smallest addressable element in a CaaS project is a page. The module creates a document in the CaaS platform for each page and language for both the preview and release state. In addition to pages, media are also synchronized. Pictures are multiplied with all project languages and resolutions, while other media are transferred for all languages. Each of these elements can be identified and referenced with a unique URL. 2.1.1. URL schema The CaaS URL for an element consists of four parts: tenant id, project identifier, collection identifier, and document identifier. The tenant id is the maintained name or abbreviation of the tenant. The project identifier is the GID of the FirstSpirit project. The collection identifier allows you to differentiate between release and preview state. For page documents and metadata documents of media it is either 'release.content' or 'preview.content'. The document identifier consists of the FirstSpirit GID of the page as well as language and resolution. Together with the base URL of the CaaS endpoint, this results in the fully qualified URL of an item. Example: The tenant id is defaultTenant. The FirstSpirit project GID is e54cb80e-1f9c-4e8d-84b8-022f473202eb. The fully qualified CaaS URL of a sample page with the GID f3628468-ee53-453f-a26f-a12edcd1f1f0 in preview and for the English language looks like this CaaS endpoint: CaaS URL: defaultTenant/e54cb80e-1f9c-4e8d-84b8-022f473202eb.preview.files/f6910b22-6ae8-4ce1-af45-c7b364b3117a.en_GB 2.1.2. Media URLs For each medium, two types of data fragments are generated, which can be queried in the CaaS platform: Metadata and binary data. Metadata document For a given medium, exactly one metadata document is created for each project language (regardless of whether the medium is language-dependent or not). The metadata document will always be stored in the CaaS platform with a generated CaaS URL. It contains the FirstSpirit metadata of the medium, as well as a list of resolutions and associated resolution-specific URLs. 
The URL for the binary data of a specific medium for a specific language - and possibly a specific resolution - must be taken from this metadata document. In case the CaaS platform is used to provide the media binary data, the same URL schema of pages or metadata documents is used. The only difference is the collection identifier where either 'release.files' or 'preview.files' is used for the release or preview state. { "_id": "f6910b22-6ae8-4ce1-af45-c7b364b3117a.en_GB", "fsType": "Media", "name": "audio_video_connector", "displayName": "Connecting cable for audio and video", "identifier": "f6910b22-6ae8-4ce1-af45-c7b364b3117a", "uid": "audio_video_connector", "uidType": "MEDIASTORE_LEAF", "fileName": "audio-video-connector", "languageDependent": false, "mediaType": "PICTURE", "description": "Within this demo project \"Mithras Energy\"...", "resolutionsMetaData": { "110x73": { "fileSize": 3885, "extension": "jpg", "mimeType": "image/jpeg", "width": 110, "height": 73, "url": "" }, "ORIGINAL": { "fileSize": 69597, "extension": "jpg", "mimeType": "image/jpeg", "width": 693, "height": 462, "url": "" } }, "metaFormData": {}, "locale": { "identifier": "EN", "country": "GB", "language": "en" }, "_etag": { "$oid": "5f23f63dc9977b5e90c25dc2" } } 2.2. Data format Unlike traditional FirstSpirit projects and earlier module versions, CaaS version 3 or later does not support either output channels or FirstSpirit templating. Instead, a JSON document is generated for each page of the project, based on the toJson standard of FirstSpirit. An adaptation of the format is not possible. This restriction allows to deliver a complete, standardized data format via CaaS platform, so that all consuming endpoints can work with the same data format. Since the standard data format is very comprehensive, the platform offers filtering and aggregation capabilities to reduce data volumes for mobile end points, for example. For more information about CaaS platform, see platform documentation. Since the CaaS URLs derive the document identifier from a unique FirstSpirit ID (among other things), it is necessary to use a filter when querying a document using its name. To query a CaaS page (a FirstSpirit page reference) with the name services a GET request is executed with the following URL:{'name': "services"} For more information on queries for the CaaS platform, see platform documentation. 2.2.1. CaaS JSON format The standard JSON format of FirstSpirit serves as the basis for the CaaS JSON format and is extended by the CaaS Connect module both with CaaS specific JSON format configuration, as well as with some attributes that simplify its usage. The CaaS specific format configurations include reducing the output of datasets to references, the indirect referencing of records of a content projection, and enabling the output of the FirstSpirit metadata. Dataset URLs: FirstSpirit does not address individual datasets and instead works with content projections or selects and embeds datasets in pages. In CaaS projects, individual datasets are identified by a unique URL and can be queried with it. Therefore, pages do not embed datasets, but contain references to the stored datasets in the CaaS platform. { "fsType" : "FS_DATASET", "name" : "st_button_link", "value" : { "fsType" : "DatasetReference", "target" : { "entityType" : "product", "fsType" : "Dataset", "identifier" : "fae0687b-c365-4851-919e-566f4d587201", "schema" : "products" } } } Dataset routes: The route attribute contains the relative route of a dataset. 
This route is only calculated if a preview page has been selected for the underlying table template. The preview page should contain a content-projection for the rendered table, and the setting "number of entries per page" should be set to "1" (see Content Projection). This is the only way different datasets will have different routes. { "route" : "/Company/Locations/Locations.html" } Fragment metadata: The attribute fragmentMetaData contains the attributes id (fragment ID) and type (fragment type). { "fragmentMetaData": { "id": "378d5ec9_58f1_4dec_83bc_724dc93de5c2", "type": "news" } } Locale: The locale attribute contains the attributes identifier (abbreviation of the language), country (associated country) and language (associated language). { "locale": { "identifier": "EN", "country": "GB", "language": "en" } } Media URL attributes in media metadata: The media metadata provided by FirstSpirit (see Media URLs) usually only includes the URL to a medium that was generated by a URL factory. Content2Section: The JSON data of content projections or its sections contain references to its records when using the standard JSON configuration of FirstSpirit. However, the CaaS Connect module uses a specific configuration for the JSON format so that the records are not referenced directly, but indirectly via a query object. For this purpose, the query object contains identifiable attributes of the content projection. { "displayName" : "blog", "entityType" : "blog", "filterParams" : {}, "fsType" : "Content2Section", "maxPageCount" : 0, "name" : "blog", "ordering" : [ { "ascending" : false, "attribute" : "fs_id" } ], "query" : null, "recordCountPerPage" : 1, "schema" : "global", "template" : { "displayName" : "Blog entry", "fsType" : "TableTemplate", "identifier" : "e657e0f0-0fd3-456f-b5ab-560a879ca748", "name" : "Blog entry", "uid" : "global.blog", "uidType" : "TEMPLATESTORE_SCHEMA" } } 2.3. Preview and release state An essential distinction between release and preview data states is made by both FirstSpirit and the CaaS Connect module. The platform manages both states of data, which are distinguishable by different CaaS URLs (see URL schema). A synchronization of both data states is always based on certain actions that the editors perform in FirstSpirit. Release actions are the only actions that update the release state, all other changes only affect the preview state. 3. Legal information The CaaS is a product of e-Spirit AG, Dortmund, Germany. Only a license agreed upon with e-Spirit AG is valid with respect to the user for using the module. 4. Help The Technical Support of the e-Spirit AG provides expert technical support covering any topic related to the FirstSpirit™ product. You can get and find more help concerning relevant topics in our community. 5..
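As a hedged illustration of the filter query shown in the data format section above, the sketch below issues the same kind of GET request from Python with the requests library. The tenant id, project GID and collection are taken from the documentation's own examples; the base URL and the exact quoting of the filter parameter are assumptions, and any authentication required by the CaaS platform is omitted here.

import requests

# Pieces taken from the examples above; the base URL is an assumed placeholder.
base_url = "https://caas-example.e-spirit.cloud"
collection = "defaultTenant/e54cb80e-1f9c-4e8d-84b8-022f473202eb.preview.content"

# Query a page document by name, mirroring the ?filter={'name': "services"} example above.
params = {"filter": '{"name": "services"}'}

response = requests.get(f"{base_url}/{collection}", params=params)
response.raise_for_status()
print(response.json())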
https://docs.e-spirit.com/module/caas-connect/CaaS_Connect_FSM_Documentation_EN.html
2020-09-18T16:57:11
CC-MAIN-2020-40
1600400188049.8
[]
docs.e-spirit.com
Adding Displays
https://docs.toonboom.com/help/harmony-17/premium/rigging/add-display.html
2020-09-18T17:44:35
CC-MAIN-2020-40
1600400188049.8
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Breakdown/HAR12/HAR12_skeleton_optimization.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Breakdown/anp_connectDisplat.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Breakdown/anp_displaypropertiesbutton.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Breakdown/HAR11/HAR11_Skeleton_Optimization_001.png', None], dtype=object) ]
docs.toonboom.com
Community Calls (revision as of 10:36). Archived Community Call Schedule.
https://docs.uabgrid.uab.edu/sgw/index.php?title=Community_Calls&diff=prev&oldid=465&printable=yes
2020-09-18T18:17:26
CC-MAIN-2020-40
1600400188049.8
[]
docs.uabgrid.uab.edu
When you add a layer to your map, the layer is typically responsible for fetching the data to be displayed. The data requested can be either raster or vector data. You can think of raster data as information rendered as an image on the server side. Vector data is delivered as structured information from the server and may be rendered for display on the client (your browser). There are many different types of services that provide raster map data. This section deals with providers that conform with the OGC Web Map Service (WMS) specification. We’ll start with a fully working map example and modify the layers to get an understanding of how they work. Let’s take a look at the following code: <.htmlin the root of your workshop directory. Important If you want to keep track of your advances, we invite you to create different files for every exercise, you can call this one for example map-wms.html. The OpenLayers.Layer.WMS constructor requires 3 arguments and an optional fourth. See the API reference for a complete description of these arguments. var imagery = new OpenLayers.Layer.WMS( "Global Imagery", "", {layers: "bluemarble"} ); The first argument, "Global Imagery", is a string name for the layer. This is only used by components that display layer names (like a layer switcher) and can be anything of your choosing. The second argument, "", is the string URL for a Web Map Service. The third argument, {layers: "bluemarble"} is an object literal with properties that become parameters in our WMS request. In this case, we’re requesting images rendered from a single layer identified by the name "bluemarble". Tasks This same WMS offers a layer named "openstreetmap". Change the value of the layers param from "bluemarble" to "openstreetmap". If you just change the name of the layer and refresh your map you will meet a friend of any OpenLayers developer: our loved pink tiles. With Chrome, you can right click on any of them and go to Open Image in New Tab to get an idea of the problem. In addition to the layers parameter, a request for WMS imagery allows for you to specify the image format. The default for this layer is "image/jpeg". Try adding a second property in the params object named format. Set the value to another image type (e.g. "image/png"). Your revised OpenLayers.Layer.WMS Constructor should look like: var imagery = new OpenLayers.Layer.WMS( "Global Imagery", "", {layers: "openstreetmap", format: "image/png"} ); Save your changes and reload the map: A map displaying the "openstreetmap" layer as "image/png". Having worked with dynamically rendered data from a Web Map Service, let’s move on to learn about cached tile services.
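Bonus task (sketch). Building on the params object described above, the snippet below shows one possible way to stack a second WMS layer as a transparent overlay on the map created in this example. The layer name "topp:states" and the opacity value are assumptions for illustration and may not exist on the workshop server; the URL is left blank exactly as in the constructor shown above.

// Assumes the map and base layer from the example above have already been created.
var overlay = new OpenLayers.Layer.WMS(
    "States Overlay",                                                 // display name shown in a layer switcher
    "",                                                               // same WMS endpoint as the base layer
    {layers: "topp:states", transparent: true, format: "image/png"},  // params sent to the WMS
    {isBaseLayer: false, opacity: 0.7}                                // OpenLayers options: render as an overlay
);
map.addLayer(overlay);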
https://girona-openlayers-workshop.readthedocs.io/en/latest/layers/wms.html
2020-09-18T16:25:45
CC-MAIN-2020-40
1600400188049.8
[]
girona-openlayers-workshop.readthedocs.io
R Notebook Format¶ Otter Assign is compatible with Otter’s R autograding system and currently supports Jupyter notebook master documents. The format for using Otter Assign with R is very similar to the Python format with a few important differences. Assignment Metadata¶ As with Python, Otter Assign for R also allows you to specify various assignment generation arguments in an assignment metadata cell. ``` BEGIN ASSIGNMENT init_cell: false export_cell: true ... ``` This cell is removed from both output notebooks. Any unspecified keys will keep their default values. For more information about many of these arguments, see Usage and Output. The YAML block below lists all configurations supported with R and their defaults. Any keys that appear in the Python section but not below will be ignored when using Otter Assign with R. requirements: requirements.txt # path to a requirements file for Gradescope; appended by default overwrite_requirements: false # whether to overwrite Otter's default requirements rather than appending files: [] # a list of file paths to include in the distribution directories Autograded Questions¶ Here is an example question in an Otter Assign for R or list of numbers for the point values of each question. If a list of values, each case gets its corresponding value. If a single value, the number is divided by the number of cases so that a question with \(n\) cases has test cases worth \(\frac{\text{points}}{n}\) points. As an example, the question metadata below indicates an autograded question q1 with 3 subparts worth 1, 2, and 1 points, resp. ``` BEGIN QUESTION name: q1 points: - 1 - 2 - 1 ``` Solution Removal¶ Solution cells contain code formatted in such a way that the assign parser replaces lines or portions of lines with prespecified prompts. The format for solution cells in R notebooks is the same as in Python notebooks, described here. Otter Assign’s solution removal for prompts is compatible with normal strings in R, including assigning these to a dummy variable so that there is no undesired output below the cell: # this is OK: . = " # BEGIN PROMPT some.var <- ... " # END PROMPT Test Cells¶. When writing tests, each test cell should be a single call to testthat::test_that and there should be no code outside of the test_that call. For example, instead of ## Test ## data = data.frame() test_that("q1a", { # some test }) do the following: ## Test ## test_that("q1a", { data = data.frame() # some test }) The removal behavior regarding questions with no solution provided holds for R notebooks.
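Putting the pieces above together, a complete autograded R question could look roughly like the following. The question name, solution and test are invented for illustration; the metadata block, prompt markers and test cell follow the formats shown above, and the # SOLUTION marker follows the Python solution-cell conventions that the text says also apply to R notebooks.

``` BEGIN QUESTION
name: q2
points: 2
```

# solution cell
. = " # BEGIN PROMPT
ans <- ...
" # END PROMPT
ans <- mean(c(1, 2, 3)) # SOLUTION

## Test ##
test_that("q2 computes the mean", {
    expect_equal(ans, 2)
})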
https://otter-grader.readthedocs.io/en/latest/otter_assign/r_notebook_format.html
2020-09-18T18:02:17
CC-MAIN-2020-40
1600400188049.8
[array(['../_images/R_assign_sample_question.png', '../_images/R_assign_sample_question.png'], dtype=object)]
otter-grader.readthedocs.io
Custom Build Images and Live Package Updates Custom Build Images Custom Build Images can be used to provide a customized build environment. If you have specific dependencies that take a long time to install during a build using our default container, you can create your own Docker image and reference it during a build. Images can be hosted on Docker Hub Build settings is visible in the Amplify Console’s App settings menu only when an app is set up for continuous deployment and connected to a git repository. For instructions on this type of deployment, see Getting started with existing code. Configuring a Custom Build Image From your App Detail page, choose App settings > Build settings. From the Build image settings container, choose Edit. Specify your custom build image and choose Save. Custom Build Image Requirements In order for a custom build image to work as an Amplify Console build image there are a few requirements for the image: cURL: When we launch your custom image, we download our build runner into your container, and therefore we require cURL to be present. If this dependency is missing, the build will instantly fail without any output as our build-runner was unable to produce any output. Git: In order to clone your Git repository we require Git to be installed in the image. If this dependency is missing, the ‘Cloning repository’ step will fail. OpenSSH: In order to securely clone your repository we require OpenSSH to set up the SSH key temporarily during the build, the OpenSSH package provides the commands that the build runner requires to do this. (NPM-based builds)Node.JS+NPM: Our build runner does not install Node, but instead relies on Node and NPM being installed in the image. This is only required for builds that require NPM packages or Node specific commands. Live Package Updates Live Package Updates allows you to specify versions of packages and dependencies to use in our default build image. Our default build image comes with several packages and dependencies pre-installed (e.g. Hugo, Amplify CLI, Yarn, etc). Live Package Updates allows you to override the version of these dependencies and specify either a specific version, or always ensure the latest version is installed. If Live Package Updates is enabled, before your build is executed, the build runner will first update (or downgrade) the specified dependencies. This will increase the build time proportional to the time it takes to update the dependencies, but the benefit is that you can ensure the same version of a dependency is used to build your app. Configuring Live Updates From your App Detail page, choose App Settings > Build Settings. From the Build image settings section, choose Edit. Select a package you’d like to change from the Add package version override list. Input either a specific version of this dependency, or keep the default (latest). If latest is used, the dependency will always be upgraded to the latest version available. Choose Save to apply the settings.
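To make the image requirements above concrete, here is a minimal Dockerfile sketch that satisfies them. The base image, Node version and package names are illustrative choices, not Amplify requirements beyond the dependencies listed above.

# Minimal custom build image sketch for the Amplify Console.

# Node.js and NPM, needed for NPM-based builds (version is an example).
FROM node:12-buster

# cURL, Git and OpenSSH are required by the Amplify build runner.
RUN apt-get update \
    && apt-get install -y curl git openssh-client \
    && rm -rf /var/lib/apt/lists/*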
https://docs.aws.amazon.com/amplify/latest/userguide/custom-build-image.html
2020-09-18T18:43:38
CC-MAIN-2020-40
1600400188049.8
[]
docs.aws.amazon.com
. Component & Cluster Health¶ Kubernetes¶ An initial overview of Cilium can be retrieved by listing all pods to verify whether all pods have the status Running: $ kubectl -n kube-system get pods -l k8s-app=cilium NAME READY STATUS RESTARTS AGE cilium-2hq5z 1/1 Running 0 4d cilium-6kbtz 1/1 Running 0 4d cilium-klj4b 1/1 Running 0 4d cilium-zmjj9 1/1 Running 0 4d If Cilium encounters a problem that it cannot recover from, it will automatically report the failure state via cilium status which is regularly queried by the Kubernetes liveness probe to automatically restart Cilium pods. If a Cilium pod is in state CrashLoopBackoff then this indicates a permanent failure scenario. Detailed Status¶ If a particular Cilium pod is not in running state, the status and health of the agent on that node can be retrieved by running cilium status in the context of that pod: $ kubectl -n kube-system exec -ti cilium-2hq5z -- cilium status KVStore: Ok etcd: 1/1 connected: - 3.2.5 (Leader) ContainerRuntime: Ok docker daemon: OK Kubernetes: Ok OK Kubernetes APIs: ["cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service", "core/v1::Endpoint", "core/v1::Node", "CustomResourceDefinition"] Cilium: Ok OK NodeMonitor: Disabled Cilium health daemon: Ok Controller Status: 14/14 healthy Proxy Status: OK, ip 10.2.0.172, port-range 10000-20000 Cluster health: 4/4 reachable (2018-06-16T09:49:58Z) Alternatively, the k8s-cilium-exec.sh script can be used to run cilium status on all nodes. This will provide detailed status and health information of all nodes in the cluster: $ curl -sLO releases.cilium.io/v1.1.0) Detailed information about the status of Cilium can be inspected with the cilium status --verbose command. Verbose output includes detailed IPAM state (allocated addresses), Cilium controller status, and details of the Proxy status. Logs¶ To retrieve log files of a cilium pod, run (replace cilium-1234 with a pod name returned by kubectl -n kube-system get pods -l k8s-app=cilium) $ kubectl -n kube-system logs --timestamps cilium-1234 If the cilium pod was already restarted due to the liveness problem after encountering an issue, it can be useful to retrieve the logs of the pod before the last restart: $ kubectl -n kube-system logs --timestamps -p cilium-1234 Generic¶ When logged in a host running Cilium, the cilium CLI can be invoked directly, e.g.: $. It needs to be enabled via the Helm value global.hubble.enabled=true or the --enable-hubble option on cilium-agent. 
Jun 2 11:14:46.041 default/tiefighter:38314 kube-system/coredns-66bff467f8-ktk8c:53 to-endpoint FORWARDED UDP Jun 2 11:14:46.041 kube-system/coredns-66bff467f8-ktk8c:53 default/tiefighter:38314 to-endpoint FORWARDED UDP Jun 2 11:14:46.041 default/tiefighter:38314 kube-system/coredns-66bff467f8-ktk8c:53 to-endpoint FORWARDED UDP Jun 2 11:14:46.042 kube-system/coredns-66bff467f8-ktk8c:53 default/tiefighter:38314 to-endpoint FORWARDED UDP Jun 2 11:14:46.042 default/tiefighter:57746 default/deathstar-5b7489bc84-9bftc:80 L3-L4 FORWARDED TCP Flags: SYN Jun 2 11:14:46.042 default/tiefighter:57746 default/deathstar-5b7489bc84-9bftc:80 to-endpoint FORWARDED TCP Flags: SYN Jun 2 11:14:46.042 default/deathstar-5b7489bc84-9bftc:80 default/tiefighter:57746 to-endpoint FORWARDED TCP Flags: SYN, ACK Jun 2 11:14:46.042 default/tiefighter:57746 default/deathstar-5b7489bc84-9bftc:80 to-endpoint FORWARDED TCP Flags: ACK Jun 2 11:14:46.043 default/tiefighter:57746 default/deathstar-5b7489bc84-9bftc:80 to-endpoint FORWARDED TCP Flags: ACK, PSH Jun 2 11:14:46.043 default/deathstar-5b7489bc84-9bftc:80 default/tiefighter:57746 to-endpoint FORWARDED TCP Flags: ACK, PSH Jun 2 11:14:46.043 default/tiefighter:57746 default/deathstar-5b7489bc84-9bftc:80 to-endpoint FORWARDED TCP Flags: ACK, FIN Jun 2 11:14:46.048 default/deathstar-5b7489bc84-9bftc:80 default/tiefighter:57746 to-endpoint FORWARDED TCP Flags: ACK, FIN Jun 2 11:14:46.048 default/tiefighter:57746 default/deathstar-5b7489bc84-9bftc:80 to-endpoint FORWARDED TCP Flags: ACK You may also use -o json to obtain more detailed information about each flow event. In the following example the first command extracts the numeric security identities for all dropped flows which originated in the default/xwing pod in the last three minutes. The numeric security identity can then be used together with the Cilium CLI to obtain more information about why flow was dropped: $ kubectl exec -n kube-system cilium-77lk6 -- \ hubble observe --since 3m --type drop --from-pod default/xwing -o json | \ jq .destination.identity | sort -u 788 $ kubectl exec -n kube-system cilium-77lk6 -- \ cilium policy trace --src-k8s-pod default:xwing --dst-identity 788 ---------------------------------------------------------------- Tracing From: [k8s:class=xwing, k8s:io.cilium.k8s.policy.cluster=default, k8s:io.cilium.k8s.policy.serviceaccount=default, k8s:io.kubernetes.pod.namespace=default, k8s:org=alliance] => To: [k8s:class=deathstar, k8s:io.cilium.k8s.policy.cluster=default, k8s:io.cilium.k8s.policy.serviceaccount=default, k8s:io.kubernetes.pod.namespace=default, k8s:org=empire] Ports: [0/ANY] Resolving ingress policy for [k8s:class=deathstar k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire] * Rule {"matchLabels":{"any:class":"deathstar","any:org":"empire","k8s:io.kubernetes.pod.namespace":"default"}}: selected Allows from labels {"matchLabels":{"any:org":"empire","k8s:io.kubernetes.pod.namespace":"default"}} No label match for [k8s:class=xwing k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=alliance] 1/1 rules selected Found no allow rule Ingress verdict: denied Final verdict: DENIED Please refer to the policy troubleshooting guide for more detail about how to troubleshoot policy related drops. 
Note Hubble Relay (beta) allows you to query multiple Hubble instances simultaneously without having to first manually target a specific node. See Observing flows with Hubble Relay for more information. Ensure Hubble is running correctly¶ To ensure the Hubble client can connect to the Hubble server running inside Cilium, you may use the hubble status command: $ hubble status Healthcheck (via unix:///var/run/cilium/hubble.sock): Ok Max Flows: 4096 Current Flows: 2542 (62.06%) cilium-agent must be running with the --enable-hubble option in order for the Hubble server to be enabled. When deploying Cilium with Helm, make sure to set the global.hubble.enabled=true value. To check if Hubble is enabled in your deployment, you may look for the following output in cilium status: $ cilium status ... Hubble: Ok Current/Max Flows: 2542/4096 (62.06%), Flows/s: 164.21 Metrics: Disabled ... Note Pods need to be managed by Cilium in order to be observable by Hubble. See how to ensure a pod is managed by Cilium for more details. Observing flows with Hubble Relay¶ Note Hubble Relay is beta software and as such is not yet considered production ready. Hubble Relay is a service which allows to query multiple Hubble instances simultaneously and aggregate the results. As Hubble Relay relies on individual Hubble instances, Hubble needs to be enabled when deploying Cilium. In addition, the Hubble service needs to be exposed on TCP port 4244. This can be done via the Helm values global.hubble.enabled=true and global.hubble.listenAddress=":4244" or the --enable-hubble --hubble-listen-address :4244 options on cilium-agent. Note Enabling Hubble to listen on TCP port 4244 globally has security implications as the service can be accessed without any restriction. Hubble Relay can be deployed using Helm by setting global.hubble.relay.enabled=true. This will deploy Hubble Relay with one replica by default. Once the Hubble Relay pod is running, you may access the service by port-forwarding it: $ kubectl -n kube-system port-forward service/hubble-relay 4245:80 This will forward the Hubble Relay service port ( 80) to your local machine on port 4245. The next step consists of downloading the latest binary release of Hubble CLI from the GitHub release page. Make sure to download the tarball for your platform, verify the checksum and extract the hubble binary from the tarball. Optionally, add the binary to your $PATH if using Linux or MacOS or your %PATH% if using Windows. You can verify that Hubble Relay can be reached by running the following command: $ hubble status --server localhost:4245 This command should return an output similar to the following: Healthcheck (via localhost:4245): Ok Max Flows: 16384 Current Flows: 16384 (100.00%) For convenience, you may set and export the HUBBLE_DEFAULT_SOCKET_PATH environment variable: $ export HUBBLE_DEFAULT_SOCKET_PATH=localhost:4245 This will allow you to use hubbble status and hubble observe commands without having to specify the server address via the --server flag.. 
$ kubectl -n kube-system exec -ti cilium-2hq5z -- cilium-health status Probe time: 2018-06-16T09:51:58Z Nodes: ip-172-0-52-116.us-west-2.compute.internal (localhost): Host connectivity to 172.0.52.116: ICMP to stack: OK, RTT=315.254µs HTTP to agent: OK, RTT=368.579µs Endpoint connectivity to 10.2.0.183: ICMP to stack: OK, RTT=190.658µs HTTP to agent: OK, RTT=536.665µs ip-172-0-117-198.us-west-2.compute.internal: Host connectivity to 172.0.117.198: ICMP to stack: OK, RTT=1.009679ms HTTP to agent: OK, RTT=1.808628ms Endpoint connectivity to 10.2.1.234: ICMP to stack: OK, RTT=1.016365ms HTTP to agent: OK, RTT=2.29877ms For each node, the connectivity will be displayed for each protocol and path, both to the node itself and to an endpoint on that node. The latency specified is a snapshot at the last time a probe was run, which is typically once per minute. The ICMP connectivity row represents Layer 3 connectivity to the networking stack, while the HTTP connectivity row represents connection to an instance of the cilium-health agent running on the host or as an endpoint. Monitoring Datapath State¶ Sometimes you may experience broken connectivity, which may be due to a number of different causes. A main cause can be unwanted packet drops on the networking level. The tool cilium monitor allows you to quickly inspect and see if and where packet drops happen. Following is an example output (use kubectl exec as in previous examples if running with Kubernetes): $ kubectl -n kube-system exec -ti cilium-2hq5z -- cilium monitor --type drop Listening for events on 2 CPUs with 64x4096 of shared memory Press Ctrl-C to quit: 10.11.13.37 -> 10.11.101.61 EchoRequest xx drop (Policy denied). Handling drop (CT: Map insertion failed)¶ If connectivity fails and cilium monitor --type drop shows xx drop (CT: Map insertion failed), then it is likely that the connection tracking table is filling up and the automatic adjustment of the garbage collector interval is insufficient. Set --conntrack-gc-interval to an interval lower than the default. Alternatively, the value for bpf-ct-global-any-max and bpf-ct-global-tcp-max can be increased. Setting both of these options will be a trade-off of CPU for conntrack-gc-interval, and for bpf-ct-global-any-max and bpf-ct-global-tcp-max the amount of memory consumed. Policy Troubleshooting¶ Ensure pod is managed by Cilium¶ A potential cause for policy enforcement not functioning as expected is that the networking of the pod selected by the policy is not being managed by Cilium. The following situations result in unmanaged pods: -. If pod networking is not managed by Cilium. Ingress and egress policy rules selecting the respective pods will not be applied. See the section Network Policy for more details. You can run the following script to list the pods which are not managed by Cilium: $ ./contrib/k8s See section Policy Tracing for details and examples on how to use the policy tracing feature. Understand the rendering of your policy¶ There are always multiple ways to approach a problem. Cilium can provide the rendering of the aggregate policy provided to it, leaving you to simply compare with what you expect the policy to actually be rather than search (and potentially overlook) every policy. At the expense of reading a very large dump of an endpoint, this is often a faster path to discovering errant policy requests in the Kubernetes API. Start by finding the endpoint you are debugging from the following list. 
There are several cross references for you to use in this list, including the IP address and pod labels: kubectl -n kube-system exec -ti cilium-q8wvt -- cilium endpoint list When you find the correct endpoint, the first column of every row is the endpoint ID. Use that to dump the full endpoint information: kubectl -n kube-system exec -ti cilium-q8wvt -- cilium endpoint get 59084 Importing this dump into a JSON-friendly editor can help browse and navigate the information here. At the top level of the dump, there are two nodes of note: spec: The desired state of the endpoint status: The current state of the endpoint This is the standard Kubernetes control loop pattern. Cilium is the controller here, and it is iteratively working to bring the status in line with the spec. Opening the status, we can drill down through policy.realized.l4. Do your ingress and egress rules match what you expect? If not, the reference to the errant rules can be found in the derived-from-rules node. policy drops Symptom Library Encapsulation 8472. When running in Native-Routing mode: - Run ip routeor check your cloud provider router and verify that you have routes installed to route the endpoint prefix between all nodes. - Verify that the firewall on each node permits to route the endpoint IPs. Useful Scripts¶ Retrieve Cilium pod managing a particular pod¶ Identifies the Cilium pod that is managing a particular pod in a namespace: k8s-get-cilium-pod.sh <pod> <namespace> Example: $ curl -sLO/k8s-cilium-exec.sh $ ./k8s-cilium-exec.sh uptime 10:15:16 up 6 days, 7:37, 0 users, load average: 0.00, 0.02, 0.00 10:15:16 up 6 days, 7:32, 0 users, load average: 0.00, 0.03, 0.04 10:15:16 up 6 days, 7:30, 0 users, load average: 0.75, 0.27, 0.15 10:15:16 up 6 days, 7:28, 0 users, load average: 0.14, 0.04, 0.01 List unmanaged Kubernetes pods¶ Lists all Kubernetes pods in the cluster for which Cilium does not provide networking. This includes pods running in host-networking mode and pods that were started before Cilium was deployed. k8s-unmanaged.sh Example: $ curl -sLO releases.cilium.io/v1.1.0/tools/k8s-unmanaged.sh $ . Reporting a problem¶ list of prerequisites: - Requires Python >= 2.7.* - Requires kubectl. kubectlshould be pointing to your cluster before running the tool. You can download the latest version of the cilium-sysdump tool using the following command: curl -sLO python cilium-sysdump.zip You can specify from which nodes to collect the system dumps by passing node IP addresses via the --nodes argument: python cilium-sysdump.zip --nodes=$NODE1_IP,$NODE2_IP2 Use --help to see more options: python cilium-sysdump.zip --help Single Node Bugtool¶ If you are not running Kubernetes, it is also possible to run the bug collection tool manually with the scope of a single node:. Note that the command needs to be run from inside the Cilium pod/container. $ - … Debugging information¶ If you are not running Kubernetes, you can use the cilium debuginfo command to retrieve useful debugging information. If you are running Kubernetes, this command is automatically run as part of the system dump.. Slack Assistance¶ The Cilium slack community is helpful first point of assistance to get help troubleshooting a problem or to discuss options on how to address a problem. The slack community is open to everyone. You can request an invite email by visiting Slack. 
Report an issue via GitHub¶ If you believe to have found an issue in Cilium, please report a GitHub issue and make sure to attach a system dump as described above to ensure that developers have the best chance to reproduce the issue.
https://docs.cilium.io/en/v1.8/troubleshooting/
2020-09-18T17:18:12
CC-MAIN-2020-40
1600400188049.8
[array(['../_images/troubleshooting_policy.png', '../_images/troubleshooting_policy.png'], dtype=object)]
docs.cilium.io
Configuring Dynamic Routes

Routing Tables in NetScaler

NS Kernel Routing Table. The entries in this table are used by the NetScaler in packet forwarding. From the NetScaler command line (CLI), they can be inspected with the show route command.

FreeBSD Routing Table.

Network Services Module (NSM) FIB. The NSM FIB routing table contains the advertisable routes that are distributed by the dynamic routing protocols to their peers in the network. It may contain:
- Connected routes. IP subnets that are directly reachable from the NetScaler.
- Static routes configured on the NetScaler CLI that have the -advertise option enabled. Alternatively, if the NetScaler is operating in Static Route Advertisement (SRADV) mode, all static routes configured on the NetScaler CLI.

Black Hole Avoidance Mechanism.

Interfaces for Configuring Dynamic Routing

To configure dynamic routing, you can use either the NetScaler GUI or the NetScaler command line (CLI). Note: Citrix recommends that you use VTYSH for all commands except those that can be configured only on the NetScaler CLI. For details specific to the NetScaler appliance, see the dynamic routing protocol reference guides and unsupported commands.
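As a rough sketch of the two interfaces mentioned above: the show route and vtysh commands are named in the text, while the prompts and the configure terminal / router ospf lines are standard VTYSH (ZebOS) conventions assumed here; consult the dynamic routing protocol reference guides for the exact commands supported on your release.

> show route
> vtysh
ns# configure terminal
ns(config)# router ospf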
https://docs.citrix.com/en-us/netscaler/12/networking/ip-routing/configuring-dynamic-routes.html
2020-09-18T18:09:49
CC-MAIN-2020-40
1600400188049.8
[]
docs.citrix.com
lasote/conangcc Docker images.

# The default profile is automatically adjusted to armv7hf
$ cat ~/.conan/profiles/default
[settings]
os=Linux
os_build=Linux
arch=armv7hf
arch_build=x86_64
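As a sketch of how one of these images might be used (the exact image tag below is an assumption; substitute whichever lasote/conangcc variant matches your target toolchain):

$ docker run --rm -it lasote/conangcc5-armv7hf /bin/bash
$ cat ~/.conan/profiles/default    (run inside the container; shows the armv7hf profile listed above)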
https://docs.conan.io/en/1.3/howtos/run_conan_in_docker.html
2020-09-18T17:10:34
CC-MAIN-2020-40
1600400188049.8
[]
docs.conan.io
Knowi enables data discovery, querying, visualization and reporting automation from Redshift along with other unstructured and structured datasources. Overview Connect, extract and transform data from your Redshift, using one of the following options: - Using our Cloud9Agent. See configuration details here. - Through our UI to connect directly, if your Redshift servers are accessible from the cloud. Visualize and Automate your Reporting instantly. UI If you are not a Knowi user, check out our Redshift Instant Reporting page to get started. Connecting The following GIF image shows how to connect to Redshift. Login to Knowi and select the settings icon from left-hand menu pane. Click on Redshift. Either follow the prompts to set up connectivity to your own Redshift database, or, use the pre-configured settings into Knowi's own demo Redshift database to see how it works. If you connecting through an agent, check Internal Datasource to assign it to your agent. The agent (running inside your network) will synchronize it automatically. When connecting from the UI directly to your Redshift database, please follow the connectivity instructions to allow Knowi to access your Redshift. Save the Connection. Click on the "Configure Queries" link on the success bar. Queries & Reports Set up Query to execute. Report Name: Specify a name for the report. Queries can be auto-generated using our Data Discovery & Query Generator feature. You can also enter Redshift queries directly and post process the results optionally: Redshift Query: Modify or enter SQL queries directly. Cloud9QL: Optional SQL-Like post processor for the data returned by the SQL query. See Cloud9QL Docs for more details. Click 'Preview' to see the results. Scheduling: Configure how often this should be run. Select 'None' for a one-time operation. The results are stored within Knowi.' to access dashboards. You can drag and drop the newly created report from the widget list into to the dashboard. Cloud9Agent (StandAlone) Configuration As an alternative to the UI based connectivity above, you can configure Cloud9Agent directly within your network (instead of the UI) to query Redshift. See Cloud9Agent to download and run your agent. Highlights: - Pull data using SQL (and optionally manipulate the results further with Cloud9QL). - Execute queries on a schedule, or, one time. The agent contains a datasource_example_redshift.json and query_example_redshiftoRedshift", "url":"localhost:5432/cloud9demo", "datasource":"redshift", "userId":"cloud9demo", "password":"cloud92014" } ] Query Examples: [ { /* This runs every 10 minutes, on the datasource defined in the any of the files that begin with datasource_ */ "entityName":"Sent", "dsName":"demoRedshift", "queryStr":"select * from sent", "frequencyType":"minute", "frequency":10, "overrideVals":{ /* Tells Knowi to replace all existing values for this dataset with this one */ "replaceAll":true } } ] The query is run every 10 minutes at the top of the hour and replaces all data for that dataset in Knowi.
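For illustration, a second query entry might aggregate the same demo table on an hourly schedule. The fields mirror the query example above; the sent_date column name is hypothetical and would need to match your actual Redshift schema.

[
  {
    /* Hourly aggregate over the demo 'sent' table; replaces the dataset on each run */
    "entityName":"Sent Daily Counts",
    "dsName":"demoRedshift",
    "queryStr":"select date_trunc('day', sent_date) as day, count(*) as total from sent group by 1",
    "frequencyType":"hour",
    "frequency":1,
    "overrideVals":{
      "replaceAll":true
    }
  }
]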
https://docs.knowi.com/hc/en-us/articles/115006208267-Redshift
2020-09-18T17:22:35
CC-MAIN-2020-40
1600400188049.8
[array(['https://knowi.com/images/docs/redshift-connect.gif', 'Redshift Connect'], dtype=object) ]
docs.knowi.com
You can open other pages from many many places within pages. Pages can be opened either in the content pane of the browser or in a new pop-up window. Where the page is opened depends on the layout type of its layout. Pages with the layout type Modal pop-up or Pop-up will open as a pop-up window, and other pages will be opened in the content. If the target page layout is of the type Legacy, then the page location must be configured manually (for details, see the Location section below). If the target page contains a data view with a page parameter data source, from which the page is opened. For example, a page that is opened by the Create button must contain a data view that is connected to the same entity as the grid. Page title By default the title of the page is taken from the title property of the selected page. You can replace this title with a custom title if necessary. This feature allows you to re-use the same page for the New and Edit buttons of a data grid. By simply setting the titles to, for example, ‘New customer’ and ‘Edit customer’, you can save yourself the trouble of duplicating the rest of the page. Location If the layout of the target page has a layout type configured, the Location property will be unavailable. Instead, the layout type will determine how the page is opened. This eliminates the risk of accidentally modeling a pop-up form with a huge menu bar. This property indicates where the page is shown. Default value: Pop-up
https://docs.mendix.com/refguide7/opening-pages
2020-09-18T16:57:25
CC-MAIN-2020-40
1600400188049.8
[]
docs.mendix.com
License Key Activation License key activation is a necessary action. Because activating your license key helps you receive regular theme update and theme support. You can find your theme’s license key from the mail that we’ve sent you after purchasing our theme or you can find them in your Themebeez Account. Follow the instructions below to activate your license key. - On your WordPress dashboard, click on the License Manager. - Fill in the license key in the field below Cream Magazine Pro License. - Then click on the Save button. If your license key does not activate, then contact us for the support.
https://docs.themebeez.com/kb/cream-magazine-pro/getting-started/license-key-activation/
2020-09-18T17:09:50
CC-MAIN-2020-40
1600400188049.8
[]
docs.themebeez.com
To find User Manual pages on new and updated features in Unity 2020.2, click on the link above or search for “NewIn20202”. This lists the Unity 2020.2 User Manual pages that cover new and updated functionality. To find out more about the new features, changes, and improvements in this Unity version, see the 2020.2 Release Notes. If you are upgrading existing projects from an earlier version to 2020.2, read the Upgrade Guide to 2020.2 for information about how your project may be affected.
https://docs.unity3d.com/2020.2/Documentation/Manual/WhatsNew20202.html
2020-09-18T18:10:41
CC-MAIN-2020-40
1600400188049.8
[]
docs.unity3d.com
This module defines functions related to exceptions and general error handling. It also defines functions intended to aid in unit testing. import core.stdc.stdlib : malloc, free; import std.algorithm.comparison : equal; import std.algorithm.iteration : map, splitter; import std.algorithm.searching : endsWith; import std.conv : ConvException, to; import std.range : front, retro; //); // collectException can be used to test for exceptions Exception e = collectException("abc".to!int); assert(e.file.endsWith("conv.d")); // and just for the exception message string msg = collectExceptionMsg("abc".to!int); writeln(msg); // "Unexpected 'a' when converting from type string to type int" // assertThrown can be used to assert that an exception is thrown assertThrown!ConvException("abc".to!int); // ifThrown can be used to provide a default value if an exception is thrown writeln("x".to!int().ifThrown(0)); // 0 // handle is a more advanced version of ifThrown for ranges auto r = "12,1337z32,54".splitter(',').map!(a => to!int(a)); auto h = r.handle!(ConvException, RangePrimitive.front, (e, r) => 0); assert(h.equal([12, 0, 54])); assertThrown!ConvException(h.retro.equal([54, 0, 12])); // basicExceptionCtors avoids the boilerplate when creating custom exceptions static class MeaCulpa : Exception { mixin basicExceptionCtors; } e = collectException((){throw new MeaCulpa("diagnostic message");}()); writeln(e.msg); // "diagnostic message" writeln(e.file); // __FILE__ writeln(e.line); // __LINE__ - 3 // assumeWontThrow can be used to cast throwing code into `nothrow` void exceptionFreeCode() nothrow { // auto-decoding only throws if an invalid UTF char is given assumeWontThrow("abc".front); } // assumeUnique can be used to cast mutable instance to an `immutable` one // use with care char[] str = " mutable".dup; str[0 .. 2] = "im"; immutable res = assumeUnique(str); writeln(res); // "immutable" Asserts that the given expression does not throw the given type of Throwable. If a Throwable of the given type is thrown, it is caught and does not escape assertNotThrown. Rather, an AssertError is thrown. However, any other Throwables will escape. AssertErrorif the given Throwableis thrown. expression.!`); Asserts that the given expression throws the given type of Throwable. The Throwable is caught and does not escape assertThrown. However, any other Throwables will escape, and if no Throwable of the given type is thrown, then an AssertError is thrown. AssertErrorif the given Throwableis not thrown..`); Enforces that the given value is true. If the given value is false, an exception is thrown. The msg- error message as a string dg- custom delegate that return a string and is only called if an exception occurred ex- custom exception to be thrown. It is lazyand is only created if an exception occurred value, if cast(bool) valueis true. Otherwise, depending on the chosen overload, new Exception(msg), dg()or exis thrown. enforceis used to throw exceptions and is therefore intended to aid in error handling. It is not intended for verifying the logic of your program. That is what assertis for. Also, do not use enforceinside of contracts (i.e. inside of inand outblocks and invariants), because contracts are compiled out when compiling with -release. Dg's safety and purity. 
import core.stdc.stdlib : malloc, free; import std.conv : ConvException, to; //); assertNotThrown(enforce(true, new Exception("this should not be thrown"))); assertThrown(enforce(false, new Exception("this should be thrown"))); writeln(enforce(123)); // 123 try { enforce(false, "error"); assert(false); } catch (Exception e) { writeln(e.msg); // "error" writeln(e.file); // __FILE__ writeln(e.line); // __LINE__ - 7 } import std.conv : ConvException; alias convEnforce = enforce!ConvException; assertNotThrown(convEnforce(true)); assertThrown!ConvException(convEnforce(false, "blah")); Enforces that the given value is true, throwing an ErrnoException if it is not. value, if cast(bool) valueis true. Otherwise, new ErrnoException(msg)is thrown. It is assumed that the last operation set errnoto an error code corresponding with the failed condition. import core.stdc.stdio : fclose, fgets, fopen; auto f = fopen(__FILE_FULL_PATH__, "r").errnoEnforce; scope(exit) fclose(f); char[100] buf; auto line = fgets(buf.ptr, buf.length, f); enforce(line !is null); // expect a non-empty line Deprecated. Please use enforce instead. This function will be removed 2.089. If !value is false, value is returned. Otherwise, new E(msg, file, line) is thrown. Or if E doesn't take a message and can be constructed with new E(file, line), then new E(file, line) will be thrown. auto f = enforceEx!FileMissingException(fopen("data.txt")); auto line = readln(f); enforceEx!DataCorruptionException(line.length); Ditto Catches and returns the exception thrown from the given expression. If no exception is thrown, then null is returned and result is set to the result of the expression. b; int foo() { throw new Exception("blah"); } assert(collectException(foo(), b)); int[] a = new int[3]; import core.exception : RangeError; assert(collectException!RangeError(a[4], b)); Catches and returns the exception thrown from the given expression. If no exception is thrown, then null is returned. E can be void. foo() { throw new Exception("blah"); } writeln(collectException(foo()).msg); // "blah" Value that collectExceptionMsg returns when it catches an exception with an empty exception message. Casts a mutable array to an immutable array in an idiomatic manner. Technically, assumeUnique just returned by assumeUnique. Typically, assumeUnique is used to return arrays from functions that have allocated and built them. string letters() { char[] result = new char['z' - 'a' + 1]; foreach (i, ref e; result) { e = cast(char)('a' + i); } return assumeUnique(result); }The use in the example above is correct because resultwas private to lettersand is inaccessible in writing after the function returns. The following example shows an incorrect use of assumeUnique.. int[] arr = new int[1]; auto arr1 = arr.assumeUnique; static assert(is(typeof(arr1) == immutable(int)[])); writeln(arr); // null writeln(arr1); // [0] int[string] arr = ["a":1]; auto arr1 = arr.assumeUnique; static assert(is(typeof(arr1) == immutable(int[string]))); writeln(arr); // null writeln(arr1.keys); // ["a"].) expr, if any.)); } writeln(computeLength(3, 4)); // 5 Checks whether a given source object contains pointers or references to a given target object. @nogcbecause inference could fail, see issue 17084. positives or false negatives: doesPointTowill return trueif it is absolutely certain sourcepoints to target. It may produce false negatives, but never false positives. This function should be prefered when trying to validate input data. 
mayPointTowill return falseif it is absolutely certain sourcedoes not point to target. It may produce false positives, but never false negatives. This function should be prefered for defensively choosing a code path. doesPointTo(x, x)checks whether xhas internal pointers. This should only be done as an assertive test, as the language is free to assume objects don't have internal pointers (TDPL 7.1.3.5). int i = 0; int* p = null; assert(!p.doesPointTo(i)); p = &i; assert( p.doesPointTo(i)); int i; int* p = &i; // trick the compiler when initializing slicep; int[] slice = [0, 1, 2, 3, 4]; int[5] arr = [0, 1, 2, 3, 4]; int*[] slicep = [p];: () { import std.traits : Fields; Thrown if errors that set errno occur. import core.stdc.errno : EAGAIN; auto ex = new ErrnoException("oh no", EAGAIN); writeln(ex.errno); // EAGAIN import core.stdc.errno : errno, EAGAIN; auto old = errno; scope(exit) errno = old; // fake that errno got set by the callee errno = EAGAIN; auto ex = new ErrnoException("oh no"); writeln(ex.errno); // EAGAIN Operating system error code. Constructor which takes an error message. The current global core.stdc.errno.errno value is used as error code. Constructor which takes an error message and error code. ML-style functional exception handling. Runs the supplied expression and returns its result. If the expression throws a Throwable, runs the supplied error handler instead and return its result. The error handler's type must be the same as the expression's type. import std.conv : to; writeln("x".to!int.ifThrown(0)); // 0 import std.conv : ConvException, to; string s = "true"; assert(s.to!int.ifThrown(cast(int) s.to!double) .ifThrown(cast(int) s.to!bool) == 1); s = "2.0"; assert(s.to!int.ifThrown(cast(int) s.to!double) .ifThrown(cast(int) s.to!bool) == 2); // Respond differently to different types of errors alias orFallback = (lazy a) => a.ifThrown!ConvException("not a number") .ifThrown!Exception("number too small"); writeln(orFallback(enforce("x".to!int < 1).to!string)); // "not a number" writeln(orFallback(enforce("2".to!int < 1).to!string)); // "number too small" //))); import std.format : format; // "std.format.FormatException" writeln("%s".format.ifThrown!Exception(e => e.classinfo.name));. structthat preserves the range interface of input. std.range.Takewhen sliced with a specific lower and upper bound (see std.range.primitives.hasSlicing); handledeals with this by takeing 0 from the return value of the handler function and returning that when an exception is caught.` Convenience mixin for trivially sub-classing exceptions Even. class MeaCulpa: Exception { /// mixin basicExceptionCtors; } try throw new MeaCulpa("test"); catch (MeaCulpa e) { writeln(e.msg); // "test" writeln(e.file); // __FILE__ writeln(e.line); // __LINE__ - 5 } © 1999–2019 The D Language Foundation Licensed under the Boost License 1.0.
https://docs.w3cub.com/d/std_exception/
2020-09-18T16:51:29
CC-MAIN-2020-40
1600400188049.8
[]
docs.w3cub.com
Event handlers define the business logic to be performed when an event is received. Event processors are the components that take care of the technical aspects of that processing. They start a unit of work and possibly a transaction. However, they also ensure that correlation data can be correctly attached to all messages created during event processing. A representation of the organization of Event Processors and Event Handlers is depicted below.

Event Processors come in roughly two forms: Subscribing and Tracking. Subscribing Event Processors subscribe themselves to a source of Events and are invoked by the thread managed by the publishing mechanism. Tracking Event Processors, on the other hand, pull their messages from a source using a thread that they manage themselves.

All processors have a name, which identifies a processor instance across JVM instances. Two processors with the same name are considered to be two instances of the same processor. All event handlers are attached to a processor whose name is the package name of the Event Handler's class. For example, the following classes: org.axonframework.example.eventhandling.MyHandler, org.axonframework.example.eventhandling.MyOtherHandler, and org.axonframework.example.eventhandling.module.MyHandler will trigger the creation of two processors: org.axonframework.example.eventhandling with 2 handlers, and org.axonframework.example.eventhandling.module with a single handler. The Configuration API allows you to configure other strategies for assigning classes to processors, or even assign specific handler instances to specific processors.

To order Event Handlers within an Event Processor, the order in which Event Handlers are registered (as described in the Registering Event Handlers section) is guiding. Thus, the ordering in which Event Handlers will be called by an Event Processor for Event Handling is the same as their insertion ordering in the configuration API. If Spring is selected as the mechanism to wire everything, the ordering of the Event Handlers can be explicitly specified by adding the @Order annotation. This annotation should be placed at the class level of your event handler class, and an integer value should be provided to specify the ordering. Note that it is not possible to order event handlers which are not a part of the same event processor.

Processors take care of the technical aspects of handling an event, regardless of the business logic triggered by each event. However, the way "regular" (singleton, stateless) event handlers are configured is slightly different from sagas, as different aspects are important for both types of handlers. By default, Axon will use Tracking Event Processors. It is possible to change how handlers are assigned and how processors are configured. The EventProcessingConfigurer class defines a number of methods that can be used to define how processors need to be configured. registerEventProcessorFactory allows you to define a default factory method that creates event processors for which no explicit factories have been defined. registerEventProcessor(String name, EventProcessorBuilder builder) defines the factory method to use to create a processor with given name. Note that such a processor is only created if name is chosen as the processor for any of the available event handler beans. registerTrackingEventProcessor(String name) defines that a processor with given name should be configured as a tracking event processor, using default settings.
It is configured with a TransactionManager and a TokenStore, both taken from the main configuration by default. registerTrackingProcessor(String name, Function<Configuration, StreamableMessageSource<TrackedEventMessage<?>>> source, Function<Configuration, TrackingEventProcessorConfiguration> processorConfiguration) defines that a processor with given name should be configured as a tracking processor, and use the given TrackingEventProcessorConfiguration to read the configuration settings for multi-threading. The StreamableMessageSource defines an event source from which this processor should pull events. usingSubscribingEventProcessors() sets the default to subscribing event processors instead of tracking ones.

// Default all processors to subscribing mode.
Configurer configurer = DefaultConfigurer.defaultConfiguration()
        .eventProcessing(eventProcessingConfigurer -> eventProcessingConfigurer.usingSubscribingEventProcessors())
        .eventProcessing(eventProcessingConfigurer -> eventProcessingConfigurer.registerEventHandler(c -> new MyEventHandler()));

@Autowired
public void configure(EventProcessingConfigurer config) {
    config.usingSubscribingEventProcessors();
}

Certain aspects of event processors can also be configured in application.properties.

axon.eventhandling.processors.name.mode=subscribing
axon.eventhandling.processors.name.source=eventBus

If the name of an event processor contains periods ., use the map notation:

axon.eventhandling.processors[name].mode=subscribing
axon.eventhandling.processors[name].source=eventBus

You can configure a Tracking Event Processor to use multiple sources when processing events. This is useful for compiling metrics across domains or simply when your events are distributed between multiple event stores. Having multiple sources means that there might be a choice of multiple events that the processor could consume at any given instant. Therefore, you can specify a Comparator to choose between them. The default implementation chooses the event with the oldest timestamp (i.e. the event waiting the longest). Multiple sources also means that the tracking processor's polling interval needs to be divided between sources using a strategy to optimize event discovery and minimize overhead in establishing costly connections to the data sources. Therefore, you can choose which source the majority of the polling is done on using the longPollingSource() method in the builder. This ensures one source consumes most of the polling interval whilst also checking intermittently for events on the other sources. The default longPollingSource is the last configured source. Create a MultiStreamableMessageSource using its builder() and register it as the message source when calling EventProcessingConfigurer.registerTrackingEventProcessor(). For example:

@Bean
public MultiStreamableMessageSource multiStreamableMessageSource(EventStore eventSourceA, EventStore eventStoreB) {
    return MultiStreamableMessageSource.builder()
            .addMessageSource("eventSourceA", eventSourceA)
            .addMessageSource("eventStoreB", eventStoreB)
            .longPollingSource("eventSourceA")
            .build();
}

@Autowired
public void configure(EventProcessingConfigurer config, MultiStreamableMessageSource multiStreamableMessageSource) {
    config.registerTrackingEventProcessor("NameOfEventProcessor", c -> multiStreamableMessageSource);
}

It is possible to change how Sagas are configured.
Sagas are configured using the SagaConfigurer class:

Configurer configurer = DefaultConfigurer.defaultConfiguration()
        .eventProcessing(eventProcessingConfigurer -> eventProcessingConfigurer.registerSaga(
                MySaga.class,
                sagaConfigurer -> sagaConfigurer.configureSagaStore(c -> sagaStore)
                                                .configureRepository(c -> repository)
                                                .configureSagaManager(c -> manager)));

The configuration of infrastructure components to operate sagas is triggered by the @Saga annotation (in the org.axonframework.spring.stereotype package). Axon will then configure a SagaManager and SagaRepository. The SagaRepository will use a SagaStore available in the context (defaulting to JPASagaStore if JPA is found) for the actual storage of sagas. To use different SagaStores for sagas, provide the bean name of the SagaStore to use in the sagaStore attribute of each @Saga annotation. Sagas will have resources injected from the application context. Note that this does not mean Spring-injecting is used to inject these resources. The @Autowired and @javax.inject.Inject annotations can be used to demarcate dependencies, but they are injected by Axon by looking for these annotations on fields and methods. Constructor injection is not (yet) supported. To tune the configuration of sagas, it is possible to define a custom SagaConfiguration bean. For an annotated saga class, Axon will attempt to find a configuration for that saga. It does so by checking for a bean of type SagaConfiguration with a specific name. For a saga class called MySaga, the bean that Axon looks for is mySagaConfiguration. If no such bean is found, it creates a configuration based on available components. If a SagaConfiguration instance is present for an annotated saga, that configuration will be used to retrieve and register components for sagas of that type. If the SagaConfiguration bean is not named as described above, it is possible that the saga will be registered twice. It will then receive events in duplicate. To prevent this, you can specify the bean name of the SagaConfiguration using the @Saga annotation:

@Autowired
public void configureSagaEventProcessing(EventProcessingConfigurer config) {
    config.registerTokenStore("MySagaProcessor", c -> new MySagaCustomTokenStore());
}

Errors are inevitable. Depending on where they happen, you may want to respond differently. By default, exceptions raised by event handlers are logged and processing continues with the next events. When an exception is thrown while a processor is trying to commit a transaction, update a token, or in any other part of the process, the exception will be propagated. In case of a Tracking Event Processor, this means the processor will go into error mode, releasing any tokens and retrying at an incremental interval (starting at 1 second, up to max 60 seconds). A Subscribing Event Processor will report a publication error to the component that provided the event. To change this behavior, there are two levels at which you can customize how Axon deals with exceptions: the level of the event handler and the level of the event processor. For exceptions raised by event handler methods, the default is that they are logged and processing continues with the next handler or message.
This behavior can be configured per processing group:

eventProcessingConfigurer.registerDefaultListenerInvocationErrorHandler(conf -> /* create error handler */);
// or for a specific processing group:
eventProcessingConfigurer.registerListenerInvocationErrorHandler("processingGroup", conf -> /* create error handler */);

@Autowired
public void configure(EventProcessingConfigurer config) {
    config.registerDefaultListenerInvocationErrorHandler(conf -> /* create error handler */);
    // or for a specific processing group:
    config.registerListenerInvocationErrorHandler("processingGroup", conf -> /* create error handler */);
}

It is easy to implement custom error handling behavior. The error handling method to implement provides the exception, the event that was handled, and a reference to the handler that was handling the message. You can choose to retry, ignore or rethrow the exception. In the latter case, the exception will bubble up to the event processor level. Exceptions that occur outside of the scope of an event handler, or have bubbled up from there, are handled by the ErrorHandler. The default behavior depends on the processor implementation: A TrackingEventProcessor will go into Error Mode. Then, it will retry processing the event using an incremental back-off period. It will start at 1 second and double after each attempt until a maximum wait time of 60 seconds per attempt is achieved. This back-off time ensures that if another node is able to process events successfully, it will have the opportunity to claim the token required to process the event. The SubscribingEventProcessor will have the exception bubble up to the publishing component of the event, allowing it to deal with it accordingly. To customize the behavior, you can configure an error handler on the event processor level.

eventProcessingConfigurer.registerDefaultErrorHandler(conf -> /* create error handler */);
// or for a specific processor:
eventProcessingConfigurer.registerErrorHandler("processorName", conf -> /* create error handler */);

@Autowired
public void configure(EventProcessingConfigurer config) {
    config.registerDefaultErrorHandler(conf -> /* create error handler */);
    // or for a specific processing group:
    config.registerErrorHandler("processingGroup", conf -> /* create error handler */);
}

To implement custom behavior, implement the ErrorHandler's single method. Based on the provided ErrorContext object, you can decide to ignore the error, schedule retries, perform dead-letter-queue delivery or rethrow the exception. Tracking event processors, unlike subscribing ones, need a token store to store their progress in. Each message a tracking processor receives through its event stream is accompanied by a token. This token allows the processor to reopen the stream at any later point, picking up where it left off with the last event. The Configuration API takes the token store, as well as most other components processors need, from the global configuration instance. If no token store is explicitly defined, an InMemoryTokenStore is used, which is not recommended in production. To configure a token store, use the EventProcessingConfigurer to define which implementation to use. To configure a default TokenStore for all processors:

Configurer.eventProcessing().registerTokenStore(conf -> ... create token store ...)

Alternatively, to configure a TokenStore for a specific processor, use:

Configurer.eventProcessing().registerTokenStore("processorName", conf -> ... create token store ...)
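For instance, a JPA-backed registration could look roughly like the sketch below. This is an illustration only: it assumes the JpaTokenStore builder and an EntityManagerProvider are available in your Axon version, and "processorName" is a placeholder.

@Autowired
public void configure(EventProcessingConfigurer epConfig) {
    epConfig.registerTokenStore("processorName", conf ->
            // Store tokens via JPA so they can live in the same database as the projections
            JpaTokenStore.builder()
                         .entityManagerProvider(conf.getComponent(EntityManagerProvider.class))
                         .serializer(conf.serializer())
                         .build());
}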
The default TokenStore implementation is defined based on dependencies available in Spring Boot, in the following order:
- If any TokenStore bean is defined, that bean is used.
- Otherwise, if an EntityManager is available, the JpaTokenStore is defined.
- Otherwise, if a DataSource is defined, the JdbcTokenStore is created.
- Lastly, the InMemoryTokenStore is used.

To override the TokenStore, either define a bean in a Spring @Configuration class:

@Bean
public TokenStore myCustomTokenStore() {
    return new MyCustomTokenStore();
}

Alternatively, inject the EventProcessingConfigurer, which allows more fine-grained customization:

@Autowired
public void configure(EventProcessingConfigurer epConfig) {
    epConfig.registerTokenStore(conf -> new MyCustomTokenStore());
    // or, to define one for a single processor:
    epConfig.registerTokenStore("processorName", conf -> new MyCustomTokenStore());
}

Note that you can override the token store to use with tracking processors in the EventProcessingConfiguration that defines that processor. Where possible, it is recommended to use a token store that stores tokens in the same database as where the event handlers update the view models. This way, changes to the view model can be stored atomically with the changed tokens. This guarantees exactly-once processing semantics.

It is possible to tune the performance of Tracking Event Processors by increasing the number of threads processing events on high load by splitting segments, and reducing the number of threads when load reduces by merging segments. Splitting and merging are allowed at runtime, which allows you to dynamically control the number of segments. This can be done through the Axon Server API or through Axon Framework using the methods splitSegment(int segmentId) and mergeSegment(int segmentId) from TrackingEventProcessor by providing the segmentId of the segment you want to split or merge.

Segment Selection Considerations

When splitting/merging using Axon Server, the most appropriate segment to split or merge is chosen for you. When using the Axon Framework API directly, the segment to split/merge should be deduced by the developer themselves:
- Split: for fair balancing, a split is ideally performed on the biggest segment.
- Merge: for fair balancing, a merge is ideally performed on the smallest segment.

Tracking processors can use multiple threads to process an event stream. They do so by claiming a segment which is identified by a number. Normally, a single thread will process a single segment. The number of segments used can be defined. When an event processor starts for the first time, it can initialize a number of segments. This number defines the maximum number of threads that can process events simultaneously. Each node running a tracking event processor will attempt to start its configured amount of threads to start processing events. Event handlers may have specific expectations on the ordering of events. If this is the case, the processor must ensure these events are sent to these handlers in that specific order. Axon uses the SequencingPolicy for this. The SequencingPolicy is a function that returns a value for any given message. If the return value of the SequencingPolicy function is equal for two distinct event messages, it means that those messages must be processed sequentially. By default, Axon components will use the SequentialPerAggregatePolicy, which makes it so that events published by the same aggregate instance will be handled sequentially. A saga instance is never invoked concurrently by multiple threads.
Therefore, a sequencing policy for a saga is irrelevant. Axon will ensure each saga instance receives the events it needs to process in the order they have been published on the event bus.

Parallel processing and Subscribing Event Processors

Note that subscribing event processors don't manage their own threads. Therefore, it is not possible to configure how they should receive their events. Effectively, they will always work on a sequential-per-aggregate basis, as that is generally the level of concurrency in the command handling component.

DefaultConfigurer.defaultConfiguration()
        .eventProcessing(eventProcessingConfigurer -> eventProcessingConfigurer
                .registerTrackingEventProcessor("myProcessor",
                        c -> c.eventStore(),
                        c -> c.getComponent(TrackingEventProcessorConfiguration.class,
                                () -> TrackingEventProcessorConfiguration.forParallelProcessing(3))));

You can configure the number of threads (on this instance) as well as the initial number of segments that a processor should define, if none are yet available.

axon.eventhandling.processors.name.mode=tracking
# Sets the maximum number of threads to start on this node
axon.eventhandling.processors.name.threadCount=2
# Sets the initial number of segments (i.e. defines the maximum number of overall threads)
axon.eventhandling.processors.name.initialSegmentCount=4

Even though events are processed asynchronously from their publisher, it is often desirable to process certain events in the order they are published. In Axon this is controlled by the SequencingPolicy. The SequencingPolicy defines whether events must be handled sequentially, in parallel or a combination of both. Policies return a sequence identifier for a given event. If the policy returns an equal identifier for two events, this means that they must be handled sequentially: the second event is only handled once processing of the first has finished. The default policy is the SequentialPerAggregatePolicy. To use different behavior, provide your own implementation of the SequencingPolicy interface. This interface defines a single method, getSequenceIdentifierFor, that returns the sequence identifier for a given event. Events for which an equal sequence identifier is returned must be processed sequentially. Events that produce a different sequence identifier may be processed concurrently. Policy implementations may return null if the event may be processed in parallel to any other event.

eventProcessingConfigurer.registerSequencingPolicy("processingGroup", conf -> /* define policy */);
// or, to change the default:
eventProcessingConfigurer.registerDefaultSequencingPolicy(conf -> /* define policy */);

For tracking processors, it doesn't matter whether the threads handling the events are all running on the same node or on different nodes hosting the same (logical) tracking processor. When two instances of a tracking processor with the same name are active on different machines, they are considered two instances of the same logical processor. They will 'compete' for segments of the event stream. Each instance will 'claim' a segment, preventing events assigned to that segment from being processed on the other nodes. The TokenStore instance will use the JVM's name (usually a combination of the host name and process ID) as the default nodeId. This can be overridden in TokenStore implementations that support multi-node processing. Distributing events in inter-process environments is supported with Axon Server by default. Alternatively, you can choose other components that you can find in one of the extension modules (Spring AMQP, Kafka).
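Returning to the SequencingPolicy described above, a custom policy could look roughly like the following sketch. The class name and the "tenantId" metadata key are made up for illustration; only the SequencingPolicy interface, its getSequenceIdentifierFor method and the null-means-parallel rule come from the description above, and the import locations assume Axon 4.

import org.axonframework.eventhandling.EventMessage;
import org.axonframework.eventhandling.async.SequencingPolicy;

public class SequencePerTenantPolicy implements SequencingPolicy<EventMessage<?>> {

    @Override
    public Object getSequenceIdentifierFor(EventMessage<?> event) {
        // Events carrying the same (hypothetical) "tenantId" metadata value are handled
        // sequentially; returning null would allow processing in parallel with any other event.
        return event.getMetaData().get("tenantId");
    }
}

It would then be registered for a processing group in the same way as shown above:

eventProcessingConfigurer.registerSequencingPolicy("processingGroup", conf -> new SequencePerTenantPolicy());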
In cases when you want to rebuild projections (view models), replaying past events comes in handy. The idea is to start from the beginning of time and invoke all event handlers anew. The TrackingEventProcessor supports replaying of events. In order to achieve that, you should invoke the resetTokens() method on it. It is important to know that the tracking event processor must not be in an active state when starting a reset. Hence it is required to shut it down first, then reset it, and once this was successful, start it up again. Initiating a replay through the TrackingEventProcessor opens up an API to tap into the process of replaying. It is for example possible to define a @ResetHandler, so you can do some preparation prior to resetting. Let's take a look at how we can accomplish a replay of a tracking event processor. First, we will see one simple projecting class:

@AllowReplay // 1.
@ProcessingGroup("card-summary")
public class CardSummaryProjection {
    //...
    @EventHandler
    @DisallowReplay // 2. - It is possible to prevent some handlers from being replayed
    public void on(CardIssuedEvent event) {
        // This event handler performs a "side effect",
        // like sending an e-mail or a sms.
        // Neither is something we want to reoccur when a
        // replay happens, hence we disallow this method
        // to be replayed
    }

    @EventHandler
    public void on(CardRedeemedEvent event, ReplayStatus replayStatus /* 3. */) {
        // We can wire a ReplayStatus here so we can see whether this
        // event is delivered to our handler as a 'REGULAR' event or
        // a 'REPLAY' event
        // Perform event handling
    }

    @ResetHandler // 4. - This method will be called before replay starts
    public void onReset(ResetContext resetContext) {
        // Do pre-reset logic, like clearing out the projection table for a
        // clean slate. The given resetContext is [optional], allowing the
        // user to specify in what context a reset was executed.
    }
    //...
}

The CardSummaryProjection shows a couple of interesting things to take note of when it comes to "being aware" of whether a reset is in progress:
1. An @AllowReplay can be used, situated either on an entire class or an @EventHandler annotated method. It defines whether the given class or method should be invoked when a replay is in transit.
2. Next to allowing a replay, @DisallowReplay could also be used. Similar to @AllowReplay it can be placed on class level and on methods, where it serves the purpose of defining whether the class / method should not be invoked when a replay is in transit.
3. To have more fine-grained control on what (not) to do during a replay, the ReplayStatus parameter can be added. It is an additional parameter which can be added to @EventHandler annotated methods, allowing conditional operations within a method to be performed based on whether a replay is in transit or not.
4. If there is certain pre-reset logic which should be performed, like clearing out a projection table, the @ResetHandler annotation should be used. This annotation can only be placed on a method, allowing the addition of a reset context if necessary.

The resetContext passed along in the @ResetHandler originates from the operation initiating the TrackingEventProcessor#resetTokens(R resetContext) method. The type of the resetContext is up to the user. With all this in place, we are ready to initiate a reset from our TrackingEventProcessor. To that end, we need to have access to the TrackingEventProcessor we want to reset. For this you should retrieve the EventProcessingConfiguration available in the main Configuration.
Additionally, this is where we can provide an optional reset context to be passed along to the @ResetHandler:

public class ResetService {
    //...
    public void reset(Configuration config) {
        EventProcessingConfiguration eventProcessingConfig = config.eventProcessingConfiguration();
        eventProcessingConfig.eventProcessor("card-summary", TrackingEventProcessor.class)
                             .ifPresent(processor -> {
                                 processor.shutDown();
                                 processor.resetTokens();
                                 processor.start();
                             });
    }
}

public class ResetService {
    //...
    public <R> void resetWithContext(Configuration config, R resetContext) {
        EventProcessingConfiguration eventProcessingConfig = config.eventProcessingConfiguration();
        eventProcessingConfig.eventProcessor("card-summary", TrackingEventProcessor.class)
                             .ifPresent(processor -> {
                                 processor.shutDown();
                                 processor.resetTokens(resetContext);
                                 processor.start();
                             });
    }
}

It is possible to provide a change listener which can tell you when the replay is done. More specifically, an EventTrackerStatusChangeListener can be configured through the TrackingEventProcessorConfiguration. See the monitoring and metrics section for more specifics on the change listener.

Partial Replays

It is possible to provide a token position to be used when resetting a TrackingEventProcessor, thus specifying from which point in the event log it should start replaying the events. This requires the usage of the TrackingEventProcessor#resetTokens(TrackingToken) or TrackingEventProcessor#resetTokens(Function<StreamableMessageSource<TrackedEventMessage<?>>, TrackingToken>) method, which both provide the TrackingToken from which the reset should start. How to customize a tracking token position is described here.

Prior to Axon release 3.3, you could only reset a TrackingEventProcessor to the beginning of the event stream. As of version 3.3, functionality for starting a TrackingEventProcessor from a custom position has been introduced. The TrackingEventProcessorConfiguration provides the option to set an initial token for a given TrackingEventProcessor through the andInitialTrackingToken(Function<StreamableMessageSource, TrackingToken>) builder method. As an input parameter for the token builder function, we receive a StreamableMessageSource which gives us three possibilities to build a token:
- From the head of the event stream: createHeadToken().
- From the tail of the event stream: createTailToken().
- From some point in time: createTokenAt(Instant) and createTokenSince(duration) - creates a token that tracks all events after the given time. If there is an event exactly at the given time, it will be taken into account too.

Of course, you can completely disregard the StreamableMessageSource input parameter and create a token by yourself. Below we can see an example of creating a TrackingEventProcessorConfiguration with an initial token on "2007-12-03T10:15:30.00Z":

public class Configuration {

    public TrackingEventProcessorConfiguration customConfiguration() {
        return TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andInitialTrackingToken(streamableMessageSource ->
                        streamableMessageSource.createTokenAt(Instant.parse("2007-12-03T10:15:30.00Z")));
    }
}
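Such a configuration still has to be attached to a tracking event processor. A minimal sketch, reusing the registerTrackingEventProcessor variant shown earlier (the processor name and the choice of the event store as source are assumptions for the example):

Configurer configurer = DefaultConfigurer.defaultConfiguration()
        .eventProcessing(eventProcessingConfigurer -> eventProcessingConfigurer
                .registerTrackingEventProcessor("card-summary",
                        // pull events from the event store of the main Configuration
                        c -> c.eventStore(),
                        // apply the custom configuration carrying the initial tracking token
                        c -> TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                                .andInitialTrackingToken(source ->
                                        source.createTokenAt(Instant.parse("2007-12-03T10:15:30.00Z")))));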
https://docs.axoniq.io/reference-guide/axon-framework/events/event-processors
2020-09-18T16:34:13
CC-MAIN-2020-40
1600400188049.8
[]
docs.axoniq.io
1. Introduction Content as a Service provides the editorial data of FirstSpirit via a uniform JSON-based data format that can be consumed by any endpoint. Using the CaaS Connect module reduces the configuration and administration effort in the FirstSpirit project for the delivery layer considerably. 1.1. Technical requirements Besides the FirstSpirit version 5.2.200710, the module requires access to the CaaS platform. An API key with appropriate permissions must be configured, and the accessibility of the platform endpoint over the network must be ensured. For more information on installing and using the CaaS platform, refer to the appropriate documentation. 2. Installation of the module The module has no dependencies on other modules and can therefore be installed very easily on the FirstSpirit server. Since projects must be explicitly marked as CaaS projects, the project data is not automatically synchronized after the module is installed. In order for the module to work as expected, the service component must be configured. 3. Components In the following chapters all components of the module and their administration are described. 3.1. CaaS service The service component CaaS Connect Service is the link between the projects on the FirstSpirit server and the CaaS platform. In order for project content to be transferred to the platform, the service must be started. If the service is stopped, no synchronization takes place. A valid configuration is required to operate the service. The configuration dialog of the service component CaaS Connect Service shows which fields are optional or mandatory and notifies when the entered values are invalid. The minimal configuration includes a reachable CaaS platform and an API key that has read and write access to all CaaS projects. More information about API keys and permissions can be found in the CaaS platform documentation. CaaS Connect Servicecomponent 3.1.1. Media deployment Media binary data can be stored either in the CaaS platform or in an S3-compatible service. Saving binary data in the CaaS platform is the default configuration. If binary data is to be delivered via a S3 service, the configuration of the service component CaaS Connect Service must be adjusted accordingly. Configuration of an S3 deployment: Both Amazon S3 and S3-compatible services can be used for delivery. The specification of an S3 endpoint is optional. If no end point is specified, it is assumed that Amazon S3 is being accessed. In this case, a client region must be specified so that the Amazon end point can be determined. It is composed as follows: s3.$clientRegion.amazonaws.com. The access credentials for the S3 service are also optional. If there is no specification in the configuration, environment variables and system properties are evaluated. Further information about such a configuration can be found in the official Amazon S3 documentation. 3.2. ProjectApp When adding the project component CaaS Connect project app to a project, it activates the CaaS Connect module for that project. No project-specific configurations are possible and therefore not necessary. However, the configuration dialog of the project component can be used to lookup the CaaS base URLs that are available for the project. Adding the project component generates two jobs for the respective project, CaaS Connect Release Generation and CaaS Connect Preview Generation. These carry out a full synchronization for the release or preview status for the project. 4. 
Automated configuration of the module

Besides the manual configuration via the FirstSpirit interface, CaaS Connect version 3 or later can also be configured via the file system. To achieve this, all that is necessary is to create a JSON configuration file. The service CaaS Connect Service automatically loads the file TODO CAAS-1343 service-config-name-here.json at startup, which must be located in the FirstSpirit server in TODO CAAS-1343 PATH. The following examples show the format of the configuration file that is required and supported.

{
  ...
  // optional, to be used when deploying media to S3 instead of CaaS
  "mediaConnectorConfig": {
    "S3": {
      // The endpoint of the S3 service
      "endpointUri": "",
      // optional, to use if no environment variables or system properties are used for S3 credentials
      "credentials": {
        "accessKeyId": "mySecretS3KeyId",
        "secretAccessKey": "mySecretS3AccessKey"
      },
      "bucketName": "examplebucketname",
      "timeoutInSeconds": 10,
      // optional, to be used if a specific region should be used for Amazon S3
      "clientRegion": "eu-central-1"
    }
  },
  // optional, use only if the CaaS platform should be reached via a proxy
  "proxyAddress": "foo:123"
}

5. Error handling

Invalid configuration or network problems may cause errors on the CaaS Connect module side. All errors are logged in the FirstSpirit log.

5.1. Network error or overload

The module depends on the full availability of the CaaS platform. If the platform is not available or cannot process incoming requests fast enough, the module tries to repeat the requests. If this does not succeed either, an error is displayed in the FirstSpirit log and possible changes to the data are not synchronized with the CaaS platform. The editor is not notified of the error, so monitoring of the server log by the administration is essential. If an error occurs, it can be corrected either by executing the schedules or by repeating the action that led to the data change.

5.2. Configuration of the API key

The API key configured in the service component is used to synchronize all projects on the server with the CaaS platform. The API key therefore requires write and read permissions for all CaaS projects on the FirstSpirit server. For more information on configuring API keys, refer to the CaaS platform documentation.

6. Legal information

The CaaS is a product of e-Spirit AG, Dortmund, Germany. Only a license agreed upon with e-Spirit AG is valid with respect to the user for using the module.

7. Help

The Technical Support of the e-Spirit AG provides expert technical support covering any topic related to the FirstSpirit™ product. You can get and find more help concerning relevant topics in our community.
https://docs.e-spirit.com/module/caas/CaaS_Connect_ServerAdministrator_EN.html
2020-09-18T16:12:48
CC-MAIN-2020-40
1600400188049.8
[]
docs.e-spirit.com
Change RPMS_ALLOW_REPOSITORY = False to RPMS_ALLOW_REPOSITORY = True and, at the end, add an entry for the repository.

Edit your <application>.yaml and add a repository line:

rpms:
  libpeas:
    repository: <path to checkouts>/libpeas
    rationale: Runtime dependency
    ref: f30

Use fedpkg switch-branch to switch to the ref from <application.yaml> (or change the ref: in <application.yaml> to be master).

Make your changes and commit them.
https://docs.fedoraproject.org/ru/flatpak/troubleshooting/
2020-09-18T16:45:17
CC-MAIN-2020-40
1600400188049.8
[]
docs.fedoraproject.org
lv-util LiveView Server — Generates interfaces for EventFlow applications. SYNOPSIS lv-util generate { liveview-project-directory} [ OPTIONS] lv-util upgrade [ OPTIONS] lv-util --help lv-util --version DESCRIPTION lv-util is used to generate a StreamBase interface file on which an EventFlow application can be based for use with a LiveView data table. ARGUMENTS - generate Generates a StreamBase interface file on which an EventFlow application can be based for use with a specified LiveView data table or set of tables. All interface files are generated into the src/main/eventflow/lv-interfacesfolder of the specified project. - --list Lists all the applications and tables in the specified project and shows example lv-util generate command lines for all LiveView data tables in the project. - --type tableschemas |datasource | preprocessor | publisher | transform Specifies the type of interface to generate, using one of the exact keywords as shown. The middle four types correspond to the four types of EventFlow applications that can be associated with a LiveView data table, as described on EventFlow Module Types for LiveView in the LiveView Developer's Guide. Each generatecommand generates one StreamBase interface file into the src/main/eventflow/lv-interfacesfolder per specified data table. - tableschemas Specify this type to generate named schemas for the incoming and outgoing streams in each specified data table. Run this generate command along with any of the following generate commands. Interface files generated with this tableschemaskeyword are included by reference in the interface files generated by the other keywords, and must exist for the other interface files to be valid. - datasource Generates an interface that defines the DataOut output stream for each specified data table. - preprocessor Generates an interface that defines the DataIn input stream and DataOut output stream and for each specified data table. - publisher Generates an interface file that defines streams and named schemas to form the basis for publishing data into each specified data table. - transform Generates an interface file that defines the DataIn input stream and DataOut output stream to form the basis for transforming data before loading into each specified data table. - --tables TableName1,TableName2, ... Specifies one or more LiveView data tables for which to create the specified interface type. To specify more than one table, separate table names with a comma but no space. - --force Specifies overwriting an existing interface file, if present. By default, interface file names are taken from their parent lvconf files. - --out filename Specifies the name of the interface file to be generated. Use this option with caution, because interface files generated with various --typeoptions presume default interface names. - upgrade One-time use tool to upgrade a project using the deprecated static aggregation to author-time aggregation. - --static Upgrades and overwrites the existing lvconf files. - --outdir {name} Writes upgraded files to a different directory. Displays a syntax reminder message, then exits. --version Prints version information and exits. EXAMPLES Generate a tableschema StreamBase interface file for a table: lv-util generate . --tableschemas --tables tablename
https://docs.tibco.com/pub/str/latest/doc/html/reference/lv-util.html
2020-09-18T16:09:58
CC-MAIN-2020-40
1600400188049.8
[]
docs.tibco.com
The Lists and Queries Window Actions common to lists and queries Options for list and query management Sharing lists and queries You define and execute queries using the Query Builder window or by using form based query . When a query is executed the query definition and the hit list is saved for you to use again in the future. This is done using the Lists and queries window. This window is open by default, but if it is closed, you can open it by choosing Window -> Lists and queries. The window contains nodes representing queries and lists, as shown here: You can manage lists and queries using the toolbar and the node's contextual menus. By managing your queries and their results, you can do the following: Create lists of favourite compounds - please also see the cherry picking functions Cherry Picking Combine results of multiple queries in powerful ways Compare the result of a query to the previous times you ran it Restrict the results of a search to the contents of a particular list Add/remove lists to/from the current results Generate lists of values from any field Lists can be used to manage lists of values from a field. Lists can be given names, saved, restored, edited, imported, and exported. New lists can be made from logical combinations of other lists. For instance, you can create a list of your favourite compounds that you can save and examine the next time you use IJC. Lists of values from one field can also be converted to those of a different field. Lists belong to one of two categories - temporary and permanent. Temporary lists do not exist when you restart IJC, while permanent lists persist across restarts. Permanent lists are saved to the database and continue to exist until you choose to delete them. Temporary lists can be made permanent. Both types can be renamed or deleted. To convert a temporary list to a permanent one: Right click on the list and choose 'Make permanent' or 'Make temporary' Drag the list from the temporary folder to the permanent folder (or vice-versa) When you execute a query a temporary list is automatically created for you. These lists are temporary as a large number of them might be generated. You can make any of these temporary lists permanent if you want to re-use it in later IJC sessions. You can also convert the IDs to those of a different field. As we have seen, a temporary list is created as a result of running a query. Such a list corresponds to a list of the values from the ID field of the Entity. These IDs are the IDs of the row, which correspond to the primary key values in the database. This is the only type of field for which temporary lists can be created. However (since IJC 5.3.2), lists of values from fields other than the ID field can also be managed by IJC as permanent lists, and values of lists for one field can be converted to those for a different field. Ways of creating and editing lists are described later. Lists for each field are displayed in a folder for each field. Only the ID field folder is shown by default. Other fields are only shown if there are any lists for the field. List management is most useful for fields that have unique values, like the ID field of the entity. The value of the ID field uniquely identifies a row in the database table. Support for permanent lists for fields other than the ID field was added in IJC 5.3.2 primarily to manage the situation where there was an additional field that is a more meaningful identifier (e.g. 
the compound ID), However, the values in the field do not have to be unique, and in some cases this can be useful. For instance you can run a search to generate a list of IDs, then convert that list to a list of the molecular formula field to generate a list of all the formulae within that set of results. When that list is applied all structures with any of those formulae will be seen. It is important to realise that this sort of operation is not commutative. e.g. converting a list of Field1 to one for Field2 and then back again to Field1 will not result in the original IDs if the values of either field are not unique. List management supports only integer and text fields. It is not thought likely that other types of fields will be useful. Also, not all text fields will be suitable for list management. It is designed only for values that are simple single line values. More complex multi-line or lengthy text strings will not be suitable. There are a number of ways of creating or editing lists: Automatic creation when queries are executed Manually entering the IDs You can edit a list by right clicking on it an choosing 'Edit list'. The list editor opens allowing you to manually specify the values. These can be typed in or pasted from the clipboard. The syntax is simple text with each value on a separate line. Importing/exporting the values from a file A list can be created by importing from a file. The syntax is simple text with each value on a separate line. Similarly, values in a list can be exported to a file. Perform these operations by choosing 'Import list' or 'Export list' from the right click context menu of the list or by using the icon in the toolbar. {primary} Since the 15.10.5.0 release the user has to have the ROLE_EXPORT_DATA assigned to be able to export permanent lists. The user roles are described here. Converting the values from a different field Values for a list of one field can be converted to values for a different field. This is most commonly used to convert temporary lists generated when a query is executed to a list of values for a different field (e.g. your compound IDs) but can be used to convert between any field. To perform this either: Drag and drop the list to the folder for the required field or Right click context menu of the list and choose 'Convert list'. The convert list dialog will open: You can perform operations on two or more lists at the same time. Select multiple lists and then open the drop-down shown below: A list of operations is shown. They are as follows: Intersection Union XOR A and not B B and not A When you make a choice from the above operations, the following dialog is shown, with the corresponding operation selected: You can apply the operation to an existing list (overwrite one of the input lists) or you can create a new list for the results. Use the "Save result to" area, at the bottom of the above dialog, for this purpose. List operations can only be used for lists of the same field (but remember that lists can be converted from one field to another). When using the ID field you can choose whether the new list is to be temporary or permanent. in the other list. By default, the Lists and Queries window is empty and the toolbar is disabled. The contents of the Lists and Queries window corresponds to the context of the current Data Tree. Queries belong to the Data Tree while lists belong to an entity, typically the Entity at the root of the particular Data Tree that is selected. 
Usually this corresponds to the Grid View or Form View that is currently selected. If, for instance you switch to a view of a different Data Tree then the Lists and Queries window contents will be updated so that it contains the queries for Data Tree and the lists for the Entity at the root of the Data Tree of that view. The Lists and Queries window has a selector combo box that lets you change the selected data tree and see the lists and queries for the entity at the root of that data tree. The Lists and Queries window shows at least four folders, temporary and permanent queries and temporary and permanent lists for the ID field. If you have created permanent lists for other fields then folders for these will also be present. After you run a query , the temporary query node and the temporary list node each have subnodes. The temporary query node shows the executed queries and the temporary lists node shows the results of the query. Queries and lists can be deleted and executed. Lists can also be edited, sorted, exported, and imported. These actions are available from the toolbar. By right-clicking on a query or a list, you are able to specify that they should become permanent. Details on all these activities are provided below. Operations can be performed on lists and queries either by using the toolbar in the Lists and queries window or by using the right click contextual menu of the individual list or query. Execute : Re-applies the list or query to the data tree, e.g. updates the results, in the main window to show the appropriate list or query. Share : control how the list or query is shared with other users (only present for permanent lists and queries). Make permanent/temporary : Converts between temporary and permanent lists. When you restart IJC only permanent lists will be present. Delete list or query : Delete the list or query. Rename : Assign a new name to the list or query. Properties : Show the properties of the list or query. Add list to current result set : Add the IDs for that list to the current results. Remove list from current result set : Removes the IDs for that list to the current results. Edit list : Lets you edit the content of a list. The contents is just a list of IDs. Convert List : Convert the values of this list to those for a different field. Export list to file : Create a text file with the list IDs. Validate list : Removes any IDs for the lis that are not in the database. Queries are created using the query panel. See the documentation on Running queries for more details. When a query is executed, the query definition is saved as a new temporary query. The list of results is stored as a new temporary list.. See the documentation on Running queries for more details. To set options for managing lists and queries, choose Options under the Tools menu and then click Miscellaneous -> Lists and Query. You should now see the following: You can set the following options: Lists. Maximum number of temporary lists : By default, no more than 10 temporary lists can be created. After that the oldest one is removed when a new one is added. To change this upper limit, specify a different number here. Maximum number of rows in list : The default number of rows is 10,000. Depending on which of the two radio buttons you select, no more temporary lists are created after this point, or new temporary lists over this point are truncated. Note: if you increase this limit you may run into memory limitations. Queries. 
Temporary queries history limit : By default, no more than 10 temporary queries are present. Do not create temporary query if it returns no hits : By default, even if no hits are returned, a temporary query is created, which you can then rerun. If you select this checkbox, no such temporary query will be created. Do not create temporary query if it returns all rows from database : A query that returns all rows in the database is in most cases one that you do not want to run again, so selecting this checkbox prevents a temporary query from being created in that case; deselect it to have the temporary query created in this instance too. If you are using a multi-user database then you can use lists and queries that other people have created. You can do this in one of two ways: One user can share a list or query with other users by changing the visibility of the item. This allows direct access to the other person's list or query, but does not let the second user change the first user's list or query. The second user will however see any subsequent changes that the first user makes (the details are loaded when the user first connects to the database). Details of sharing items are described in the document Sharing items with others. Alternatively, you can take a copy of another user's list or query. To do this right click on the Permanent Lists or Permanent Queries node and select Copy list/query from other user. A dialog will open that allows you to specify the user and the list/query. Note: This process creates a COPY of the list or query. Any changes the other user subsequently makes will not be reflected in your list/query. You would need to take a fresh copy. The new cherry picking functionality is now available. With this feature you can add and remove molecules from the working list easily by using a keyboard shortcut or the toolbar icon. Please see the documentation on Cherry Picking for more details.
https://docs.chemaxon.com/display/lts-gallium/list-and-query-management.md
2022-09-25T04:55:52
CC-MAIN-2022-40
1664030334514.38
[]
docs.chemaxon.com
Applies to: Insights and Premium Members Parent's app: iOS or Android Article type: Fundamental steps Red Alerts Listed in Snapshots Parents can access a list of Red Alerts from the weekly Snapshots reports. The Snapshots store the last 28 days of activity. The list includes Red Alerts fixed or ignored (cleared) by Parents. On a Parent's phone or tablet: - In Snapshots, tap SHOW ALL - Go to the bottom and tap VIEW ALL RED FLAGS - Tap on an EVENT - Note the Date, Device, and User The MAC Address can be used to troubleshoot BYOD Devices or Devices connected to the filtered Family WiFi on the Family Zone Box (Australia and New Zealand).
https://docs.familyzone.com/help/see-a-list-of-red-alerts
2022-09-25T05:13:31
CC-MAIN-2022-40
1664030334514.38
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/61444dbf7011e8fe4d7b2b67/n/fza-ios-4-0-6-list-red-alerts-001.gif', None], dtype=object) ]
docs.familyzone.com
First, define a type whose instances are stored in the map as JSON values:

#include <hazelcast/client/hazelcast_client.h>

struct Employee {
    std::string surname;

    hazelcast::client::hazelcast_json_value to_json() {
        return hazelcast::client::hazelcast_json_value("{ \"surname\": \"" + surname + "\" }");
    }
};

Then, let's create a listener that tracks only the entries matching the predicate (here, entries whose surname is "smith"):

int main() {
    hazelcast::client::client_config config;
    auto hz = hazelcast::new_client(std::move(config)).get();
    auto map = hz.get_map("map").get();
    hazelcast::client::query::sql_predicate sqlPredicate(hz, "surname=smith");
    map->add_entry_listener(
        hazelcast::client::entry_listener().on_added([](hazelcast::client::entry_event &&event) {
            std::cout << "Entry Added:" << event << std::endl;
        }).on_removed([](hazelcast::client::entry_event &&event) {
            std::cout << "Entry Removed:" << event << std::endl;
        }).on_updated([](hazelcast::client::entry_event &&event) {
            std::cout << "Entry Updated:" << event << std::endl;
        }),
        sqlPredicate, true).get();
    std::cout << "Entry Listener registered" << std::endl;
}
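The same pattern can be written against the Hazelcast Java client. The sketch below is illustrative only and assumes the Hazelcast 5.x Java client API (HazelcastClient, IMap, Predicates); it reuses the "map" name and the "surname=smith" predicate from the example above, and the class name is made up for the illustration.

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastJsonValue;
import com.hazelcast.map.IMap;
import com.hazelcast.map.listener.EntryAddedListener;
import com.hazelcast.query.Predicates;

public class ListenForMapEntries {
    public static void main(String[] args) {
        // Connect with the default client configuration.
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IMap<String, HazelcastJsonValue> map = client.getMap("map");

        // Fire only for entries whose JSON "surname" attribute matches the predicate;
        // the trailing 'true' asks Hazelcast to include the entry value in the event.
        map.addEntryListener(
                (EntryAddedListener<String, HazelcastJsonValue>) event ->
                        System.out.println("Entry Added: " + event),
                Predicates.sql("surname=smith"),
                true);

        System.out.println("Entry Listener registered");
    }
}

As in the C++ example, listeners for removed and updated entries can be registered in the same call by implementing the corresponding listener interfaces.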
https://docs.hazelcast.com/hazelcast/5.0/data-structures/listening-for-map-entries
2022-09-25T05:17:57
CC-MAIN-2022-40
1664030334514.38
[]
docs.hazelcast.com
Activity feed With the activity feed in each project (an exclusive feature for Pro workspaces), you can keep tabs on who's working in the project and what kind of changes they're making. When multiple people are working inside a project, activity feeds are great for keeping everyone organized and giving help where it's most needed. To open up the activity feed pane, open up any project, then open the project settings menu (click the menu icon in the upper left of the screen). Click on "Activity" in the sidebar to see all kinds of project changes—adding new elements, editing profiles, updating views, creating new maps, etc. It will also show you who made those edits and when. Activity feeds are an exclusive feature for Pro workspaces. If you transfer an existing project into a Pro workspace, its activity feed will show all historical activity (not just the activity that happens after you transfer the project).
https://docs.kumu.io/guides/activity-feed.html
2022-09-25T06:16:09
CC-MAIN-2022-40
1664030334514.38
[array(['../images/activity-feed.png', 'Activity feed'], dtype=object)]
docs.kumu.io
How to Letter Writing If you’ve never written a letter before, it’s okay! If you’re from the age of instant messaging and email, you probably haven’t given it much thought. Here are a few tips for writing a good letter: First, distinguish between a formal and informal letter. A formal letter follows specific formats and protocols. For example, it should be addressed to a business or government official. On the other hand, an informal one is written for close family members or friends. The first step in writing a letter is identifying the type of letter you’re writing. Your audience and the information you’re conveying will determine what type of letter to write. For example, an official letter to a college principal is formal. On the other hand, an informal one to an old college professor is more personal and informal. While you’ll want to address a business letter to a business person, it’s important to make sure that you include their name and address, as well as a signature. Once you have figured out the letter you want to write, you can organize the body text. Your body text should be organized into paragraphs, with sophisticated vocabulary and standard spellings and punctuation. The body of the letter should contain the details of what you want to say. This means you should start with an introduction paragraph, which should be short, and mention the letter’s purpose. The next part of the letter is the body text. The body text is the most important part of your letter. It would be best to organize it into paragraphs, using standard spellings and punctuation. A body paragraph will help keep the reader interested in the content. It will also separate each point from another. After the introduction paragraph, you should move into the body text, where you can explain the letter’s subject matter. When writing a personal letter, avoid using too many grammar checkers or spell-checkers. A body paragraph will contain the details of the letter. It should be structured in a manner that will be most effective for the intended audience. It would be best to use standard spelling and punctuation throughout the letter. The introduction paragraph should be brief and mention the reason for the letter. The body text contains all the details of the letter. It is the main part of the letter. It should be formatted according to the audience. The audience of the letter will be able to read it easily. While a formal letter should be written to an audience, a personal one can be written for a friend or family member. An informal letter is written to a friend and isn’t appropriate for a business. A personal letter should be written to a family member. A letter to a college principal will be very formal. A letter to a former college professor, on the other hand, should be more casual. A letterhead doesn’t need to have the recipient’s full name and address. A date and the recipient’s name should be left-justified. When addressing a business letter, make sure to include the company’s name and contact information in the body of the letter. A letter is not just for communicating with other people. It can be a means of communication. If you’re writing to a business, be sure to include your contact’s name and the company’s name. The first step in learning letter writing is identifying the type of letter you need to write. This will be dictated by the recipient and the information you’d like to convey. 
A formal letter would be written to a college principal, while a personal one would be sent to an old college professor. The letter’s purpose will determine the type of letter you need to write. Depending on the audience, you can use different styles of letters. A formal letter is written with paragraphs. It’s common to include a full name and address, but it’s unnecessary to put it on the letterhead. You can use a standard font or use a typed letter. When you need to send a letter, you should type it or print it. If it’s urgent, you should email the document instead. Aside from the style of letters, it would be best if you also considered their tone and purpose. Learn How to Letter Writing They are learning how to write letters is an important skill. This skill is useful for business purposes and in daily life. It is important to be aware of the correct letter writing format, as this will greatly benefit one’s career. In addition to this, it will enable students to write effective business letters. To learn how to write a business letter, you can follow the following tips. Below are some of the basic guidelines to follow when writing a business or personal letter. A school psychologist or child psychologist should evaluate a child with writing difficulties. A pediatric neurologist or developmental pediatrician can also identify the problem. Another good option is to see an occupational therapist who teaches children how to write. These professionals are often located in the school and community, so it is good to contact them first. In addition to this, you can also look into your insurance provider to see which occupational therapists are available in your area. When your child is ready, you can take them to an occupational therapist to help them with their skills. These professionals are highly trained in dealing with various writing problems and will work with your child’s needs to create effective solutions. If you’re still struggling with letter writing, many resources are available to help you improve your skills. A child psychologist or an occupational therapist will help you find the best solution for your child. A teacher can use videos to help children with their writing skills. Using real-world videos, FluentU creates personalized language lessons for its students. The videos include interviews, news stories, and speeches. It will also help your child learn how to address a letter or envelope. Different countries have different methods of doing this. This will help your child improve their writing skills. This can also be helpful for parents who have children who are not yet able to write letters in English. A child can learn how to write letters with the help of a teacher or a book. A good teacher can help a child understand the importance of a letter and use it to communicate with other people. They can also help their child learn to write by reading books and using multisensory materials. You can also check with your state’s office to see if any requirements are in place. If you have a child who is not writing well, an occupational therapist will give you the necessary assistance. A child who struggles with letter writing may need to see a child psychologist, child psychiatrist, or a developmental pediatrician. These professionals will be able to diagnose the problem and offer solutions. They can also help parents teach their children how to write a letter. There are numerous books on learning how to write a letter. 
The FluentU video library will allow your child to learn how to write a letter by using real-world videos. To learn how to write a business letter, you can also use FluentU videos. These videos are created with real-world situations, so they are relevant to the language your child needs. In addition to providing lessons on the basics of letter writing, FluentU also offers lesson plans. In addition to learning how to write a business letter, you can also find videos on how to write an envelope. You can also lookup videos that show different ways to write an address in a different country. If your child struggles with letter writing, a school psychologist or child psychologist may help. Occupational therapists can also help. These professionals specialize in learning how to write a business letter and can be found at your local county or school. They can also be found in various country areas, including hospitals and clinics. If your child does not speak the language, you can find a professional in your area who can help.
https://authentic-docs.com/how-to-letter-writing/
2022-09-25T05:51:04
CC-MAIN-2022-40
1664030334514.38
[]
authentic-docs.com
Enhanced language support in the AWS Cloud9 Integrated Development Environment (IDE) AWS Cloud9 provides enhanced support to improve your development experience when coding with the following languages: Java: Extensions provide features such as code completion, linting for errors, context-specific actions, and debugging options. TypeScript: Language projects offer access to enhanced productivity features for TypeScript.
https://docs.aws.amazon.com/cloud9/latest/user-guide/enhanced-lang-support.html
2022-09-25T06:22:22
CC-MAIN-2022-40
1664030334514.38
[]
docs.aws.amazon.com
Sticky boxes are slide-in popups that remain on screen until the user fills in the form or closes the popup. ConvertPlug lets you create these sticky boxes that can be placed on the left or the right side of the screen. In order to create a sticky box, follow the steps mentioned below. 1. Select Slide in popups among the modules seen under ConvertPlug. 2. Create a new slide-in popup. 3. Select the appropriate template you wish to use. When you hover on a template, you will see two options: Use this and Preview. Click on "Preview" to see how it looks. Click on "Use this" to open it in the editor. Sticky Box Left Under Design -> Advanced Design Options -> Position, select Center Left Sticky Box Right Under Design -> Advanced Design Options -> Position, select Center Right 4. Save You can change the Slide in Entry and Exit Animations by going to: Design -> Slide-in Animations Select the animations from the respective drop down menus.
https://docs.brainstormforce.com/how-to-use-the-sticky-left-and-sticky-right-positions-in-convertplug/
2022-09-25T05:53:53
CC-MAIN-2022-40
1664030334514.38
[array(['https://docs.brainstormforce.com/wp-content/uploads/2016/06/Select-Slide-in-popups-in-ConvertPlug.jpg', 'Select Slide-in popups in ConvertPlug'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/06/Open-new-Slide-in-popup.jpg', 'Open new Slide-in popup'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/06/Select-a-slide-in-popup-template.jpg', 'Select a slide-in popup template'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/06/Select-the-Center-Left-position-for-a-slide-in-popup.jpg', 'Select the Center Left position for a slide-in popup'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/06/Select-the-Center-Right-position-for-a-slide-in-popup.jpg', 'Select the Center Right position for a slide-in popup'], dtype=object) ]
docs.brainstormforce.com
Metrics data will be stored to a persistent volume. In testing, test cases with up to 25,000 pods were monitored in an OpenShift Container Platform cluster. The Ansible openshift_metrics role will auto-generate self-signed certificates for use between its components and will generate a certificate to use when creating the route. The router's default certificate is used if you do not provide your own. To provide your own certificate, please see the re-encryption route documentation. Deploying and configuring all the metric components is handled by the Ansible playbook shipped with OpenShift Container Platform (byo/openshift-cluster/openshift-metrics). When your OpenShift Container Platform server is back up and running, metrics collection will resume.
https://docs.openshift.com/container-platform/3.5/install_config/cluster_metrics.html
2022-09-25T05:07:42
CC-MAIN-2022-40
1664030334514.38
[]
docs.openshift.com
Dynatrace monitors the response times and performance experienced in mobile and desktop browsers.
OpsRamp configuration
Step 1: Install the integration
- Select a client from the All Clients list.
- Go to Setup > Integrations > Integrations.
- From Available Integrations, select Monitoring > Dynatrace.
- You can modify the attributes at any time.
- You need not follow the same mappings.
Dynatrace configuration
Step 1: Configure alerts
- Log into the Dynatrace Admin UI.
- Go to Integrations > Problem Notification, select + Set up notification and enter:
- Name
- Webhook URL: Paste the Webhook URL that you generated from the OpsRamp inbound configuration.
- Custom payload: Enter the payload.
- Default for Alert Profiling
- Click Send test notification.
- The Set up notifications screen helps Dynatrace integrate with other notification systems.
- The Custom Integration area displays a problem notification setup screen.
- The ImpactedEntities and ProblemDetailsJSON values are JSON data types and must not have quotes around them.
- The Save button is displayed only if the test notification is sent successfully.
Example payload:
{
"State": "OPEN",
"ProblemID": "999",
"ProblemTitle": "Dynatrace problem notification test run",
"ImpactedEntity": "Myhost1, Myservice1",
"ProblemSeverity": "ERROR",
"ProblemDetailsText": "Dynatrace problem notification test run details",
"ProblemImpact": "INFRASTRUCTURE"
}
Step 2: Configure alerting profiles
- Go to Dynatrace Home Settings > Alerting.
- Select Alerting Profiles, enter a name and click Create.
- On the <Test 1> page, create the required alerting rule and event filter and click Done.
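The custom payload above is what Dynatrace posts to the OpsRamp webhook URL when a problem opens. If you want to exercise the inbound integration without waiting for a real problem, a plain HTTP POST of the same JSON works; the sketch below is only an illustration using the standard Java 11 HttpClient, and the webhook URL shown is a placeholder for the one generated in OpsRamp.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WebhookPayloadTest {
    public static void main(String[] args) throws Exception {
        // Placeholder: use the inbound webhook URL generated in OpsRamp.
        String webhookUrl = "https://example.opsramp.invalid/integrations/dynatrace/webhook";

        // Same shape as the example payload documented above.
        String payload = "{"
                + "\"State\": \"OPEN\","
                + "\"ProblemID\": \"999\","
                + "\"ProblemTitle\": \"Dynatrace problem notification test run\","
                + "\"ImpactedEntity\": \"Myhost1, Myservice1\","
                + "\"ProblemSeverity\": \"ERROR\","
                + "\"ProblemDetailsText\": \"Dynatrace problem notification test run details\","
                + "\"ProblemImpact\": \"INFRASTRUCTURE\""
                + "}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}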
https://docs.opsramp.com/integrations/a2r/3rd-party/dynatrace/
2022-09-25T04:26:50
CC-MAIN-2022-40
1664030334514.38
[]
docs.opsramp.com
About License Server License Server enables you to manage floating licenses for SOAtest, Virtualize, and CTP. License Server is usually installed as part of a Parasoft DTP deployment, but the standalone version of the application ships with CTP if DTP is not deployed in your organization. The standalone License Server ships as a .war file that you can deploy into new or existing Tomcat servers. For details on adding licenses, see the CTP installation section Do I need the Parasoft License Server Web Application? The Parasoft License Server web application is only needed if: - You want to use License Server to manage floating licenses for Parasoft SOAtest and Parasoft Virtualize, and - Parasoft DTP will not be deployed within the organization Prerequisites - The PSTSec database must be running and may require that the -Xmx parameter be changed to 1024m (see Installing Parasoft User Administration). - Linux or Windows (32-bit or 64-bit) - Tomcat 8.5 - CTP 3.x or later - Java 8 - The Java_JRE variable must be set (required by the Tomcat server) License Server Installation and Configuration - Start the Tomcat server and copy the licenseserver.war file into the <tomcat installation>/webapps folder. In a browser, go to http://<host>:8080/licenseserver. - Log into License Server using the default username and password for Parasoft User Administration (admin/admin). If PSTSec is on a different server than the machine hosting License Server, see Connecting to Parasoft User Administration in order to log in. - Enter the licensing information provided by your Parasoft representative for each product you want to manage with License Server: - Enter the Expiration date of your license. - Copy and paste the license code in the Password field, or copy and paste the entire license password line as it was sent to you in the open text field. - Click Set license. Restart Tomcat if you need to change License Server's port number or make any other configuration changes after the initial set up. Connecting to Parasoft User Administration You will need to manually configure License Server to connect to User Administration (PSTSec) if User Administration is deployed to a different Tomcat server than the current CTP installation or if the Tomcat server is not running on port 8080. Open the PSTSecConfig.xml configuration file located in the /LicenseServer/conf/ directory and set the host and port for the user administration service, for example: <remote-authentication> <enabled>true</enabled> <host>localhost</host> <port>8080</port> </remote-authentication> - Restart Tomcat. LDAP Configuration See Configuring LDAP for instructions on how to connect to an LDAP server. Starting Standalone License Server on a Cloud-based VM If you are deploying the standalone License Server in Microsoft Azure, Amazon AWS, or another cloud service provider, the machine ID may change as the VM is shut down and restarted. Pass the -Dparasoft.cloudvm=true parameter to the Tomcat params to ensure that machine IDs remain stable when VMs are restarted on cloud platforms: set "CATALINA_OPTS=-Dparasoft.cloudvm=true" HASP Key Support You can install Parasoft software on different machines and use a USB HASP Key to provide a machine ID when connecting to License Server. The USB dongle provides a stable machine ID for License Server so that the rest of the hardware can be changed without needing to request new licenses from Parasoft. Contact your Parasoft representative for additional information. Enabling HASP Key on Windows - Stop Tomcat Server.
- Open the PSTRootConfig.xml configuration file located in the [TOMCAT_HOME]/LicenseServer/conf directory. - Add the following entry in the <root-config> node and save the configuration file: <external-lock-enabled>true</external-lock-enabled> - Plug the USB HASP key into your machine and wait for the drivers to install. - Start Tomcat Server. When the USB HASP key is removed, the machine ID will revert to the original ID. As a result, License Server will no longer provide licenses. Enabling HASP Key on Linux - Stop Tomcat Server. - Open the PSTRootConfig.xml configuration file located in the [TOMCAT_HOME]/LicenseServer/conf directory. - Add the following entry in the <root-config> node and save the configuration file: <external-lock-enabled>true</external-lock-enabled> - Locate the udev.rules file responsible for USB devices and modify the usb_device entry to use MODE="0666". - Execute the following command: udevadm control --reload-rules - Plug the USB HASP key into your machine. If the USB HASP key is already plugged in, run the following command as the root user to programmatically insert the key: udevadm trigger Alternatively, you can restart the operating system. - Start Tomcat Server. When the USB HASP key is removed, the machine ID will revert to the original ID. As a result, License Server will no longer provide licenses. Using License Server See the latest Parasoft DTP documentation for details on License Server usage and functionality.
https://docs.parasoft.com/display/SOAVIRT9107CTP313/Installing+Parasoft+License+Server
2022-09-25T05:08:11
CC-MAIN-2022-40
1664030334514.38
[]
docs.parasoft.com
Predictive Analytics dashboard Use the Predictive Analytics dashboard to search for different varieties of anomalous events in your data. Predictive Analytics uses the predictive analysis functionality in Splunk to provide statistical information about the results, and identify outliers in your data. The Predictive Analytics dashboard filters are implemented in a series from left to right. Example: The Object filter is populated based on the Data Model selection. To analyze data with predictive analytics, choose a data model, an object, a function, an attribute, and a time range and click Search. Note: The Predictive Analytics dashboard uses data model accelerations available within the time range chosen. If the data model accelerations are unavailable or incomplete for the chosen time range, the predictive search will revert to searching unaccelerated data sources. Prediction Over Time The Prediction Over Time panel shows a predictive analysis of the results over time, based on the time range you chose. The shaded area shows results that fall within two standard deviations of the mean value of the total search results. Outliers The Outliers panel shows those results that fall outside of two standard deviations of the search results. Create a correlation search From this dashboard, create a correlation search based on the search parameters for your current predictive analytics search. This correlation search will create an alert when the correlation search returns an event. Click Make Correlation Search... to open the Create Correlation Search dialog. Select the Security domain and Severity for the notable event created by this search. Add a search name and search description. Click Save. To view and edit correlation searches, go to Configure > General > Custom Searches. See the "Edit Correlation Search page" in the Installation and Configuration manual. Configuration information For information on configuring the Predictive Analytics dashboard, see "Predictive Analytics dashboard" in the Splunk App for Enterprise Security Installation and Configuration Manual. This documentation applies to the following versions of Splunk® Enterprise Security: 3.1, 3.1.1, 3.2, 3.2.1, 3.2.2
https://docs.splunk.com/Documentation/ES/3.2.1/User/PredictiveAnalyticsdashboard
2022-09-25T04:34:17
CC-MAIN-2022-40
1664030334514.38
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Ever worried about accidentally mixing some red and white and creating some unwanted Rosé? Vinsight's new Vessel Verification feature leverages our new QR Code and Barcode scanning functionality to make it easy for you and your workers to verify that the vessels they are working with are the same vessels in your winery operation. Because this task involves scanning barcodes or QR codes, it must be done on an Android or iOS device with a camera and the respective barcode scanning app installed (more information can be found in the Vinsight Barcode/QR Code Scanning General page). There are three ways to choose which vessels you are going to verify: The basic workflow of verifying vessels is to press the scan button; your native scanning app opens (Barcode Scanner or Qrafter), you scan the QR Code or Barcode on the vessel, and the app automatically returns to the browser and verifies the vessel you just scanned. You can then press the next button to progress to the next vessel. An example of scanning a QR Code from the Barcode Scanner app on Android After scanning the vessel, you can exit the verification view and return to the winery operation by pressing the exit button at the bottom, or the "x" button in the top right corner of the view, as shown in the screenshot below. Returning to the winery operation, you will find feedback from the scan in two different places. All of these sections should be checked by whoever is completing the winery operation to ensure the operation proceeds correctly. Once the vessels have been verified, if the user is in edit or create mode they must save the operation for the verified status to be saved. Otherwise, if the user is in read mode, the changes will be applied automatically.
https://docs.vinsight.net/vessels-verification
2022-09-25T05:34:01
CC-MAIN-2022-40
1664030334514.38
[]
docs.vinsight.net
Viewing Your Dataset Overview of the different views within Aquarium Selecting Data to Review Once you've uploaded a dataset through the Aquarium API, you can go to to begin to explore and understand what's in your dataset. To begin, select your project from the Projects page. Then click the Explore button to select a dataset and/or inference pairing to start reviewing. How to start viewing Aquarium pairs labeled data with corresponding inference data, so each unique labeled dataset uploaded will have the option to click Explore . Grid View The first and default view for dataset exploration is the Grid View . This view lets you quickly look through your dataset and understand what it "looks like" at a glance. Labels, if available, are overlaid over the underlying data. Grid View You can use the "Display Settings" button to toggle settings like label transparency, number of datapoints displayed per row, etc. Accessing the Display Settings Menu Frame View By clicking on an individual datapoint you can access the Frame View and view more detailed information such as its metadata (timestamp, device ID, etc.) and its labels. Navigating the Frame View In the Frame View, you also have a "Similar Data Samples" tab you can pull up and use it to find similar examples to the datapoint you are currently looking at. This makes it easy to analyze patterns you may find interest while reviewing your data. Analysis View The second view for dataset understanding is the Histogram View . You often want to view the distribution of your metadata across your dataset. This is particularly useful for understanding if you have an even spread of classes, times of day, etc. in your dataset. Simply click the dropdown, select a metadata field, and you can see a histogram of the distribution of that value across the dataset. Embedding View The third view for dataset understanding is the Embedding View . The previous methods of data exploration rely a lot on metadata to find interesting parts of your dataset to look at. However, there's not always metadata for important types of variation in your dataset. We can use neural network embeddings to index the raw data in your dataset to better understand its distribution. The embedding view plots variation in the raw underlying data. Each point in the chart represents a single datapoint. In the Image view, each point is a whole image or "row" in your dataset. In the Crop view, each point represents an individual label or inference object that is in a part of the image. The closer points are to each other, the more similar they are. The farther apart they are, the more different. Using the embedding view, you can understand the types of differences in your raw data, find clusters of similar datapoints, and examine outlier datapoints. You can also color the embedding points with metadata to understand the distribution of metadata relative to the distribution of the raw data. Example Embedding view To select a group of points for visualization, you can hold shift + click and draw a lasso around a group of points. You can then scroll through individual examples in the panel with the arrows. You can also adjust the size of the detail panel by dragging the corner. It's also possible to change what part of the image you're looking at in the preview pane. You can zoom in and out of the image preview by "scrolling" like you would with your mouse's scroll wheel / two finger scroll. You can also click and drag the image to pan the view around the image. 
Updating Label Colors If you want to change the color associated with a specific label, you can do so by clicking on the color square next to the label in the "Display Settings" menu: Setting Max Visible Confidence In the "Display Settings" menu, you can adjust the max visible confidence so that only lower confidence inferences appear: Note that the Min Confidence Threshold setting is slightly different from the Max Confidence Visible. In combination with the Min IOU Threshold, it determines how ground truth labels are matched to inference labels for metrics calculations (see Metrics Methodology for more details).
https://docs.aquariumlearning.com/aquarium/working-in-aquarium/exploring-your-dataset
2022-09-25T05:03:31
CC-MAIN-2022-40
1664030334514.38
[]
docs.aquariumlearning.com
DCE Container DCE Container is a new feature introduced in DCE 1.3. It allows you to wrap several content elements based on a certain DCE with a Fluid template. This is especially useful for slider libraries, for example: you may want to create one content element per slide, but need to wrap all slides with a container <div> element which initializes the library. First, you need to enable the feature: Enable DCE Container Enables/disables the DCE Container feature. This option influences the frontend rendering instantly. When enabled, all content elements (tt_content) in a row based on this DCE are wrapped with the DCE container template. The content elements "in the row": are based on the same DCE (with enabled DCE container feature) are located on the same pid are located in the same sys_language_uid are located in the same colPos Any other content element type interrupts the container. Caution Shortcuts are supported, but when your Container starts with a CType:"shortcut" item, it will fail. Note Since DCE version 2.8, DCE container also works inside of the EXT:news detail view. Thanks to web-crossing GmbH for sponsoring. Container item limit You can set an item limit (default: 0 / disabled) to limit the number of content elements a container may have. When the limit is reached, the next content element starts a new container. Hide other container items, when detail page is triggered When the DCE container and the detail page feature are enabled, this option makes it more comfortable to hide all other content elements whose detail page template is not triggered. When this checkbox is enabled, all items in a container are hidden if one item in the container is triggered by the detail page GET parameter. In this case, the container template is still rendered, just with a single item. Other containers are not affected. Template type Like the default frontend template of DCE, you can outsource the code of the container to a file. DCE Container template This template contains the code of the container wrapped around all DCEs within the container.

<div class="dce-container">
    <f:render ... />
</div>

All DCEs in the container are stored inside the variable {dces} which is an array of Dce model instances. So when you iterate over the dces array (using an f:for loop) you can access single field values or render the default template. So this partial is basically this:

<div class="dce-container">
    <f:for each="{dces}" as="dce">
        {dce.render -> f:format.raw()}
    </f:for>
</div>

Container iterator When DCE container is enabled, each DCE has the new attribute containerIterator available, which allows you to get info about the position of the DCE inside of the container, like the iterator you know from the Fluid f:for loop: {dce.containerIterator.total} {dce.containerIterator.index} {dce.containerIterator.cycle} {dce.containerIterator.isEven} {dce.containerIterator.isOdd} {dce.containerIterator.isFirst} {dce.containerIterator.isLast} Container in backend When you are using the Simple Backend View, you get a color mark for each content element: The colors being used can be adjusted using PageTS (on root level):

tx_dce.defaults.simpleBackendView.containerGroupColors {
    10 = #0079BF
    11 = #D29034
    12 = #519839
    13 = #B04632
    14 = #838C91
    15 = #CD5A91
    16 = #4BBF6B
    17 = #89609E
    18 = #00AECC
    19 = #ED2448
    20 = #FF8700
}

By default, DCE provides these color codes, which are picked based on the uid of the first content element in the container.
https://docs.typo3.org/p/t3/dce/main/en-us/UsersManual/DceContainer.html
2022-09-25T04:34:26
CC-MAIN-2022-40
1664030334514.38
[array(['../_images/dce-container.png', 'DCE Container configuration'], dtype=object) array(['../_images/dce-container-in-backend.png', 'DCE Container when Simple Backend View is enabled'], dtype=object)]
docs.typo3.org
DataStax Astra release notes DataStax Astra release notes provide information about new and improved features, known and resolved issues, and bug fixes. DataStax Astra release notes provide information about new and improved features, known and resolved issues, and bug fixes. Beta release Update: Integrated CQLSH On February 27, 2020, we brought the Cassandra Query Language SHell (CQLSH) even closer to you by integrating it directly in the DataStax Cloud console. Navigate to your database, click the CQL Console tab, and issue CQL commands to interact with your database. Update: Standalone CQLSH On January 24, 2020, the standalone version of CQLSH was released for connecting to DataStax Astra databases. What does that mean for you? It means connecting to an Astra database from your laptop without requiring DataStax Enterprise (DSE) or DataStax Distribution of Apache Cassandra™ (DDAC). Previously, you used native CQLSH included in one of those products. Today, you can download CQLSH, download the secure connect bundle for your Astra database, and connect with a single command. Update: Free tier On January 7, 2020, the Free tier was released for DataStax Astra, allowing you to create an Astra database with 10 GB for free. Create a database with just a few clicks and start developing within minutes, no credit card required. Open Beta The Beta release of DataStax Astra brings the ability to develop and deploy data-driven applications with a cloud-native service, built on the best distribution of Apache Cassandra™, without the hassles of database and infrastructure administration. Instead of listing the features included in this release, learn about what DataStax Astra is, and then get started with creating your own database. If you have questions, review the FAQ for answers.
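The release notes above describe connecting with the standalone CQLSH and the secure connect bundle. Purely as an illustration of the same bundle-based connection from application code (not something the notes themselves cover), a sketch assuming the DataStax Java driver 4.x might look like the following; the bundle path, credentials, and keyspace are placeholders.

import java.nio.file.Paths;

import com.datastax.oss.driver.api.core.CqlSession;

public class AstraConnectExample {
    public static void main(String[] args) {
        // Placeholders: point at your downloaded secure connect bundle and your database credentials.
        try (CqlSession session = CqlSession.builder()
                .withCloudSecureConnectBundle(Paths.get("/path/to/secure-connect-database.zip"))
                .withAuthCredentials("username", "password")
                .withKeyspace("my_keyspace")
                .build()) {
            String version = session.execute("SELECT release_version FROM system.local")
                    .one()
                    .getString("release_version");
            System.out.println("Connected, Cassandra release: " + version);
        }
    }
}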
https://docs.datastax.com/en/astra/aws/doc/dscloud/astra/releaseNotes.html
2020-03-29T00:18:31
CC-MAIN-2020-16
1585370493121.36
[]
docs.datastax.com
Gets or sets the customization form's layout when it is painted in Excel2007 style. This is a dependency property. Namespace: DevExpress.Xpf.PivotGrid Assembly: DevExpress.Xpf.PivotGrid.v19.2.dll public FieldListLayout FieldListLayout { get; set; } Public Property FieldListLayout As FieldListLayout Use the PivotGridControl.FieldListAllowedLayouts property to specify which layouts can be applied to the Customization Form. If the PivotGridControl.FieldListStyle property is set to 'Excel2007', end-users can change the arrangement of hidden fields displayed within the customization form, using the Customization Form Layout button (see the image below): If a layout is not specified as allowed, it is hidden from the Customization Form Layout menu, and assigning it to the FieldListLayout property has no effect.
https://docs.devexpress.com/WPF/DevExpress.Xpf.PivotGrid.PivotGridControl.FieldListLayout
2020-03-29T00:05:31
CC-MAIN-2020-16
1585370493121.36
[]
docs.devexpress.com
JBoss.org Community Documentation, RESTEasy 3.11.0.Final. RESTEasy is distributed under the Apache License 2.0. Some dependencies are covered by other open source licenses. RESTEasy is installed and configured in different ways depending on which environment you are running in. If you are running in WildFly, RESTEasy is already bundled and completely integrated, so there is very little you have to do. If you are running in a different environment, there is some manual installation and configuration you will have to do. In WildFly, RESTEasy and the JAX-RS API are automatically loaded into your deployment's classpath if and only if you are deploying a JAX-RS application (as determined by the presence of JAX-RS annotations). However, only some RESTEasy features are automatically loaded. See Table 3.1. If you need any of those libraries which are not loaded automatically, you'll have to bring them in with a jboss-deployment-structure.xml file in the WEB-INF directory of your WAR file. Here's an example:

<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <module name="org.jboss.resteasy.resteasy-jackson-provider" services="import"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>

The services attribute must be set to "import" for modules that have default providers in a META-INF/services/javax.ws.rs.ext.Providers file. RESTEasy is bundled with WildFly, but you may want to upgrade RESTEasy in WildFly to the latest version. The RESTEasy distribution comes with a zip file called resteasy-jboss-modules-<version>.zip. Unzip this file within the modules/system/layers/base/ directory of the WildFly distribution. This will configure WildFly to use new versions of the modules listed in Section 3.1, “RESTEasy modules in WildFly”. RESTEasy is bundled with WildFly and completely integrated as per the requirements of Java EE. You can use it with EJB and CDI and you can rely completely on WildFly to scan for and deploy your JAX-RS services and providers. All you have to provide is your JAX-RS service and provider classes packaged within a WAR either as POJOs, CDI beans, or EJBs. A simple way to configure an application is by simply providing an empty web.xml file. You can of course deploy any custom servlet, filter or security constraint you want to within your web.xml, but none of them are required:

<web-app>
</web-app>

Also, RESTEasy context-params (see Section 3.4, “Configuration switches”) are available if you want to tweak or turn on/off any specific RESTEasy feature. Since we're not using a <servlet-mapping> element, we must define a javax.ws.rs.core.Application class (see Section 3.5, “javax.ws.rs.core.Application”) that is annotated with the javax.ws.rs.ApplicationPath annotation. If you return an empty set for classes and singletons, which is the behavior inherited from Application, your WAR will be scanned for resource and provider classes as indicated by the presence of JAX-RS annotations.

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("/root-path")
public class MyApplication extends Application {
}

Note: See Section 3.1.1, “Other RESTEasy modules”. If you are using RESTEasy outside of WildFly, in a standalone servlet container like Tomcat or Jetty, for example, you will need to include the appropriate RESTEasy jars in your WAR file. You will need the core classes in the resteasy-jaxrs module, and you may need additional facilities like the resteasy-jaxb-provider module.
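Whichever of the deployment styles above you use, the services themselves are plain annotated classes that the container discovers. The class below is a minimal, hypothetical illustration (the resource name and path are made up) using only standard JAX-RS annotations from the javax namespace that RESTEasy 3.x targets.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/customers")
public class CustomerResource {

    // Answers GET {application-path}/customers/{id}, e.g. /root-path/customers/42
    // when combined with the @ApplicationPath("/root-path") class shown above.
    @GET
    @Path("{id}")
    @Produces("text/plain")
    public String getCustomer(@PathParam("id") int id) {
        // A real implementation would look the customer up in a data store.
        return "customer " + id;
    }
}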
We strongly suggest that you use Maven to build your WAR files as RESTEasy is split into a bunch of different modules: <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-jaxrs</artifactId> <version>${resteasy.version}</version> </dependency> <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-jaxb-provider</artifactId> <version>${resteasy.version}</version> </dependency> You can see sample Maven projects in. If you are not using Maven, you can include the necessary jars by hand. If you download RESTEasy (from, for example) you will get a file like resteasy-jaxrs-<version>-all.zip. If you unzip it you will see a lib/ directory that contains the libraries needed by RESTEasy. Copy these, as needed, into your /WEB-INF/lib directory. Place your JAX-RS annotated class resources and providers within one or more jars within /WEB-INF/lib or your raw class files within /WEB-INF/classes. RESTEasy>${resteasy.version}</version> </dependency> The resteasy-servlet-initializer artifact will not work in Servlet versions older than 3.0. You'll then have to manually declare the RESTEasy servlet in your WEB-INF/web.xml file of your WAR project, and you'll have to use an Application class (see Section 3.5, “javax.ws.rs.core.Application”) which explicitly lists resources and providers. For example: . Note. It is likely that support for pre-3.0 Servlet specifications will be deprecated and eliminated eventually. RESTEasy receives configuration options from <context-param> elements. Note.; } } Note. If your web.xml file does not have a <servlet-mapping> element, you must use an Application class annotated with @ApplicationPath.> JAX-RS 2.0 conforming implementations such as RESTEasy support a client side framework which simplifies communicating with restful applications. In RESTEasy, the minimal set of modules needed for the client framework consists of resteasy}</version> </dependency> Other modules, such as resteasy-jaxb-provider, may be brought in as needed. There are a number of ways in which Providers can be supplied to RESTEasy. Application.getClasses()may supply provider classes. Application.getSingletons()may supply provider objects. Applicationreturns empty sets from getClasses()and getSingletons(), classes annotated with @Providerare discovered automatically. RESTEasy also implements the configuration parameter "resteasy.disable.providers", which can be set to a comma delimited list of fully qualified class names of providers that are not meant to be made available. That list may include any providers supplied by any of the means listed above, and it will override them. ". RESTEasy supports @PathParam annotations with no parameter name.. these parameters. See also MatrixParam. A matrix parameter example is: GET;name=EJB 3.0;author=Bill Burke The basic idea of matrix parameters is that it represents resources that are addressable by their attributes as well as their raw id. RESTEasy supports @QueryParam annotations with no parameter name.. parameters. Like PathParam, your parameter type can be an String, primitive, or class that has a String constructor or static valueOf() method. RESTEasy supports @HeaderParam annotations with no parameter name.. template.. RESTEasy supports @CookieParam annotations with no parameter name.. The @CookieParam annotation allows you to inject the value of a cookie or an object representation of an HTTP request cookie into your method invocation GET /books?num=5 @GET public String getBooks(@CookieParam("sessionid") int id) { ... 
} @GET public String. RESTEasy supports @FormParam annotations with no parameter". } } extends Customer { @Path("/businessAddress") public String getAddress() {...} } This is a nice-value> </context-param> ... < do content negotiation based in a parameter in query string. To enable this, the web.xml can be configured like follow: <web-app> <display-name>Archetype Created Web Application</display-name> <context-param> <param-name>resteasy.media.type.param.mapping</param-name> <param-value>someName</param-value> </context-param> ... </web-app> The param-value is the name of the query string parameter that RESTEasy will use in the place of the Accept header. Invoking, will give the application/xml media type the highest priority in the content negotiation. In cases where the request contains both the parameter and the Accept header, the parameter will be more relevant. It is possible to left the param-value empty, what will cause the processor to look for a parameter named 'accept'. RESTEasy can automatically marshal and unmarshal a few different message bodies..: @Provider @Produces("text/plain") @Consumes("text/plain") public class DefaultTextPlain implements MessageBodyReader, MessageBodyWriter { public boolean isReadable(Class type, Type genericType, Annotation[] annotations, MediaType mediaType) { // StringTextStar should pick up strings return !String.class.equals(type) && TypeConverter.isConvertable(type); } public Object readFrom(Class type, Type genericType, Annotation[] annotations, MediaType mediaType, MultivaluedMap httpHeaders, InputStream entityStream) throws IOException, WebApplicationException { InputStream delegate = NoContent.noContentCheck(httpHeaders, entityStream); String value = ProviderHelper.readString(delegate, mediaType); return TypeConverter.getType(type, value); } public boolean isWriteable(Class type, Type genericType, Annotation[] annotations, MediaType mediaType) { // StringTextStar should pick up strings return !String.class.equals(type) && !type.isArray(); } public long getSize(Object o, Class type, Type genericType, Annotation[] annotations, MediaType mediaType) { String charset = mediaType.getParameters().get("charset"); if (charset != null) try { return o.toString().getBytes(charset).length; } catch (UnsupportedEncodingException e) { // Use default encoding. } return o.toString().getBytes(StandardCharsets.UTF_8).length; } public void writeTo(Object o, Class type, Type genericType, Annotation[] annotations, MediaType mediaType, MultivaluedMap httpHeaders, OutputStream entityStream) throws IOException, WebApplicationException { String charset = mediaType.getParameters().get("charset"); if (charset == null) entityStream.write(o.toString().getBytes(StandardCharsets.UTF_8)); else entityStream.write(o.toString().getBytes(charset)); } }; ... } context parameter "resteasy.add.charset" may be set to "false". It defaults to "true". Note. By "text" media types, we mean The latter set includes "application/xml-external-parsed-entity" and "application/xml-dtd". describes how the selection process works. @XmlRootEntity When a class is annotated with a @XmlRootElement annotation, RESTEasy will select the JAXBXmlRootElementProvider. This provider handles basic marshaling, if convenience; }.. As a consumer of XML datasets, JAXB is subject to a form of attack known as the XXE (Xml eXternal Entity) Attack (), in which expanding an external entity causes an unsafe file to be loaded. 
Preventing the expansion of external entities is discussed in Section 21.4, “Configuring Document Marshalling”. The same context.); } } Besides the Jettision JAXB adapter for JSON, RESTEasy also supports integration with the Jackson project. Many users find the output from Jackson much nicer than the Badger format or Mapped format provided by Jettison. For more on Jackson 2, see. Besides JAXB like APIs, it has a JavaBean based model, described at, which allows you to easily marshal Java objects to and from JSON. RESTEasy integrates with the JavaBean model. While Jackson does come with its own JAX-RS integration, RESTEasy expanded it a little, as decribed below. NOTE. The resteasy-jackson-provider module, which is based on the outdated Jackson 1.9.x, is currently deprecated, and will be removed in a release subsequent to 3.1.0.Final. The resteasy-jackson2-provider module is based on Jackson 2. If you're deploying RESTEasy outside of WildFly, add the RESTEasy Jackson provder to your WAR pom.xml build: <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-jackson-provider</artifactId> <version>${version.resteasy}</version> </dependency> If you're deploying RESTEasy with WildFly 8, there's nothing you need to do except to make sure you've updated your installation with the latest and greatest RESTEasy. See the Installation/Configuration section of this documentation for more details. If you're deploying RESTEasy outside of WildFly, add the RESTEasy Jackson provder to your WAR pom.xml build: <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-jackson2-provider</artifactId> <version>${version.resteasy}</version> </dependency> If you're deploying RESTEasy with WildFly 9 or above, there's nothing you need to do except to make sure you've updated your installation with the latest and greatest RESTEasy. See the Installation/Configuration section of this documentation for more details Note. Because JSONP can be used in Cross Site Scripting Inclusion (XSSI) attacks, Jackson2JsonpInterceptor is disabled by default. Two steps are necessary to enable it: Jackson2JsonpInterceptormust be included in the deployment. For example, a service file META-INF/services/javax.ws.rs.ext.Providers with the line org.jboss.resteasy.plugins.providers.jackson.Jackson2JsonpInterceptormay be included on the classpath If you are using the Jackson 2". In Jackson2 , there is new feature JsonFilter to allow annotate class with @JsonFilter and doing dynamic filtering. Here is an example which defines mapping from "nameFilter" to filter instances and filter bean properties when serilize to json format: @JsonFilter(value="nameFilter") public class Jackson2Product { protected String name; protected int id; public Jackson2Product() { } public Jackson2Product(final int id, final String name) { this.id = id; this.name = name; } public String getName() { return name; } public void setName(String name) { this.name = name; } public int getId() { return id; } public void setId(int id) { this.id = id; } } @JsonFilter annotates resource class to filter out some property not to serialize in the json response. 
To map the filter id and instance we need to create another jackson class to add the id and filter instance map: public class ObjectFilterModifier extends ObjectWriterModifier { public ObjectFilterModifier() { } @Override public ObjectWriter modify(EndpointConfigBase<?> endpoint, MultivaluedMap<String, Object> httpHeaders, Object valueToWrite, ObjectWriter w, JsonGenerator jg) throws IOException { FilterProvider filterProvider = new SimpleFilterProvider().addFilter( "nameFilter", SimpleBeanPropertyFilter.filterOutAllExcept("name")); return w.with(filterProvider); } } Here the method modify() will take care of filtering all properties except "name" property before write. To make this work, we need let RESTEasy know this mapping info. This can be easily set in a WriterInterceptor using Jackson's ObjectWriterInjector: @Provider public class JsonFilterWriteInterceptor implements WriterInterceptor{ private ObjectFilterModifier modifier = new ObjectFilterModifier(); @Override public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException { //set a threadlocal modifier ObjectWriterInjector.set(modifier); context.proceed(); } }: public class ObjectWriterModifierFilter implements Filter { private static ObjectFilterModifier modifier = new ObjectFilterModifier(); @Override public void init(FilterConfig filterConfig) throws ServletException { } @Override public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {..11.0.Final</version> </dependency> It has built in support for JsonObject, JsonArray, and JsonStructure as request or response entities. It should not conflict with Jackson or Jettison if you have that in your path too.:); }}); } input.close(); } }>>() {}); } input.close(); } }) { ... } .(MultipartConstants(MultipartConstants.MULTIPART_RELATED) to tell RESTEasy that we want to send multipart/related packages (that's(MultipartConstants.MULTIPART_RELATED) public void putXopWithMultipartRelated(@XopWithMultipartRelated Xop xop) { // do very important things here } } We used @Consumes(MultipartConstants dynamically; } } { ResteasyClient client = new ResteasyClientBuilder().build(); { ResteasyClient client = new ResteasyClientBuilder().build();(); } }.> { public supports (though not by default - see below) GZIP decompression. If properly configured, the client framework or a JAX-RS service, upon receiving a message body with a Content-Encoding of "gzip", will automatically decompress it. The client framework can (though not by default - see below) automatically set the Accept-Encoding header to be "gzip, deflate" so you do not have to set this header yourself. RESTEasy also supports (though not by default - see below) automatic compression. If the client framework is sending a request or the server is sending a response with the Content-Encoding header set to "gzip", RESTEasy will (if properly configured)() {...} } Note. Decompression carries a risk of attack from a bad actor that can package an entity that will expand greatly. Consequently, RESTEasy disables GZIP compression / decompression by default. There are three interceptors that are relevant to GZIP compression / decompression: GZIPDecodingInterceptorwill install an InputStreamthat decompresses the message body. GZIPEncodingInterceptorwill install an OutputStreamthat compresses the message body. AcceptEncodingGZIPFilterwill add Accept-Encoding with the value "gzip, deflate". 
If the Accept-Encoding header exists but does not contain "gzip", AcceptEncodingGZIPFilter will append ", gzip". Note that enabling GZIP compression / decompression does not depend on the presence of this interceptor. If GZIP decompression is enabled, an upper limit is imposed on the number of bytes GZIPDecodingInterceptor will extract from a compressed message body. The default limit is 10,000,000, but a different value can be configured. See below. The interceptors may be enabled by including their classnames in a META-INF/services/javax.ws.rs.ext.Providers file on the classpath. The upper limit on deflated files may be configured by setting the web application context parameter "resteasy.gzip.max.input". If the limit is exceeded on the server side, GZIPDecodingInterceptor will return a Response with status 413 ("Request Entity Too Large") and a message specifying the upper limit. Note: As of release 3.1.0.Final, the GZIP interceptors have moved from package org.jboss.resteasy.plugins.interceptors.encoding to org.jboss.resteasy.plugins.interceptors, and they should be named accordingly in javax.ws.rs.ext.Providers. On the client side, the interceptors are registered programmatically:

Client client = new ResteasyClientBuilder()
        // Activate gzip compression on client:
        .register(AcceptEncodingGZIPFilter.class)
        .register(GZIPDecodingInterceptor.class)
        .register(GZIPEncodingInterceptor.class)
        .build();

The upper limit on deflated files may be configured by creating an instance of GZIPDecodingInterceptor with a specific value:

Client client = new ResteasyClientBuilder()
        // Activate gzip compression on client:
        .register(AcceptEncodingGZIPFilter.class)
        .register(new GZIPDecodingInterceptor(256))
        .register(GZIPEncodingInterceptor.class)
        .build();

If the limit is exceeded on the client side, GZIPDecodingInterceptor will throw a ProcessingException with a message specifying the upper limit.

{
    ResteasyClient client = new ResteasyClientBuilder().build();
    Invocation.Builder request = client.target("").request();
    request.acceptEncoding("gzip,compress");
    Response response = request.get();
    System.out.println("content-encoding: " + response.getHeaderString("Content-Encoding"));
    client.close();
}

the output will be: content-encoding: compress

RESTEasy provides a server-side cache for GET responses. The cache is also automatically invalidated for a particular URI that has PUT, POST, or DELETE invoked on it. You can also obtain a reference to the cache by injecting a org.jboss.resteasy.plugins.cache.ServerCache via the @Context annotation:

@Context ServerCache cache;

@GET
public String get(@Context ServerCache cache) {...}

JAX-RS 2.0 has two different concepts for interceptions: Filters and Interceptors. Filters are mainly used to modify or process incoming and outgoing request headers or response headers. They execute before and after request and response processing. Pre-matching ContainerRequestFilters can modify the request before it is matched to a specific resource method (i.e. strip .xml and add an Accept header). ContainerRequestFilters can abort the request by calling ContainerRequestContext.abortWith(Response). A filter might want to abort if it implements a custom authentication protocol. After the resource class method is executed, JAX-RS will run all ContainerResponseFilters. These filters allow you to modify the outgoing response before it is marshalled and sent to the client. So given all that, here's some pseudo code to give some understanding of how things work.
// execute pre match filters for (ContainerRequestFilter filter : preMatchFilters) { filter.filter(requestContext); if (isAborted(requestContext)) { sendAbortionToClient(requestContext); return; } } // match the HTTP request to a resource class and method JaxrsMethod method = matchMethod(requestContext); // Execute post match filters for (ContainerRequestFilter filter : postMatchFilters) { filter.filter(requestContext); if (isAborted(requestContext)) { sendAbortionToClient(requestContext); return; } } // execute resource class method method.execute(request); // execute response filters for (ContainerResponseFilter filter : responseFilters) { filter.filter(requestContext, responseContext); }). ClientRequestFilters are also allowed to abort the execute of the request and provide a canned response without going over the wire to the server. ClientResponseFilters can modfiy the Response object before it is handed back to application code. Here's some pseudo code to illustrate things. // execute request filters for (ClientRequestFilter filter : requestFilters) { filter.filter(requestContext); if (isAborted(requestContext)) { return requestContext.getAbortedResponseObject(); } } // send request over the wire response = sendRequest(request); // execute response filters for (ClientResponseFilter filter : responseFilters) { filter.filter(requestContext, responseContext); }. @NameBinding works a lot like CDI interceptors. You annotate a custom annotation with @NameBinding and then apply that custom annotation to your filter and resource method.() {...} } Asynchronous HTTP Request Processing is a relatively new technique that allows you to process a single HTTP request using non-blocking I/O and, if desired in separate threads. Some refer to it as COMET capabilities. The primary use case may actually hurt your performance in most common scenarios), but when you start getting a lot of concurrent clients that are blocking like this, there’s a lot of wasted resources and your server does not scale that well.) { response.resume(e); } } }; t.start(); } } AsyncResponse also has other methods to cancel the execution. See javadoc for more details. NOTE: The old RESTEasy proprietary API for async http has been deprecated and may be removed as soon as RESTEasy 3.1. In particular, the RESTEasy @Suspend annotation is replaced by javax.ws.rs.container.Suspended, and org.jboss.resteasy.spi.AsynchronousResponse is replaced by javax.ws.rs.container.AsyncResponse. Note that @Suspended does not have a value field, which represented a timeout limit. Instead, AsyncResponse.setTimeout() may be called.. Asynchronous Job Service. You must use XML declarative security within your web.xml file. Why? It is impossible to implement role-based security portably. In the future, we may have specific JBoss integration, but will not support other environments. NOTE. A SecureRandom object is used to generate unique job ids. For security purposes, the SecureRandom is periodically reseeded. By default, it is reseeded after 100 uses. This value may be configured with the servlet init parameter "resteasy.secure.random.max.use". Asynchronous Job Service. You must use XML declaritive security within your web.xml file. Why? It is impossible to implement role-based security portably. In the future, we may have specific JBoss integration, but will not support other environments. You must enable the Asynchronous> ... </web-app>ProviderFactory.getContextData() are sensitive to the executing thread. 
For example, given resource method @GET @Path("test") @Produces("text/plain") public CompletionStage<String> text(@Context HttpRequest request) { System.out.println("request (inline): " + request); System.out.println("application (inline): " + ResteasyProviderFactoryProviderFactoryProviderFactory.getContextData(Application.class)); CompletableFuture<String> cs = new CompletableFuture<>(); ExecutorService executor = Executors.newSingleThreadExecutor(); final String httpMethodFinal = request.getHttpMethod(); final Map<String, Object> mapFinal = ResteasyProviderFactory.getContextData(Application.class).getProperties(); executor.submit( new Runnable() { public void run() { System.out.println("httpMethod (async): " + httpMethodFinal); System.out.println("map (async): " + mapFinal); cs.complete("hello"); } }); modules resteasy-rxjava1 and resteasy-rxjava2 add support for RxJava 1 and 2. [Only resteasy-rxjava2 will be discussed here, since resteasy-rxjava1 is deprecated, but the treatment of the two is quite similar.] In particular, = new ResteasyClientBuilder().build(); = new ResteasyClientBuilder().build();. RESTEasy.11.11.0.Final</version> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.5</version> </dependency>(8080); tjws.start(); tjws.getRegistry().addPerRequestResource(RestEasy485Resource.class); } }); } NOTE: TJWS is now deprecated. Consider using the more modern Undertow..11.0.Final</version> </dependency> RESTEasy has integration with the popular Vert.x project as well.. public static void start(VertxResteasyDeployment deployment) throws Exception { VertxJaxrsServer server = new VertxJaxrsServer(); server.setDeployment(deployment); server.setPort(TestPortProvider.getPort()); server.setRootResourcePath(""); server.setSecurityDomain(null); server.start(); } Maven project you must include is: <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-vertx</artifactId> <version>3.11.0.Final</version> </dependency> The server will bootstrap its own Vert.x instance and Http server. When a resource is called, it is done with the Vert.x Event Loop thread, keep in mind to not block this thread and respect the Vert.x programming model, see the related Vert.x manual page. Vert.x extends the RESTEasy registry to provide a new binding scope that creates resources per Event Loop: VertxResteasyDeployment deployment = new VertxResteasyDeployment(); // Create an instance of resource per Event Loop deployment.getRegistry().addPerInstanceResource(Resource.class); The per instance binding scope caches the same resource instance for each event loop providing the same concurrency model than a verticle deployed multiple times. 
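To make the per-instance scope concrete, here is a minimal sketch of a resource that relies on it; the class name and the counter field are made up for illustration. Because one instance is created per event loop, the mutable field is only ever touched by that event loop's thread, so no synchronization is needed:

@Path("/counter")
public class CounterResource {

    // one CounterResource exists per Vert.x event loop, so this field is
    // confined to that event loop's thread
    private long hits;

    @GET
    @Produces("text/plain")
    public String hits() {
        return Long.toString(++hits);
    }
}

// registered with the per-instance scope shown above:
// deployment.getRegistry().addPerInstanceResource(CounterResource.class);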
Vert.x can also embed a RESTEasy deployment, making easy to use Jax-RS annotated controller in Vert.x applications: Vertx vertx = Vertx.vertx(); HttpServer server = vertx.createHttpServer(); // Set an handler calling Resteasy server.requestHandler(new VertxRequestHandler(vertx, deployment)); // Start the server server.listen(8080, "localhost"); Vert.x objects can be injected in annotated resources: @GET @Path("/somepath") @Produces("text/plain") public String context( @Context io.vertx.core.Context context, @Context io.vertx.core.Vertx vertx, @Context io.vertx.core.http.HttpServerRequest req, @Context io.vertx.core.http.HttpServerResponse resp) { return "the-response"; }> JSON Web Signature and Encryption (JOSE JWT) is a new specification that can be used to encode content as a string and either digitally sign or encrypt it. I won't go over the spec here Do a Google search on it if you); } flexibility delim-crypto project to use the digital signature framework. <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-crypto</artifactId> <version>3.11.0.Final< client multiple annotation. official. S/MIME (Secure/Multipurpose Internet Mail Extensions) is a standard for public key encryption and signing of MIME data. MIME data being a set of headers and a message body. Its most often seen in the email world when somebody wants to encrypt and/or sign an email message they are sending across the internet. It can also be used for HTTP requests as well which is what the RESTEasy integration with S/MIME is all about. RESTEasy allows you to easily encrypt and/or sign an email message using the S/MIME standard. While the API is described here, you may also want to check out the example projects that come with the RESTEasy distribution. It shows both Java and Python clients exchanging S/MIME formatted messages with a JAX-RS service. You must include the resteasy-crypto project to use the smime framework. <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-crypto</artifactId> <version>3.11.0.Final</version> </dependency> While HTTPS is used to encrypt the entire HTTP message, S/MIME encryption is used solely for the message body of the HTTP request or response. This is very useful if you have a representation that may be forwarded by multiple parties (for example, HornetQ's REST Messaging integration!) and you want to protect the message from prying eyes as it travels across the network. RESTEasy has two different interfaces for encrypting message bodies. One for output, one for input. If your client or server wants to send an HTTP request or response with an encrypted body, it uses the org.jboss.resteasy.security.smime.EnvelopedOutput type. Encrypting a body also requires an X509 certificate which can be generated by the Java keytool command-line interface, or the openssl tool that comes installed on many OS's. 
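The encryption and signing examples below leave the certificate and private key lookup as an ellipsis. As a sketch, assuming the certificate and key were imported into a JKS keystore (the path, password and alias are placeholders, and exception handling is omitted), they could be loaded with the standard java.security API:

KeyStore keyStore = KeyStore.getInstance("JKS");
try (InputStream in = new FileInputStream("/path/to/smime.jks")) {
    keyStore.load(in, "changeit".toCharArray());
}
X509Certificate certificate = (X509Certificate) keyStore.getCertificate("smime");
PrivateKey privateKey = (PrivateKey) keyStore.getKey("smime", "changeit".toCharArray());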
Here's an example of using the EnvelopedOutput interface: // server side @Path("encrypted") @GET public EnvelopedOutput getEncrypted() { Customer cust = new Customer(); cust.setName("Bill"); X509Certificate certificate = ...; EnvelopedOutput output = new EnvelopedOutput(cust, MediaType.APPLICATION_XML_TYPE); output.setCertificate(certificate); return output; } // client side X509Certificate cert = ...; Customer cust = new Customer(); cust.setName("Bill"); EnvelopedOutput output = new EnvelopedOutput(cust, "application/xml"); output.setCertificate(cert); Response res = target.request().post(Entity.entity(output, "application/pkcs7-mime").post(); An EnvelopedOutput instance is created passing in the entity you want to marshal and the media type you want to marshal it into. So in this example, we're taking a Customer class and marshalling it into XML before we encrypt it. RESTEasy will then encrypt the EnvelopedOutput using the BouncyCastle framework's SMIME integration. The output is a Base64 encoding and would look something like this: Content-Type: application/pkcs7-mime; smime-type=enveloped-data; name="smime.p7m" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="smime.p7m" MIAGCSqGSIb3DQEHA6CAMIACAQAxgewwgekCAQAwUjBFMQswCQYDVQQGEwJBVTETMBEGA1UECBMK U29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkAgkA7oW81OriflAw DQYJKoZIhvcNAQEBBQAEgYCfnqPK/O34DFl2p2zm+xZQ6R+94BqZHdtEWQN2evrcgtAng+f2ltIL xr/PiK+8bE8wDO5GuCg+k92uYp2rLKlZ5BxCGb8tRM4kYC9sHbH2dPaqzUBhMxjgWdMCX6Q7E130 u9MdGcP74Ogwj8fNl3lD4sx/0k02/QwgaukeY7uNHzCABgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcE CDRozFLsPnSgoIAEQHmqjSKAWlQbuGQL9w4nKw4l+44WgTjKf7mGWZvYY8tOCcdmhDxRSM1Ly682 Imt+LTZf0LXzuFGTsCGOUo742N8AAAAAAAAAAAAA Decrypting an S/MIME encrypted message requires using the org.jboss.resteasy.security.smime.EnvelopedInput interface. You also need both the private key and X509Certificate used to encrypt the message. Here's an example: // server side @Path("encrypted") @POST public void postEncrypted(EnvelopedInput<Customer> input) { PrivateKey privateKey = ...; X509Certificate certificate = ...; Customer cust = input.getEntity(privateKey, certificate); } // client side ClientRequest request = new ClientRequest(""); EnvelopedInput input = request.getTarget(EnvelopedInput.class); Customer cust = (Customer)input.getEntity(Customer.class, privateKey, cert); Both examples simply call the getEntity() method passing in the PrivateKey and X509Certificate instances requires to decrypt the message. On the server side, a generic is used with EnvelopedInput to specify the type to marshal to. On the server side this information is passed as a parameter to getEntity(). The message is in MIME format: a Content-Type header and body, so the EnvelopedInput class now has everything it needs to know to both decrypt and unmarshall the entity. S/MIME also allows you to digitally sign a message. It is a bit different than the Doseta Digital Signing Framework. Doseta is an HTTP header that contains the signature. S/MIME uses the multipart/signed data format which is a multipart message that contains the entity and the digital signature. So Doseta is a header, S/MIME is its own media type. Generally I would prefer Doseta as S/MIME signatures require the client to know how to parse a multipart message and Doseta doesn't. Its up to you what you want to use. RESTEasy has two different interfaces for creating a multipart/signed message. One for input, one for output. 
If your client or server wants to send an HTTP request or response with an multipart/signed body, it uses the org.jboss.resteasy.security.smime.SignedOutput type. This type requires both the PrivateKey and X509Certificate to create the signature. Here's an example of signing an entity and sending a multipart/signed entity. // server-side @Path("signed") "); An SignedOutput instance is created passing in the entity you want to marshal and the media type you want to marshal it into. So in this example, we're taking a Customer class and marshalling it into XML before we sign it. RESTEasy will then sign the SignedOutput using the BouncyCastle framework's SMIME integration. The output iwould look something like this: Content-Type: multipart/signed; ------=_Part_0_1083228271.1313024422098 Content-Type: application/pkcs7-signature; name=smime.p7s; smime-type=signed-dataAMYIBVzCCAVMC AQEwUjBFMQswCQYDVQQGEwJBVTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJu ZXQgV2lkZ2l0cyBQdHkgTHRkAgkA7oW81OriflAwCQYFKw4DAhoFAKBdMBgGCSqGSIb3DQEJAzEL BgkqhkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTExMDgxMTAxMDAyMlowIwYJKoZIhvcNAQkEMRYE FH32BfR1l1vzDshtQvJrgvpGvjADMA0GCSqGSIb3DQEBAQUABIGAL3KVi3ul9cPRUMYcGgQmWtsZ 0bLbAldO+okrt8mQ87SrUv2LGkIJbEhGHsOlsgSU80/YumP+Q4lYsVanVfoI8GgQH3Iztp+Rce2c y42f86ZypE7ueynI4HTPNHfr78EpyKGzWuZHW4yMo70LpXhk5RqfM9a/n4TEa9QuTU76atAAAAAA AAA= ------=_Part_0_1083228271.1313024422098-- To unmarshal and verify a signed message requires using the org.jboss.resteasy.security.smime.SignedInput interface. You only need the X509Certificate to verify the message. Here's an example of unmarshalling and verifying a multipart/signed entity. // server side @Path("signed") @POST .framework>3.11."/> .... You can specify resteasy configuration options by overriding the resteasy.deployment bean which is an instance of ResteasyDeployment. Here's an example of adding media type suffix mappings as well as enabling the RESTEasy asynchronous job service. <beans xmlns="" xmlns: <!--="asyncJobServiceEnabled" value="true"/> <property name="mediaTypeMappings"> <map> <entry key="json" value="application/json" /> <entry key="xml" value="application/xml" /> </map> </property> </bean> ... A JAX-RS Application subclass. Configuring a web application of this type requires a web.xml and spring-servlet.xml file and a reference to springmvc-resteasy.xml. A servlet definition is required for both the Spring DispatcherServlet and the> The spring-servlet.xml file must import springmvc-resteasy.xml, however this file does not need to be present in the archive. In addition a component-scan, declaration of the packages that contain you application classes is needed. At minimum your spring-servlet.xml should contain these statements. <beans> <import resource="classpath:springmvc-resteasy.xml"/> <context:component-scan </beans> The RESTEasy project does not include its own component for Spring Boot integration, however PayPal has developed a very interesting RESTEasy Spring Boot starter and shared it with the community. You can see below an example of how to use it. Please refer to the relevant documentation on GitHub for further information. First, add dependency com.paypal.springboot:resteasy-spring-boot-starter to your Spring Boot application. It is recommended to you use the latest version. Second, optionally you can register one or more JAX-RS application classes. To do so, just define it as a Spring bean, and it will be automatically registered. See the example below. resteasy-springMVC as demo app. Note. 
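As a sketch of that registration, the application class can be as small as an empty JAX-RS Application subclass declared as a Spring bean; the class name and @ApplicationPath value below are illustrative placeholders, and the starter's own documentation should be consulted for the exact conventions it supports:

@Component
@ApplicationPath("/rest")
public class JaxrsApplication extends Application {
    // no members are required; declaring the class as a Spring bean is enough
    // for the starter to register it
}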
As noted inSection 3.1.2, “Upgrading RESTEasy within WildFly”, the RESTEasy distribution comes with a zip file called resteasy-jboss-modules-<version>.zip, which can be unzipped into the modules/system/layers/base/ directory of WildFly to upgrade to a new version of RESTEasy. Because of the way resteasy-spring is used in WildFly, after unzipping the zip file, it is also necessary to remove the old resteasy-spring jar from modules/system/layers/base/org/jboss/resteasy/resteasy-spring/main/bundled/resteasy-spring-jar. WildFly. has some simple integration with Guice 3. Add the RequestScopeModule to your modules to allow objects to be scoped to the HTTP request by adding the @RequestScoped annotation to your(); } }! ResteasyClient client = new ResteasyClientBuilder().build(); ResteasyWeb = new ResteasyClientBuilder().build(); ResteasyWebTarget target = client.target(""); target.setChunked(b.booleanValue()); Invocation.Builder request = target.request(); Alternatively, it is possible to configure a particular request to be sent in chunked mode: ResteasyClient client = new ResteasyClientBuilder().build(); and the older ApacheHttpClient4Engine, both in package org.jboss.resteasy.client.jaxrs.engines, support chunked mode. See Section Apache HTTP Client 4.x and other backends for more); client.putBasic("hello world"); Alternatively you can use the RESTEasy client extension interfaces directly: ResteasyClient client = new ResteasyClientBuilder().build(); ResteasyWebTarget target = client.target(""); SimpleClient simple = target.proxy(SimpleClient.class); client("")).build(); = new ResteasyClientBuilder().build();Client4Engine is an implementation that uses the pre-Apache 4.3 version, to provide backward compatibility. RESTEasy automatically selects one of these two ClientHttpEngine implementations based upon the detection of the Apache version. = new RESTEasyClient and ApacheHttpClient4and ApacheHttpClient44Engine engine = new ApacheHttpClient4Engine(httpClient, = new ResteasyClientBuilder().httpEngine(engine).build();()).httpEngine(myEngine).build(); Apache pre-4.3 HttpClient implementation uses org.apache.http.impl.conn.SingleClientConnManager to manage a single socket and allows org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager to replace SingleClientConnManager for multithreaded applications. SingleClientConnManager manages a single socket at any given time and supports the use case in which one or more invocations are made serially from a single thread. Here is an example of replacing the SingleClientConnManager with ThreadSafeClientConnManager in ApacheHttpClient4Engine. ClientConnectionManager cm = new ThreadSafeClientConnManager(); HttpClient httpClient = new DefaultHttpClient(cm); ApacheHttpClient4Engine engine = new ApacheHttpClient4Engine(httpClient);4. The Apache 4.3 HttpClient implementation uses org.apache.http.impl.conn.BasicHttpClientConnectionManager to manage a single socket and org.apache.http.impl.conn.PoolingHttpClientConnectionManager to service connection requests from multiple execution threads. RESTEasy's ClientHttpclientBuilder43 and ApacheHttpClient43Engine uses them as well. RESTEasy's default async engine implementation class is ApacheHttpAsyncClient4Engine. It can be set as the active engine by calling method useAsyncHttpEngine in ResteasyClientBuilder. 
Client asyncClient = new ResteasyClientBuilder().useAsyncHttpEngine() .build(); Future<Response> future = asyncClient .target("").request() l tested, but if = new ResteasyClientBuilder().clientEngine( new JettyClientEngine(new HttpClient())) 54 54 54 54 54 54 54); }); RESTEasy has its own support to generate WADL for its resources, and it supports several different containers. The following text will show you how to use this feature in different containers. RESTEasy WADL uses ResteasyWadlServlet to support servlet container. It can be registered into web.xml to enable WADL feature. Here is an example to show the usages of ResteasyWadlServlet in web.xml: <servlet> <servlet-name>RESTEasy WADL</servlet-name> <servlet-class>org.jboss.resteasy.wadl.ResteasyWadlServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>RESTEasy WADL</servlet-name> <url-pattern>/application.xml</url-pattern> </servlet-mapping> The preceding configuration in web.xml shows how to enable ResteasyWadlServlet and mapped it to /application.xml. And then the WADL can be accessed from the configured URL: /application.xml RESTEasy has provided a ResteasyWadlDefaultResource to generate WADL info for its embedded containers. Here is and example to show how to use it with RESTEasy's Sun JDK HTTP Server container: com.sun.net.httpserver.HttpServer httpServer = com.sun.net.httpserver.HttpServer.create(new InetSocketAddress(port), 10); org.jboss.resteasy.plugins.server.sun.http.HttpContextBuilder contextBuilder = new org.jboss.resteasy.plugins.server.sun.http.HttpContextBuilder(); contextBuilder.getDeployment().getActualResourceClasses() .add(ResteasyWadlDefaultResource.class); contextBuilder.bind(httpServer); ResteasyWadlDefaultResource.getServices() .put("/", ResteasyWadlGenerator .generateServiceRegistry(contextBuilder.getDeployment())); httpServer.start(); From the above code example, we can see how ResteasyWadlDefaultResource is registered into deployment: contextBuilder.getDeployment().getActualResourceClasses() .add(ResteasyWadlDefaultResource.class); Another important thing is to use ResteasyWadlGenerator to generate the WADL info for the resources in deployment at last: ResteasyWadlDefaultResource.getServices() .put("/", ResteasyWadlGenerator .generateServiceRegistry(contextBuilder.getDeployment())); After the above configuration is set, then users can access "/application.xml" to fetch the WADL info, because ResteasyWadlDefaultResource has @PATH set to "/application.xml" as default: @Path("/application.xml") public class ResteasyWadlDefaultResource RESTEasy WADL support for Netty Container is simliar to the support for JDK HTTP Server. It also uses ResteasyWadlDefaultResource to serve '/application.xml' and ResteasyWadlGenerator to generate WADL info for resources. Here is the sample code: ResteasyDeployment deployment = new ResteasyDeployment(); netty = new NettyJaxrsServer(); netty.setDeployment(deployment); netty.setPort(port); netty.setRootResourcePath(""); netty.setSecurityDomain(null); netty.start(); deployment.getRegistry() .addPerRequestResource(ResteasyWadlDefaultResource.class); ResteasyWadlDefaultResource.getServices() .put("/", ResteasyWadlGenerator.generateServiceRegistry(deployment)); Please note for all the embedded containers like JDK HTTP Server and Netty Container, if the resources in the deployment changes at runtime, the ResteasyWadlGenerator.generateServiceRegistry() need to be re-run to refresh the WADL info. 
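For example, if a resource class is added to the Netty deployment shown above after the server has started, the WADL registry can be refreshed with the same call used at startup (the resource class name here is just a placeholder):

// add a new resource at runtime
deployment.getRegistry().addPerRequestResource(AnotherResource.class);

// re-generate the WADL service registry so that /application.xml reflects the change
ResteasyWadlDefaultResource.getServices()
    .put("/", ResteasyWadlGenerator.generateServiceRegistry(deployment));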
The RESTEasy Undertow Container is a embedded Servlet Container, and RESTEasy WADL provides a connector to it. To use RESTEasy Undertow Container together with WADL support, you need to add these three components into your maven dependencies: <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-wadl</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-wadl-undertow-connector</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-undertow</artifactId> <version>${project.version}</version> </dependency> The resteasy-wadl-undertow-connector provides a WadlUndertowConnector to help you to use WADL in RESTEasy Undertow Container. Here is the code example: UndertowJaxrsServer server = new UndertowJaxrsServer().start(); WadlUndertowConnector connector = new WadlUndertowConnector(); connector.deployToServer(server, MyApp.class); The MyApp class shown in above code is a standard JAX-RS 2.0 Application class in your project: @ApplicationPath("/base") public static class MyApp extends Application { @Override public Set<Class<?>> getClasses() { HashSet<Class<?>> classes = new HashSet<Class<?>>(); classes.add(YourResource.class); return classes; } } After the Application is deployed to the UndertowJaxrsServer via WadlUndertowConnector, you can access the WADL info at "/application.xml" prefixed by the @ApplicationPath in your Application class. If you want to override the @ApplicationPath, you can use the other method in WadlUndertowConnector: public UndertowJaxrsServer deployToServer(UndertowJaxrsServer server, Class<? extends Application> application, String contextPath) The "deployToServer" method shown above accepts a "contextPath" parameter, which you can use to override the @ApplicationPath value in the Application class. RESTEasy provides the support for validation mandated by the JAX-RS: Java API for RESTful Web Services 2.1 , given the presence of an implementation of the Bean Validation specification such as Hibernate Validator. module resteasy-validator-provider supplies an implementation of GeneralValidator.); } The validator in resteasy-validator-provider implements.11.0.Final with the current RESTEasy version you want to use. 
<repositories>
   <repository>
      <id>jboss</id>
      <url></url>
   </repository>
</repositories>

<dependencies>
   <!-- core library -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jaxrs</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-client</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
   <!-- optional modules -->
   <!-- JAXB support -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jaxb-provider</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
   <!-- multipart/form-data and multipart/mixed support -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-multipart-provider</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
   <!-- RESTEasy Server Cache -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-cache-core</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
   <!-- Ruby YAML support -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-yaml-provider</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
   <!-- JAXB + Atom support -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-atom-provider</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
   <!-- Spring integration -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-spring</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
   <!-- Guice integration -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-guice</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
   <!-- Asynchronous HTTP support with Servlet 3.0 -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>async-http-servlet-3.0</artifactId>
      <version>3.11.0.Final</version>
   </dependency>
</dependencies>

RESTEasy also publishes a BOM that can be imported into the dependencyManagement section of a POM so that individual module versions do not have to be repeated:

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.jboss.resteasy</groupId>
         <artifactId>resteasy-bom</artifactId>
         <version>3.11.0.Final</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

A number of security related modules are deprecated and are meant to be removed in the future. Please refer to the project documentation for a modern solution to the use cases previously covered by the deprecated modules.

The RESTEasy 3.1.0.Final release comes with many changes compared to previous 3.0 point releases, and a number of them are discernible to users. In this chapter we focus on changes that might cause existing code to fail or behave in new ways. The audience for this discussion may be partitioned into three subsets, depending on the version of RESTEasy currently in use, the API currently in use, and the API to be used after an upgrade to RESTEasy 3.1. The following APIs are available:

RESTEasy 2: RESTEasy 2 conforms to the JAX-RS 1 specification, and adds a variety of additional facilities, such as a client API, a caching system, an interceptor framework, etc. All of these user facing classes and interfaces comprise the RESTEasy 2 API.

RESTEasy 3: RESTEasy 3 conforms to the JAX-RS 2 specification, and adds some additional facilities. Many of the non-spec facilities from the RESTEasy 2 API are formalized, in altered form, in JAX-RS 2, in which case the older facilities are deprecated. The non-deprecated user facing classes and interfaces in RESTEasy 3 comprise the RESTEasy 3 API.

These definitions are rather informal and imprecise, since the user facing classes / interfaces in RESTEasy 3.0.19.Final, for example, are a proper superset of the user facing classes / interfaces in RESTEasy 3.0.1.Final. For this discussion, we identify the API with the version currently in use in a given project.
Now, there are three potential target audiences of users planning to upgrade to RESTEasy 3.1.0.Final: Those currently using RESTEasy API 3 with some RESTEasy 3.0.x release Those currently using RESTEasy API 2 with some RESTEasy 2.x or 3.0.x release and planning to upgrade to RESTEasy API 3 Those currently using RESTEasy API 2 with some RESTEasy 2.x or 3.0.x release and planning to continue to use RESTEasy API 2 Of these, users in Group 2 have the most work to do in upgrading from RESTEasy API 2 to RESTEasy API 3. They should consult the separate guide Upgrading from RESTEasy 2 to RESTEasy 3. Ideally, users in Groups 1 and 3 might make some changes to take advantage of new features but would have no changes forced on them by reorganization or altered behavior. Indeed, that is almost the case, but there are a few changes that they should be aware of. All RESTEasy changes are documented in JIRA issues. Issues that describe detectable changes in release 3.1.0.Final that might impact existing applications include When a build() method from org.jboss.resteasy.client.jaxrs.internal.ClientInvocationBuilderin resteasy-client, org.jboss.resteasy.specimpl.LinkBuilderImplin resteasy-jaxrs, org.jboss.resteasy.specimpl.ResteasyUriBuilderin resteasy-jaxrs is called, it will return a new object. This behavior might be seen indirectly. For example, Builder builder = client.target(generateURL(path)).request(); ... Link link = new LinkBuilderImpl().uri(href).build(); ... URI uri = uriInfo.getBaseUriBuilder().path("test").build(); As it says. Depending on the application, it might be necessary to recompile with a target of JDK 1.8 so that calls to RESTEasy code can work. Prior to release 3.1.0.Final, the default behavior of RESTEasy was to use GZIP to compress and decompress messages whenever "gzip" appeared in the Content-Encoding header. However, decompressing messages can lead to security issues, so, as of release 3.1.0.Final, GZIP compression has to be enabled explicitly. For details, see Chapter GZIP Compression/Decompression. Note. Because of some package reorganization due to RESTEASY-1531 (see below), the GZIP interceptors, which used to be in package org.jboss.resteasy.plugins.interceptors.encoding are now in org.jboss.resteasy.plugins.interceptors. This issue is related to refactoring deprecated elements of the RESTEasy 2 API into a separate module, and, ideally, would have no bearing at all on RESTEasy 3. However, a reorganization of packages has led to moving some non-deprecated API elements in the resteasy-jaxrs module: org.jboss.resteasy.client.ClientURI is now org.jboss.resteasy.annotations.ClientURI org.jboss.resteasy.core.interception.JaxrsInterceptorRegistryListener is now org.jboss.resteasy.core.interception.jaxrs.JaxrsInterceptorRegistryListener org.jboss.resteasy.spi.interception.DecoratorProcessor is now org.jboss.resteasy.spi.DecoratorProcessor All of the dynamic features and interceptors in the package org.jboss.resteasy.plugins.interceptors.encoding are now in org.jboss.resteasy.plugins.interceptors Most of the deprecated classes and interfaces from RESTEasy 2 have been segregated in a separate module, resteasy-legacy, as of release 3.1.0.Final. A few remain in module resteasy-jaxrs for technical reasons. Eventually, all such classes and interfaces will be removed from RESTEasy. Most of the relocated elements are internal, so ensuring that resteasy-legacy is on the classpath will make most changes undetectable. One way to do that, of course, is to include it in an application's WAR. 
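For a Maven build, that would typically mean declaring the legacy module as a dependency, along the lines of the following snippet (shown here with the 3.11.0.Final version used throughout this guide; use the version that matches your RESTEasy release):

<dependency>
   <groupId>org.jboss.resteasy</groupId>
   <artifactId>resteasy-legacy</artifactId>
   <version>3.11.0.Final</version>
</dependency>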
In the context of WildFly, it is also possible to use a jboss-deployment-structure.xml file in the WEB-INF directory of your WAR file. For example:

<jboss-deployment-structure>
   <deployment>
      <dependencies>
         <module name="org.jboss.resteasy.resteasy-legacy"/>
      </dependencies>
   </deployment>
</jboss-deployment-structure>

There are a few API classes and interfaces from resteasy-jaxrs that have moved to a new package in resteasy-legacy. These are:

org.jboss.resteasy.annotations.ClientResponseType is now org.jboss.resteasy.annotations.legacy.ClientResponseType

org.jboss.resteasy.spi.Link is now org.jboss.resteasy.client.Link

org.jboss.resteasy.spi.LinkHeader is now org.jboss.resteasy.client.LinkHeader

Many facilities from RESTEasy 2 appear in a different form in RESTEasy 3. For example, much of the client framework in RESTEasy 2 is formalized, in modified form, in JAX-RS 2.0. RESTEasy versions 3.0.x implement both the older deprecated form and the newer conformant form. The deprecated form is moved to the legacy module in RESTEasy 3.1 and finally removed in RESTEasy 4. For more information on upgrading from various deprecated facilities in RESTEasy 2, see the separate guide Upgrading from RESTEasy 2 to RESTEasy 3.

There are a number of great books from which you can learn REST and JAX-RS.
https://docs.jboss.org/resteasy/docs/3.11.0.Final/userguide/html_single/index.html
2020-03-29T00:51:28
CC-MAIN-2020-16
1585370493121.36
[]
docs.jboss.org
Notebooks Notebooks in VividCortex allows you to share knowledge and add rich context and commentary to charts including text, code snippets, links and images, all in Markdown syntax. Use Notebooks to create runbooks, document an analysis, create an annotated set of KPIs, and more. Creating a New Notebook To create a new Notebook, click “Add A New Notebook” in the top right corner. This will create a new blank Notebook: Notebooks that you create will appear down the left side of the screen. You can filter your Notebooks, which will search through all of the text in each. You can sort by Name or by Last Updated. Editing a Notebook To edit an existing Notebook, select the Notebook from the left hand nav and click “Edit” in the top right corner of the screen. This will open the Editing view: While in Edit mode, you can add content to your Notebook and Preview how it will look. You can also Delete the Notebook from Edit mode. “Visibility” will let you choose who can see, edit, and delete this Notebook. To let others collaborate on this Notebook, such as for an incident document or a common set of KPIs, choose “Anyone in this environment.” For a Notebook. Adding Content Notebooks understand Markdown, which you’re probably already familiar with from GitHub, Stack Overflow, and other technical tools. If you need a refresher, you can view the Editing Guide; this gives a brief overview of the kinds of syntax available, by clicking “Guide” while in Edit mode. Notebooks also let you embed DPM charts. You can embed any of your favorite DPM charts, or create your own using any of the metrics DPM collects. Notebooks use the same syntax for embedding DPM charts as the Charts interface. For example, to add a DPM chart, you can use the following syntax: !chart(chart-name-here) For example, !chart(os-context-switches) will generate this Chart in your Notebook: For the complete list of Charts syntax, see our Charts documentation. Notebooks are unique to an environment. To copy a Notebook from one environment to another, or to make a copy of a Notebook in the same environment, simply copy and paste the Notebook’s text while in Edit mode.
https://docs.vividcortex.com/how-to-use-vividcortex/notebooks/
2020-03-28T23:41:24
CC-MAIN-2020-16
1585370493121.36
[]
docs.vividcortex.com
Discovery Security Sample The Discovery specification does not require that endpoints that participate in the discovery process to be secure. Enhancing the discovery messages with security mitigates various types of attacks (message alteration, denial of service, replay, spoofing). This sample implements custom channels that compute and verify message signatures using the compact signature format (described in Section 8.2 of the WS-Discovery specification). The sample supports both the 2005 Discovery specification and the 1.1 version. The custom channel is applied on top of the existing channel stack for Discovery and Announcement endpoints. This way, a signature header is applied for every message sent. The signature is verified on received messages and when it does not match or when the messages do not have a signature, the messages are dropped. To sign and verify messages, the sample uses certificates. Discussion WCF is very extensible and allows users the possibility to customize channels as desired. The sample implements a discovery secure binding element that builds secure channels. The secure channels apply and verify message signatures and are applied on top of the current stack. The secure binding element builds secure channel factories and channel listeners. Secure Channel Factory The secure channel factory creates output or duplex channels that add a compact signature to message headers. To keep messages as small as possible the compact signature format is used. The structure of a compact signature is shown in the following example. <d:Security ... > [<d:Sig Scheme="xs:anyURI" [KeyId="xs:base64Binary"]? Refs="..." [PrefixList]="xs:NMTOKENS" Sig="xs:base64Binary" ... />]? ... </d:Security> Note The PrefixList was added in the 2008 Discovery version protocol. To compute the signature, the sample determines the expanded signature items. An XML signature ( SignedInfo) is created, using the ds namespace prefix, as required by the WS-Discovery specification. The body and all the headers in discovery and addressing namespaces are referenced in the signature, so they cannot be tampered with. Each referenced element is transformed using the Exclusive Canonicalization ( ), and then an SHA-1 digest value is computed ( ). Based on all referenced elements and their digest values, the signature value is computed using the RSA algorithm ( ). The messages are signed with a client-specified certificate. The store location, name and the certificate subject name must be specified when the binding element is created. The KeyId in the compact signature represents the key identifier of the signing token and is the Subject Key Identifier (SKI) of the signing token or (if the SKI does not exist) a SHA-1 hash of the public key of the signing token. Secure Channel Listener The secure channel listener creates input or duplex channels that verify the compact signature in received messages. To verify the signature, the KeyId specified in the compact signature attached to the message is used to select a certificate from the specified store. If the message does not have a signature or the signature check fails, the messages are dropped. To use the secure binding, the sample defines a factory that creates custom UdpDiscoveryEndpoint and UdpAnnouncementEndpoint with the added discovery secure binding element. These secure endpoints can be used in discovery announcement listeners and discoverable services. 
Sample Details The sample includes a library and 4 console applications: DiscoverySecurityChannels: A library that exposes the secure binding. The library computes and verifies the compact signature for outgoing/incoming messages. Service: A service exposing ICalculatorService contract, self hosted. The service is marked as Discoverable. The user specifies the details of the certificate used to sign messages by specifying the store location and name and the subject name or other unique identifier for the certificate, and the store where the client certificates are located (the certificates used to check signature for incoming messages). Based on these details, a UdpDiscoveryEndpoint with added security is built and used. Client: This class tries to discover an ICalculatorService and to call methods on the service. Again, a UdpDiscoveryEndpoint with added security is built and used to sign and verify the messages. AnnouncementListener: A self-hosted service that listens for online and offline announcements and uses the secure announcement endpoint. Note If Setup.bat is run multiple times, the certificate manager prompts you for choosing a certificate to add, as there are duplicate certificates. In that case, Setup.bat should be aborted and Cleanup.bat should be called, because the duplicates have already been created. Cleanup.bat also prompts you to choose a certificate to delete. Select a certificate from the list and continue executing Cleanup.bat until no certificates are remaining. To use this sample Execute the Setup.bat script from a Visual Studio command prompt. The sample uses certificates to sign and verify messages. The script creates the certificates using Makecert.exe and then installs them using Certmgr.exe. The script must be run with administrator privileges. To build and run the sample, open the Security.sln file in Visual Studio and choose Rebuild All. Update the solution properties to start multiple projects: select Start for all projects except DiscoverySecureChannels. Run the solution normally. After you are done with the sample, execute the Cleanup.bat script that removes the certificates created for\Scenario\DiscoveryScenario
https://docs.microsoft.com/de-de/dotnet/framework/wcf/samples/discovery-security-sample
2017-12-11T01:54:19
CC-MAIN-2017-51
1512948512054.0
[]
docs.microsoft.com
Bare metal reset/recovery: create recovery media while deploying new devices Recovery media (bare metal recovery) helps restore a Windows device to the factory state, even if the user needs to replace the hard drive or completely wipe the drive clean. You can include this media with new devices that you provide to your customers using the same Windows images used to deploy the devices. Note - The PC firmware/BIOS must be configured so that the PC can boot from the media (USB drive or DVD drive). - The USB flash drive or DVD recovery media must have enough space for the Windows image. - If the Windows images are larger than 32GB or are larger the media you're using (for example, 4.7GB DVDs), you'll need to split the Windows image file to span across multiple DVDs. To create a bootable USB recovery drive for a personal device,. Step 1: Open the Deployment and Imaging Tools Environment - Download and install the Windows Assessment and Deployment Kit (ADK). - On your technician PC: Click Start, and type deployment. Right-click Deployment and Imaging Tools Environment and then select Run as administrator. Step 2: Extract the Windows RE image from the Windows image Mount the Windows image: md c:\mount\Windows Dism /Mount-Image /ImageFile:D:\sources\install.wim /Index:1 /MountDir:C:\mount Copy the Windows RE image. md C:\Images xcopy C:\mount\Windows\System32\Recovery\winre.wim C:\Images\winre.wim /h Unmount the Windows image: Dism /Unmount-Image /MountDir:C:\mount\winre /Discard If you're using a customized partition layout, add bare metal recovery configuration scripts to the working folder, under \sources. For more info, see Bare Metal Reset/Recovery: Enable Your Users to Create 10. Select **Yes, repartition the drives** > **Just remove my files** > **Reset**. Windows resets the computer to its original state by using the recovery image. manufacturing and ResetConfig XML Reference. Related topics Bare Metal Reset/Recovery: Enable Your Users to Create Media Push-Button Reset Overview ResetConfig XML Reference REAgentC Command-Line Options
https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/create-media-to-run-push-button-reset-features-s14
2017-12-11T02:07:06
CC-MAIN-2017-51
1512948512054.0
[]
docs.microsoft.com
Pinned Rows and Columns RadVirtualGrid provides pinning mechanism for both its rows and columns. Through it a given row can be pinned to the top or bottom of the grid. Respectively, a column can be pinned to the left or right side of RadVirtualGrid. Thus, they will not take part in the vertical or horizontal scrolling. This functionality can be controlled through the following methods exposed by the API of the control. RadVirtualGrid exposes built-in commands for its pinning functionality. More information can be found in the Commands Overview. - PinRowTop(int index): Pins a row at a given index on the top. Example 1: Calling the PinRowTop method virtualGrid.PinRowTop(1); Figure 1: RadVirtualGrid with pinned row at the top - PinRowBotton(int index): Pins a row at a given index to the bottom. Example 2: Calling the PinRowBottom method virtualGrid.PinRowBottom(1); Figure 2: RadVirtualGrid with pinned row at the bottom - PinColumnLeft(int index): Pins a column at a given index to the left. Example 3: Calling the PinColumnLeft method virtualGrid.PinColumnLeft(1); Figure 3: RadVirtualGrid with pinned column on the left - PinColumnRight(int index): Pins a column at a given index to the right. Example 4: Calling the PinColumnRight method virtualGrid.PinColumnRight(1); Figure 4: RadVirtualGrid with pinned column on the right Unpinning an already pinned row or column can be achieved through the UnpinRow and UnpinColumn methods: UnpinRow(int index): Unpins a row at a given index. UnpinColumn(int index): Unpins a column at a given index.
https://docs.telerik.com/devtools/wpf/controls/radvirtualgrid/features/pinned-rows-and-columns
2017-12-11T02:16:49
CC-MAIN-2017-51
1512948512054.0
[array(['images/RadVirtualGrid_Features_PinnedRowsColumns_01.png', 'RadVirtualGrid with pinned row on the top'], dtype=object) array(['images/RadVirtualGrid_Features_PinnedRowsColumns_02.png', 'RadVirtualGrid with pinned row at the bottom'], dtype=object) array(['images/RadVirtualGrid_Features_PinnedRowsColumns_03.png', 'RadVirtualGrid with pinned column on the left'], dtype=object) array(['images/RadVirtualGrid_Features_PinnedRowsColumns_04.png', 'RadVirtualGrid with pinned column on the right'], dtype=object)]
docs.telerik.com
System update sets release notes ServiceNow® system update sets enhancements and updates in the Jakarta release. Activation information Platform feature – active by default. New in the Jakarta release Preview and commit update sets in batches A batch update set is a group of update sets you can preview and commit in bulk. The system detects collisions based on "ancestry" and not on date comparisons. Changed in this release. More precise time stamp for updated files: Determine the precise time the system updated a file in your update set by inspecting the Recorded At [sys_recorded_at] field on the Customer Update [sys_update_xml] or Versions [sys_update_version] tables. This field is a more precise time stamp of when the system updated or modified a file than the Updated On [sys_updated_on] field. Back out update set terminology : The choices for resolving conflicts when backing out an update set have been re-named to more clearly describe their effects. The option previously labelled "Back Out" has been changed to "Decide to Use Previous." The option previously labelled "Use Current" has been changed to "Decide to Keep Current." Warnings and confirmation dialogs: Added warnings and confirmation dialogs help prevent update-set scenarios that commonly lead to problems. Update operations limited to one at a time: To provide stability and consistency, the system allows only one update operation at a time. Update operations include upgrading, retrieving an update set, previewing an update set, committing an update set, activating a plugin, Team Dev pushes, and Team Dev pulls.
https://docs.servicenow.com/bundle/jakarta-release-notes/page/release-notes/servicenow-platform/system-update-sets-rn.html
2017-12-11T02:18:44
CC-MAIN-2017-51
1512948512054.0
[]
docs.servicenow.com
SetUserSettings Sets the user settings like multi-factor authentication (MFA). If MFA is to be removed for a particular attribute pass the attribute with code delivery as null. If null list is passed, all MFA options are removed. Request Syntax { "AccessToken": " string", "MFAOptions": [ { "AttributeName": " string", "DeliveryMedium": " string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - AccessToken The access token for the set user settings request. Type: String Pattern: [A-Za-z0-9-_=.]+ Required: Yes - MFAOptions Specifies the options for MFA (e.g., email or phone number). Type: Array of MFAOptionType objects AWS SDKs, see the following:
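As a rough illustration, the same operation could be invoked through the AWS SDK for Java (v1) along the lines of the sketch below. The client builder and model class names follow the SDK's usual code-generation conventions for the cognito-idp service and should be verified against the SDK version in use; the access token value is a placeholder:

AWSCognitoIdentityProvider cognito = AWSCognitoIdentityProviderClientBuilder.defaultClient();

SetUserSettingsRequest request = new SetUserSettingsRequest()
    .withAccessToken("eyJraWQiOi...")          // access token of the signed-in user (placeholder)
    .withMFAOptions(new MFAOptionType()
        .withAttributeName("phone_number")     // attribute the MFA code is delivered to
        .withDeliveryMedium("SMS"));           // delivery medium for the code

cognito.setUserSettings(request);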
http://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_SetUserSettings.html
2017-12-11T02:37:22
CC-MAIN-2017-51
1512948512054.0
[]
docs.aws.amazon.com
Tools & Resources Granting the Trusted Program List Settings - Go to Agents > Agent Management. - In the agent tree, click the root domain icon ( ) to include all agents or select specific domains or agents. - Click Settings > Privileges and Other Settings. - On the Privileges tab, go to the Trusted Program List section. - Select Display the Trusted Program List.
http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/scanning-for-securit/scan-privileges-and-/trusted-programs-pri/granting-trusted-pro.aspx
2017-12-11T02:05:03
CC-MAIN-2017-51
1512948512054.0
[]
docs.trendmicro.com
Tools & Resources Security Risk Outbreak Criteria and Notifications Configure OfficeScan to send you and other OfficeScan administrators a notification when the following events occur: Virus/Malware outbreak Spyware/Grayware outbreak Firewall Violations outbreak Shared folder session outbreak Define an outbreak by the number of detections and the detection period. An outbreak is triggered when the number of detections within the detection period is exceeded. OfficeScan comes with a set of default notification messages that inform you and other OfficeScan administrators of an outbreak. You can modify the notifications and configure additional notification settings to suit your requirements. OfficeScan can send security risk outbreak notifications through email, SNMP trap, and Windows NT Event logs. For shared folder session outbreaks, OfficeScan sends notifications through email. Configure settings when OfficeScan sends notifications through these channels. For details, see Administrator Notification Settings.
http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/scanning-for-securit/security-risk-outbre/security-risk-outbre1.aspx
2017-12-11T01:58:50
CC-MAIN-2017-51
1512948512054.0
[]
docs.trendmicro.com
Tools & Resources The Need for a New Solution In the current approach to file-based threat handling, patterns (or definitions) required to protect endpoints are, for the most part, delivered on a scheduled basis. Patterns are delivered in batches from Trend Micro to agents. When a new update is received, the virus/malware prevention software on the agent reloads this batch of pattern definitions for new virus/malware risks into memory. If a new virus/malware risk emerges, this pattern once again needs to be updated partially or fully and reloaded on the agent to ensure continued protection. Over time, there has been a significant increase in the volume of unique emerging threats. The increase in the volume of threats is projected to grow at a near-exponential rate over the coming years. This amounts to a growth rate that far outnumbers the volume of currently known security risks. Going forward, the volume of security risks represents a new type of security risk. The volume of security risks can impact server and workstation performance, network bandwidth usage, and, in general, the overall time it takes to deliver quality protection - or "time to protect". A new approach to handling the volume of threats has been pioneered by Trend Micro that aims to make Trend Micro customers immune to the threat of virus/malware volume. The technology and architecture used in this pioneering effort leverages technology that off-loads the storage of virus/malware signatures and patterns to the cloud. By off-loading the storage of these virus/malware signatures to the cloud, Trend Micro is able to provide better protection to customers against the future volume of emerging security risks.
http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/using-company_name-s/about-company_name-s/the-need-for-a-new-s.aspx
2017-12-11T02:04:10
CC-MAIN-2017-51
1512948512054.0
[]
docs.trendmicro.com
Overview The Licode server-side API provides your server communication with Nuve. Nuve is a Licode module that manages some resources like the videoconference rooms or the tokens to access to a determined room. Server-side API is available in node.js. Initialize Once you compile the API, you need to require it in your node.js server and initialize it with a Service. Call the constructor with the Service Id and the Service Key created in the data base. var N = require('./nuve'); N.API.init(serviceId, serviceKey, nuve_host); With a Service you can create videoconference rooms for your videoconference application and get the neccesary tokens for add participants to them. Also you can ask Nuve about the users connected to a room. You can also compile Nuve API for python. Rooms A Room object represent a videoconference room. In a room participate users that can interchange their streams. Each participant can publish his stream and/or subscribe to the other streams published in the room. A Room object has the following properties: Room.name: the name of the room. Room._id: a unique identifier for the room. Room.p2p(optional): boolean that indicates if the room is a peer - to - peer room. In p2p rooms server side is only used for signalling. Room.mediaConfiguration(optional): a string with the media configuration used for this room. Room.data(optional): additional metadata for the room. In your service you can create a room, get a list of rooms that you have created, get the info about a determined room or delete a room when you don't need it. In all functions you can include an optional error callback ir order to catch possible problems with nuve server. Create Room To create a room you need to specify a name and a callback function. When the room is created Nuve calls this function and returns you the roomId: var roomName = 'myFirstRoom'; N.API.createRoom(roomName, function(room) { console.log('Room created with id: ', room._id); }, errorCallback); You can create peer - to - peer rooms in which users will communicate directly between their browsers using server side only for signalling. var roomName = 'myP2PRoom'; N.API.createRoom(roomName, function(room) { console.log('P2P room created with id: ', room._id); }, errorCallback, {p2p: true}); You can include metadata when creating the room. This metadata will be stored in Room.data field of the room. var roomName = 'myRoomWithMetadata'; N.API.createRoom(roomName, function(room) { console.log('Room created with id: ', room._id); }, errorCallback, {data: {room_color: 'red', room_description: 'Room for testing metadata'}}); You can also specify which media configuration you want to use in the Room. 
N.API.createRoom(roomName, function(room) { console.log('Room created with id: ', room._id); }, errorCallback, {mediaConfiguration: 'VP8_AND_OPUS'}); Get Rooms You can ask Nuve for a list of the rooms in your service: N.API.getRooms(function(roomList) { var rooms = JSON.parse(roomList); for(var i in rooms) { console.log('Room ', i, ':', rooms[i].name); } }, errorCallback); Get Room Also you can get the info about a determined room with its roomId: var roomId = '30121g51113e74fff3115502'; N.API.getRoom(roomId, function(resp) { var room= JSON.parse(resp); console.log('Room name: ', room.name); }, errorCallback); Delete Room And finally, to delete a determined room: var roomId = '30121g51113e74fff3115502'; N.API.deleteRoom(roomId, function(result) { console.log('Result: ', result); }, errorCallback); Tokens A Token is a string that allows you to add a new participant to a determined room. When you want to add a new participant to a room, you need to create a new token than you will consume in the client-side API. To create a token you need to specify a name and a role for the new participant. Name: a name that identify the participant. Role: indicates de permissions that the user will have in the room. To learn more about how to manage roles and permmisions you can visit this post. Create Token var roomId = '30121g51113e74fff3115502'; var name = 'userName'; var role = ''; N.API.createToken(roomId, name, role, function(token) { console.log('Token created: ', token); }, errorCallback); Users A User object represents a participant in a videoconference room. A User object has the following properties: User.name: the name specified when you created the token. User.role: the role specified when you created the token. You can ask Nuve for a list of the users connected to a determined room. Get Users var roomId = '30121g51113e74fff3115502'; N.API.getUsers(roomId, function(users) { var usersList = JSON.parse(users); console.log('This room has ', usersList.length, 'users'); for(var i in usersList) { console.log('User ', i, ':', usersList[i].name, 'with role: ', usersList[i].role); } }, errorCallback); Examples With a deep intro into the contents out of the way, we can focus putting Server API to use. To do that, we'll utilize a basic example that includes everything we mentioned in the previous section. Now, here's a look at a typical server application implemented in Node.js: We first initialize Nuve (Server API) var N = require('nuve'); N.API.init("531b26113e74ee30500001", "myKey", ""); We also include express support for our server. We prepare it to publish static HTML files in which we will have JavaScript client applications that we will explain later. var express = require('express'); var app = express.createServer(); app.use(express.bodyParser()); app.configure(function () { app.use(express.logger()); app.use(express.static(__dirname + '/public')); }); Requests to /createRoom/ URL will create a new Room in Licode. app.post('/createRoom/', function(req, res){ N.API.createRoom('myRoom', function(roomID) { res.send(roomID); }, function (e) { console.log('Error: ', e); }); }); Requests to /getRooms/ URL will retrieve a list of our Rooms in Licode. app.get('/getRooms/', function(req, res){ N.API.getRooms(function(rooms) { res.send(rooms); }, function (e) { console.log('Error: ', e); }); }); Requests to /getUsers/roomID URL will retrieve a list of users that are connected to room roomID. 
Requests to the /getUsers/roomID URL will retrieve a list of the users connected to room roomID.

app.get('/getUsers/:room', function(req, res) {
  var room = req.params.room;
  N.API.getUsers(room, function(users) {
    res.send(users);
  }, function (e) {
    console.log('Error: ', e);
  });
});

Requests to the /createToken/roomID URL will create an access token for adding a participant to room roomID.

app.post('/createToken/:room', function(req, res) {
  var room = req.params.room;
  var username = req.body.username;
  var role = req.body.role;
  N.API.createToken(room, username, role, function(token) {
    res.send(token);
  }, function (e) {
    console.log('Error: ', e);
  });
});

Finally, we start our service, which will listen on port 80.

app.listen(80);
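To exercise the token endpoint above, here is a minimal sketch of a Node.js test client. The host, port, room id, username and role values are placeholders of our own; in a real application the returned token would be handed to the Licode client-side API to join the room rather than just printed.

// Minimal test client for the /createToken/:room endpoint defined above.
// All concrete values below are illustrative only.
var http = require('http');

var body = JSON.stringify({username: 'testUser', role: 'presenter'});
var options = {
  host: 'localhost',
  port: 80,
  path: '/createToken/30121g51113e74fff3115502',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body)
  }
};

var req = http.request(options, function (res) {
  var token = '';
  res.on('data', function (chunk) { token += chunk; });
  res.on('end', function () {
    console.log('Token received: ', token);
  });
});
req.write(body);
req.end();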
http://licode.readthedocs.io/en/master/server_api/
2017-12-11T02:18:10
CC-MAIN-2017-51
1512948512054.0
[]
licode.readthedocs.io
Antibiotics¶ PATRIC provides additional detailed information about individual antibiotics. From the AMR Phenotypes tab (shown below), clicking the checkbox on one of the phenotypes enables the Antibiotic button in the green vertical Selection Action Bar on the right side of the table. Clicking the Antibiotic button will display a page of information about the corresponding antibiotic (shown below).
https://docs.patricbrc.org/user_guide/genome_data_and_tools/antibiotics.html
2017-12-11T02:17:49
CC-MAIN-2017-51
1512948512054.0
[array(['../../_images/amr_metadata_amr_phenotype_selection.png', 'Genome AMR Phenotypes Selection'], dtype=object) array(['../../_images/antibiotics_page.png', 'Antibiotics Page'], dtype=object) ]
docs.patricbrc.org
Returns a document by feature matrix with the feature frequencies weighted according to one of several common methods. Some shortcut functions that offer finer-grained control are:

tf: compute term frequency weights
tfidf: compute term frequency-inverse document frequency weights
docfreq: compute document frequencies of features

dfm_weight(x, type = c("frequency", "relfreq", "relmaxfreq", "logfreq", "tfidf"),
  weights = NULL)

dfm_smooth(x, smoothing = 1)

dfm_weight returns the dfm with weighted values. dfm_smooth returns a dfm whose values have been smoothed by adding the smoothing amount. Note that this effectively converts a matrix from sparse to dense format, so may exceed memory requirements depending on the size of your input matrix. For finer grained control, consider calling the convenience functions directly.

Manning, Christopher D., Prabhakar Raghavan, and Hinrich Schutze. Introduction to Information Retrieval. Vol. 1. Cambridge: Cambridge University Press, 2008.

dtm <- dfm(data_corpus_inaugural)
x <- apply(dtm, 1, function(tf) tf/max(tf))
topfeatures(dtm)
#>   the    of     ,   and     .    to    in     a   our  that
#> 10082  7103  7026  5310  4945  4526  2785  2246  2181  1789

normDtm <- dfm_weight(dtm, "relfreq")
topfeatures(normDtm)
#>       the         ,        of       and         .        to        in       our
#> 3.7910332 2.7639649 2.6821863 2.0782035 1.9594539 1.7643366 1.0695645 0.8731637
#>         a        we
#> 0.8593092 0.7726443

maxTfDtm <- dfm_weight(dtm, type = "relmaxfreq")
topfeatures(maxTfDtm)
#>      the        ,       of      and        .       to       in      our
#> 55.13499 42.22681 39.34995 31.43686 30.76141 26.37869 16.08336 13.97242
#>        a       we
#> 13.38024 13.21974

logTfDtm <- dfm_weight(dtm, type = "logfreq")
topfeatures(logTfDtm)
#>      the        ,       of      and        .       to       in        a
#> 182.1856 174.3182 173.3837 167.1782 164.9945 163.2151 150.4070 143.6032
#>      our     that
#> 140.7424 138.9939

tfidfDtm <- dfm_weight(dtm, type = "tfidf")
topfeatures(tfidfDtm)
#>            -      america        union            "       should constitution
#>     55.80272     52.68044     51.14846     48.02566     42.10689     40.21661
#>     congress      freedom          you      revenue
#>     39.13390     38.31822     35.99430     34.11779

# combine these methods for more complex dfm_weightings, e.g. as in Section 6.4
# of Introduction to Information Retrieval
head(tfidf(dtm, scheme_tf = "log"))
#> Document-feature matrix of: 6 documents, 9,357 features (93.8% sparse).

# apply numeric weights
str <- c("apple is better than banana", "banana banana apple much better")
(mydfm <- dfm(str, remove = stopwords("english")))
#> Document-feature matrix of: 2 documents, 4 features (12.5% sparse).
#> 2 x 4 sparse Matrix of class "dfm"
#>        features
#> docs    apple better banana much
#>   text1     1      1      1    0
#>   text2     1      1      2    1

dfm_weight(mydfm, weights = c(apple = 5, banana = 3, much = 0.5))
#> Document-feature matrix of: 2 documents, 4 features (12.5% sparse).
#> 2 x 4 sparse Matrix of class "dfm"
#>        features
#> docs    feat1 feat2 feat3 feat4
#>   text1     5     1     3   0
#>   text2     5     1     6   0.5

# smooth the dfm
dfm_smooth(mydfm, 0.5)
#> Document-feature matrix of: 2 documents, 4 features (0% sparse).
#> 2 x 4 sparse Matrix of class "dfm"
#>        features
#> docs    apple better banana much
#>   text1   1.5    1.5    1.5  0.5
#>   text2   1.5    1.5    2.5  1.5
http://docs.quanteda.io/reference/dfm_weight.html
2017-12-11T01:56:15
CC-MAIN-2017-51
1512948512054.0
[]
docs.quanteda.io
Reboot the ExtraHop System and Test Hardware with the Rescue USB Flash Drive

This guide explains how to create and use a Rescue USB flash drive to reinstall and recover the ExtraHop® system. When booting from the Rescue USB flash drive, you can also perform memory and hardware tests to ensure that the ExtraHop appliance functions properly.

Create the Rescue USB Flash Drive

To create the USB flash drive:

1. Download the Rescue USB flash drive image:
- Log in to the Support Portal.
- Scroll down to the Utilities section, and click the Appliance rescue flash drive download link.
- When redirected to the Rescue USB .img file, download the .img file to your local drive.
2. Copy the .img file to a USB flash drive:
- (Linux or Mac OS) Use the dd command in the terminal: dd if=<file location> of=<location of root block device>
- (Windows) Use a utility such as Win32 Disk Imager.

Displaying the Select Boot Option Screen

The Select Boot Option screen has options to boot from the Rescue USB flash drive and to perform memory tests. To display the Select Boot Option screen:

1. If your appliance has a CD drive, make sure that no CD is present.
2. Insert the Rescue USB flash drive into a USB port on the ExtraHop appliance. Ports are located on the front of the appliance.
3. If restarting within the firmware, do the following:
- Log in to the ExtraHop system Command Line Interface (CLI) with the shell user account, using either a monitor and USB keyboard plugged into the ExtraHop appliance or the iDRAC interface. The default password is the service tag number of the appliance.
- Enable the privileged commands. The default password is the setup user password.
  extrahop> enable
  extrahop#
- At the prompt, restart the system using the restart command, and specify the system parameter as the component that you want to restart: restart system
4. At the prompt, access the Boot Manager by pressing the F11 key.
5. Click One-shot BIOS Boot Menu under Boot Manager Main Menu.

To perform memory tests:

1. Display the Select Boot Option screen.
2. Press the down arrow to select Memory Test, and then press Enter.

Boot from the Rescue USB Flash Drive

1. Display the Select Boot Option screen.
2. If the Problems found in Disk Config screen appears, complete the following steps. These steps show default selections. For additional help, contact ExtraHop Support.
- On the Problems found in Disk Config screen, select Rebuild config by using the arrow keys, and then press Enter.
- On the Configure Packet Capture screen, select Yes and press Enter.
- On the Use Defaults for Disk Config screen, select Default and press Enter.
- On the Datastore RAID LEVEL screen, select the RAID level for your configuration and press Enter.
- On the Change Disk Config screen, select Skip and press Enter. If you want to change your disk configuration, select Change and follow the prompts. Contact ExtraHop Support if you need help.
- On the Confirm device selection screen, make sure that the firmware device is the USB drive. Select Yes, and press Enter.

Recovery Tasks

After rebooting from the Rescue USB flash drive, the ExtraHop system displays the Select Menu Option screen with multiple recovery tasks.

ExtraHop System Recovery

1. Select ExtraHop System Recovery on the Select Menu Option screen.
2. At the prompt that confirms you want to recover the previous firmware installation, select Yes. If there is no backup directory, the recovery option uses the USB flash drive version to recover the system.
3. At the prompt to continue, select Yes.
The recovery may take up to 45 minutes to complete. When the recovery is complete, press Enter and select Yes to restart.

ExtraHop System Factory Reset

To reinstall the ExtraHop firmware and remove all previous data, including the ExtraHop license:

1. Select ExtraHop System Factory Reset on the Select Menu Option screen.
2. On the Reset Configuration Sub-Menu screen, select Reset System Configuration and select OK.
3. At the prompt that warns about clearing all previous data, select Yes.

(case-sensitive). To enter privileged mode, enter the enable command. If prompted, enter the same default password.

To configure the recovered ExtraHop system:

1. In privileged mode, use the configure sub-command to enter configuration mode and configure the IP address for the ExtraHop appliance.
2. Once the IP address is configured, log in to the Admin UI at https://<extrahop-ip>/admin.
3. On the EULA screen, select I Agree and click Submit.
4. On the Admin UI Login page, log in as follows: For Username, type setup. For Password, type the system serial number (case-sensitive).
5. On the Admin page, under System Settings, click License.
6. On the License Administration page, go to Manage License and click Register.
7. Enter the product key provided to you by ExtraHop Networks and click Register. The License Update Successful message confirms the installation. (If an error message appears, the license was not installed.) Click Done.
8. Go back to the Admin page, and under Access Settings, click Support Account.
9. Select Enable and click Save. An encrypted key appears. Copy the entire key, and send it to [email protected]. Click Done.
10. Set the system time. The default time server setting is pool.ntp.org. To configure the time servers manually, refer to the System Settings section of the ExtraHop Admin UI Users Guide.
11. After configuring the time settings, save the running config: In the upper right corner, click View & Save Changes. On the Running Config page, click Update.
12. To configure the connectivity settings, do the following: On the Admin page, under Network Settings, click Connectivity. In the Interface Status section, refer to the diagram of the appliance.
13. Log in to the ExtraHop Support Portal and scroll down to Firmware Downloads.
14. To install the firmware update, on the Admin page, go to System Settings and click Firmware. On the Firmware page, click Upload.

To run the hardware tests:

1. Connect the 10GB interfaces with a single-mode fiber optic cable, and connect interface 1 to interface 2, and interface 3 to interface 4, using Ethernet cables. On the EH9100, also connect interface 5 to interface 6, and interface 7 to interface 8.
2. Select Hardware Tests on the Select Menu Option screen.
3. Press Page Up and Page Down on your keyboard to view the test results.

To run the extended hard drive test:

1. Connect the 10GB interfaces with a single-mode fiber optic cable, and connect interface 1 to interface 2, and interface 3 to interface 4, using Ethernet cables. For the EH9100, also connect interface 5 to interface 6, and interface 7 to interface 8.
2. Select Hardware Tests on the Select Menu Option screen.
3. On the Select Hardware Tests to Run screen, select option 8, Extended Hard Drive Test.
4. Select Yes to run the extended hard drive test.

Migrate RAID0 to RAID10

You can configure the EH9100 appliance for RAID10 using four additional hard drives from ExtraHop. To enable advanced RAID features:

1. Select Migrate RAID0 to RAID10 on the Select Menu Option screen.
2. Select Yes to migrate to RAID10.
3. (Optional) Once you finish the migration process, restart the ExtraHop appliance, and then go to the Admin UI to check the hard drive states and to configure alerts and notifications.
https://docs.extrahop.com/4.1/rescue-usb/
2017-12-11T02:10:20
CC-MAIN-2017-51
1512948512054.0
[array(['resources/images/using-rescue-usb-03.png', None], dtype=object) array(['resources/images/using-rescue-cd-05.png', None], dtype=object) array(['resources/images/using-rescue-cd-06.png', None], dtype=object) array(['resources/images/using-rescue-cd-07.png', None], dtype=object) array(['resources/images/using-rescue-cd-08.png', None], dtype=object) array(['resources/images/using-rescue-cd-09.png', None], dtype=object) array(['resources/images/using-rescue-cd-10.png', None], dtype=object) array(['resources/images/using-rescue-cd-11.png', None], dtype=object) array(['resources/images/using-rescue-cd-12.png', None], dtype=object) array(['resources/images/using-rescue-cd-13.png', None], dtype=object) array(['resources/images/using-rescue-cd-16.png', None], dtype=object) array(['resources/images/using-rescue-cd-18.png', None], dtype=object) array(['resources/images/using-rescue-cd-19.png', None], dtype=object) array(['resources/images/using-rescue-cd-20.png', None], dtype=object) array(['resources/images/using-rescue-cd-21.png', None], dtype=object) array(['resources/images/using-rescue-cd-22.png', None], dtype=object) array(['resources/images/using-rescue-cd-23.png', None], dtype=object) array(['resources/images/using-rescue-cd-24.png', None], dtype=object)]
docs.extrahop.com
Creating a Custom Skin

- Rating.Default.css. Upon creating a custom skin for the control, one should edit that particular file, as it contains skin-specific CSS properties and references to images, colors, borders and backgrounds.

Unlike the rest of the controls, however, RadSpell uses only one CSS file – Spell.css. This is because RadSpell consists of two controls from the Telerik AJAX Web UI Suite: RadFormDecorator and RadWindow. Each of these controls is styled with two CSS files that are loaded in a certain order.

(RadSpell's scheme: Spell.css styles RadSpell, which is composed of RadFormDecorator and RadWindow.)
https://docs.telerik.com/devtools/aspnet-ajax/controls/spell/appearance-and-styling/creating-a-custom-skin
2017-12-11T02:23:27
CC-MAIN-2017-51
1512948512054.0
[array(['images/spell-scheme.png', "RadSpell's scheme"], dtype=object)]
docs.telerik.com
- Released [1-November-2010] - Onward
- 1.6 Beta 14, Released [15-November-2010] - Onward
- 1.6 Beta 15, Released [29-November-2010] - Onward
- 1.6 Release Candidate 1, Released
https://docs.joomla.org/index.php?title=Joomla!_Codenames&diff=74309&oldid=32874
2015-08-28T01:09:02
CC-MAIN-2015-35
1440644060103.8
[]
docs.joomla.org
How do phpSuExec file permissions work?

Contents
- Permissions under phpSuExec
- What is phpSuExec?

Permissions under phpSuExec

What is phpSuExec?

phpSuExec configurations: PHP running as a CGI with "suexec" enabled (su = switch user, allowing one user to "switch" to another if authorised) - phpSuExec. If your host has, or is, implementing phpSuExec
https://docs.joomla.org/index.php?title=How_do_phpSuExec_file_permissions_work%3F&diff=76776&oldid=76775
2015-08-28T00:26:56
CC-MAIN-2015-35
1440644060103.8
[]
docs.joomla.org
Contributing¶

Setup¶

Fork the django-oauth-toolkit repository on GitHub and follow these steps:

- Create a virtualenv and activate it
- Clone your repository locally
- cd into the repository and type pip install -r requirements/optional.txt (this will install both optional and base requirements, useful during development).

Pull requests¶

Please avoid providing a pull request from your master and use topic branches instead; you can add as many commits as you want but please keep them in one branch which aims to solve one single issue. Then submit your pull request. To create a topic branch, simply do:

git checkout -b fix-that-issue
Switched to a new branch 'fix-that-issue'

When you're ready to submit your pull request, first push the topic branch to your GitHub repo:

git push origin fix-that-issue

Now you can go to your repository dashboard on GitHub and open a pull request starting from your topic branch. You can apply your pull request to the master branch of django-oauth-toolkit (this should be the default behaviour of the GitHub user interface). Next you should add a comment about your branch, and if the pull request refers to a certain issue, insert a link to it. The repo managers will be notified of your pull request and it will be reviewed; in the meantime, we try to avoid merge commits when they are not necessary.

How to get your pull request accepted¶

We really want your code, so please follow these simple guidelines to make the process as smooth as possible.

Run the tests!¶

Django OAuth Toolkit aims to support different Python and Django versions, so we use tox to run tests on multiple configurations. At any time during development, and at least before submitting the pull request, please run the test suite via:

tox

The first thing the core committers will do is run this command. Any pull request that fails this test suite will be immediately rejected.

Add the tests!¶

Whenever you add code, you have to add tests as well. We cannot accept untested code, so unless it is a peculiar situation you previously discussed with the core committers, if your pull request reduces the test coverage it will be immediately rejected.

Code conventions matter¶

There are no good nor bad conventions, just follow PEP8 (run some lint tool for this) and nobody will argue. Try reading our code and grasp the overall philosophy regarding method and variable names, avoid black magic for the sake of readability, and keep in mind that simple is better than complex. If you feel the code is not straightforward, add a comment. If you think a function is not trivial, add a docstring.

The contents of this page are heavily based on the docs from django-admin2
https://django-oauth-toolkit.readthedocs.org/en/latest/contributing.html
2015-08-28T00:11:32
CC-MAIN-2015-35
1440644060103.8
[]
django-oauth-toolkit.readthedocs.org