If you are just getting started with Blueprints, this provides a high-level overview of what they are and what they can do.
Get up and running by creating your first Blueprint
The Blueprint Editor Reference page outlines the Blueprint Editor's Interface elements and its basic usage instructions.
Get a general overview of the variables and execution flow of the Blueprints visual scripting system.
Blueprint API reference for Unreal Engine
Reference covering Blueprint shortcuts and useful actions.
The User Guide is the go-to source to learn the different parts of Blueprints and nodes that are available to use within Blueprint graphs.
Tips and tricks to help you make decisions about when to use Blueprints and how best to set them up.
The Blueprint How To page provides several short step-by-step guides for working with Blueprints.
Tools like commenting and breakpoints enable you to follow the flow of your code and make notes for yourself or teammates working with the same system.
Blueprints that declare functions to define an interface between Blueprints.
Allows a Blueprint Class to report on its state to the Level Blueprint.
How to use one Blueprint's functions, events, and variables from another Blueprint.
Overview of when to use different methods of Blueprint Communications.
A sample project created using different methods of Blueprint Communications.
Technical guide for programmers working with Blueprints.
The math expression node allows you to type in a math expression and builds the appropriate sub-graph to create that expression.
Describes the different kinds of Online and Error Related Blueprint Nodes.
Describes the different kinds of Mobile Patch Utility Blueprint nodes.
Explanation of random streams and how to use them in Blueprints.
This document contains an overview of Timelines in Unreal Engine 4 Blueprints.
Blueprints that declare and define macros for reuse in other Blueprints.
Blueprint integer variables can be declared as bitmasks to store binary flags in a more compact fashion.
Tips & Troubleshooting
Campaign Success
This article is designed to help you strategize and implement the most successful mobile campaign to realize your use-case vision.
Maintain Compliance
While all of the following best-practice recommendations may lend themselves to a compliant campaign, having a compliant campaign overall is crucial to your mobile program’s success, both with the carriers and with your end-users.
Put User Experience First
Seeing things from the customer’s perspective will not only enable you to provide better customer support, but can help you better design your program from a customer usage perspective. It helps to ask what customers care most about with regard to any mobile program:
- Joining
- Leaving
- Customer support
- Ease of use
- Conducting their daily lives while subscribed
Beyond simply being a matter of compliance, opt-ins and opt-outs and the way that your messages are formatted when sent to your customer affect their experience with your brand, promoting positive or negative feelings depending upon that experience. Be sure that the customers can join and leave the program easily, obtain customer support when necessary and that the program is easy to use without being disruptive to their lives.
Know Your Vision
What is your mobile program aiming to do? Most likely, it’s one of the following:
- obtain customers for an existing service (marketing)
- notify customers of a new service (marketing)
- supplement an existing or new service (operational, marketing)
- work as support for an existing or new service (operational)
The regulatory guidelines for your mobile campaign will differ depending on your use case and chosen code type, but narrowing down and pinpointing your vision will also assist you with coming up with a successful strategy for implementing your campaign.
Engage Customers in a Meaningful Way
Customers are more likely to feel connected to your mobile campaign if the messages they receive are customizable and personal or local to their area. If your campaign is supplementing a web coupon service, try texting codes for coupons only available in their area, or only for their favourite stores. This will keep your service relevant to the customer.
P2P Guidelines
P2P messages must be “humanlike” in characteristic, and are subject to rigorous carrier filters and standards to protect end-users from unwanted messages. However, carrier SPAM filters are not perfect - sometimes, legitimate traffic is blocked. Understanding what may cause your messages to be blocked is critical, so we’ve put together these guidelines to help best ensure the success of your legitimate P2P messages.
Understanding Carrier Filters
The nature of SPAM filters can range from lists of disallowed words or phrases to adaptive AI that measures message send rate as well as content to validate whether the messages are appropriate for P2P traffic. In the case of the AIs, messages are "scored" according to throughput on a timeframe level ranging from the second to the day, based on:
- Distribution & Volume
- Frequency & throughput
- Inbound-to-outbound message ratio
- Content variance
- Presence of spam-like qualities in the content (such as short urls).
Details on how each of the above aspects applies to your messages can be found below.
Bear in mind that most carrier spam blocks will be lifted automatically after a period of time determined by the number of blocked messages on the number in question. Unfortunately, carriers do not allow the explicit duration of this time period to be known. Carriers do not whitelist long code numbers for P2P sending.
Distribution & Volume
- The same message should not be distributed to more than 10 mobiles* from the same long code in a single transaction.
- Message delivery per code should be “humanlike” - in other words, similar to what mobile consumers can send manually.
- The maximum outbound message send is 500/day/code, but for optimal deliverability Aerialink recommends 300-or-fewer/day/code.
- Marketing campaigns, such as bulk notifications, must be sent to the US and Canada via an A2P traffic short code.
*This figure is subject to occasional change based on delivery analysis
Frequency & Throughput
- Messages sent in timed, systematic intervals are not “humanlike” - we recommend that you send messages with time intervals randomized or based on user-triggered events.
- Carrier regulation requires a P2P throughput rate of no more than 1 message/second. Aerialink sets the throughput to 1 message/2 seconds for higher deliverability.
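For example, a sender-side pacing loop along these lines (a minimal sketch — send_sms stands in for whatever API client you actually use) respects the 1-message-per-2-seconds rate while adding random jitter so intervals are not perfectly systematic:
import random
import time

def paced_send(send_sms, messages, min_interval=2.0, max_jitter=3.0):
    # messages is a list of (destination, body) pairs.
    for destination, body in messages:
        send_sms(destination, body)  # placeholder for your actual send call
        # Sleep at least min_interval seconds, plus random jitter so the
        # intervals are not evenly timed.
        time.sleep(min_interval + random.uniform(0, max_jitter))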
Message Ratio
Because P2P is designed for two-way communications, an outbound-to-inbound ratio of 1:1 is ideal. However, the “Golden Ratio” for P2P traffic is 3:1. In other words, for every three “unique” messages pushed out to end-users within a close time frame, at least one message should be returned. This maintains a sense of the conversation-oriented messaging for which P2P traffic is designed.
Message Content Variance
Repetitive message content may be flagged by Carrier filters as application-generated and subsequently blocked. P2P messaging has its best rate of deliverability when message content varies.
- Individuals don’t typically send the same message over and over, so carriers apply that rule to program messaging.
- P2P messages should be uniquely relevant to a mobile user if possible.
- Use of dynamic content that is inserted into a message template is acceptable.
- If sending out similar content repeatedly is a necessity, try varying the verbiage and sentence structure while maintaining the overall goal and essence.
- Identical or “duplicate” messages are defined as those with the same sender ID, destination number and message content. Duplicate messages sent within the same sixty seconds could be filtered automatically, so care should be taken not to send a single blast twice in one minute when pushing messages out to users.
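As an illustration of that last point, a small in-memory guard like the following sketch (a production system would likely want a shared store and eviction of old entries) can catch an accidental double send:
import time

_recent_sends = {}  # (sender, destination, body) -> time of last send

def is_duplicate(sender, destination, body, window_seconds=60.0):
    # A "duplicate" is the same sender ID, destination number, and content
    # seen again within the window.
    key = (sender, destination, body)
    now = time.time()
    last = _recent_sends.get(key)
    _recent_sends[key] = now
    return last is not None and (now - last) < window_seconds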
Message Content Qualities
Short URLs
- Third-party shortened URLs (e.g. Bitly, TinyURL, goo.gl) are associated with bulk and system-generated messaging that should be running on the 8XX Plus or Short Code routes. Because of this, including them in your content will likely result in carrier blocking of your source number and associated messages.
- Customizing your short URL rather than simply using the auto-generated randomized combination of letters and numbers will help to prevent your messages and number from carrier blocking.
Carrier Blocking
Check out the P2P Guidelines article above for information useful both for preventing and troubleshooting possible carrier blocking of P2P numbers.
Short Code
Some carriers allow short codes to be blocked across the board. Others consider them “marketing numbers” despite their wide range of uses and will allow end-users to block them upon request. Here are some carriers who are known to block short code traffic.
All Traffic Blocked
- Simple Mobile
Blocked Upon Request
- T-Mobile
Delivery Confirmation
Delivery Report “Not Available”
There are two components of receiving delivery reports.
- When sending a message over HTTP you must set the registeredDelivery value to 1. This is an additional parameter in the HTTP POST.
- You must have a delivery HTTP address configured for DLRs.
Please be sure that both of these have been completed. If they have, check the URL provided to Aerialink for POST to your system for any typos or spaces.
DLR Status “UNDELIV, REJECTD”
These statuses may indicate one of the following issues:
- Carrier blocking
- Invalid destination address
Transactions
Transaction issues usually manifest in one of four ways:
- SMS MT did not reach any destination device(s).
- SMS MO was not received by platform.
- SMS MT was received by one carrier but not another.
- Messages are sent/received but there are long delays.
Mobile Device Issue
If a message never made it to the end-device, there may be an issue occurring with that device, specifically, such as:
- The end-user’s prepaid subscription is out of minutes
- The device was powered off and/or had a weak signal at the time of attempted transmission for such a long period of time that the retry period was exceeded.
- The end-user’s mobile plan does not support SMS
- The end-user has blocked short code messaging (see the Carrier Blocking article above for more information)
API Error
If you are accessing Aerialink via an API and experience a failure in a mobile transaction, check with your dev team - chances are, they may have received an error code which can help you pinpoint the reason for the transaction’s failure.
When an error is returned:
- The appropriate HTTP/SMPP status will be set in the response headers.
- Content-Type will be set to application/json.
- The body will consist of a JSON formatted dictionary with a single key called “error” containing the status, error type, and message.
Here is a sample API error response:
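Illustratively (the status, type, and message values below are placeholders rather than actual Aerialink codes), such a response has the general shape described above — a JSON dictionary with a single "error" key:
{
  "error" : {
    "status" : 400,
    "type" : "InvalidParameter",
    "message" : "A description of what went wrong."
  }
}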
For more information about API errors, see the applicable status/error code page for your chosen API:
Content Formatting
If the content of an SMS contains a phone number, date, time and/or location formatted in a manner the phone recognizes, it will create a link in the body of the message which will conveniently send the end-user to whatever part of their phone can best assist them with that item (e.g. clicking a linked address will send the user to their maps application).
Here we have compiled the known formats phones most often recognize, enabling you to maximize the number of end-users who can make use of this convenient linking feature.
Date & Time
If the date is formatted correctly, it will become a link to the end-user’s calendar application.
Items that are recognized in this category are:
- Day of week
- Day
- Month
- Year
- Time
Day of Week
- Abbreviated (“Mon”)
- Full (“Monday”)
Date
If there is no year, the phone will assume it is for the next occurrence of that date (e.g. if it is currently December and the SMS reads “11/1,” it will assume the appointment is for the following year’s November, not that which has just passed.)
- M/D
- M/D/YY
- M/D/YYYY
- MM/DD
- MM/DD/YY
- MM/DD/YYYY
- Month Day, Year
Time
- HH:MM
Location
Internationally formatted phones may recognize additional formats. The below applies to phones formatted for addresses in the United States.
- Street Number
- Street Name
- City Name
- State
- Zipcode
If any of the latter three items are left out, the phone will attempt to locate the nearest place with that street address (in a town or city of that name, if City is included but state or zip are not, et cetera).
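Putting these together, a message such as the following (purely illustrative) would typically have its date, time, and address rendered as tappable links on most handsets:
Your appointment is on Monday 11/05/2018 at 10:30 at 123 Main Street, Springfield, IL 62704.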
Smart Quotes
If messaging from the Mac Messages application, or any other that turns straight apostrophes and quotes into “smart” apostrophes and quotes, it is advisable to disable the “smart quotes” option.
The “smart quote” is a character in the Unicode Character Set, which is not uniformly supported across all mobile carriers and devices. Because of this, the inclusion of a “smart quote” in a message can result in the following issues:
- Reduction of max character length in message from 160 to 70 as the message is now treated as Unicode, which results in messages in excess of 70 characters being split into 67-character segments (i.e., a message that was originally 160 characters in length will arrive as (and cost as much as) three individual messages)
- Destination character issues, where the desired quote or apostrophe is replaced by an undesired character or set of characters.
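If you build message text programmatically, one defensive option (a sketch, not tied to any particular sending API) is to normalize smart punctuation back to plain ASCII before sending:
def normalize_quotes(text):
    # Map curly ("smart") apostrophes and quotes back to their ASCII
    # equivalents so the message can stay within the 160-character limit
    # instead of being treated as 70-character Unicode.
    replacements = {
        "\u2018": "'", "\u2019": "'",   # left/right single quotation marks
        "\u201c": '"', "\u201d": '"',   # left/right double quotation marks
    }
    return "".join(replacements.get(ch, ch) for ch in text)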
Mac Messages
To turn off Smart Quotes on Mac Messages:
- Go to Edit
- Mouse over Substitutions
- Uncheck “Smart Quotes.”
Privacy & Safety
The following quick list of tips is designed to keep people safe when using SMS, MMS and all services on the mobile ecosystem.
- Never send content or a picture of a piece of personal identification (driver’s license, social security card, passport)
Known Device Issues
Message Thread Issue on iOS with 8XX Numbers
Aerialink has identified a messaging issue affecting iOS handsets operating on AT&T and T-Mobile networks sending SMS messages to toll-free (8xx) landline numbers with prefixes currently in service (1-800, 1-844, 1-855, 1-866, 1-877, 1-888) in ten-digit (country-code-excluded) number format.
Under certain conditions, SMS correspondence between the iOS handsets and 8xx ("toll-free") numbers may be split into two threads within the native iOS SMS client rather than kept in one singular string of conversation. This happens when the iOS via AT&T or T-Mobile user omits the 1 (country code) on a destination 8xx number, thus sending the number only in ten-digit (8001234567) rather than eleven-digit (18001234567) format. When the 8xx number replies back, iOS adds the +1 country code to the 8xx number, resulting in the response from the 8xx number not arriving within the same SMS thread as the original message sent, but in a separate thread as though from two different correspondents. However, if the user initiating the SMS thread includes the 1 (sending in eleven-digit format) on the destination 8xx number, the thread will remain intact.
The cause of the issue may potentially be iOS's number parser (likely a variant of the nearly ubiquitous Google library libphonenumber), which applies international format to numbers identified as 8xx ("toll-free") when received by the iOS messaging application. This issue has not been known to affect future 8xx prefixes such as 1-833, nor does it affect landline numbers utilizing the same configurations as their non-landline counterparts.
This is a known issue among carriers and 8xx number providers who are currently working together with the hope of getting this fixed in future iOS releases. Aerialink has been in direct communication with AT&T in attempts to resolve this issue.
Today, the workaround is to publish your number with the +1 included in your call to action, to promote the inclusion of the country code by end users.
This page was last updated 2018-10-15.
Automated Test Framework Scheduled Client Test Runner module
Open a browser window for running scheduled client-side automated tests. For information about scheduling automated tests, see Schedule automated test suite and Working with scheduled test suites. You can toggle a client test runner to act as either a manual or scheduled client test runner.
Table 1. Fields on the Client Runner Test window
Form preferences icon: Click to display the form preferences panel.
Form preferences panel, Screenshots mode: Choose among Enable for all steps, Enable for failed steps, or Disable for all steps. For additional information, see Set the system property to control when the Automated Test Framework captures screenshots.
Form preferences panel, Run scheduled tests only: Click to toggle between On (green), which uses this client test runner to run only scheduled tests and suites, and Off (gray), which uses this client test runner to run only manually-started tests and suites.
Capture, filter, index, and analyze wire data directly from a network stream.
Release Notes
Known issues, fixed problems, and new features and functionality in this version of Splunk App for Stream.
Installation and Configuration Manual
How to setup, install, configure, and use Splunk App for Stream to capture and analyze streams of network event data.
The Translation Memory view allows you to drag and drop a folder with translation files from, say, Dolphin into the view; then, within a few minutes, translation suggestions will be shown automatically on the unit switch. To insert the translation suggestions into the file, use Ctrl+1, Ctrl+2 and so on, depending on the number of the suggestion.
Use → to add/manage projects to your Translation Memory. Here you can also import or export data from
tmx file format.
Pressing F7 will open the Translation Memory tab, which allows you to query the TM freely. Clicking a search result will open the corresponding file and unit. If you want to quickly open some file in the project (and it is added to TM), then instead of browsing Project Overview you can just type its name into the File mask field, accompanied by '*'.
The TM engine indexes all entries, including non-ready and untranslated ones. This allows it to completely replace the Search-in-Files feature which required scanning every file in the project each time a search is done.
- Batch Translation:
To insert the exactly matching suggestion automatically from the translation memory database, use → OR . This feature is similar to the rough translation feature in KBabel.
[Screenshot: Lokalize with translation memory search results for the current unit]
Defining Query Queues
When users run queries in Amazon Redshift, the queries are routed to query queues. Each
query queue contains a number of query slots. Each queue is allocated a portion of the
cluster's available memory. A queue's memory is divided among the queue's query slots.
You can configure WLM properties for each query queue to specify the way that memory is
allocated among slots, how queries can be routed to specific queues at run time, and
when to cancel long-running queries. You can also use the
wlm_query_slot_count parameter, which is separate from the WLM
properties, to temporarily enable queries to use more memory by allocating multiple
slots.
By default, Amazon Redshift configures the following query queues:
One superuser queue.
The superuser queue is reserved for superusers only and it can't be configured. You should only use this queue when you need to run queries that affect the system or for troubleshooting purposes. For example, use this queue when you need to cancel a user's long-running query or to add users to the database. You should not use it to perform routine queries. The queue does not appear in the console, but it does appear in the system tables in the database as the fifth queue. To run a query in the superuser queue, a user must be logged in as a superuser, and must run the query using the predefined
superuser query group.
One default user queue.
The default queue is initially configured to run five queries concurrently. You can change the concurrency, timeout, and memory allocation properties for the default queue, but you cannot specify user groups or query groups. The default queue must be the last queue in the WLM configuration. Any queries that are not routed to other queues run in the default queue.
Query queues are defined in the WLM configuration. The WLM configuration is an
editable parameter (
wlm_json_configuration) in a parameter group, which can
be associated with one or more clusters. For more information, see Modifying the WLM Configuration.
You can add additional query queues to the default WLM configuration, up to a total of eight user queues. You can configure the following for each query queue:
Concurrency level
User groups
Query groups
WLM memory percent to use
WLM timeout
Concurrency Level
Queries in a queue run concurrently until they reach
the concurrency level defined for that queue. Subsequent
queries then wait in the queue. Each queue can be configured to run up to 50 queries
concurrently. The maximum total concurrency level for all user-defined queues is 50.
The limit includes the default queue, but does not include the reserved Superuser
queue. Amazon Redshift allocates, by default, an equal, fixed share of available memory to
each queue, and an equal, fixed share of a queue's memory to each query slot in the
queue. The proportion of memory allocated to each queue is defined in the WLM
configuration using the
memory_percent_to_use property. At run time, you
can temporarily override the amount of memory assigned to a query by setting
the
wlm_query_slot_count parameter to specify the number of slots
allocated to the query.
By default, WLM queues have a concurrency level of 5. Your workload might benefit from a higher concurrency level in certain cases, such as the following:
If many small queries are forced to wait for long-running queries, create a separate queue with a higher concurrency level and assign the smaller queries to that queue. A queue with a higher concurrency level has less memory allocated to each query slot, but the smaller queries require less memory.
If you have multiple queries that each access data on a single slice, set up a separate WLM queue to execute those queries concurrently. Amazon Redshift will assign concurrent queries to separate slices, which allows multiple queries to execute in parallel on multiple slices. For example, if a query is a simple aggregate with a predicate on the distribution key, the data for the query will be located on a single slice.
As a best practice, we recommend using a concurrency level of 15 or lower. All of the compute nodes in a cluster, and all of the slices on the nodes, participate in parallel query execution. By increasing concurrency, you increase the contention for system resources and limit the overall throughput.
The memory that is allocated to each queue is divided among the query slots in that queue. The amount of memory available to a query is the memory allocated to the query slot in which the query is running, regardless of the number of queries that are actually running concurrently. A query that can run entirely in memory when the concurrency level is 5 might need to write intermediate results to disk if the concurrency level is increased to 20. The additional disk I/O could degrade performance.
If a specific query needs more memory than is allocated to a single query slot,
you can increase the available memory by increasing the wlm_query_slot_count
parameter. The following example sets
wlm_query_slot_count to
10, performs a vacuum, and then resets
wlm_query_slot_count to
1.
set wlm_query_slot_count to 10; vacuum; set wlm_query_slot_count to 1;
For more information, see Improving Query Performance.
User Groups
You can assign a set of user groups to a queue by specifying each user group name or by using wildcards. When a member of a listed user group runs a query, that query runs in the corresponding queue. There is no set limit on the number of user groups that can be assigned to a queue. For more information, see Wildcards below.
Query Groups
You can assign a set of query groups to a queue by specifying each query group name or by using wildcards. A query group is simply a label. At run time, you can assign the query group label to a series of queries. Any queries that are assigned to a listed query group will run in the corresponding queue. There is no set limit to the number of query groups that can be assigned to a queue. For more information, see Wildcards below.
Wildcards
The '*' wildcard character matches any string of characters, so if you add
dba_* to the
list of user groups for a queue, then any query that is run by a user that belongs to
a group with a name that begins with
dba_, such as
dba_admin or
DBA_primary, is assigned to that queue. The
'?' wildcard character matches any single character, so if the queue includes
user-group
dba?1, then user groups named
dba11 and
dba21 would match, but
dba12 would not match. Wildcards
are disabled by default.
WLM Memory Percent to Use
To specify the amount of available memory that is allocated to a query, you can
set the
WLM Memory Percent to Use parameter. By default, each
user-defined queue is allocated an equal portion of the memory that is available for
user-defined queries. For example, if you have four user-defined queues, each queue
is allocated 25 percent of the available memory. The superuser queue has its own
allocated memory and cannot be modified. To change the allocation, you assign an
integer percentage of memory to each queue, up to a total of 100 percent. Any
unallocated memory is managed by Amazon Redshift and can be temporarily given to a queue if
the queue requests additional memory for processing.
For example, if you configure four queues, you can allocate memory as follows: 20 percent, 30 percent, 15 percent, 15 percent. The remaining 20 percent is unallocated and managed by the service.
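As a rough sketch, a wlm_json_configuration value for that example might look like the following. The memory_percent_to_use property is the one described above; the query_concurrency values and the overall field layout are illustrative assumptions — check the WLM configuration reference for the exact schema.
[
  { "query_concurrency" : 5, "memory_percent_to_use" : 20 },
  { "query_concurrency" : 5, "memory_percent_to_use" : 30 },
  { "query_concurrency" : 5, "memory_percent_to_use" : 15 },
  { "query_concurrency" : 5, "memory_percent_to_use" : 15 }
]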
WLM Timeout
To limit the amount of time that queries in a given WLM queue are permitted to use, you can set the WLM timeout value for each queue. The timeout parameter specifies the amount of time, in milliseconds, that Amazon Redshift waits for a query to execute before canceling the query. The timeout is based on query execution time and doesn't include time spent waiting in a queue.
The function of WLM timeout is similar to the statement_timeout configuration parameter, except that, where
the
statement_timeout configuration parameter applies to the
entire cluster, WLM timeout is specific to a single queue in the WLM configuration.
WLM Query Queue Hopping
If a read-only query, such as a SELECT statement, is canceled due to a WLM timeout, WLM attempts to route the query to the next matching queue based on the WLM Queue Assignment Rules. If the query doesn't match any other queue definition, the query is canceled; it is not assigned to the default queue. A user-defined function (UDF) or any query that writes to the database cannot be rerouted and is simply canceled. Such queries include data manipulation language (DML) statements, data definition language (DDL) statements, and commands that change the database, such as VACUUM.
revlink is similarly used to create links from revision
ids to a web-view of your source control system. This will either be
a string, a dict from string (repository ids) to strings, or a callable.
The string should use '%s' to insert
the revision id in the url. I.e. for Buildbot on github:
revlink=''
(The revision id will be URL encoded before being inserted in the replacement string)
The callable takes the revision id and repository argument, and should return an URL to the revision.
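For example, a minimal callable (a sketch — the URL pattern is a placeholder for your own source browser, not a real endpoint) could be defined in the master configuration like this:
def my_revlink(revid, repo):
    # revid is the revision id; repo is the repository string from the change.
    if not revid:
        return None
    return 'http://example.com/source/commit/%s' % revid

# then pass revlink=my_revlink where you construct WebStatus in master.cfg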
scipy.sparse.linalg.LinearOperator
Common interface for performing matrix vector products.
Notes
The user-defined matvec() function must properly handle the case where v has shape (N,) as well as the (N,1) case. The shape of the return type is handled internally by LinearOperator.
Examples
>>> from scipy.sparse.linalg import LinearOperator
>>> from scipy import *
>>> def mv(v):
...     return array([ 2*v[0], 3*v[1]])
...
>>> A = LinearOperator( (2,2), matvec=mv )
>>> A
<2x2 LinearOperator with unspecified dtype>
>>> A.matvec( ones(2) )
array([ 2.,  3.])
>>> A * ones(2)
array([ 2.,  3.])
Methods
Any files placed in a folder called StreamingAssets in a Unity project will be copied verbatim to a particular folder on the target machine. You can retrieve the folder using the Application.streamingAssetsPath property. For reference, the location of this folder varies per platform:
On a desktop computer (Mac OS or Windows) the location of the files can be obtained with the following code:-
path = Application.dataPath + "/StreamingAssets";
On iOS, you should use:-
path = Application.dataPath + "/Raw";
...while on Android, you should use:-
path = "jar:file://" + Application.dataPath + "!/assets/";
Note that on Android, the files are contained within a compressed .jar file (which is essentially the same format as standard zip-compressed files). This means that if you do not use Unity's WWW class to retrieve the file then you will need to use additional software to see inside the .jar archive and obtain the file.
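For example, a simple coroutine along the following lines (the file name config.txt is hypothetical) uses WWW to read a text file from the StreamingAssets location on Android:
function Start () {
    var path = "jar:file://" + Application.dataPath + "!/assets/config.txt"; // hypothetical file name
    var www = new WWW (path);
    // Wait for the read to complete before using the data
    yield www;
    if (String.IsNullOrEmpty(www.error))
        Debug.Log(www.text);
    else
        Debug.Log("Could not load file: " + www.error);
}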
It's always best to use Application.streamingAssetsPath to get the location of the StreamingAssets folder, it will always point to the correct location on the platform where the application is running.
Page last updated: 2013-05-20
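A minimal example of such a configuration (the plugin, goal, and Ant task shown here are illustrative — any plugin goal can be bound to the clean phase the same way) is:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>custom-clean-step</id>
      <phase>clean</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <echo message="Running extra work during clean"/>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>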
The execution above will be called next time the "clean" goal is triggered on the project.
Starting with Joomla 1.5 and its move to Internationalization and full support of UTF-8, messages for footer.php and other Joomla pages has been moved to a language specific file.
If you want to change the text, go to the language directory, go to the folder of the language you want to change, find the mod_footer.ini file and change the relevant text. For British English, the specific file is language/en-GB/en-GB.mod_footer.ini. Remember that you may not remove copyright and license information from the source code.
If you want to remove the footer entirely, go to Extensions > Module Manager and unpublish the footer module.
Other places where you can look for options to make changes are these. If you find code related to footers in these files, you can either "comment it out" or remove it:
Yes. You may remove that message, which is in footer.php. You may however not remove copyright and license information from the source code.
Ticket #1172 (closed defect: fixed)
cannot hear any sound from dialer in GTA02
Description
- make a phone call from NEO
- build the chatting connection
- cannot hear/speak sounds
Attachments
Change History
comment:3 Changed 6 years ago by sean_chiang@…
HI! Mickey,
After remove all files in ./tmp in oe, rebuild the root filesystem with current
audio state files, It works.
When I told you that sometimes I could hear the sound and sometimes I couldn't
even the puluseaudio is still running last night. I'm wrong, actually it should
be "sometimes the volume is loud enough and sometimes the volume is too low to
hear. I didn't change anything and have no idea what cause it different at
reboot each time. What do you think?
comment:5 Changed 6 years ago by graeme@…
I have been trying to chase this down and I have got to the point I need a
hardware guy to verify that the Power, and GPIO inputs to the Amp are in the
correct state when the phone is stuck in this "quiet" mode.
I think the fault is after the codec as if you turn on the handset speaker it
has the correct loudness.
comment:7 Changed 6 years ago by graeme@…
Yes it happens to me at random, some kernels suffer the problem, some don't
And some dont even always suffer the problem.
comment:8 Changed 6 years ago by mickey@…
- Owner changed from sean_chiang@… to openmoko-kernel@…
- Status changed from assigned to new
- Severity changed from major to blocker
- Cc graeme@… added
I have verified this problem. Changing to "blocker" and reassigning to kernel.
The amp for the mono speaker does not work unless the handset speaker has power.
comment:9 Changed 6 years ago by graeme@…
- Severity changed from blocker to major
comment:10 Changed 6 years ago by mickey@…
- Severity changed from major to blocker
Correct. It's very hard to confirm without opening the device, but I think
you're right.
comment:11 Changed 6 years ago by graeme@…
Ok, this looks more and more like a software bug. Builds from mickeyl and
buildhost fail. DAMP Handset Spk is controlling Stereo Out for some reason.
Amp is working correctly as I can turn it on/off and note the difference.
I have objdump -d both mickeyl modules and mine and they are identical so its
not a difference in building the main driver module.
I don't understand why my builds work and other peoples fail.
comment:12 Changed 6 years ago by ahvenas@…
- Cc ahvenas@… added
comment:13 Changed 6 years ago by graeme@….
comment:14 Changed 6 years ago by graeme@…
Ok, I would love to see the contents of
/sys/bus/platform/devices/soc-audio/codec_reg for both working and non working
situations from people.
After that please try the attached patch on your kernels and see if that makes a
difference.
Changed 6 years ago by graeme@…
- Attachment wm8753-bandaid.patch added
A possible bandaid for the quiet sound problem
comment:15 Changed 6 years ago by tony@…
- Status changed from new to closed
- Resolution set to fixed
solve by Sean_Chiang's alsa file and greame's driver, both function and quality
comment:16 Changed 6 years ago by sean_chiang@…
- Status changed from closed to reopened
- Resolution fixed deleted
Sorry, two different bug. I just fine tune the sound quality, not for this
problem. Sometimes we still could not hear ringtone.
comment:17 Changed 6 years ago by andy@…
Guys is there a way I can reproduce the effect of this bug without making a
call? So if it comes up a "bad" kernel, can we see the bad behaviour in a
simpler way by using aplay from the shell, simplifying the number of things
involved in this issue?
comment:18 Changed 6 years ago by graeme@…
The bug isnt actually about calls, its a general problem. Basically boot the
phone if the startup sound is really quiet or doesnt play at all you triggered
the problem.
Images from buildhost seem to trigger the problem more than other images.
Its not related to pulse as you can shut that down and use aplay and get same
results.
comment:19 Changed 6 years ago by andy@…
Thanks for the clarification... comment #1 says to make a call to reproduce.
I never heard a quiet startup sound in the few times I booted on recent rootfs
with the sound.
There is a class of known problems hiding in the kernel that are driver
startup races. For example a driver wants to change a PMU setting, but maybe
the PMU device does not exist yet, since it only exists after the I2C driver
comes up and I2C is probed. This class of problems makes particular havoc
during suspend / resume due to ordering issues. I mention it because these
races are sensitive to the kernel .config.
Can it be something along those lines? Did you manage to find any difference
from dumping the codec/mixer registers down I2C when it was happy or sad?
Are we talking about the simple transducer on L/ROUT2 of the WM8753L or
L/ROUT1 which has a lot of other circuitry on it?
comment:20 Changed 6 years ago by graeme@…
It seems that L/ROUT2 is working fine, its something in the L/ROUT1 system thats
the problem. My first thought was the Amp was under voltaged or some such failure.
comment:21 Changed 6 years ago by andy@…
There is no shortage of things to go wrong on L/ROUT1 because it is overloaded
several different ways that already make unsolved troubles.
AMP_3V3 for the amp is connected direct to IO_3V3 which is "always on". So it
shouldn't be an issue with amp power otherwise we would see worse behaviours.
Likewise the codec / mixer is powered quite directly from IO_3V3 itself.
Here are some ways we could possibly make that symptom with that path:
There is an AMP_SHUT net that comes from CPU GPJ01, if this was asserted
(HIGH) then the amp would be in a low current standby mode. What it would do
then if audio arrived at the input is unknown, it could conceivably come out
at a low level. So this can be implicated.
Does your version of the kernel still make these ticking noises during boot?
If so then the DL_GSM (CPU GPIO GPJ6) net can be asserted (low), that can
create the trouble by driving a logic level on to the ROUT1 (but not LOUT1)
that would conflict with the amplified audio analogue levels.
Is the headphone socket occupied during this? If somehow JACK_INSERT (CPU
GPIO GPF4) and or nHOLD (CPU GPIO GPF7) were driven by the CPU that could
again conflict with the amplified audio.
Did we confirm there is nothing different with the mixer / codec I2C registers
when it is happy or sad?
Is there no way we can be doing software scaling at the CPU before sending
out?
comment:22 Changed 6 years ago by graeme@…
Julian reports that the bandaid patch fixes the problem for him. Werner could
you apply this as a bandaid until I track down the real cause.
I also spoke to Liam and if this register is misset then it would account for
the too quiet sound.
comment:23 Changed 5 years ago by roh
- Owner changed from openmoko-kernel@… to openmoko-kernel
comment:24 Changed 5 years ago by john_lee
- Status changed from reopened to closed
- HasPatchForReview unset
- Resolution set to fixed
/*
 * Copyright 2006
 */

package org.springframework.batch.core;

import java.util.List;

import org.springframework.batch.item.ItemWriter;

/**
 * Listener interface for the writing of items. Implementations
 * of this interface will be notified before, after, and in case
 * of any exception thrown while writing a list of items.
 *
 * @author Lucas Ward
 *
 */
public interface ItemWriteListener<S> extends StepListener {

    /**
     * Called before {@link ItemWriter#write(java.util.List)}
     *
     * @param items to be written
     */
    void beforeWrite(List<? extends S> items);

    /**
     * Called after {@link ItemWriter#write(java.util.List)} This will be
     * called before any transaction is committed, and before
     * {@link ChunkListener#afterChunk()}
     *
     * @param items written items
     */
    void afterWrite(List<? extends S> items);

    /**
     * Called if an error occurs while trying to write. Will be called inside a
     * transaction, but the transaction will normally be rolled back. There is
     * no way to identify from this callback which of the items (if any) caused
     * the error.
     *
     * @param exception thrown from {@link ItemWriter}
     * @param items attempted to be written.
     */
    void onWriteError(Exception exception, List<? extends S> items);
}
Authors: David Maddison, jboner
This tutorial does not explain AOP, so if you're new to the idea of AOP then please check out JavaWorld's series of articles to get you started.
What this tutorial will do is to try to walk you through a simple example of how you can write, define and weave an aspect into your application.
Download the latest release and unzip it into the relevant location. This tutorial is based on the 2.0 version of AspectWerkz but works equally with 1.0 final.
The latest distribution can be found here.
After installation you need to set the ASPECTWERKZ_HOME environment variable to point to the installation directory. This is because quite a few of the scripts use this to find the required libraries. How this variable is set depends on your OS. Since I'm using Linux I've amended my .bashrc file; Windows users could do this by using the control panel.
Now we've installed AspectWerkz, we need a test application into which to weave our aspects. As is the tradition, I'm going to use the standard HelloWorld application.
package testAOP;

public class HelloWorld {
    public static void main(String args[]) {
        HelloWorld world = new HelloWorld();
        world.greet();
    }

    public void greet() {
        System.out.println("Hello World!");
    }
}
This is simply a standard Java application, and can be compiled with javac -d target HelloWorld.java.
Next we need to develop the aspect which will contain the code to be weaved into our HelloWorld class. In this example I'm going to output a statement before and after the greet method is called.
package testAOP;

import org.codehaus.aspectwerkz.joinpoint.JoinPoint;

public class MyAspect {
    public void beforeGreeting(JoinPoint joinPoint) {
        System.out.println("before greeting...");
    }

    public void afterGreeting(JoinPoint joinPoint) {
        System.out.println("after greeting...");
    }
}
Notice the signature of the aspect methods. They need to take this JoinPoint argument, otherwise the AspectWerkz weaver won't be able to identify the method when the aspect is weaved in (and that can leave you scratching your head as to why the weaving isn't working!).
(Note: for 2.0, specific optimizations can be applied by using the StaticJoinPoint interface or no interface at all. Please refer to the AspectWerkz 2.0 documentation.)
To compile this aspect class you'll need to include the AspectWerkz jar in the classpath, i.e.
javac -d target -classpath $ASPECTWERKZ_HOME/lib/aspectwerkz-2.0.RC1.jar MyAspect.java
For AspectWerkz 1.0 final:
javac -d target -classpath $ASPECTWERKZ_HOME/lib/aspectwerkz-1.0.jar MyAspect.java
At this point we have the test application and the actual aspect code, but we still need to tell AspectWerkz where to insert the aspect methods (the pointcuts and advice).
Specifying pointcuts and advice can be done using either of (or a mixture of) the following methods.
The XML definition file is just that, an XML file which specifies the pointcuts and advice using XML syntax. Here's one that will weave our MyAspect class into our HelloWorld program (aop.xml):
<aspectwerkz>
    <system id="AspectWerkzExample">
        <package name="testAOP">
            <aspect class="MyAspect">
                <pointcut name="greetMethod" expression="execution(* testAOP.HelloWorld.greet(..))"/>
                <advice name="beforeGreeting" type="before" bind-to="greetMethod"/>
                <advice name="afterGreeting" type="after" bind-to="greetMethod"/>
            </aspect>
        </package>
    </system>
</aspectwerkz>
Most of this should be pretty straightforward, the main part being the aspect tag. Whilst I'm not going to explain every bit of this definition file (I'll leave that up to the official documentation), I will explain a few important points.
When specifying the pointcut, the name can be any label you like; it's only used to bind the advice. The expression should be any valid expression according to the join point selection pattern language, however you MUST make sure that the full package+class name is included in the pattern. If this isn't done, or if the pattern is slightly wrong, AspectWerkz won't be able to correctly identify the greet method.
In the advice tag, the name attribute should be the name of the method in the aspect class (specified in the aspect tag) which you wish to insert at the specific joinpoint. Type is set to before, after, or around, depending on where exactly you wish to insert the method in relation to the joinpoint. bind-to specifies the name of the pointcut to which this advice will be bound.
This example identifies the HelloWorld.greet() method and assigns it the pointcut label greetMethod. It then inserts the MyAspect.beforeGreeting method just before greet is called, and MyAspect.afterGreeting just after the greet method returns.
Annotations provide a way to add metadata to the actual aspect class, rather than specifying it in a separate definition file. Aspect annotations are defined using JavaDoc style comments; a complete list is available here. Using annotations, our aspect class would look as follows:
package testAOP;

import org.codehaus.aspectwerkz.joinpoint.JoinPoint;

public class MyAspectWithAnnotations {
    /**
     * @Before execution(* testAOP.HelloWorld.greet(..))
     */
    public void beforeGreeting(JoinPoint joinPoint) {
        System.out.println("before greeting...");
    }

    /**
     * @After execution(* testAOP.HelloWorld.greet(..))
     */
    public void afterGreeting(JoinPoint joinPoint) {
        System.out.println("after greeting...");
    }
}
After adding annotations you need to run a special AspectWerkz tool. This is done after compiling your aspect class files (i.e. after running javac). The AnnotationC compiler can be invoked as follows, passing in the source directory (.) and the class directory (target):
java -cp $ASPECTWERKZ_HOME/lib/aspectwerkz-2.0.RC1.jar org.codehaus.aspectwerkz.annotation.AnnotationC . target
For AspectWerkz 1.0 final:
java -cp $ASPECTWERKZ_HOME/lib/aspectwerkz-1.0.jar org.codehaus.aspectwerkz.annotation.AnnotationC . target
More information on the AnnotationC compiler can be found here.
Although using annotations means you don't have to write all aspect details in XML, you do still have to create a tiny XML 'stub' which tells the AspectWerkz runtime system which Java classes it should load and treat as aspects. An example of this is shown below:
<aspectwerkz>
    <system id="AspectWerkzExample">
        <aspect class="testAOP.MyAspectWithAnnotations"/>
    </system>
</aspectwerkz>
There are basically two ways to actually weave the code together: one, called online weaving, performs the weaving as the classes are loaded into the JVM. The other is offline weaving, and is done before the code is actually run.
When using online weaving you need to decide which JVM you're going to use. This is because the hook which allows AspectWerkz to weave the classes together on the fly is different in Sun HotSpot (where JDI/HotSwap is used), as opposed to BEA JRockit (where a PreProcessor is used). The default is set up to use Sun JDK 1.4.2; however, if you want to use JRockit, simply edit the bin/aspectwerkz script.
Using JRockit is the preferred choice since it will not only perform much better (no need to run in debug mode, which using HotSwap, e.g. Sun and IBM, requires) and be more stable, but will also work on JDK 1.3, 1.4 and 1.5.
Performing the weaving is then just a matter of using the aspectwerkz command line tool to run java with the relevant classes, pointing it to the definition file (even if using annotations you still need the 'stub' definition file), i.e.
$ASPECTWERKZ_HOME/bin/aspectwerkz -Daspectwerkz.definition.file=aop.xml -cp target testAOP.HelloWorld
This produces the expected output:
before greeting...
Hello World!
after greeting...
With offline weaving, the test application's classes are modified on disk with the aspect calls. That is to say, offline weaving amends your actual class definition (as opposed to online weaving, which doesn't modify any classes). To perform offline weaving, you use the aspectwerkz command line tool with the -offline option, as follows:
$ASPECTWERKZ_HOME/bin/aspectwerkz -offline aop.xml -cp target target
The last option on the command (target) tells AspectWerkz where your classfiles are, and it is very important that you type it in correctly, else nothing will get weaved into your target classes and you will wonder why nothing is happening.
Running the aspect is then just a matter of invoking your main class, although you still need some of the AspectWerkz jars on your classpath, and you still need to provide an XML definition file:
java -cp $ASPECTWERKZ_HOME/lib/aspectwerkz-2.0.RC1.jar:target -Daspectwerkz.definition.file=aop.xml testAOP.HelloWorld
For AspectWerkz 1.0 final:
java -cp $ASPECTWERKZ_HOME/lib/aspectwerkz-1.0.jar:target -Daspectwerkz.definition.file=aop.xml testAOP.HelloWorld
Note: Windows users need to replace the ":" path separator by a ";"
This produces the expected output:
before greeting...
Hello World!
after greeting...
Now we have learned how to write a simple aspect, define its pointcuts and advice (either in an XML definition file or with annotations), and weave it into an application using either online or offline weaving.
Want more?
Then read the next tutorial Hijacking Hello World or the online documentation.
Want to use AOP in your application server?
Then start by reading this dev2dev article on how to enable AOP in WebLogic Server (the concepts are generic and work for any application server).
This tutorial is based on a tutorial written by David Maddison (with modifications and enhancements by jboner)
Ticket #1831 (closed defect: fixed)
please remove dependency between mediaplayer and pulseaudio in 2008.8
Description!
Attachments
Change History
Changed 5 years ago by hedora
- Attachment openmoko-mediaplayer2-performance-workaround.patch added
comment:2 Changed 5 years ago by roh
- HasPatchForReview set
BatchModify?: set HasPatchForReview? on 'keyword' contains 'patch'
comment:3 Changed 5 years ago by john_lee
- Owner changed from openmoko-devel to john_lee
- Status changed from new to assigned
(inappropriately named; in 2008, this patch does 'the right thing'; it was a 'workaround' in 2007.2)
Remote Procedure Calls (RPCs) let you call functions on a remote machine. Invoking an RPC is similar to calling a normal function and almost as easy, but there are some important differences to understand. For example, a client could send RPC calls to everyone to signal that he picked up an item. A server could send an RPC to a particular client to initialize him right after he connects, for example, to give him his player number, spawn location, team color, etc. A client could in turn send an RPC only to the server to specify his starting options, such as the color he prefers or the items he has bought.
A function must be marked as an RPC before it can be invoked remotely. This is done by prefixing the function in the script with an RPC attribute:-
// All RPC calls need the @RPC attribute!
@RPC
function PrintText (text : String) {
    Debug.Log(text);
}
The RPC function can then be invoked with:-
networkView.RPC ("PrintText", RPCMode.All, "Hello world");
An RPC function can also take a NetworkMessageInfo parameter, which describes the sender and is filled in automatically rather than being passed by the caller:-
@RPC
function PrintText (text : String, info : NetworkMessageInfo) {
    Debug.Log(text + " from " + info.sender);
}
...
Page last updated: 2011-11-18
Description
Below are a couple of quick methods for zipping and unzipping Java String values. Optionally (with true as the second param) the method 'chunks' the resulting base64 encoded zipped string. Chunking just means long lines are broken at a specific column (line feeds inserted) to make the encoded data easier to manage (think SSH keys).
Code
Paste the below into a file "zipString.groovy" and run:
Calling the script
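With the code saved as zipString.groovy (and assuming the Groovy runtime is installed), the script is run from the command line with:
groovy zipString.groovy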
and now for the code:
Script Code
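A minimal sketch of the idea (method names, the 76-character chunk width, and the sample input below are illustrative choices, not necessarily those of the original script):
import java.util.zip.GZIPInputStream
import java.util.zip.GZIPOutputStream

// Zip a String to a base64 encoded String; with true as the second param the
// result is "chunked" (line feeds inserted every 76 characters).
String zipString(String s, boolean chunk = false) {
    def bos = new ByteArrayOutputStream()
    new GZIPOutputStream(bos).withStream { it.write(s.getBytes('UTF-8')) }
    String encoded = bos.toByteArray().encodeBase64().toString()
    chunk ? encoded.replaceAll(/(.{76})/, '$1\n') : encoded
}

// Reverse the operation: strip any line feeds, base64 decode, then gunzip.
String unzipString(String zipped) {
    byte[] bytes = zipped.replaceAll(/\s/, '').decodeBase64()
    new GZIPInputStream(new ByteArrayInputStream(bytes)).getText('UTF-8')
}

def original = 'the quick brown fox jumps over the lazy dog ' * 4
def chunked = zipString(original, true)
println chunked
assert unzipString(chunked) == original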
Executing the script gives the following output:
Author: Matias Bjarland, Iteego Inc
...
- Subscribe to the cargo dev mailing list and ask as many questions as you'd like there!
- Create a JIRA issue on http (you'll need to register). We'll then add you to the cargo-developers group in JIRA and assign the issue to you
- Understand the Cargo project's directory structure. Container implementations are located in
trunk/core/containers/ContainerName.
- Have a look at existing container implementations
- Some containers are simple to read and understand; for example jo or glassfish
- Some other containers are much more feature-complete (remote deployers, datasources, etc.); for example tomcat or jonas.
- Create your container's Maven module, with its package inside.
- Create the following classes:
- A container implementation class named ServerNameNxContainerTypeContainer, where ServerName is the name of the container, Nx the version and ContainerType the type of container (InstalledLocal or Remote). For example: JBoss3xLocalContainer.
- One or several configuration implementation classes named ServerNameConfigurationTypeConfiguration, where ConfigurationType can be StandaloneLocal, ExistingLocal or Runtime. For example JBossStandaloneLocalConfiguration.
- One or several deployer implementation classes named ServerNameDeployerTypeDeployer, where DeployerType can be InstalledLocal or Remote. For example: JBossInstalledLocalDeployer.
- Finally, implement the FactoryRegistry that will register your container to CARGO, and make sure you've defined a link to your container's factory registry in src/main/resources/META-INF/services/org.codehaus.cargo.generic.AbstractFactoryRegistry.
- Run the Cargo build to ensure everything is working. You'll probably find that you haven't followed the Cargo project's coding conventions... Fix those and build again until it passes!
Please note that when you run the build it'll automatically run the samples test suites in your container (provided you've added your container to the generic API as described in the previous step and provided you've defined the right capabilities for your container). See the Building page for more details on the build.
- 30-45 minutes. Just put your container last, so it is tested daily without blocking the whole CI environment.
...
Mac OS SDK
Overview
A Branch link is a web URL that looks like. When this link
is clicked on a Mac it opens a Branch web page that quickly determines if the Mac app can be opened
on the user's computer, and if so, Branch opens the app with a Mac URI scheme like
your-app-scheme://open?link_click_id=348527481794276288.
(If the user doesn't have the app installed Branch can redirect the user to a fallback URL, like an app download page or some other configurable place).
Once your app is running, macOS passes the URI
your-app-scheme://open?link_click_id=348527481794276288 to
your app, which is intercepted by the Branch SDK. The SDK takes the link and makes a network call to our servers, which
translate and expand the parameters in the link to something meaningful for your app. The SDK then notifies your app that a link
was opened with the expanded data.
Working Example
Please refer to our TestBed-Mac project for a working example that uses Branch.
Integrate Branch¶
Configure Branch¶
Complete the Basic integration within Configure your dashboard
Install the Framework¶
Add the Branch.framework as an embedded binary in your app.
You can drag and drop the framework into your app to install it.
In Xcode, click on your project in the Project Navigator, select your app in the Targets area,
select the 'General' tab up top, and scroll down to the 'Embedded Binaries' section. You can drag
the Branch.framework bundle from the
Frameworks/macOS project directory into this area.
Configure Info.plist¶
Add your app scheme to your Info.plist file so macOS knows what schemes your app can handle. This
example shows
testbed-mac as the app scheme. Add just the scheme and not the
:// part.
Here's a snippet of xml you can copy into your Info.plist. Right click on your Info.plist and open it as source code. You can paste this snippet before the final
</dict> tag. Remember to change
YOUR-APP-SCHEME-HERE to the app scheme for your app.
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleTypeRole</key>
    <string>Editor</string>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>YOUR-APP-SCHEME-HERE</string>
    </array>
  </dict>
</array>
Your app's URI scheme must be the first scheme defined (item 0) in the list.
The Branch SDK will use the first URI Scheme from your list that does not start with
fb,
db,
twitterkit-,
pin, or
com.googleusercontent.apps. These schemes are ignored by Branch since they are commonly used by other app kits for oauth and other uses.
Initialize Branch¶
Start Branch when your app first starts up. In your app delegate, start Branch in your
applicationWillFinishLaunching:
method:
#import <Branch/Branch.h>

// In your app delegate class file add this method to start the Branch SDK:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    // Insert code here to initialize your application

    // Register for Branch URL notifications:
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(branchWillStartSession:) name:BranchWillStartSessionNotification object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(branchDidStartSession:) name:BranchDidStartSessionNotification object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(branchOpenedURLNotification:) name:BranchDidOpenURLWithSessionNotification object:nil];

    // Create a Branch configuration object with your key:
    BranchConfiguration *configuration = [[BranchConfiguration alloc] initWithKey:@"key_live_joQf7gfRz1vebNOoHPFGJhnhFCarsZg0"];

    // Start Branch:
    [[Branch sharedInstance] startWithConfiguration:configuration];
}
Next, add a notification handler so your app can handle the deep links:
- (void) branchWillStartSession:(NSNotification*)notification {
    NSLog(@"branchWillStartSession: %@", notification.name);
    NSString *url = notification.userInfo[BranchURLKey] ?: @"";
    NSLog(@"URL: %@", url);
}

- (void) branchDidStartSession:(NSNotification*)notification {
    NSLog(@"branchDidStartSession: %@", notification.name);
    NSString *url = notification.userInfo[BranchURLKey] ?: @"";
    NSLog(@"URL: %@", url);
    BranchSession *session = notification.userInfo[BranchSessionKey];
    NSString *data = (session && session.data) ? session.data.description : @"";
}

- (void) branchOpenedURLNotification:(NSNotification*)notification {
    NSLog(@"branchOpenedURLNotification: %@", notification.name);
    NSString *url = notification.userInfo[BranchURLKey] ?: @"";
    NSLog(@"URL: %@", url);
    BranchSession *session = notification.userInfo[BranchSessionKey];

    // Do something with the link!
    // In this contrived example we'll load a view controller that plays the song that was in the link:
    SongViewController *viewController = [SongViewController loadController];
    viewController.songTitle = session.linkContent.title;
    [viewController.window makeKeyAndOrderFront:self];
    [viewController playSong];
}
Implement Branch Features¶
Turning on Logging¶
To help debugging your app, you can turn on Branch logging, which logs to the console. Remember to turn it off in your production app.
Property¶
Branch.loggingEnabled
Setting User Identities¶
Often, you might have your own user IDs, or want referral and event data to persist across platforms or uninstall/reinstall. It's helpful if you know your users access your service from different devices. This is where we introduce the concept of a 'user identity'.
Method¶
[Branch setUserIdentity:completion:]
See
setUserIdentity:completion:
If you provide a logout function in your app, be sure to clear the user when the logout completes. This will ensure that all the stored parameters get cleared and all events are properly attributed to the right identity.
Warning: This call will clear attribution on the device.
Method¶
[Branch logoutWithCompletion:]
See
logoutWithCompletion:
Tracking User Actions and Events¶
Use the
BranchEvent class.
The
BranchEvent class can be simple to use. For example:
Objective-C
[[Branch sharedInstance] logEvent:[BranchEvent standardEvent:BranchStandardEventAddToCart]];
Swift
Branch.sharedInstance.logEvent(BranchEvent.standardEvent(.addToCart))
For best results use the Branch standard event names defined in
BranchEvent.h. But you can use your own custom event names too:
Objective-C
[[Branch sharedInstance] logEvent:[BranchEvent customEventWithName:@"User_Scanned_Item"]];
Swift
Branch.sharedInstance.logEvent(BranchEvent.customEventWithName("User_Scanned_Item"))
Extra event specific data can be tracked with the event as well:
Objective-C
BranchEvent *event = [BranchEvent standardEvent:BranchStandardEventPurchase]; event.transactionID = @"tx-12344555"; event.currency = BNCCurrencyUSD; event.revenue = [NSDecimalNumber decimalNumberWithString:@"12.70"]; event.shipping = [NSDecimalNumber decimalNumberWithString:@"10.20"]; event.tax = [NSDecimalNumber decimalNumberWithString:@"2.50"]; event.coupon = @"coupon_code"; event.affiliation = @"store_affiliation"; event.eventDescription= @"Shopper made a purchase."; event.searchQuery = @"Fashion Scarf"; event.contentItems = @[ branchUniversalObject ]; event.customData = (NSMutableDictionary*) @{ @"Item_Color": @"Red", @"Item_Size": @"Large" }; [event logEvent];
Swift
let event = BranchEvent.standardEvent(.purchase) event.transactionID = "tx-12344555" event.currency = .USD event.revenue = 12.70 event.shipping = 10.20 event.tax = 2.50 event.coupon = "coupon_code" event.affiliation = "store_affiliation" event.eventDescription = "Shopper made a purchase." event.searchQuery = "Fashion Scarf" event.contentItems = [ branchUniversalObject ] event.customData = [ "Item_Color": "Red", "Item_Size": "Large" ] event.logEvent()
Enable or Disable User Tracking¶
In order to help our customers comply with GDPR and other laws that restrict data collection from certain users, the SDK includes a Do Not Track mode. This way, if a user indicates that they want to remain private in your app, or if you otherwise determine that a particular user should not be tracked, you can continue to make use of the Branch SDK (e.g. for creating Branch links) while not tracking that user. This state is persistent, meaning that it's saved for the user across sessions. This setting can also be enabled across all users for a particular link, or across your Branch links.
[Branch sharedInstance].trackingDisabled = YES;
Branch.sharedInstance().trackingDisabled = true
This will prevent any Branch network requests from being sent, except when deep linking. If someone clicks a Branch link, but does not want to be tracked, we will return the deep linking data back to the app but without capturing any tracking information.
In do-not-track mode, you will still be able to create & share links. The links will not have identifiable information and will be long format links. Event tracking won’t pass data back to the server if a user has expressed to not be tracked. You can change this behavior at any time by calling the above function. The trackingDisabled state is saved and persisted across app runs.
Branch Universal Object¶
Use a BranchUniversalObject to describe content in your app for deep links, content analytics and indexing.
The properties object describes your content in a standard way so that it can be deep linked, shared, or indexed on spotlight for instance. You can set all the properties associated with the object and then call action methods on it to create a link or index the content on Spotlight.
Branch Universal Object best practices¶
Here are a set of best practices to ensure that your analytics are correct, and your content is ranking on Spotlight effectively.
- Set the
canonicalIdentifierto a unique, de-duped value across instances of the app
- Ensure that the
title,
contentDescriptionand
imageUrlproperly represent the object
- Initialize the Branch Universal Object and call
userCompletedActionwith the
BNCRegisterViewEventon page load
- Call
showShareSheetand
createShortLinklater in the life cycle, when the user takes an action that needs a link
- Call the additional object events (purchase, share completed, etc) when the corresponding user action is taken
- Set the
contentIndexModeto
ContentIndexModePublicor
ContentIndexModePrivate. If BranchUniversalObject is set to
ContentIndexModePublic, then content would indexed using
NSUserActivity, or else content would be index using
CSSearchableIndexon Spotlight.
Note: Content indexed using
CSSearchableItem could be removed from Spotlight but cannot be removed if indexed using
NSUserActivity.
Practices to avoid:
1. Don't set the same
title,
contentDescription and
imageUrl across all objects.
2. Don't wait to initialize the object and register views until the user goes to share.
3. Don't wait to initialize the object until you conveniently need a link.
4. Don't create many objects at once and register views in a
for loop.
Branch Universal Object¶
Methods and Properties¶
Objective-C
#import "BranchUniversalObject.h"
BranchUniversalObject *branchUniversalObject = [[BranchUniversalObject alloc] initWithCanonicalIdentifier:@"item/12345"]; branchUniversalObject.title = @"My Content Title"; branchUniversalObject.contentDescription = @"My Content Description"; branchUniversalObject.imageUrl = @""; branchUniversalObject.contentMetadata.contentSchema = BranchContentSchemaCommerceProduct; branchUniversalObject.contentMetadata.customMetadata[@"property1"] = @"blue"; branchUniversalObject.contentMetadata.customMetadata[@"property2"] = @"red";
Swift
let branchUniversalObject: BranchUniversalObject = BranchUniversalObject(canonicalIdentifier: "item/12345") branchUniversalObject.title = "My Content Title" branchUniversalObject.contentDescription = "My Content Description" branchUniversalObject.imageUrl = "" branchUniversalObject.contentMetadata.contentSchema = .product; branchUniversalObject.contentMetadata.customMetadata["property1"] = "blue" branchUniversalObject.contentMetadata.customMetadata["property2"] = "red"
Properties¶
title: inserted as $og_title into the data dictionary of any link created.
contentDescription: inserted as $og_description into the data dictionary of any link created.
imageUrl: inserted as $og_image_url into the data dictionary of any link created.
BranchUniversalObject.contentMetadata¶
The
BranchUniversalObject.contentMetadata properties further describe your content. These properties are trackable in the Branch dashboard and will be automatically exported to your connected third-party app intelligence partners like Adjust or Mixpanel.
Set the properties of this sub-object depending on the type of content that is relevant to your content. The
BranchUniversalObject.contentMetadata.contentSchema property describes the type of object content. Set other properties as is relevant to the type.
The content schema: a BranchContentSchema enum that best describes the content type; it accepts values like BranchContentSchemaCommerceProduct and BranchContentSchemaMediaImage.
The product category: a BNCProductCategory value, such as BNCProductCategoryAnimalSupplies or BNCProductCategoryFurniture.
The condition: a BranchCondition value, such as BranchConditionNew or BranchConditionRefurbished.
Tracking User Interactions With An Object¶
We've added a series of custom events that you'll want to start tracking for rich analytics and targeting. Here's a list below with a sample snippet that calls the register view event.
Methods¶
Objective-C
[branchUniversalObject userCompletedAction:BranchStandardEventViewItem];
Swift
branchUniversalObject.userCompletedAction(BranchStandardEventViewItem)
Parameters¶
None
Returns¶
None
Shortened Links¶
Once you've created your
Branch Universal Object, which is the reference to the content you're interested in, you can then get a link back to it with the mechanisms described below.
Encoding Note¶
One quick note about encoding. Since
NSJSONSerialization supports a limited set of classes, we do some custom encoding to allow additional types. Current supported types include
NSDictionary,
NSArray,
NSURL,
NSString,
NSNumber,
NSNull, and
NSDate (encoded as an ISO8601 string with timezone). If a parameter is of an unknown type, it will be ignored.
Methods¶
Objective-C
#import "BranchLinkProperties.h"
BranchLinkProperties *linkProperties = [[BranchLinkProperties alloc] init]; linkProperties.feature = @"sharing"; linkProperties.channel = @"facebook"; [linkProperties addControlParam:@"$desktop_url" withValue:@""]; [linkProperties addControlParam:@"$ios_url" withValue:@""];
[branchUniversalObject getShortUrlWithLinkProperties:linkProperties andCallback:^(NSString *url, NSError *error) { if (!error) { NSLog(@"success getting url! %@", url); } }];
Swift
let linkProperties: BranchLinkProperties = BranchLinkProperties() linkProperties.feature = "sharing" linkProperties.channel = "facebook" linkProperties.addControlParam("$desktop_url", withValue: "") linkProperties.addControlParam("$ios_url", withValue: "")
branchUniversalObject.getShortUrl(with: linkProperties) { (url, error) in if error == nil { NSLog("got my Branch link to share: %@", url) } }
Link Properties Parameters¶
The feature parameter describes the feature the link is associated with; the examples on this page use sharing.
You can do custom redirection by inserting the following optional keys in the dictionary:
You have the ability to control the direct deep linking of each link by inserting the following optional keys in the dictionary:
public class DCAwareRoundRobinPolicy extends Object implements LoadBalancingPolicy
If used with a single datacenter, this policy is equivalent to the RoundRobinPolicy, but its DC awareness incurs a slight overhead so the latter should be preferred to this policy in that case.
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static DCAwareRoundRobinPolicy.Builder builder()
public void init(Cluster cluster, Collection<Host> hosts)
Specified by: init in interface LoadBalancingPolicy
Parameters:
cluster - the Cluster instance for which the policy is created.
hosts - the initial hosts to use.
public HostDistance distance(Host host)
This policy considers nodes in the local datacenter as LOCAL. For each remote datacenter, it considers a configurable number of hosts as REMOTE, and the rest are IGNORED.
Specified by: newQueryPlan in interface LoadBalancingPolicy
Parameters:
loggedKeyspace - the keyspace currently logged in on for this query.
statement - the query for which to build the plan.
public void onUp(Host host)
Specified by: onUp in interface LoadBalancingPolicy
Parameters:
host - the host that has been detected up.
public void onDown(Host host)
Specified by: onDown in interface LoadBalancingPolicy
Parameters:
host - the host that has been detected down.
public void onAdd(Host host)
Specified by: onAdd in interface LoadBalancingPolicy
Parameters:
host - the host that has been newly added.
public void onRemove(Host host)
Specified by: onRemove in interface LoadBalancingPolicy
Parameters:
host - the removed host.
public void close()
Specified by: close in interface
LoadBalancingPolicy | https://docs.datastax.com/en/drivers/java-dse/1.0/com/datastax/driver/core/policies/DCAwareRoundRobinPolicy.html | 2020-01-17T19:35:52 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.datastax.com |
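A minimal usage sketch (this is not part of the original Javadoc; it assumes the standard core-driver Cluster builder API, and "DC1" is a placeholder for your local datacenter name):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class DcAwareExample {
    public static void main(String[] args) {
        // Hosts in "DC1" are treated as LOCAL; up to 2 hosts in each remote DC are considered REMOTE.
        DCAwareRoundRobinPolicy policy = DCAwareRoundRobinPolicy.builder()
                .withLocalDc("DC1")
                .withUsedHostsPerRemoteDc(2)
                .build();

        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withLoadBalancingPolicy(policy)
                .build();

        cluster.connect();
        cluster.close();
    }
}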
In XAF applications, you can use tree-like data representation instead of a grid-like one, which is used in a default UI. For this purpose, the eXpressApp Framework supplies the TreeList Editors module. To represent data as a tree, this module uses the ASPxTreeList and XtraTreeList controls. All you need to do to use the TreeList Editors' module's instruments is to implement particular interfaces in your business classes or use ready-to-use classes form the Business Class Library. In addition, you can access TreeList or ASPxTreeList controls in code to customize the default settings or use their features that are not used by default. For details on these controls, refer to the XtraTreeList and ASPxTreeList documentation. | https://docs.devexpress.com/eXpressAppFramework/112841/concepts/extra-modules/tree-list-editors-module | 2020-01-17T18:14:23 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.devexpress.com |
Developers selling on the WooCommerce Marketplace can edit product pages, access information like usage data and sales, and manage reviews for their products.
Managing products ↑ Back to top
Most of the work for managing products can be done from the All Extensions screen. To get there:
- Log in to the Vendor Dashboard
- Go to Extensions > All Extensions
- Select the extension you want to work on
This will take you to the product overview, which includes:
- Name: The name of the product, as it appears on the product page. Note that this may be different from the product slug or URL.
- Single-site subscription: The base price for a single-site subscription of the product.
- Last updated: The date when the latest version was uploaded, and a link to the changelog.
- Categories: The categories the product is listed under.
- Short Description: The tagline for the product, more on this here.
From there, you can see information about usage, upload a new version or edit the product page.
Editing product pages ↑ Back to top
To edit a product page, select Edit Product Page from the product overview.
For more on how to make the most of your product pages, read the guidelines for writing marketing content.
Understanding product statistics ↑ Back to top
In the product overview, the Statistics tab includes usage information to help developer know what versions of relevant technologies are used with a given product.
These statistics comes from the WooCommerce Tracker and are based on stores that allow for usage tracking. Developers can use this information to make decisions about what versions of WordPress, WooCommerce and PHP to support.
Uploading new versions ↑ Back to top
To upload a new version, select the Versions tab. You’ll see all the versions you’ve previously uploaded, including highlights about what was in each version.
Select Add Version to upload an updated version, then:
- Select the .zip file you want to upload from your computer
- Add the version number
- Click Submit new version
Once submitted, we will run basic tests on the uploaded file. All Vendor Admins will receive email notification throughout the process, including:
- Confirmation: Vendor Admins will get an email when a new version is submitted, confirming that it is in the queue. No action is needed here.
- Rejected: If a version fails any part of our automated testing, Vendor Admins will receive an email letting them know, including the specific error. Developers should try to resolve the error and submit a new version when ready.
- Live: If a version passes automated testing, it will be automatically deployed and made available to customers. No action is needed.
Formatting the changelog
When uploading a version, WooCommerce.com looks for a specific format for the changelog. The name of the file should be
changelog.txt and it should have the following formatting:
*** Product Name Changelog *** yyyy-mm-dd - version x.x.x * Item one * Item two
The date format, the space between the
-, and the word
version are all important there.
Common errors
If an upload fails, there are a few common errors you should check first.
The name of the file: WooCommerce.com looks for a specific name for your .zip file and the main folder within it. You'll see the expected name when you try to upload a product:
That a changelog is present – a file named changelog.txt must be present to serve as a record of what’s changed from version-to-version.
The changelog format – more on what this should look like here.
Understanding product sales ↑ Back to top
Vendors have access to reports on how their products are performing, including:
- Earnings: New sales, renewal sales, and refund amount.
- Commissions: The amount earned by the Vendor from all sales.
- Subscriptions: Total active subscriptions, including new and renewing this month, as well as refund rate, renewal rate, and a breakdown for 1-, 5- and 25-site packs.
To find this information, log in to the Vendor Dashboard and go to Extensions > Sales Report.
Data is available starting November 2017.
Seeing reviews ↑ Back to top
Customers who have purchased a product may leave a rating and review. When there are a minimum of 10 ratings, reviews are show on the product page, and ratings are visible on the product page, category page, and in search results. Read our guidelines around ratings and reviews.
Whether reviews are public or not, Vendor Admins can see reviews for their own products by logging in to the Vendor Dashboard and then going to Extensions > Reviews.
nmrglue.proc_base¶
A collection of NMR spectral processing functions which operate on the last dimension (1) of 2D arrays. These functions are wrapped by other processing modules but can also be used directly. All parameters are assumed to be in units of points unless otherwise noted.
This module is imported as nmrglue.proc_base and can be called as such. | https://nmrglue.readthedocs.io/en/latest/reference/proc_base.html | 2020-01-17T18:33:30 | CC-MAIN-2020-05 | 1579250590107.3 | [] | nmrglue.readthedocs.io |
CPU Profiling¶
If you built Ceph from source and compiled Ceph for use with oprofile you can profile Ceph’s CPU usage. See Installing Oprofile for details.
Initializing oprofile¶
The first time you use
oprofile you need to initialize it. Locate the
vmlinux image corresponding to the kernel you are now running.
ls /boot
sudo opcontrol --init
sudo opcontrol --setup --vmlinux={path-to-image} --separate=library --callgraph=6
Starting oprofile¶
To start
oprofile execute the following command:
opcontrol --start
Once you start
oprofile, you may run some tests with Ceph.
Retrieving oprofile Results¶
To retrieve the top
cmon results, execute the following command:
opreport -gal ./cmon | less
To retrieve the top
cmon results with call graphs attached, execute the
following command:
opreport -cal ./cmon | less
Important
After reviewing results, you should reset
oprofile before
running it again. Resetting
oprofile removes data from the session
directory. | https://docs.ceph.com/docs/master/rados/troubleshooting/cpu-profiling/ | 2020-01-17T18:59:18 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.ceph.com |
Message latency metrics
Monitor messaging latency by adding dashboard graph metrics and alerts at the datacenter or node level.
Monitor messaging latency by adding dashboard graphs and alerts for latency metrics at the datacenter or node level. Messaging latency metrics are available for DSE versions 5.1.0 and later.
- Datacenter Messaging Latency [cross-dc-latency]
- The min, median, max, 90th, and 99th percentiles of the message latency between nodes in the same or different destination datacenter. This metric measures how long it takes a message from a node in the source datacenter to reach a node in the destination datacenter. Selecting a destination node within the source datacenter yields lower latency values.
- Node Messaging Latency [cross-node-latency]
- The min, median, max, 90th, and 99th percentiles of the latency of messages between nodes. The time period starts when a node sends a message and ends when the current node receives it. | https://docs.datastax.com/en/opscenter/6.1/opsc/online_help/latencyMetrics.html | 2020-01-17T18:41:34 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.datastax.com |
Gets a command to invoke the Save As dialog that prompts for a file name and saves the current document in a file with the specified path.
readonly fileSaveAsDialog: FileSaveAsDialogCommand
Call the execute method to invoke the command. The method checks the command state (obtained via the getState method) to determine whether the action can be performed.
Saving a document to the server is enabled only when the ASPxRichEdit.WorkDirectory property is defined. | https://docs.devexpress.com/AspNet/js-RichEditCommands.fileSaveAsDialog | 2020-01-17T18:55:38 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.devexpress.com |
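A usage sketch in the same style as the other client command examples in this documentation (richEdit stands for the client-side control instance; this line is not part of the original page):

richEdit.commands.fileSaveAsDialog.execute();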
Gets a command to force synchronizing the server document model with the client model and execute a callback function if it is necessary.
readonly forceSyncWithServer: ForceSyncWithServerCommand
Call the execute(callback) method to invoke the command. The method checks the command state (obtained via the getState method) to determine whether the action can be performed.
Use this command to send the latest changes made in a document to the server, to avoid losing these changes when a server-side operation (for example, saving) is performed. Note that changes made in the document after the client model is sent cannot be handled in the callback.
Usage example:
richEdit.commands.forceSyncWithServer.execute();
richEdit.commands.forceSyncWithServer.execute(function() {});
It is important to plan the Snapshot copy transfer schedule and retention for your SnapVault backups.
When planning SnapVault relationships, consider the following guidelines:
For example:
Does the data change often enough throughout the day to make it worthwhile to replicate a Snapshot copy every hour, every two hours, or every four hours?
Do you want to replicate a Snapshot copy every night or just workday nights?
How many weekly Snapshot copies is it useful to keep in the SnapVault secondary volume? | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-900/GUID-AD7F9BDA-FD21-4838-8E83-FBAF520D8DA2.html | 2020-01-17T18:51:00 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.netapp.com |
Clients can access files on Storage Virtual Machines (SVMs) using the SMB protocol, provided that Data ONTAP can properly authenticate the user.
When an SMB client connects to a CIFS server, Data ONTAP authenticates the user with a Windows domain controller. Data ONTAP uses two methods to obtain the domain controllers to use for authentication:
Next, Data ONTAP must obtain UNIX credentials for the user. It does this by using mapping rules on the SVM or by using a default UNIX user instead. For SVMs, you can specify which mapping services to use, local files or LDAP, and the order in which mapping services are searched. Additionally, you can specify the default UNIX user.
Data ONTAP then checks different name services for UNIX credentials for the user, depending on the name services configuration of the SVM. The options are local UNIX accounts, NIS domains, and LDAP domains. You must configure at least one of them so that Data ONTAP can successfully authorize the user. You can specify multiple name services and the order in which they are searched. | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-900/GUID-D02840A9-69EE-4A0E-8AFB-F44E97A92448.html | 2020-01-17T18:47:46 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.netapp.com |
Initializing New Segments
A newer version of this documentation is available. Use the version menu above to view the most up-to-date release of the Greenplum 5.x documentation.
Initializing New Segments
Use the gpexpand utility to initialize the new segments, create the expansion schema, and set a system-wide random distribution policy for the database.
The first time you run gpexpand with a valid input file it creates the expansion schema and sets the distribution policy for all tables to DISTRIBUTED RANDOMLY. After these steps are completed, running gpexpand detects if the expansion schema has been created and, if so, performs table redistribution.
Creating an Input File for System Expansion
To begin expansion, gpexpand requires an input file containing information about the new segments and hosts. If you run gpexpand without specifying an input file, the utility displays an interactive interview that collects the required information and automatically creates an input file.
If you create the input file using the interactive interview, you may specify a file with a list of expansion hosts in the interview prompt. If your platform or command shell limits the length of the host list, specifying the hosts with -f may be mandatory.
Creating an input file in Interactive Mode
Before you run gpexpand to create an input file in interactive mode, ensure you know:
- The number of new hosts (or a hosts file)
- The new hostnames (or a hosts file)
- The mirroring strategy used in existing hosts, if any
- The number of segments to add per host, if any
The utility automatically generates an input file based on this information, as well as the dbid, content ID, and data directory values stored in gp_segment_configuration, and saves the file in the current directory.
To create an input file in interactive mode
- Log in on the master host as the user who will run your Greenplum Database system; for example, gpadmin.
- Run gpexpand. The utility displays messages about how to prepare for an expansion operation, and it prompts you to quit or continue.
Optionally, specify a hosts file using -f. For example:
$ gpexpand -f /home/gpadmin/new_hosts_file
- At the prompt, select Y to continue.
- Unless you specified a hosts file using -f, you are prompted to enter hostnames. Enter a comma separated list of the hostnames of the new expansion hosts. Do not include interface hostnames. For example:
> sdw4, sdw5, sdw6, sdw7
To add segments to existing hosts only, enter a blank line at this prompt. Do not specify localhost or any existing host name.
- Enter the mirroring strategy used in your system, if any. Options are spread|grouped|none. The default setting is grouped.
Ensure you have enough hosts for the selected grouping strategy. For more information about mirroring, see Planning Mirror Segments.
- Enter the number of new primary segments to add, if any. By default, new hosts are initialized with the same number of primary segments as existing hosts. Increase segments per host by entering a number greater than zero. The number you enter will be the number of additional segments initialized on all hosts. For example, if existing hosts currently have two segments each, entering a value of 2 initializes two more segments on existing hosts, and four segments on new hosts.
- If you are adding new primary segments, enter the new primary data directory root for the new segments. Do not specify the actual data directory name, which is created automatically by gpexpand based on the existing data directory names.
For example, if your existing data directories are as follows:
/gpdata/primary/gp0 /gpdata/primary/gp1
then enter the following (one at each prompt) to specify the data directories for two new primary segments:
/gpdata/primary /gpdata/primary
When the initialization runs, the utility creates the new directories gp2 and gp3 under /gpdata/primary.
- If you are adding new mirror segments, enter the new mirror data directory root for the new segments. Do not specify the data directory name; it is created automatically by gpexpand based on the existing data directory names.
For example, if your existing data directories are as follows:
/gpdata/mirror/gp0 /gpdata/mirror/gp1
enter the following (one at each prompt) to specify the data directories for two new mirror segments:
/gpdata/mirror /gpdata/mirror
When the initialization runs, the utility will create the new directories gp2 and gp3 under /gpdata/mirror.
These primary and mirror root directories for new segments must exist on the hosts, and the user running gpexpand must have permissions to create directories in them.
After you have entered all required information, the utility generates an input file and saves it in the current directory. For example:
gpexpand_inputfile_yyyymmdd_145134
Expansion Input File Format
Use the interactive interview process to create your own input file unless your expansion scenario has atypical needs.
The format for expansion input files is:
hostname:address:port:fselocation:dbid:content:preferred_role:replication_port
For example:
sdw5:sdw5-1:50011:/gpdata/primary/gp9:11:9:p:53011 sdw5:sdw5-2:50012:/gpdata/primary/gp10:12:10:p:53011 sdw5:sdw5-2:60011:/gpdata/mirror/gp9:13:9:m:63011 sdw5:sdw5-1:60012:/gpdata/mirror/gp10:14:10:m:63011
For each new segment, this format of expansion input file requires the following: hostname (the segment host's name), address (the host name or IP address of the interface the segment uses), port (the port the segment database listens on), fselocation (the segment's data directory), dbid and content (the database ID and content ID for the new segment, continuing on from the existing segments), preferred_role (p for a primary or m for a mirror), and replication_port (the port the segment uses for replication).
Running gpexpand to Initialize New Segments
After you have created an input file, run gpexpand to initialize new segments. The utility automatically stops Greenplum Database segment initialization and restarts the system when the process finishes.
To run gpexpand with an input file
- Log in on the master host as the user who will run your Greenplum Database system; for example, gpadmin.
- Run the gpexpand utility, specifying the input file with -i. Optionally, use -D to specify the database in which to create the expansion schema. For example:
$ gpexpand -i input_file -D database1
The utility detects if an expansion schema exists for the Greenplum Database system. If a schema exists, remove it with gpexpand -c before you start a new expansion operation. See Removing the Expansion Schema.
When the new segments are initialized and the expansion schema is created, the utility prints a success message and exits.
When the initialization process completes, you can connect to Greenplum Database and view the expansion schema. The schema resides in the database you specified with -D or in the database specified by the PGDATABASE environment variable. For more information, see About the Expansion Schema.
Rolling Back a Failed Expansion Setup
You can roll back an expansion setup operation only if the operation fails.
If the expansion fails during the initialization step, while the database is down, you must first restart the database in master-only mode by running the gpstart -m command.
gpexpand --rollback -D database_name | https://gpdb.docs.pivotal.io/5100/admin_guide/expand/expand-initialize.html | 2020-01-17T20:28:44 | CC-MAIN-2020-05 | 1579250590107.3 | [] | gpdb.docs.pivotal.io |
Before upgrading to SnapCenter 3.0 or later, you must be aware of certain limitations.
You must upgrade from SnapCenter 1.0 to SnapCenter 1.1, and then upgrade to SnapCenter 3.0 or later.
You must run the Protect-SmRepository command to protect the repository database and to create new schedules. You do not have to rerun the protection of the repository database if you are upgrading from SnapCenter 2.0 with MySQL Server to SnapCenter 3.0 or later. | http://docs.netapp.com/ocsc-41/topic/com.netapp.doc.ocsc-isg/GUID-35171119-5730-41C5-AACA-D21F35CB5933.html | 2020-01-17T19:40:25 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.netapp.com |
Begin by extracting the contents of the downloaded archive and creating a simple script named example.php.
$ cd predis-1.0
$ nano example.php
The script begins by including the class autoloader file and instantiating an object of the class:
require 'autoload.php'; $client = new Predis\Client(array( 'host' => '127.0.0.1', 'port' => 6379, 'password' => 'PASSWORD' ));
Notice that it configures the client object by defining the Redis server host, port and password. Replace these values with actual values for your server.
You can now use the object’s set() and get() methods to add or remove values from the cache. In this example, the set() method stores the value ‘cowabunga’ using the key ‘foo’. The key can then be used with the get() method to retrieve the original value whenever needed.
$client->set('foo', 'cowabunga'); $response = $client->get('foo');
Here’s the complete code for the example.php script:
<?php
require 'autoload.php';

$client = new Predis\Client(array(
    'host'     => '127.0.0.1',
    'port'     => 6379,
    'password' => 'PASSWORD'
));

$client->set('foo', 'cowabunga');
$response = $client->get('foo');
echo $response;
?>
Save the file and run it.
$ php example.php
The script will connect to your Redis server, save the value to the key ‘foo’, then retrieve and display it. | https://docs.bitnami.com/bch/apps/redash/troubleshooting/test-redis/ | 2020-01-17T19:31:42 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.bitnami.com |
There are many different ways to get your billing details using our system. We offer a billing API that is easy to access and consume using a Python script we provide in the section below.
There is also a tool on our dashboard that lets you view your costs over time:
If you use Help Scout for support, you can integrate it with Rollbar to see recent errors that affected your users who write in to support.
The Rollbar app will appear on the right sidebar in the Conversation view. It looks like this:
Setup Instructions
Note that this requires a paid Help Scout account.
- In Help Scout, click Apps in the top bar
- Click Build a Custom App
- Click Create App
- Fill out the form as follows. Make sure to replace PROJECT_READ_ACCESS_TOKEN with a
read-scope access token for the relevant project.
- App Name: Rollbar
- Content Type: Dynamic Content
- Callback URL:
- Secret Key: PROJECT_READ_ACCESS_TOKEN
- Debug Mode: Off
- Mailboxes: check all
- Click Save
Now when you navigate to a conversation in Help Scout, you'll see the Rollbar app showing the most recent 10 occurrences that affected the user who started the conversation.
DK11 for Java: tatukgis.jdk.TGIS_LayerVector.Locate
Locate a shape.
Available also on: Delphi | .NET | ActiveX.
// Java public TGIS_Shape Locate( TGIS_Point _ptg, double _prec );
// Oxygene public function Locate( _ptg : TGIS_Point; _prec : Double ) : TGIS_Shape; virtual;. | https://docs.tatukgis.com/DK11/api:dk11:java:tatukgis.jdk.tgis_layervector.locate_tgis_point_double | 2020-01-17T20:14:10 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.tatukgis.com |
Do
you have friends and contacts who aren't using a BlackBerry 10 device? If so, you can chat, call, send pictures, and share videos
with these contacts using joyn. If your wireless service provider supports joyn, within
the Contacts app or the Phone app, a joyn icon appears beside contacts who are using joyn, so
that you can easily find other users.
joyn for BlackBerry 10 might not be available on your device, depending on your wireless service provider.
For information about fees or conditions that might apply when using this application, contact your wireless service provider. | http://docs.blackberry.com/en/smartphone_users/deliverables/62002/mba1380822409210.html | 2014-10-20T08:15:29 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.blackberry.com |
In an active-standby model, if you install two MVS Session Manager instances, one of the MVS Session Manager instances becomes active and the other standby. The same holds for the two MVS BlackBerry Enterprise Server Connector instances: One becomes active, one standby. The components themselves determine which one becomes active and which one standby. If the active component fails, failover from the active component to the standby component is managed by the components in conjunction with the MVS Witness Server.
In an active-active model, components are always active. If a component fails, the BlackBerry MVS automatically stops using that component. High availability is achieved by providing two components of the same type at the BlackBerry Domain level, and not for every high availability pair.
For example, because MVS Data Manager instances use an active-active model, two MVS Data Manager instances provide redundancy for all BlackBerry MVS Server instances in the BlackBerry Domain. By comparison, there must be one active and one standby MVS Session Manager for every high availability pair.
Changes made by David Racodon (saved on Sep 04, 2013) and G. Ann Campbell (saved on Oct 09, 2013).
To check code against rules provided by FindBugs. This plugin is included in the Java Ecosystem.
It provides FindBugs rules.
SonarQube Java Plugin
FindBugs
...
FindBugs requires the compiled classes to run.
Make sure that you compile your source code with debug information on (to get the line numbers in the Java bytecode). Debug is usually turned on by default unless you're compiling with Ant, in which case you will need to turn it on explicitly. If the debug information is not available, the issues raised by FindBugs will be displayed at the beginning of the file because the correct line numbers will be missing.
The options for each value are Not Set (shown as "..."), Allow, or Deny.
The first thing to notice is the column headings: Admin, Login, Manage, Create, Delete, Edit, Edit State. These are the actions that a user can perform on an object in Joomla. The specific meaning of each action depends on the context. For the Global Configuration screen, they are defined as follows:
On the left side, we have the Groups for the site. In this case, we have the standard 7 groups that we had in version 1.5 plus we have an additional group called "Park Rangers". Notice that our groups are set up with similar permissions as they were in version 1.5. If you wanted the Park Rangers to be able to log in to the back end, you could just change their Login value to "Allow". If you wanted to not allow members of the Administrator group to delete objects or change their state, you would change their permissions in these columns to Inherit (or Deny).
For more information, please refer to:
At the top right you will see the toolbar:
The functions are: | http://docs.joomla.org/index.php?title=Help16:Site_Global_Configuration&oldid=34143 | 2014-10-20T09:13:56 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.joomla.org |
Description
The Wsclient plugin adds a remoting client capable of communicating via SOAP. It is compatible with Grails' Xfire plugin 0.8.1.
Installation
The current version of griffon-wsclient-plugin is 0.5
To install just issue the following command
Usage
The plugin will inject the following dynamic methods:
- withWs(Map params, Closure stmts) - executes stmts, issuing SOAP calls to a remote server.
- withWs(Map params, CallableWithArgs<T> stmts) - executes stmts, issuing SOAP calls to a remote server.
Where params may contain
These methods are also accessible to any component through the singleton
griffon.plugins.wsclient.WsclientConnector. You can inject these methods to non-artifacts via metaclasses. Simply grab hold of a particular metaclass and call
WsclientConnector.enhance(metaClassInstance).
Examples
This example relies on Grails as the service provider. Follow these steps to configure the service on the Grails side:
- Download a copy of Grails and install it.
- Create a new Grails application. We'll pick 'exporter' as the application name.
- Change into the application's directory. Install the xfire plugin.
- Create a MathService in grails-app/services/MathService.groovy
- Start the application
Now we're ready to build the Griffon application
- Create a new Griffon application. We'll pick MathClient as the name
- Install the wsclient plugin
- Fix the view script (griffon-app/views/MathClientView.groovy) to look like this
- Let's add the required properties to the model (griffon-app/models/MathClientModel.groovy)
- Now for the controller code (griffon-app/controllers/MathClientController.groovy). Notice that there is minimal error handling in place. If the user types something that is not a number the client will surely break, but the code is sufficient for now.
- Start the application
All dynamic methods will create a new client when invoked unless you define an id: attribute. When this attribute is supplied the client will be stored as a property on the instance's metaClass. You will be able to access it via regular property access or using the id: again.
Configuration
Dynamic method injection
Dynamic methods will be added to controllers by default. You can change this setting by adding a configuration flag in
Config.groovy | http://docs.codehaus.org/display/GRIFFON/Wsclient+Plugin | 2014-10-20T08:10:10 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.codehaus.org |
Overview
The buildit script provides a quick way to build and test Jikes RVM. To see the full set of options, run the script with its help option, or read the script itself.
Examples
Building
To build a production image on your desktop, habanero, do the following:
To build a production image on the remote machine jalapeno, do the following:
Cross Platform Building
To build a production image on the remote PowerPC machine chipotle, do the following:
Since building on a PowerPC machine can take a long time, you might prefer to build on your x86 machine jalapeno and cross-build to chipotle. In that case you would just do the following:
Full Build Specification
If you want to specify the build fully, you can do something like this:
If you want to specify multiple different GCs you could do:
which would build all three configurations on jalapeno.
Profiled Builds:
Testing
Running a test-run
To run the pre-commit test-run on your host jalapeno just do:
Running a test
To run the dacapo tests against a production build on the host jalapeno, do:
To run the dacapo tests against a FastAdaptive MarkSweep build, on the host jalapeno, do:
To run the dacapo and SPECjvm98 tests against production on the host jalapeno, do: | http://docs.codehaus.org/pages/viewpage.action?pageId=77692988 | 2014-10-20T08:39:51 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.codehaus.org |
The Kiip plugin enables real rewards to be delivered within an app based on user interaction. These rewards come in the form of a "poptart", offering actual rewards such as gift cards to the user.
In order for you to use the Kiip plugin, you must add it to the plugins table of your build.settings file, for example:
settings =
{
    plugins =
    {
        ["plugin.kiip"] =
        {
            publisherId = "com.gremlininteractive",
        },
    },
}
To develop with Kiip, you need the proper developer credentials from the Kiip Developer site.
This plugin is a product of Gremlin Interactive™. All support and questions are handled by the Gremlin Interactive™ team. | http://docs.coronalabs.com/daily/plugin/kiip/ | 2014-10-20T08:08:53 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.coronalabs.com |
If you know the date and the city where you will hold your Joomla!Day, you should send in your request for a Joomla!Day. You can do that via a simple form on the OSM website.
Please contact OSM directly to request this support.
When you try to find sponsors, don't forget that you are dealing with companies. They expect professional and reliable information and agreements.
Before contacting sponsors, prepare. | http://docs.joomla.org/index.php?title=How_to_Organize_a_Joomla!Day&oldid=63753 | 2014-10-20T09:10:12 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.joomla.org |
SRGB
Overview
sRGB colorspace is a complex subject that involves understanding how monitors display color, how pictures/movies are stored, how our eyes percieve color and brightness and how the limited precision of 8-bit images is optimized to store color information more efficiently. sRGB colorspace is very similar to a gamma 2.2 curve, with a linear portion at the low end.
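For reference, the standard (IEC 61966-2-1) sRGB transfer functions discussed throughout this page are given below; the formulas are added here for convenience and are not part of the original wiki text.

C_{sRGB} = \begin{cases} 12.92\, C_{linear}, & C_{linear} \le 0.0031308 \\ 1.055\, C_{linear}^{1/2.4} - 0.055, & C_{linear} > 0.0031308 \end{cases}

C_{linear} = \begin{cases} C_{sRGB} / 12.92, & C_{sRGB} \le 0.04045 \\ \left( \frac{C_{sRGB} + 0.055}{1.055} \right)^{2.4}, & C_{sRGB} > 0.04045 \end{cases}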
Some other articles that help explain sRGB:
Monitors
Generally monitors don't display pixel values linearly. There is usually a curve to how much the brightness changes vs. the pixel's color. Many monitors use the inverse sRGB curve as their outputs, but calibration and other color adjustments affect this. This curve by default will make things darker, as is shown in the curve images in the listed articles. This is done to counter the sRGB curve that is applied to images which the monitor manufacturers know they are usually showing. The two curves essentially cancel each other out, resulting in a visually linear display of brightness.
Visualizing and understanding sRGB on Monitors
File:SRGBVisualized.9.toe
The above
.toe file helps visualize what is going on with your monitor. The node '
fakedMidGrey' is a grid mix of 100% black and 100% white pixels. If you squint a little your eyes will blend these into what it perceieves as mid-grey. Notice how this image is very close is brightness to the '
realMidGrey' which has pixel values of 0.7333, and quite a bit brighter than the '
linearMidGrey' of 0.5. This is because the monitor is darkening the output using the inverse sRGB curve. Note that the monitor's curve is NOT affecting the brightness of '
fakedMidGrey' at all, because it's only showing pixels with the darkest and brightest possible values. The curve won't change these. So we've established what real mid-grey should look like as a constant color. If '
realMidGrey' does not look like '
fakedMidGrey' on your monitor, then it means your monitor curve is not sRGB (gamma 2.2).
Next, look at the '
linearRamp' node. This may look like a correct ramp, especially if you are used to seeing it in TouchDesigner. One would then expect mid-grey to be directly in the middle of this ramp. The sRGBRamp is the same ramp, converted to sRGB space. This ramp may look incorrect depending on what you are used to. Regardless, look at the Over TOPs that are comparing the correct mid-grey with the ramps. Notice how the mid-grey blends in with the sRGB ramp in its middle, while it only blends in with the linearRamp near its top end.
This example should help give a better idea how your monitor shows color values.
Image Data
Image files such as
.jpg,
.gif etc usually store the data in 8-bits per color channel. 8-bits per color channel only allows for 256 unique levels of information. This isn't enough information to accurately represent colors to the human eye, so tradeoffs need to be made. Humans have better color perception for darks rather than brights, so more of these levels are used to encode the dark information than the bright information. This is generally done by brightening the images with a gamma or sRGB curve.
Consequently, sRGB isn't really a way of brightening data or color-correcting data, but is rather a way of compressing color data in a way that stores the information more efficiently for our eyes. If all images were stored as 32-bit float (or even 10-bit fixed likely), then sRGB would not be needed. It can be thought of more as an encoding rather than a color-correction. This is assuming though, that you were using a monitor that wasn't applying the inverse sRGB transform on values it was receiving. Since most monitors do this, we need to apply the sRGB transform to counteract that behavior.
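To make the "more levels for the darks" point concrete, here is a small counting sketch written for this page (plain Java, using the standard sRGB decode curve; it is not part of the original article):

public class SrgbLevels {
    public static void main(String[] args) {
        // Count how many of the 256 8-bit sRGB code values decode to a linear value below mid-grey (0.5).
        int below = 0;
        for (int code = 0; code < 256; code++) {
            double s = code / 255.0;
            double linear = (s <= 0.04045) ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
            if (linear < 0.5) {
                below++;
            }
        }
        // Prints roughly 188 of 256: most codes are spent on the darker half of the linear range.
        System.out.println(below + " of 256 sRGB codes fall below linear mid-grey");
    }
}

A plain linear 8-bit encoding would spend only 128 of its 256 codes on that same darker half, which is why the sRGB curve wastes far fewer levels on the bright end.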
sRGB Pixel Format in TouchDesigner
The sRGB Pixel Format in TouchDesigner is what OpenGL/DirectX provides as a hardware-accelerated format. When using this pixel format you will notice that this doesn't affect the brightness of the image. This is because it is only using sRGB as a way to encode the color data into the 8-bit pixels it has. When sampling the texture for viewing the GPU inverts the sRGB curve and brings the data back into its original values. This allows the data to be stored using an sRGB curve, but used in linear space, since it's converted to linear during sampling automatically.
This file helps visualize sRGB as an encoding. Notice how with sRGB encoding the same ramp is much more steppy at the bright end, and smoother at the low end. This helps keep more information at the low end, where we have better perception.
Higher Precision Pixel Formats
Since higher precision pixel formats have many more levels of values available to store information (10-bit has 1024 levels), using a sRGB curve to store the data isn't nesseary. These formats can stay linear, and more file formats that are storing higher bit precision data (such as
.exr with 16-bit float or 32-bit float) will be storing the data linearly.
Rendering
All rendering operations are done in linear color space, and stored by default in linear space. If you are working with only 8-bit precision, you can store them in sRGB space instead by setting the pixel format to sRGB 8-bit. This will store the color information more efficiently. Downstream TOPs will automatically decode the sRGB to linear when doing their operation. When working in higher precision pixel formats, the data can simply stay linear.
TouchDesigner and sRGB
TouchDesigner does not have a proper sRGB pipeline currently. A few things are done wrong by default (which we plan to fix). First, we load files that are sRGB encoded (such as JPG) as-is. Technically we should be either converting them to linear, or load them in an sRGB texture by default.
This can be fixed manually by selecting 'Input is sRGB' on the 'Image' page of the Movie File In TOP. Without doing this we are doing compositing operations (which must be done in linear space) using color data that has a sRGB curve applied to them, which is incorrect. With 'Input is sRGB' on, an sRGB texture will store the data using the sRGB curve, then when color values are sampled in the TOP for any operation, it is converted to linear space automatically.
The other thing that TouchDesigner does not do correctly currently is convert linear data back to sRGB before displaying it. In the Monitors section of this article we showed why that was nesseary. Essentially, we should 'brighten' the image to sRGB curve, to counteract the inverse sRGB curve the monitor has. Currently that can be done by using the OpenColorIO TOP.
TOuch Environment file, the file type used by TouchDesigner to save your project.
The Graphics Processing Unit. This is the high-speed, many-core processor of the graphics card/chip that takes geometry, images and data from the CPU and creates images and processed data.. | https://docs.derivative.ca/index.php?title=SRGB&diff=cur&oldid=16475 | 2021-01-15T17:42:06 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.derivative.ca |
{"title":"Release note 11.05.18","slug":"release-note-110518","body":"## Define compute resources per task run (API)\nWhen creating a task via the API, you are now able to set instance type (top level) and maximum number of parallel instances for your execution without the need to create a new version of the app.\n\nThe following API calls have been modified to allow you to set or retrieve instance information for a specific task:\n* [Create a new draft task]()\n* [Modify a task]()\n* [Get details of a task]()\n\nSupport for compute resource settings per task run will soon be available on our visual interface as well.\n\n## Folders become a standard API feature\nFolders.","_id":"5be05ae1af149e000ff234ef","project":"5773dcfc255e820e00e1cd4d","user":{"name":"Marko Marinkovic","username":"","_id":"5767bc73bb15f40e00a28777"},"createdAt":"2018-11-05T14:59:45.531Z","changelog":[],"__v":0,"metadata":{"title":"","description":"","image":[]}} | https://docs.cavatica.org/v1.0/blog/release-note-110518 | 2021-01-15T16:54:27 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.cavatica.org |
produce stalls in read performance and other problems. For example, two clients read at the same time, one overwrites the row to make update A, and then the other overwrites the row to make update B, removing update A. Reading before writing also corrupts caches and increases IO requirements. To avoid a read-before-write condition, the storage engine groups inserts/updates to be made, and sequentially writes only the updated parts of a row in append mode. Cassandra never re-writes or re-reads existing data, and never overwrites the rows in place.
A log-structured engine that avoids overwrites and uses sequential IO to update data is essential for writing to hard disks (HDD) and solid-state disks (SSD). On HDD, writing randomly involves a higher number of seek operations than sequential writing. The seek penalty incurred can be substantial. Using sequential IO, and thereby avoiding write amplification and disk failure, Cassandra accommodates inexpensive, consumer SSDs extremely well. | https://docs.datastax.com/en/cassandra-oss/2.1/cassandra/dml/dml_manage_ondisk_c.html | 2021-01-15T18:18:41 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.datastax.com |
You review the job failure error message in the Cause field on the Event details page and determine that the job failed because of a Snapshot copy error. You then proceed to the Volume / Health details page to gather more information.
You must have the Application Administrator role.
Protection Job Failed. Reason: (Transfer operation for relationship 'cluster2_src_svm:cluster2_src_vol2->cluster3_dst_svm: managed_svc2_vol3' ended unsuccessfully. Last error reported by Data ONTAP: Failed to create Snapshot copy 0426cluster2_src_vol2snap on volume cluster2_src_svm:cluster2_src_vol2. (CSM: An operation failed due to an ONC RPC failure.).) Job DetailsThis message provides the following information:
The job involved a protection relationship between the source volume cluster2_src_vol2 on the virtual server cluster2_src_svm and the destination volume managed_svc2_vol3 on the virtual server named cluster3_dst_svm.
In this scenario, you can identify the cause and potential corrective actions of the job failure. However, resolving the failure requires that you access either the System Manager web UI or the ONTAP CLI commands. | https://docs.netapp.com/ocum-98/topic/com.netapp.doc.onc-um-protect/GUID-A3D71D7A-F0C3-4DD2-BADC-A0B7E4FB70F5.html?lang=en | 2021-01-15T17:42:11 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.netapp.com |
Explore
You can search for campaigns, volunteer opportunities, and organizations that interest you.
You can search for customized offerings in Featured Philanthropy Cloud content. Integrated search capabilities are also available throughout the app.
When searching for nonprofits or volunteer opportunities, you can filter your search results based on Location.
Volunteer opportunities can also be filtered by Date and attendance type, such as In-Person Volunteering or Virtual Volunteering. | https://docs.philanthropycloud.com/docs/en/Explore_topic-3623 | 2021-01-15T18:46:45 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.philanthropycloud.com |
Ways of Getting a Painting Company
Finding a Painting Company is of the things that you should think of when painting your building. But it must be in your mind that many companies are ready to offer you painting services. For this reason, you should know the steps that will help you in getting the best painting companies. read more now and get the information that will help you in getting the best painting company. Number one, the painting companies have grown in demand that is why you will find a lot of them in the market. So the first thing to do is asking for recommendations.
As mentioned above, many people are working with the painting companies already, talking to them will help you get the best. Talking to these people will help you a lot since they are aware of the kind of work these companies are offering. Go to the internet and find. The longer the company has been in the market the more experience they will get. According to the record, Phyxter has been in the industry for a long.
these painting services have worked with so many customers hence are aware of the things that are involved during the project. If you need to be sure of the work offered by the companies, then look at the things that they have done in the past. The first thing to do is asking the customers to show you some of the work the companies they will refer you to have done. when you look at the official website of the company, you will find pictures of the work that these companies have done.. | http://docs-prints.com/2020/10/05/what-has-changed-recently-with-25/ | 2021-01-15T17:18:48 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs-prints.com |
If a restore is being performed as part of the recovery procedure, consider using the tungsten_provision_slave tool. This is will work for restoring from the Primary or a Replica and is faster if you do not already have a backup ready to be restored. For more information, see Section 5.6.1.1, “Provision or Reprovision a Replica”.
Data can be restored to a Replica by performing a backup on a different Replica, transferring the backup information to the Replica you want to restore, and then running restore process.
For example, to restore the
host3
from a backup performed on
host2 :
Run the backup operation on
host2 :
. | https://docs.continuent.com/tungsten-clustering-5.4/operations-restore-otherreplica.html | 2021-01-15T17:34:23 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.continuent.com |
Categorizing pages
Categorizing is useful when you want to distinguish related pages in the system based on, for example, their topic. You can create a category hierarchy in which you can map the various topics the pages on your site cover. You can also use an alternative approach to sorting pages as described in Tagging pages.
Developers can create pages so that users can then view pages based on the categories they are interested in.
Listing pages based on categories
To categorize pages:
Was this page helpful? | https://docs.xperience.io/k12sp/managing-website-content/working-with-pages/categorizing-pages | 2021-01-15T18:43:38 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.xperience.io |
Authors: The Grails Team
Version: 4.0.4
Before installing Grails 4.0.4 you will need, as a minimum, a Java Development Kit (JDK) version 1.8 or above installed. Download the appropriate JDK for your operating system, run the installer, and then set up an environment variable called JAVA_HOME pointing to the location of this installation.
To automate the installation of Grails we recommend SDKMAN which greatly simplifies installing and managing multiple Grails versions.
On some platforms (for example OS X) the Java installation is automatically detected. However in many cases you will want to manually configure the location of Java. For example, if you’re using bash or another variant of the Bourne Shell:
export JAVA_HOME=/Library/Java/Home
export PATH="$PATH:$JAVA_HOME/bin"
On Windows this is typically a matter of setting an environment variable under My Computer/Advanced/Environment Variables.
The first step to getting up and running with Grails is to install the distribution.
The best way to install Grails on *nix systems is with SDKMAN which greatly simplifies installing and managing multiple Grails versions.
To install the latest version of Grails using SDKMAN, run this on your terminal:
sdk install grails
You can also specify a version
sdk install grails 4.0.4
You can find more information about SDKMAN usage on the SDKMAN Docs
For manual installation, first download a binary distribution of Grails and extract it to a location of your choice, then set the GRAILS_HOME environment variable to point to that location. On Unix/Linux based systems this is typically a matter of adding the following to your profile:
export GRAILS_HOME=/path/to/grails
On Windows this is typically a matter of setting an environment variable under My Computer/Advanced/Environment Variables
Then add the bin directory to your PATH variable:
On Unix/Linux based systems this can be done by adding export PATH="$PATH:$GRAILS_HOME/bin" to your profile
export PATH="$PATH:$GRAILS_HOME/bin"
On Windows this is done by modifying the Path environment variable under My Computer/Advanced/Environment Variables
If Grails is working correctly you should now be able to type grails -version in the terminal window and see output similar to this:
grails -version
Grails version: 4.0.4
To create a Grails application you first need to familiarize yourself with the usage of the grails command which is used in the following manner:
grails <<command name>>
Run create-app to create an application:
grails create-app helloworld
This will create a new directory inside the current one that contains the project. Navigate to this directory in your console:
cd helloworld
Let’s now take the new project and turn it into the classic "Hello world!" example. First, change into the "helloworld" directory you just created and start the Grails interactive console:
$ cd helloworld
$ grails
You should see a prompt that looks like this:
What we want is a simple page that just prints the message "Hello World!" to the browser. In Grails, whenever you want a new page you just create a new controller action for it. Since we don’t yet have a controller, let’s create one now with the create-controller command:
grails> create-controller hello
Don’t forget that in the interactive console, we have auto-completion on command names. So you can type "cre" and then press <tab> to get a list of all create-* commands. Type a few more letters of the command name and then <tab> again to finish.
The above command will create a new controller in the grails-app/controllers/helloworld directory called HelloController.groovy. Why the extra helloworld directory? Because in Java land, it’s strongly recommended that all classes are placed into packages, so Grails defaults to the application name if you don’t provide one. The reference page for create-controller provides more detail on this.
We now have a controller so let’s add an action to generate the "Hello World!" page. In any text editor, edit the new controller — the HelloController.groovy file — by adding a render line. The edited file’s code should look like this:
package helloworld
class HelloController {
def index() {
render "Hello World!"
}
}
The action is simply a method. In this particular case, it calls a special method provided by Grails to render the page.
Job done. To see your application in action, you just need to start up a server with another command called run-app:
grails> run-app
This will start an embedded server on port 8080 that hosts your application. You should now be able to access your application at the URL - try it!
To set a context path for your application, you can add a configuration property in grails-app/conf/application.yml:
server:
servlet:
context-path: /helloworld
With the above configuration in place the server will instead startup at the URL.
Alternatively, you can also set the context path via the command line:
grails> run-app -Dgrails.server.servlet.context-path=/helloworld
Note: you can run the server on a different port with run-app -port=9090. If a command fails, re-run it with the --stacktrace argument to see the full error output. On Windows, if you run into classpath length issues, you can enable the pathing jar by adding grails { pathingJar = true } to build.gradle.
The result will look something like this:
This is the Grails intro page which is rendered by the grails-app/views/index.gsp file. It detects the presence of your controllers and provides links to them. You can click on the "HelloController" link to see our custom page containing the text "Hello World!". Voila! You have your first working Grails application.
One final thing: a controller can contain many actions, each of which corresponds to a different page (ignoring AJAX at this point). Each page is accessible via a unique URL that is composed from the controller name and the action name: /<appname>/<controller>/<action>. This means you can access the Hello World page via /helloworld/hello/index, where 'hello' is the controller name (remove the 'Controller' suffix from the class name and lower-case the first letter) and 'index' is the action name. But you can also access the page via the same URL without the action name: this is because 'index' is the default action. See the end of the controllers and actions section of the user guide to find out more on default actions.
Since 3.0, Grails has an interactive mode which makes command execution faster since the JVM doesn’t have to be restarted for each command. To use interactive mode simple type 'grails' from the root of any projects and use TAB completion to get a list of available commands. See the screenshot below for an example:
For more information on the capabilities of interactive mode refer to the section on Interactive Mode in the user guide.
IntelliJ IDEA is an excellent IDE for Grails 4.0 development. It comes in 2 editions, the free community edition and the paid-for ultimate edition.
The community edition can be used for most things, although GSP syntax higlighting is only part of the ultimate edition
To get started with Intellij IDEA and Grails 4.0 simply go to File / Open and point IDEA at your build.gradle file to import and configure the project.
File / Open
There are several excellent text editors that work nicely with Groovy and Grails. See below for references:
A bundle is available for Groovy / Grails support in Textmate.
A plugin can be installed via Sublime Package Control for the Sublime Text Editor.
See this post for some helpful tips on how to setup VIM as your Grails editor of choice.
A package is available for use with the Atom editor.
An extension is available for use with Visual Studio Code.
grails-app - top level directory for Groovy sources
conf - Configuration sources
controllers - Web controllers - The C in MVC.
domain - The application domain. - The M in MVC
i18n - Support for internationalization (i18n).
services - The service layer.
taglib - Tag libraries.
utils - Grails specific utilities.
views - Groovy Server Pages or JSON Views - The V in MVC.
src/main/scripts - Code generation scripts.
src/main/groovy - Supporting sources
src/test/groovy - Unit and integration tests.
Grails applications can be run with the built in Tomcat server using the run-app command which will load a server on port 8080 by default:
grails run-app
You can specify a different port by using the -port argument:
grails run-app -port=8090
Note that it is better to start up the application in interactive mode since a container restart is much quicker:
$ grails
grails> run-app
| Grails application running at in environment: development
grails> stop-app
| Shutting down application...
| Application shutdown.
grails> run-app
| Grails application running at in environment: development
You can debug a grails app by simply right-clicking on the Application.groovy class in your IDE and choosing the appropriate action (since Grails 3).
Application.groovy
Alternatively, you can run your app with the following command and then attach a remote debugger to it.
grails run-app --debug-jvm
More information on the run-app command can be found in the reference guide.
The create-* commands in Grails automatically create unit or integration tests for you within the src/test/groovy directory. It is of course up to you to populate these tests with valid test logic, information on which can be found in the section on Unit and integration tests.
To execute tests you run the test-app command as follows:
grails test-app
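For example, a unit test for the HelloController created earlier might look like the following minimal sketch (it uses the Grails Testing Support trait included in new Grails 4 projects):

// src/test/groovy/helloworld/HelloControllerSpec.groovy
package helloworld

import grails.testing.web.controllers.ControllerUnitTest
import spock.lang.Specification

class HelloControllerSpec extends Specification implements ControllerUnitTest<HelloController> {

    void "index action renders the greeting"() {
        when: "the index action is invoked"
        controller.index()

        then: "the rendered text is the greeting"
        response.text == "Hello World!"
    }
}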
Grails applications can be deployed in a number of different ways.
If you are deploying to a traditional container (Tomcat, Jetty etc.) you can create a Web Application Archive (WAR file), and Grails includes the war command for performing this task:
grails war
This will produce a WAR file under the build/libs directory which can then be deployed as per your container’s instructions.
Note that by default Grails will include an embeddable version of Tomcat inside the WAR file, this can cause problems if you deploy to a different version of Tomcat. If you don’t intend to use the embedded container then you should change the scope of the Tomcat dependencies to provided prior to deploying to your production container in build.gradle:
provided "org.springframework.boot:spring-boot-starter-tomcat"
If you are building a WAR file to deploy on Tomcat 7 then in addition you will need to change the target Tomcat version in the build. Grails is built against Tomcat 8 APIs by default.
To target a Tomcat 7 container, insert a line to build.gradle above the dependencies { } section:
dependencies { }
ext['tomcat.version'] = '7.0.59'
Unlike most scripts which default to the development environment unless overridden, the war command runs in the production environment by default. You can override this like any script by specifying the environment name, for example:
grails dev war
If you prefer not to operate a separate Servlet container then you can simply run the Grails WAR file as a regular Java application. Example:
grails war
java -Dgrails.env=prod -jar build/libs/mywar-0.1.war
When deploying Grails you should always run your containers JVM with the -server option and with sufficient memory allocation. A good set of VM flags would be:
-server -Xmx768M -XX:MaxPermSize=256m
Grails runs on any container that supports Servlet 3.0 and above and is known to work on the following specific container products:
Tomcat 7
GlassFish 3 or above
Resin 4 or above
JBoss 6 or above
Jetty 8 or above
Oracle Weblogic 12c or above
IBM WebSphere 8.0 or above
Some containers have bugs however, which in most cases can be worked around. A list of known deployment issues can be found on the Grails wiki.
In addition, reference the Grails Guides for tips on how to deploy Grails to various popular Cloud services.
Grails ships with a few convenience targets such as create-controller, create-domain-class and so on that will create controllers and different artefact types for you.
NOTE: These are just for your convenience and you can just as easily use an IDE or your favourite text editor.
For example to create the basis of an application you typically need a domain model:
grails create-app helloworld
cd helloworld
grails create-domain-class book
This will result in the creation of a domain class at grails-app/domain/helloworld/Book.groovy such as:
grails-app/domain/helloworld/Book.groovy
package helloworld
class Book {
}
There are many such create-* commands that can be explored in the command line reference guide.
To get started quickly with Grails it is often useful to use a feature called scaffolding to generate the skeleton of an application. To do this use one of the generate-* commands such as generate-all, which will generate a controller (and its unit test) and the associated views:
grails generate-all helloworld.Book
You will need to upgrade your Grails version defined in gradle.properties.
Grails 3 app’s gradle.properties
...
grailsVersion=3.3.8
...
Grails 4 app’s gradle.properties
...
grailsVersion=4.0.0
...
If you were using GORM, you will need to update the version defined in gradle.properties.
...
gormVersion=6.1.10.RELEASE
...
...
gormVersion=7.0.2
...
GORM DSL entries should be moved to runtime.groovy. For instance, using the following GORM configuration in application.groovy is not supported and will break the application:
grails.gorm.default.mapping = {
id generator: 'identity'
}
Grails 4.0 is built on Spring 5 and Spring Boot 2.1. See the migration guide and release notes if you are using Spring specific features.
Grails 4.x supports a minimum version of Hibernate 5.4 and GORM 7.x. Several changes have been made to GORM to support the newer version of Hibernate and simplify GORM itself.
The details of these changes are covered in the GORM upgrade documentation.
Please check the Spring Boot Actuator documentation since it has changed substantially from Spring Boot 1.5 the version Grails 3.x used.
If you had configuration such as:
endpoints:
enabled: false
jmx:
enabled: true
unique-names: true
replace it with:
spring:
jmx:
unique-names: true
management:
endpoints:
enabled-by-default: false
Previous versions of Grails used a reloading agent called Spring Loaded. Since this library is no longer maintained and does not support Java 11 support for Spring Loaded has been removed.
As a replacement, Grails 4 applications include Spring Boot Developer Tools dependencies in the build.gradle build script. If you are migrating a Grails 3.x app, please include the following set of dependencies:
.
..
...
configurations {
developmentOnly
runtimeClasspath {
extendsFrom developmentOnly
}
}
dependencies {
developmentOnly("org.springframework.boot:spring-boot-devtools")
...
..
}
...
..
.
Also you should configure the necessary excludes for Spring Developer Tools in application.yml:
application.yml
spring:
devtools:
restart:
exclude:
- grails-app/views/**
- grails-app/i18n/**
- grails-app/conf/**
The above configuration prevents the server from restarting when views or message bundles are changed.
Grails 4 is built on top of Spring Boot 2.1. Grails 3 apps were built on top of Spring Boot 1.x.
Your Grails 3 app’s build.gradle may have such configuration:
bootRun {
addResources = true
...
}
Grails 4 apps are built on top of Spring Boot 2.1. Starting from Spring Boot 2.0, the addResources property no longer exists. Instead, you need to set the sourceResources property to the source set that you want to use. Typically that’s sourceSets.main. This is described in the Spring Boot Gradle plugin’s documentation.
Your Grails 4 app’s build.gradle can be configured:
bootRun {
sourceResources sourceSets.main
...
}
Spring Boot’s new Gradle Plugin:
The bootRepackage task has been replaced with bootJar and bootWar tasks for building executable jars and wars respectively. Both tasks extend their equivalent standard Gradle jar or war task, giving you access to all of the usual configuration options and behaviour.
The bootRepackage task has been replaced with bootJar and bootWar tasks for building executable jars and wars respectively. Both tasks extend their equivalent standard Gradle jar or war task, giving you access to all of the usual configuration options and behaviour.
// enable if you wish to package this plugin as a standalone application
bootRepackage.enabled = false
// enable if you wish to package this plugin as a standalone application
bootJar.enabled = false
Grails 3 apps by default used Gradle 3.5. Grails 4 apps use Gradle 5.
To upgrade to Gradle 5 execute:
./gradlew wrapper --gradle-version 5.0
Due to changes in Gradle 5, transitive dependencies are no longer resolved for plugins. If your project makes use of a plugin that has transitive dependencies, you will need to add those explicitly to your build.gradle file.
If you customized your app’s build, other migrations may be necessary. Please check
Gradle Upgrading your build documentation.
Spring Boot 2.1 includes native support for the H2 database web console. Since this is already included in Spring Boot the equivalent feature has been removed from Grails. The H2 console is therefore now available at /h2-console instead of the previous URI of /dbconsole. See Using H2’s Web Console in the Spring Boot documentation for more information.
If you were using GORM for Hibernate implementation in your Grails 3 app, you will need to upgrade to Hibernate 5.4.
A Grails 3 build.gradle such as:
dependencies {
...
compile "org.grails.plugins:hibernate5"
compile "org.hibernate:hibernate-core:5.1.5.Final"
}
will be in Grails 4:
dependencies {
...
compile "org.grails.plugins:hibernate5"
compile "org.hibernate:hibernate-core:5.4.0.Final"
}
Geb 1.1.x (a JDK 1.7 compatible version) was the version shipped by default with Grails 3. Grails 4 is no longer compatible with Java 1.7. You should migrate to Geb 2.3.
In Grails 3, if your build.gradle looks like:
dependencies {
testCompile "org.grails.plugins:geb:1.1.2"
testRuntime "org.seleniumhq.selenium:selenium-htmlunit-driver:2.47.1"
testRuntime "net.sourceforge.htmlunit:htmlunit:2.18"
}
In Grails 4, you should replace it with:
buildscript {
repositories {
...
}
dependencies {
...
classpath "gradle.plugin.com.energizedwork.webdriver-binaries:webdriver-binaries-gradle-plugin:$webdriverBinariesVersion" (1)
}
}
...
..
repositories {
...
}
apply plugin:"idea"
...
...
apply plugin:"com.energizedwork.webdriver-binaries" (1)
dependencies {
...
testCompile "org.grails.plugins:geb" (4)
testRuntime "org.seleniumhq.selenium:selenium-chrome-driver:$seleniumVersion" (5)
testRuntime "org.seleniumhq.selenium:selenium-firefox-driver:$seleniumVersion" (5)
testRuntime "org.seleniumhq.selenium:selenium-safari-driver:$seleniumSafariDriverVersion" (5)
testCompile "org.seleniumhq.selenium:selenium-remote-driver:$seleniumVersion" (5)
testCompile "org.seleniumhq.selenium:selenium-api:$seleniumVersion" (5)
testCompile "org.seleniumhq.selenium:selenium-support:$seleniumVersion" (5)
}
webdriverBinaries {
chromedriver "$chromeDriverVersion" (2)
geckodriver "$geckodriverVersion" (3)
}
tasks.withType(Test) {
systemProperty "geb.env", System.getProperty('geb.env')
systemProperty "geb.build.reportsDir", reporting.file("geb/integrationTest")
systemProperty "webdriver.chrome.driver", System.getProperty('webdriver.chrome.driver')
systemProperty "webdriver.gecko.driver", System.getProperty('webdriver.gecko.driver')
}
gebVersion=2.3
seleniumVersion=3.12.0
webdriverBinariesVersion=1.4
hibernateCoreVersion=5.1.5.Final
chromeDriverVersion=2.44 (2)
geckodriverVersion=0.23.0 (3)
seleniumSafariDriverVersion=3.14.0
Create also a Geb Configuration file at src/integration-test/resources/GebConfig.groovy.
import org.openqa.selenium.chrome.ChromeDriver
import org.openqa.selenium.chrome.ChromeOptions
import org.openqa.selenium.firefox.FirefoxDriver
import org.openqa.selenium.firefox.FirefoxOptions
import org.openqa.selenium.safari.SafariDriver
environments {
// You need to configure in Safari -> Develop -> Allowed Remote Automation
safari {
driver = { new SafariDriver() }
}
// run via “./gradlew -Dgeb.env=chrome iT”
chrome {
driver = { new ChromeDriver() }
}
// run via “./gradlew -Dgeb.env=chromeHeadless iT”
chromeHeadless {
driver = {
ChromeOptions o = new ChromeOptions()
o.addArguments('headless')
new ChromeDriver(o)
}
}
// run via “./gradlew -Dgeb.env=firefoxHeadless iT”
firefoxHeadless {
driver = {
FirefoxOptions o = new FirefoxOptions()
o.addArguments('-headless')
new FirefoxDriver(o)
}
}
// run via “./gradlew -Dgeb.env=firefox iT”
firefox {
driver = { new FirefoxDriver() }
}
}
The following classes, which were deprecated in Grails 3.x, have been removed in Grails 4. Please check the list below to find a suitable replacement:
Removed Class → Alternative (classes listed without an arrow have no alternative listed)
org.grails.datastore.gorm.validation.constraints.UniqueConstraint → org.grails.datastore.gorm.validation.constraints.builtin.UniqueConstraint
grails.util.BuildScope
grails.transaction.GrailsTransactionTemplate → grails.gorm.transactions.GrailsTransactionTemplate
org.grails.transaction.transform.RollbackTransform → org.grails.datastore.gorm.transactions.transform.RollbackTransform
grails.transaction.NotTransactional → grails.gorm.transactions.NotTransactional
grails.transaction.Rollback → grails.gorm.transactions.Rollback
grails.transaction.Transactional → grails.gorm.transactions.Transactional
org.grails.config.FlatConfig
org.grails.core.metaclass.MetaClassEnhancer → Use traits instead.
org.grails.core.util.ClassPropertyFetcher → org.grails.datastore.mapping.reflect.ClassPropertyFetcher
org.grails.transaction.transform.TransactionalTransform → org.grails.datastore.gorm.transactions.transform.TransactionalTransform
grails.core.ComponentCapableDomainClass
grails.core.GrailsDomainClassProperty → Use the org.grails.datastore.mapping.model.MappingContext API instead
org.grails.core.DefaultGrailsDomainClassProperty
org.grails.core.MetaGrailsDomainClassProperty
org.grails.core.support.GrailsDomainConfigurationUtil → Use the org.grails.datastore.mapping.model.MappingContext and org.grails.datastore.mapping.model.MappingFactory APIs instead
org.grails.plugins.domain.DomainClassPluginSupport
org.grails.plugins.domain.support.GormApiSupport
org.grails.plugins.domain.support.GrailsDomainClassCleaner → Handled by org.grails.datastore.mapping.model.MappingContext now
grails.validation.AbstractConstraint → Use org.grails.datastore.gorm.validation.constraints.AbstractConstraint instead
grails.validation.AbstractVetoingConstraint → org.grails.datastore.gorm.validation.constraints.AbstractVetoingConstraint
grails.validation.CascadingValidator → grails.gorm.validation.CascadingValidator
grails.validation.ConstrainedProperty → grails.gorm.validation.ConstrainedProperty
grails.validation.Constraint → grails.gorm.validation.Constraint
grails.validation.ConstraintFactory → org.grails.datastore.gorm.validation.constraints.factory.ConstraintFactory
grails.validation.VetoingConstraint → grails.gorm.validation.VetoingConstraint
grails.validation.ConstraintException
org.grails.validation.BlankConstraint → org.grails.datastore.gorm.validation.constraints.BlankConstraint
org.grails.validation.ConstrainedPropertyBuilder → org.grails.datastore.gorm.validation.constraints.builder.ConstrainedPropertyBuilder
org.grails.validation.ConstraintDelegate
org.grails.validation.ConstraintsEvaluatorFactoryBean → org.grails.datastore.gorm.validation.constraints.eval.ConstraintsEvaluator
org.grails.validation.CreditCardConstraint → org.grails.datastore.gorm.validation.constraints.CreditCardConstraint
org.grails.validation.DefaultConstraintEvaluator → org.grails.datastore.gorm.validation.constraints.eval.DefaultConstraintEvaluator
org.grails.validation.DomainClassPropertyComparator
org.grails.validation.EmailConstraint → org.grails.datastore.gorm.validation.constraints.EmailConstraint
org.grails.validation.GrailsDomainClassValidator → grails.gorm.validation.PersistentEntityValidator
org.grails.validation.InListConstraint → org.grails.datastore.gorm.validation.constraints.InListConstraint
org.grails.validation.MatchesConstraint → org.grails.datastore.gorm.validation.constraints.MatchesConstraint
org.grails.validation.MaxConstraint → org.grails.datastore.gorm.validation.constraints.MaxConstraint
org.grails.validation.MaxSizeConstraint → org.grails.datastore.gorm.validation.constraints.MaxSizeConstraint
org.grails.validation.MinConstraint → org.grails.datastore.gorm.validation.constraints.MinConstraint
org.grails.validation.MinSizeConstraint → org.grails.datastore.gorm.validation.constraints.MinSizeConstraint
org.grails.validation.NotEqualConstraint → org.grails.datastore.gorm.validation.constraints.NotEqualConstraint
org.grails.validation.NullableConstraint → org.grails.datastore.gorm.validation.constraints.NullableConstraint
org.grails.validation.RangeConstraint → org.grails.datastore.gorm.validation.constraints.RangeConstraint
org.grails.validation.ScaleConstraint → org.grails.datastore.gorm.validation.constraints.ScaleConstraint
org.grails.validation.SizeConstraint → org.grails.datastore.gorm.validation.constraints.SizeConstraint
org.grails.validation.UrlConstraint → org.grails.datastore.gorm.validation.constraints.UrlConstraint
org.grails.validation.ValidatorConstraint → org.grails.datastore.gorm.validation.constraints.ValidatorConstraint
org.grails.validation.routines.DomainValidator → Replaced by newer version of commons-validation
org.grails.validation.routines.InetAddressValidator
org.grails.validation.routines.RegexValidator
org.grails.validation.routines.ResultPair
org.grails.validation.routines.UrlValidator
grails.web.JSONBuilder → groovy.json.StreamingJsonBuilder
For those who have added a dependency on the grails-java8 plugin, all you should need to do is simply remove the dependency. All of the classes in the plugin have been moved out to their respective projects.
A few of the profiles supported in Grails 3.x will no longer be maintained going forward, and as a result it is no longer possible to create applications with them in the shorthand form. When upgrading existing projects, it will be necessary to supply the version for these profiles.
org.grails.profiles:angularjs → org.grails.profiles:angularjs:1.1.2
org.grails.profiles:webpack → org.grails.profiles:webpack:1.1.6
org.grails.profiles:react-webpack → org.grails.profiles:react-webpack:1.0.8
In Grails 3 no configuration or additional changes were necessary to use the Spring @Scheduled annotation. In Grails 4 you must apply the @EnableScheduling annotation to your application class in order for scheduling to work.
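A minimal sketch of what this looks like in the generated Application class (the scheduled method shown in the trailing comment is purely illustrative):

import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.scheduling.annotation.EnableScheduling

@EnableScheduling
class Application extends GrailsAutoConfiguration {
    static void main(String[] args) {
        GrailsApp.run(Application, args)
    }
}

// A bean (for example a Grails service) can then declare scheduled methods, e.g.:
// @Scheduled(fixedDelay = 60000L)
// void reportStatus() { ... }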
It may seem odd that in a framework that embraces "convention-over-configuration" that we tackle this topic now. With Grails' default settings you can actually develop an application without doing any configuration whatsoever, as the quick start demonstrates, but it’s important to learn where and how to override the conventions when you need to. Later sections of the user guide will mention what configuration settings you can use, but not how to set them. The assumption is that you have at least read the first section of this chapter!
Configuration in Grails is generally split across 2 areas: build configuration and runtime configuration.
Build configuration is generally done via Gradle and the build.gradle file. Runtime configuration is by default specified in YAML in the grails-app/conf/application.yml file.
If you prefer to use Grails 2.0-style Groovy configuration then it is possible to specify configuration using Groovy’s ConfigSlurper syntax. Two Groovy configuration files are available: grails-app/conf/application.groovy and grails-app/conf/runtime.groovy:
Use application.groovy for configuration that doesn’t depend on application classes
Use runtime.groovy for configuration that does depend on application classes
This separation is necessary because configuration values defined in application.groovy are available to the Grails CLI, which needs to be able to load application.groovy before the application has been compiled. References to application classes in application.groovy will cause an exception when these commands are executed by the CLI:
Error occurred running Grails CLI:
startup failed:script14738267015581837265078.groovy: 13: unable to resolve class com.foo.Bar
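As a concrete illustration, configuration that refers to an application class belongs in grails-app/conf/runtime.groovy; the class and setting names below are hypothetical:

// grails-app/conf/runtime.groovy
import com.example.CustomUserMarshaller   // an application class, so it cannot be referenced from application.groovy

myapp {
    user {
        marshaller = new CustomUserMarshaller()
    }
}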
For Groovy configuration the following variables are available to the configuration script:
userHome
Location of the home directory for the account that is running the Grails application.
grailsHome
Location of the directory where you installed Grails. If the GRAILS_HOME environment variable is set, it is used.
appName
The application name as it appears in build.gradle.
appVersion
The application version as it appears in build.gradle.
For example:
my.tmp.dir = "${userHome}/.grails/tmp"
If you want to read runtime configuration settings, i.e. those defined in application.yml, use the grailsApplication object, which is available as a variable in controllers and tag libraries:
class MyController {
def hello() {
def recipient = grailsApplication.config.getProperty('foo.bar.hello')
render "Hello ${recipient}"
}
}
The config property of the grailsApplication object is an instance of the Config interface and provides a number of useful methods to read the configuration of the application.
In particular, the getProperty method (seen above) is useful for efficiently retrieving configuration properties, while specifying the property type (the default type is String) and/or providing a default fallback value.
class MyController {
def hello(Recipient recipient) {
//Retrieve Integer property 'foo.bar.max.hellos', otherwise use value of 5
def max = grailsApplication.config.getProperty('foo.bar.max.hellos', Integer, 5)
//Retrieve property 'foo.bar.greeting' without specifying type (default is String), otherwise use value "Hello"
def greeting = grailsApplication.config.getProperty('foo.bar.greeting', "Hello")
def message = (recipient.receivedHelloCount >= max) ?
"Sorry, you've been greeted the max number of times" : "${greeting}, ${recipient}"
}
render message
}
}
Notice that the Config instance is a merged configuration based on Spring’s PropertySource concept and reads configuration from the environment, system properties and the local application configuration merging them into a single object.
GrailsApplication can be easily injected into services and other Grails artifacts:
import grails.core.*
class MyService {
GrailsApplication grailsApplication
String greeting() {
def recipient = grailsApplication.config.getProperty('foo.bar.hello')
return "Hello ${recipient}"
}
}
Accessing configuration dynamically at runtime can have a small effect on application performance. An alternative approach is to implement the GrailsConfigurationAware interface, which provides a setConfiguration method that accepts the application configuration as a parameter when the class is initialized. You can then assign relevant configuration properties to instance properties on the class for later usage.
The Config instance has the same properties and usage as the injected GrailsApplication config object. Here is the service class from the previous example, using GrailsConfigurationAware instead of injecting GrailsApplication:
import grails.core.support.GrailsConfigurationAware
class MyService implements GrailsConfigurationAware {
String recipient
String greeting() {
return "Hello ${recipient}"
}
void setConfiguration(Config config) {
recipient = config.getProperty('foo.bar.hello')
}
}
You can use Spring’s Value annotation to inject configuration values:
import org.springframework.beans.factory.annotation.*
class MyController {
@Value('${foo.bar.hello}')
String recipient
def hello() {
render "Hello ${recipient}"
}
}
As you can see, when accessing configuration settings you use the same dot notation as when you define them.
The application.yml file was introduced in Grails 3.0, and YAML is now the preferred format for configuration files.
Suppose you are using the JDBC_CONNECTION_STRING command line argument and you want to access the same in the yml file then it can be done in the following manner:
production:
dataSource:
url: '${JDBC_CONNECTION_STRING}'
Similarly system arguments can be accessed.
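For instance, the value referenced above could be supplied through an environment variable when starting the application, and additional system properties can be passed on the command line (the values below are examples only):

export JDBC_CONNECTION_STRING="jdbc:mysql://localhost:3306/prodDb"
./gradlew bootRun -Dsome.system.property=someValue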
If grails run-app is used to start the application, you will need to modify the bootRun task in build.gradle as follows so that system properties are passed through:
bootRun {
systemProperties = System.properties
}
For testing, you will need to change the test task as follows:
test {
systemProperties = System.properties
}
Grails will read application.(properties|yml) from the ./config directory or the current directory by default. Since Grails is built on Spring Boot, the standard Spring Boot configuration options are available as well; for details please consult the Spring Boot documentation.
Grails has a set of core settings that are worth knowing about. Their defaults are suitable for most projects, but it’s important to understand what they do because you may need one or more of them later.
On the runtime front, i.e. grails-app/conf/application.yml, there are quite a few more core settings:
grails.enable.native2ascii - Set this to false if you do not require native2ascii conversion of Grails i18n properties files (default: true).
grails.views.default.codec - Sets the default encoding regime for GSPs - can be one of 'none', 'html', or 'base64' (default: 'none'). To reduce risk of XSS attacks, set this to 'html'.
grails.views.gsp.encoding - The file encoding used for GSP source files (default: 'utf-8').
grails.mime.file.extensions - Whether to use the file extension to dictate the mime type in Content Negotiation (default: true).
grails.mime.types - A map of supported mime types used for Content Negotiation.
grails.serverURL - A string specifying the server URL portion of absolute links, including server name e.g. grails.serverURL="". See createLink. Also used by redirects.
grails.views.gsp.sitemesh.preprocess - Determines whether SiteMesh preprocessing happens. Disabling this slows down page rendering, but if you need SiteMesh to parse the generated HTML from a GSP view then disabling it is the right option. Don't worry if you don't understand this advanced property: leave it set to true.
grails.reload.excludes and grails.reload.includes - Configuring these directives determines the reload behavior for project specific source files. Each directive takes a list of strings that are the class names for project source files that should be excluded from reloading behavior or included accordingly when running the application in development with the run-app command. If the grails.reload.includes directive is configured, then only the classes in that list will be reloaded.
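For illustration, a few of these settings expressed in grails-app/conf/application.yml might look like this (the values shown are examples only):

grails:
    mime:
        file:
            extensions: true
    serverURL: https://www.mycompany.com
    views:
        default:
            codec: html
        gsp:
            encoding: UTF-8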
Since Grails 3.0, logging is handled by the Logback logging framework and can be configured with the grails-app/conf/logback.groovy file.
For more information on configuring logging refer to the Logback documentation on the subject.
Grails artifacts (controllers, services …) get injected a log property automatically.
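A short sketch of using the injected log property inside a controller:

class BookController {

    def index() {
        log.debug "Listing books"          // only emitted if the logger level allows it
        log.error "Something went wrong"   // routed through the Logback configuration
        render "ok"
    }
}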
Prior to Grails 3.3.0, the name of the logger for a Grails artifact followed the convention grails.app.<type>.<className>, where type is the type of the artifact (for example, controllers or services) and className is the fully qualified name of the artifact.
Grails 3.3.x simplifies logger names. The next examples illustrate the changes:
BookController.groovy located at grails-app/controllers/com/company NOT annotated with @Slf4j
Logger Name (Grails 3.3.x or higher)
Logger Name (Grails 3.2.x or lower)
com.company.BookController
grails.app.controllers.com.company.BookController
BookController.groovy located at grails-app/controllers/com/company annotated with @Slf4j
BookService.groovy located at grails-app/services/com/company NOT annotated with @Slf4j
com.company.BookService
grails.app.services.com.company.BookService
BookService.groovy located at grails-app/services/com/company annotated with @Slf4j
BookDetail.groovy located at src/main/groovy/com/company annotated with @Slf4j
com.company.BookDetail
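These logger names are what you reference when tuning log levels. For example, an illustrative addition to grails-app/conf/logback.groovy (assuming the default 'STDOUT' console appender is already defined) might be:

// Grails 3.3.x and later logger name (no 'grails.app.' prefix)
logger('com.company.BookController', DEBUG, ['STDOUT'], false)

// The equivalent entry under the pre-3.3.x convention would have been:
// logger('grails.app.controllers.com.company.BookController', DEBUG, ['STDOUT'], false)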
When Grails logs a stacktrace, the log message may include the names and values of all of the request parameters for the current request.
To mask out the values of secure request parameters, specify the parameter names in the grails.exceptionresolver.params.exclude config property:
grails:
exceptionresolver:
params:
exclude:
- creditCard
Request parameter logging may be turned off altogether by setting the grails.exceptionresolver.logRequestParameters
config property to false. The default value is true when the application is running in DEVELOPMENT mode and false for all other
environments.
grails:
exceptionresolver:
logRequestParameters: false
If you set the configuration property logging.config, you can instruct Logback to use an external configuration file.
logging:
config: /Users/me/config/logback.groovy
Alternatively, you can supply the configuration file location with a system property:
$ ./gradlew -Dlogging.config=/Users/me/config/logback.groovy bootRun
Or, you could use an environment variable:
$ export LOGGING_CONFIG=/Users/me/config/logback.groovy
$ ./gradlew bootRun
Grails provides the following GORM configuration options:
grails.gorm.failOnError - If set to true, causes the save() method on domain classes to throw a grails.validation.ValidationException if validation fails during a save. This option may also be assigned a list of Strings representing package names; in that case the failOnError behavior is only applied to domain classes in those packages (including sub-packages).
For example, to enable failOnError for all domain classes:
grails:
gorm:
failOnError: true
and to enable failOnError for domain classes by package:
grails:
gorm:
failOnError:
- com.companyname.somepackage
- com.companyname.someotherpackage
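With failOnError enabled, a failed validation surfaces as an exception instead of a null return from save(). A sketch, assuming a Book domain class whose title has a blank: false constraint:

import grails.gorm.transactions.Transactional
import grails.validation.ValidationException

@Transactional
class BookService {

    Book saveBook(String title) {
        try {
            return new Book(title: title).save()
        } catch (ValidationException e) {
            // without failOnError this branch would not be reached; save() would simply return null
            log.warn "Book could not be saved: ${e.message}"
            return null
        }
    }
}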
To setup Grails to use an HTTP proxy there are two steps. Firstly you need to configure the grails CLI to be aware of the proxy if you wish to use it to create applications and so on. This can be done using the GRAILS_OPTS environment variable, for example on Unix systems:
export GRAILS_OPTS="-Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=3128 -Dhttp.proxyUser=test -Dhttp.proxyPassword=test"
For Windows systems the environment variable can be configured under My Computer/Advanced/Environment Variables.
With this configuration in place the grails command can connect and authenticate via a proxy.
Secondly, since Grails uses Gradle as the build system, you need to configure Gradle to authenticate via the proxy. For instructions on how to do this see the Gradle user guide section on the topic.
Every new Grails application features an Application class within the grails-app/init directory.
The Application class subclasses the GrailsAutoConfiguration class and features a static void main method, meaning it can be run as a regular application.
There are several ways to execute the Application class, if you are using an IDE then you can simply right click on the class and run it directly from your IDE which will start your Grails application.
This is also useful for debugging since you can debug directly from the IDE without having to connect a remote debugger when using the run-app --debug-jvm command from the command line.
You can also package your application into a runnable WAR file, for example:
$ grails package
$ java -jar build/libs/myapp-0.1.war
This is useful if you plan to deploy your application using a container-less approach.
There are several ways in which you can customize the Application class.
By default Grails will scan all known source directories for controllers, domain class etc., however if there are packages in other JAR files you wish to scan you can do so by overriding the packageNames() method of the Application class:
class Application extends GrailsAutoConfiguration {
@Override
Collection<String> packageNames() {
super.packageNames() + ['my.additional.package']
}
...
}
The Application class can also be used as a source for Spring bean definitions, simply define a method annotated with the Bean and the returned object will become a Spring bean. The name of the method is used as the bean name:
class Application extends GrailsAutoConfiguration {
@Bean
MyType myBean() {
return new MyType()
}
...
}
The Application class also implements the GrailsApplicationLifeCycle interface which all plugins implement.
This means that the Application class can be used to perform the same functions as a plugin. You can override the regular plugins hooks such as doWithSpring, doWithApplicationContext and so on by overriding the appropriate method:
class Application extends GrailsAutoConfiguration {
@Override
Closure doWithSpring() {
{->
mySpringBean(MyType)
}
}
...
}
Grails supports the concept of per environment configuration. The application.yml and application.groovy files in the grails-app/conf directory can use per-environment configuration using either YAML or the syntax provided by ConfigSlurper. As an example consider the following default application.yml definition provided by Grails:
dataSource:
    pooled: false
    driverClassName: org.h2.Driver
    username: sa

environments:
    development:
        dataSource:
            dbCreate: create-drop
            url: jdbc:h2:mem:devDb
    test:
        dataSource:
            dbCreate: update
            url: jdbc:h2:mem:testDb
    production:
        dataSource:
            dbCreate: update
            url: jdbc:h2:prodDb
            properties:
                jmxEnabled: true
                initialSize: 5
The above can be expressed in Groovy syntax in application.groovy as follows:
dataSource {
pooled = false
driverClassName = "org.h2.Driver"
username = "sa"
}
environments {
development {
dataSource {
dbCreate = "create-drop"
url = "jdbc:h2:mem:devDb"
}
}
test {
dataSource {
dbCreate = "update"
url = "jdbc:h2:mem:testDb"
}
}
production {
dataSource {
dbCreate = "update"
url = "jdbc:h2:prodDb"
properties {
jmxEnabled = true
initialSize = 5
}
}
}
}
Notice how the common configuration is provided at the top level and then an environments block specifies per environment settings for the dbCreate and url properties of the DataSource.
Grails' command line has built in capabilities to execute any command within the context of a specific environment. The format is:
grails <<environment>> <<command name>>
In addition, there are 3 preset environments known to Grails: dev, prod, and test for development, production and test. For example to create a WAR for the test environment you would run:
grails test war
To target other environments you can pass a grails.env variable to any command:
grails -Dgrails.env=UAT run-app
Within your code, such as in a Gant script or a bootstrap class you can detect the environment using the Environment class:
import grails.util.Environment
...
switch (Environment.current) {
case Environment.DEVELOPMENT:
configureForDevelopment()
break
case Environment.PRODUCTION:
configureForProduction()
break
}
It’s often desirable to run code when your application starts up on a per-environment basis. To do so you can use the grails-app/init/BootStrap.groovy file’s support for per-environment execution:
def init = { ServletContext ctx ->
environments {
production {
ctx.setAttribute("env", "prod")
}
development {
ctx.setAttribute("env", "dev")
}
}
ctx.setAttribute("foo", "bar")
}
The previous BootStrap example uses the grails.util.Environment class internally to execute. You can also use this class yourself to execute your own environment specific logic:
Environment.executeForCurrentEnvironment {
production {
// do something in production
}
development {
// do something only in development
}
}
Since Grails is built on Java technology setting up a data source requires some knowledge of JDBC (the technology that stands for Java Database Connectivity).
If you use a database other than H2 you need a JDBC driver. For example for MySQL you would need Connector/J.
Drivers typically come in the form of a JAR archive. It’s best to use the dependency resolution to resolve the jar if it’s available in a Maven repository, for example you could add a dependency for the MySQL driver like this:
dependencies {
runtime 'mysql:mysql-connector-java:5.1.29'
}
Once you have the JAR resolved you need to get familiar with how Grails manages its database configuration. The configuration can be maintained in either grails-app/conf/application.groovy or grails-app/conf/application.yml. These files contain the dataSource definition which includes the following settings:
driverClassName - The class name of the JDBC driver
username - The username used to establish a JDBC connection
password - The password used to establish a JDBC connection
url - The JDBC URL of the database
dbCreate - Whether to auto-generate the database from the domain model - one of 'create-drop', 'create', 'update', 'validate', or 'none'
pooled - Whether to use a pool of connections (defaults to true)
logSql - Enable SQL logging to stdout
formatSql - Format logged SQL
dialect - A String or Class that represents the Hibernate dialect used to communicate with the database. See the org.hibernate.dialect package for available dialects.
readOnly - If true makes the DataSource read-only, which results in the connection pool calling setReadOnly(true) on each Connection
transactional - If false leaves the DataSource's transactionManager bean outside the chained BE1PC transaction manager implementation. This only applies to additional datasources.
persistenceInterceptor - The default datasource is automatically wired up to the persistence interceptor, other datasources are not wired up automatically unless this is set to true
properties - Extra properties to set on the DataSource bean. See the Tomcat Pool documentation. There is also a Javadoc format documentation of the properties.
jmxExport - If false, will disable registration of JMX MBeans for all DataSources. By default JMX MBeans are added for DataSources with jmxEnabled = true in properties.
type - The connection pool class if you want to force Grails to use it when there are more than one available.
A typical configuration for MySQL in application.groovy may be something like the following (connection settings shortened for brevity):
dataSource {
    // ... url, driverClassName, username, password and other settings described above ...
    properties {
        jdbcInterceptors = "ConnectionState;StatementCache(max=200)"
        defaultTransactionIsolation = java.sql.Connection.TRANSACTION_READ_COMMITTED
    }
}
When configuring the DataSource do not include the type or the def keyword before any of the configuration settings, as Groovy will treat these as local variable definitions and they will be ignored. For example, the following is incorrect:
dataSource {
    boolean pooled = true // type declaration results in ignored local variable
    ...
}
Example of advanced configuration using extra properties:
dataSource {
    // ... pooled, url, driverClassName, username, password etc. as in the previous example ...
    properties {
// Documentation for Tomcat JDBC Pool
//
//
ignoreExceptionOnPreLoad = true
//
jdbcInterceptors = "ConnectionState;StatementCache(max=200)"
defaultTransactionIsolation = java.sql.Connection.TRANSACTION_READ_COMMITTED // safe default
// controls for leaked connections
abandonWhenPercentageFull = 100 // settings are active only when pool is full
removeAbandonedTimeout = 120
removeAbandoned = true
// use JMX console to change this setting at runtime
logAbandoned = false // causes stacktrace recording overhead, use only for debugging
// JDBC driver properties
// Mysql as example
dbProperties {
// Mysql specific driver properties
//
// let Tomcat JDBC Pool handle reconnecting
autoReconnect=false
// truncation behaviour
jdbcCompliantTruncation=false
// mysql 0-date conversion
zeroDateTimeBehavior='convertToNull'
// Tomcat JDBC Pool's StatementCache is used instead, so disable mysql driver's cache
cachePrepStmts=false
cacheCallableStmts=false
// Tomcat JDBC Pool's StatementFinalizer keeps track
dontTrackOpenResources=true
// performance optimization: reduce number of SQLExceptions thrown in mysql driver code
holdResultsOpenOverStatementClose=true
// enable MySQL query cache - using server prep stmts will disable query caching
useServerPrepStmts=false
// metadata caching
cacheServerConfiguration=true
cacheResultSetMetadata=true
metadataCacheSize=100
// timeouts for TCP/IP
connectTimeout=15000
socketTimeout=120000
// timer tuning (disable)
maintainTimeStats=false
enableQueryTimeouts=false
// misc tuning
noDatetimeStringSync=true
}
}
}
Hibernate can automatically create the database tables required for your domain model. You have some control over when and how it does this through the dbCreate property, which can take these values:
create - Drops the existing schema and creates the schema on startup, dropping existing tables, indexes, etc. first.
create-drop - Same as create, but also drops the tables when the application shuts down cleanly.
update - Creates missing tables and indexes, and updates the current schema without dropping any tables or data. Note that this can’t properly handle many schema changes like column renames (you’re left with the old column containing the existing data).
validate - Makes no changes to your database. Compares the configuration with the existing database schema and reports warnings.
any other value - does nothing
Setting the dbCreate setting to "none" is recommended once your schema is relatively stable and definitely when your application and database are deployed in production. Database changes are then managed through proper migrations, either with SQL scripts or a migration tool like Flyway or Liquibase. The Database Migration plugin uses Liquibase.
The previous example configuration assumes you want the same config for all environments: production, test, development etc.
Grails' DataSource definition is "environment aware", however, so you can do:
dataSource {
pooled = true
driverClassName = "com.mysql.jdbc.Driver"
dialect = org.hibernate.dialect.MySQL5InnoDBDialect
// other common settings here
}
environments {
production {
dataSource {
url = "jdbc:mysql://liveip.com/liveDb"
// other environment-specific settings here
}
}
}
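The same environment-specific layout can also be expressed in application.yml. A minimal sketch, reusing the settings from the Groovy example above (the credentials are placeholders):

dataSource:
    pooled: true
    driverClassName: com.mysql.jdbc.Driver

environments:
    production:
        dataSource:
            url: jdbc:mysql://liveip.com/liveDb
            username: liveUser       # placeholder
            password: livePassword   # placeholder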
The dbCreate property of the DataSource definition is important as it dictates what Grails should do at runtime with regards to automatically generating the database tables from GORM classes. The options are described in the DataSource section:
create
create-drop
update
validate
no value
In development mode dbCreate is by default set to "create-drop", but at some point in development (and certainly once you go to production) you’ll need to stop dropping and re-creating the database every time you start up your server.
It’s tempting to switch to update so you retain existing data and only update the schema when your code changes, but Hibernate’s update support is very conservative. It won’t make any changes that could result in data loss, and doesn’t detect renamed columns or tables, so you’ll be left with the old one and will also have the new one.
Grails supports migrations with Liquibase or Flyway via plugins.
Database Migration
Flyway
The actual dataSource bean is wrapped in a transaction-aware proxy so you will be given the connection that’s being used by the current transaction or Hibernate Session if one is active.
If this were not the case, then retrieving a connection from the dataSource would be a new connection, and you wouldn’t be able to see changes that haven’t been committed yet (assuming you have a sensible transaction isolation setting, e.g. READ_COMMITTED or better).
The H2 database console is a convenient feature of H2 that provides a web-based interface to any database that you have a JDBC driver for, and it’s very useful to view the database you’re developing against. It’s especially useful when running against an in-memory database.
You can access the console by navigating to http://localhost:8080/h2-console in a browser. See the Spring Boot H2 Console Documentation for more information on the options available.
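The console is controlled by Spring Boot properties; the following application.yml sketch enables it (the property names are Spring Boot's, and the path shown is its conventional default):

spring:
    h2:
        console:
            enabled: true
            path: /h2-console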
By default all domain classes share a single DataSource and a single database, but you have the option to partition your domain classes into two or more data sources.
The default DataSource configuration in grails-app/conf/application.yml looks something like this:
dataSource:
pooled: true
jmxExport: true
driverClassName: org.h2.Driver
username: sa
This configures a single DataSource with the Spring bean named dataSource. To configure extra data sources, add a dataSources block (at the top level, in an environment block, or both, just like the standard DataSource definition) with a custom name. For example, this configuration adds a second DataSource, using MySQL in the development environment and Oracle in production:
dataSource:
pooled: true
jmxExport: true
driverClassName: org.h2.Driver
username: sa
dataSources:
lookup:
dialect: org.hibernate.dialect.MySQLInnoDBDialect
driverClassName: com.mysql.jdbc.Driver
username: lookup
password: secret
url: jdbc:mysql://localhost/lookup
dbCreate: update
...
dataSources:
lookup:
dialect: org.hibernate.dialect.Oracle10gDialect
driverClassName: oracle.jdbc.driver.OracleDriver
username: lookup
password: secret
url: jdbc:oracle:thin:@localhost:1521:lookup
dbCreate: update
You can use the same or different databases as long as they’re supported by Hibernate.
If you need to inject the lookup datasource in a Grails artefact, you can do it like this:
DataSource dataSource_lookup
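For example, a service could use the injected bean directly with groovy.sql.Sql. The following is only a sketch; the table and column names are hypothetical:

import groovy.sql.Sql
import javax.sql.DataSource

class LookupService {

    // injected by bean name: the bean for the 'lookup' datasource is called dataSource_lookup
    DataSource dataSource_lookup

    List<String> findCodes() {
        Sql sql = new Sql(dataSource_lookup)
        try {
            // hypothetical zip_code table
            sql.rows('select code from zip_code').collect { it.code as String }
        } finally {
            sql.close()
        }
    }
}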
If a domain class has no DataSource configuration, it defaults to the standard 'dataSource'. Set the datasource property in the mapping block to configure a non-default DataSource. For example, if you want the ZipCode domain to use the 'lookup' DataSource, configure it like this:
class ZipCode {
String code
static mapping = {
datasource 'lookup'
}
}
A domain class can also use two or more data sources. Use the datasources property with a list of names to configure more than one, for example:
class ZipCode {
String code
static mapping = {
datasources(['lookup', 'auditing'])
}
}
If a domain class uses the default DataSource and one or more others, use the special name 'DEFAULT' to indicate the default DataSource:
class ZipCode {
String code
static mapping = {
datasources(['lookup', 'DEFAULT'])
}
}
If a domain class uses all configured data sources, use the special value 'ALL':
class ZipCode {
String code
static mapping = {
datasource 'ALL'
}
}
If a domain class uses more than one DataSource then you can use the namespace implied by each DataSource name to make GORM calls for a particular DataSource. For example, consider the ZipCode class shown above, which uses both the 'lookup' and 'auditing' data sources.
The first DataSource specified is the default when not using an explicit namespace, so in this case we default to 'lookup'. But you can call GORM methods on the 'auditing' DataSource with the DataSource name, for example:
def zipCode = ZipCode.auditing.get(42)
...
zipCode.auditing.save()
As you can see, you add the DataSource to the method call in both the static case and the instance case.
You can also partition annotated Java classes into separate datasources. Classes using the default datasource are registered in grails-app/conf/hibernate.cfg.xml. To specify that an annotated class uses a non-default datasource, create a hibernate.cfg.xml file for that datasource with the file name prefixed with the datasource name.
For example if the Book class is in the default datasource, you would register that in grails-app/conf/hibernate.cfg.xml:
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
'-//Hibernate/Hibernate Configuration DTD 3.0//EN'
''>
<hibernate-configuration>
<session-factory>
<mapping class='org.example.Book'/>
</session-factory>
</hibernate-configuration>
and if the Library class is in the "ds2" datasource, you would register that in grails-app/conf/ds2_hibernate.cfg.xml:
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
'-//Hibernate/Hibernate Configuration DTD 3.0//EN'
''>
<hibernate-configuration>
<session-factory>
<mapping class='org.example.Library'/>
</session-factory>
</hibernate-configuration>
The process is the same for classes mapped with hbm.xml files - just list them in the appropriate hibernate.cfg.xml file.
Like Domain classes, by default Services use the default DataSource and PlatformTransactionManager. To configure a Service to use a different DataSource, use the static datasource property, for example:
class DataService {
static datasource = 'lookup'
void someMethod(...) {
...
}
}
A transactional service can only use a single DataSource, so be sure to only make changes for domain classes whose DataSource is the same as the Service.
Note that the datasource specified in a service has no bearing on which datasources are used for domain classes; that’s determined by their declared datasources in the domain classes themselves. It’s used to declare which transaction manager to use.
If you have a Foo domain class in dataSource1 and a Bar domain class in dataSource2, and WahooService uses dataSource1, then a service method that saves a new Foo and a new Bar will only be transactional for Foo, since they share the same datasource. The transaction won't affect the Bar instance. If you want both to be transactional you'd need to use two services and XA datasources for two-phase commit, e.g. with the Atomikos plugin.
Grails does not by default try to handle transactions that span multiple data sources.
You can enable Grails to use the Best Effort 1PC pattern for handling transactions across multiple datasources. To do so you must set the grails.transaction.chainedTransactionManagerPostProcessor.enabled setting to true in application.yml:
grails:
transaction:
chainedTransactionManagerPostProcessor:
enabled: true
The BE1PC implementation was added in Grails 2.3.6. Before this change additional datasources didn't take part in transactions initiated in Grails; the transactions in additional datasources were basically in auto-commit mode. In some cases this might be the desired behavior. One reason might be performance: on the start of each new transaction, the BE1PC transaction manager creates a new transaction for each datasource. It's possible to leave an additional datasource out of the BE1PC transaction manager by setting transactional = false in the respective configuration block of the additional dataSource. Datasources with readOnly = true will also be left out of the chained transaction manager (since 2.3.7).
By default, the BE1PC implementation will add all beans implementing the Spring PlatformTransactionManager interface to the chained BE1PC transaction manager. For example, a possible JMSTransactionManager bean in the Grails application context would be added to the Grails BE1PC transaction manager’s chain of transaction managers.
You can exclude transaction manager beans from the BE1PC implementation with this configuration option:
grails:
transaction:
chainedTransactionManagerPostProcessor:
enabled: true
blacklistPattern: '.*'
The exclude matching is done on the name of the transaction manager bean. The transaction managers of datasources with transactional = false or readOnly = true will be skipped and using this configuration option is not required in that case.
When the Best Efforts 1PC pattern isn’t suitable for handling transactions across multiple transactional resources (not only datasources), there are several options available for adding XA/2PC support to Grails applications.
The Spring transactions documentation contains information about integrating the JTA/XA transaction manager of different application servers. In this case, you can configure a bean with the name transactionManager manually in resources.groovy or resources.xml file.
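For example, if a JTA provider is available (from the container or an embedded library), a resources.groovy definition could look like the sketch below; the exact configuration depends on the provider you use:

// grails-app/conf/spring/resources.groovy
import org.springframework.transaction.jta.JtaTransactionManager

beans = {
    // by default JtaTransactionManager looks up the container's UserTransaction via JNDI
    transactionManager(JtaTransactionManager)
}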
You can detect the application version using Grails' support for application metadata using the GrailsApplication class. For example within controllers there is an implicit grailsApplication variable that can be used:
def version = grailsApplication.metadata.getApplicationVersion()
You can retrieve the version of Grails that is running with:
def grailsVersion = grailsApplication.metadata.getGrailsVersion()
or the GrailsUtil class:
import grails.util.GrailsUtil
...
def grailsVersion = GrailsUtil.grailsVersion
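As a small illustration, a controller action could expose both values as JSON; the controller and action names here are hypothetical:

import grails.converters.JSON
import grails.util.GrailsUtil

class StatusController {

    def info() {
        render([
            appVersion   : grailsApplication.metadata.getApplicationVersion(),
            grailsVersion: GrailsUtil.grailsVersion
        ] as JSON)
    }
}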
Dependency resolution is handled by the Gradle build tool, all dependencies are defined in the build.gradle file. Refer to the Gradle user guide for more information.
Grails 3.0’s command line system differs greatly from previous versions of Grails and features APIs for invoking Gradle for build related tasks, as well as performing code generation.
For example, system properties can be supplied on the command line when invoking a command:
grails -Dapp.foo=bar run-app
When you type the following command:
grails test-app --integration
the Gradle integrationTest task is invoked.
When you create a Grails application with the create-app command by default the "web" profile is used:
grails create-app myapp
You can specify a different profile with the profile argument:
grails create-app myapp --profile=rest-api
Profiles encapsulate the project commands, templates and plugins that are designed to work for a given profile. The source for the profiles can be found on Github, whilst the profiles themselves are published as JAR files to the Grails central repository.
To find out what profiles are available use the list-profiles command:
$ grails list-profiles
For more information on a particular profile use the profile-info command:
$ grails profile-info rest-api
By default Grails will resolve profiles from the Grails central repository. However, you can override what repositories will be searched by specifying repositories in the USER_HOME/.grails/settings.groovy file.
If you want profiles to be resolved with a custom repository in addition to the Grails central repository, you must specify Grails central in the file as well:
grails {
profiles {
repositories {
myRepo {
url = ""
snapshotsEnabled = true
}
grailsCentral {
url = ""
snapshotsEnabled = true
}
}
}
}
It is also possible to store simple credentials for profile repositories directly in the USER_HOME/.grails/settings.groovy file.
grails {
profiles {
repositories {
myRepo {
url = ""
snapshotsEnabled = true
username = "user"
password = "pass"
}
...
}
}
}
To create an application that uses a custom profile, you must specify the profile's full artifact coordinates:
$ grails create-app myapp --profile=com.mycompany.grails.profiles:myprofile:1.0.0
To make this process easier, you can define defaults for a given profile in the USER_HOME/.grails/settings.groovy file.
grails {
    profiles {
        myprofile {
            groupId = "com.mycompany.grails.profiles"
            version = "1.0.0"
        }
        repositories {
            ...
        }
    }
}
With the default values specified, the command to create an application using that profile becomes:
$ grails create-app myapp --profile=myprofile
The idea behind creating a new profile is that you can setup a default set of commands and plugins that are tailored to a particular technology or organisation.
To create a new profile you can use the create-profile command which will create a new empty profile that extends the base profile:
$ grails create-profile mycompany
The above command will create a new profile in the "mycompany" directory where the command is executed. If you start interactive mode within the directory you will get a set of commands for creating profiles:
$ cd mycompany
$ grails
| Enter a command name to run. Use TAB for completion:
grails>
create-command create-creator-command create-feature create-generator-command create-gradle-command create-template
The commands are as follows:
create-command - creates a new command that will be available from the Grails CLI when the profile is used
create-creator-command - creates a command available to the CLI that renders a template (Example: create-controller)
create-generator-command - creates a command available to the CLI that renders a template based on a domain class (Example: generate-controller)
create-feature - creates a feature that can be used with this profile
create-gradle-command - creates a CLI command that can invoke gradle
create-template - creates a template that can be rendered by a command
To customize the dependencies for your profile you can specify additional dependencies in profile.yml.
Below is an example profile.yml file:
features:
defaults:
- hibernate
- asset-pipeline
build:
plugins:
- org.grails.grails-web
excludes:
- org.grails.grails-core
dependencies:
compile:
- "org.mycompany:myplugin:1.0.1"
With the above configuration in place you can publish the profile to your local repository with gradle install:
$ gradle install
Your profile is now usable with the create-app command:
$ grails create-app myapp --profile mycompany
With the above command the application will be created with the "mycompany" profile which includes an additional dependency on the "myplugin" plugin and also includes the "hibernate" and "asset-pipeline" features (more on features later).
Note that if you customize the dependency coordinates of the profile (group, version etc.) then you may need to use the fully qualified coordinates to create an application:
$ grails create-app myapp --profile com.mycompany:mycompany:1.0.1
One profile can extend one or many different parent profiles. To define profile inheritance you can modify the build.gradle of a profile and define the profile dependences. For example typically you want to extend the base profile:
dependencies {
runtime "org.grails.profiles:base:$baseProfileVersion"
}
By inheriting from a parent profile you get the following benefits:
When the create-app command is executed the parent profile’s skeleton is copied first
Dependencies and build.gradle are merged from the parent(s)
The application.yml file is merged from the parent(s)
CLI commands from the parent profile are inherited
Features from the parent profile are inherited
To define the order of inheritance ensure that your dependencies are declared in the correct order. For example:
dependencies {
runtime "org.grails.profiles:plugin:$baseProfileVersion"
runtime "org.grails.profiles:web:$baseProfileVersion"
}
In the above snippet the skeleton from the "plugin" profile is copied first, followed by the "web" profile. In addition, the "web" profile overrides commands from the "plugin" profile, whilst if the dependency order was reversed the "plugin" profile would override the "web" profile.
Any profile created with the create-profile command already comes configured with a grails-profile-publish plugin defined in build.gradle:
apply plugin: "org.grails.grails-profile-publish"
To publish a profile using this plugin to the Grails central repository first upload the source to Github (closed source profiles will not be accepted). Then register for an account on Bintray and configure your keys as follows in the profile’s build.gradle file:
grailsPublish {
user = 'YOUR USERNAME'
key = 'YOUR KEY'
githubSlug = 'your-repo/your-profile'
license = 'Apache-2.0'
}
The githubSlug setting should contain the owner/repository portion of your GitHub repository URL, for example foo/bar.
With this in place you can run gradle publishProfile to publish your profile:
$ gradle publishProfile
The profile will be uploaded to Bintray. You can then go to the Grails profiles repository and request to have your profile included by clicking the "Include My Package" button on Bintray's interface (you must be logged in to see this).
The aforementioned grails-profile-publish plugin configures Gradle’s Maven Publish plugin. In order to publish to an internal repository all you need to do is define the repository in build.gradle. For example:
publishing {
repositories {
maven {
credentials {
username "foo"
password "bar"
}
url ""
}
}
}
Once configured you can publish your plugin with gradle publish:
$ gradle publish
A profile is a simple directory that contains a profile.yml file and directories containing the "commands", "skeleton" and "templates" defined by the profile. Example:
/web
commands/
create-controller.yml
run-app.groovy
...
features/
asset-pipeline/
skeleton
feature.yml
skeleton/
grails-app/
controllers/
...
build.gradle
templates/
artifacts/
Controller.groovy
profile.yml
The above example is a snippet of structure of the 'web' profile. The profile.yml file is used to describe the profile and control how the build is configured.
The profile.yml can contain the following child elements.
A list of Maven repositories to include in the generated build. Example:
repositories:
- ""
A list of Maven repositories to include in the buildscript section of the generated build. Example:
build:
repositories:
- ""
A list of Gradle plugins to configure in the generated build. Example:
build:
plugins:
- eclipse
- idea
- org.grails.grails-core
A list of Gradle plugins to exclude from being inherited from the parent profile:
build:
excludes:
- org.grails.grails-core
A map of scopes and dependencies to configure. The excludes scope can be used to exclude from the parent profile. Example:
dependencies:
excludes:
- "org.grails:hibernate:*"
build:
- "org.grails:grails-gradle-plugin:$grailsVersion"
compile:
- "org.springframework.boot:spring-boot-starter-logging"
- "org.springframework.boot:spring-boot-autoconfigure"
A default list of features to use if no explicit features are specified.
features:
defaults:
- hibernate
- asset-pipeline
A list of files to exclude from parent profile’s skeletons (supports wildcards).
skeleton:
excludes:
- gradlew
- gradlew.bat
- gradle/
The target folder that parent profile’s skeleton should be copied into. This can be used to create multi-project builds.
skeleton:
parent:
target: app
Which file extensions should be copied from the profile as binary. Inherited and combined from parent profiles.
skeleton:
binaryExtensions: [exe, zip]
File patterns that should be marked as executable in the resulting application. Inherited and combined from parent profiles. The patterns are parsed with Ant.
skeleton:
executable:
- "**/gradlew*"
- "**/grailsw*"
Text to be displayed to the user after the application is created
instructions: Here are some instructions
When the create-app command runs it takes the skeleton of the parent profiles and copies the skeletons into a new project structure.
The build.gradle file is generated as a result of obtaining all of the dependency information defined in the profile.yml files and producing the required dependencies.
The command will also merge any build.gradle files defined within a profile and its parent profiles.
The grails-app/conf/application.yml file is also merged into a single YAML file taking into account the profile and all of the parent profiles.
A profile can define new commands that apply only to that profile using YAML or Groovy scripts. Below is an example of the create-controller command defined in YAML:
description:
- Creates a controller
- usage: 'create-controller <<controller name>>'
- completer: org.grails.cli.interactive.completers.DomainClassCompleter
- argument: "Controller Name"
description: "The name of the controller"
steps:
- command: render
template: templates/artifacts/Controller.groovy
destination: grails-app/controllers/`artifact.package.path`/`artifact.name`Controller.groovy
- command: render
template: templates/testing/Controller.groovy
destination: src/test/groovy/`artifact.package.path`/`artifact.name`ControllerSpec.groovy
- command: mkdir
location: grails-app/views/`artifact.propertyName`
Commands defined in YAML must define one or many steps. Each step is a command in itself. The available step types are:
render - To render a template to a given destination (as seen in the previous example)
mkdir - To make a directory specified by the location parameter
execute - To execute a command specified by the class parameter. Must be a class that implements the Command interface.
gradle - To execute one or many Gradle tasks specified by the tasks parameter.
For example to invoke a Gradle task, you can define the following YAML:
description: Creates a WAR file for deployment to a container (like Tomcat)
minArguments: 0
usage: |
war
steps:
- command: gradle
tasks:
- war
If you need more flexibility than the declarative YAML approach provides, you can create Groovy script commands. Each command script extends the GroovyScriptCommand class and hence has all of the methods of that class available to it.
For more information on creating CLI commands see the section on creating custom scripts in the Command Line section of the user guide.
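For illustration, a minimal Groovy command script placed in the profile's commands/ directory could look like the sketch below; the command name and message are hypothetical:

// commands/hello-world.groovy
description "Prints a greeting", "grails hello-world"

println "Hello from this profile!"

return true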
A Profile feature is a shareable set of templates and dependencies that may span multiple profiles. Typically you create a base profile that has multiple features and child profiles that inherit from the parent and hence can use the features available from the parent.
To create a feature use the create-feature command from the root directory of your profile:
$ grails create-feature myfeature
This will create a myfeature/feature.yml file that looks like the following:
description: Description of the feature
# customize versions here
# dependencies:
# compile:
# - "org.grails.plugins:myplugin2:1.0"
#
As a more concrete example. The following is the feature.yml file from the "asset-pipeline" feature:
description: Adds Asset Pipeline to a Grails project
build:
plugins:
- asset-pipeline
dependencies:
build:
- 'com.bertramlabs.plugins:asset-pipeline-gradle:2.5.0'
runtime:
- "org.grails.plugins:asset-pipeline"
The structure of a feature is as follows:
FEATURE_DIR
feature.yml
skeleton/
grails-app/
conf/
application.yml
build.gradle
The contents of the skeleton get copied into the application tree, whilst the application.yml and build.gradle get merged with their respective counterparts in the profile being used.
With the feature.yml you can define additional dependencies. This allows users to create applications with optional features. For example:
$ grails create-app myapp --profile myprofile --features myfeature,hibernate
The above example will create a new application using your new feature and the "hibernate" feature.
Grails provides a number of traits which provide access to properties and behavior that may be accessed from various Grails artefacts as well as arbitrary Groovy classes which are part of a Grails project. Many of these traits are automatically added to Grails artefact classes (like controllers and taglibs, for example) and are easy to add to other classes.
Grails artefacts are automatically augmented with certain traits at compile time.
grails.artefact.DomainClass
grails.web.databinding.WebDataBinding
org.grails.datastore.gorm.GormEntity
org.grails.datastore.gorm.GormValidateable
grails.artefact.gsp.TagLibraryInvoker
grails.artefact.AsyncController
grails.artefact.controller.RestResponder
grails.artefact.Controller
grails.artefact.Interceptor
grails.artefact.TagLibrary
Below is a list of other traits provided by the framework. The javadocs provide more detail about methods and properties related to each trait.
grails.web.api.WebAttributes
Common Web Attributes
grails.web.api.ServletAttributes
Servlet API Attributes
grails.web.databinding.DataBinder
Data Binding API
grails.artefact.controller.support.RequestForwarder
Request Forwarding API
grails.artefact.controller.support.ResponseRedirector
Response Redirecting API
grails.artefact.controller.support.ResponseRenderer
Response Rendering API
grails.validation.Validateable
Validation API
WebAttributes is one of the traits provided by the framework. Any Groovy class may implement this trait to inherit all of the properties and behaviors provided by the trait.
package demo

import grails.web.api.WebAttributes

class Helper implements WebAttributes {

    List<String> getControllerNames() {
        // There is no need to inject grailsApplication into this class because
        // getGrailsApplication() is provided by the WebAttributes trait.
        grailsApplication.getArtefacts('Controller')*.name
    }
}
The traits are compatible with static compilation…
package demo

import grails.web.api.WebAttributes
import groovy.transform.CompileStatic

@CompileStatic
class Helper implements WebAttributes {

    List<String> getControllerNames() {
        grailsApplication.getArtefacts('Controller')*.name
    }
}
REST is not really a technology in itself, but more an architectural pattern. REST is very simple and just involves using plain XML or JSON as a communication medium, combined with URL patterns that are "representational" of the underlying system, and HTTP methods such as GET, PUT, POST and DELETE.
Each HTTP method maps to an action type. For example GET for retrieving data, POST for creating data, PUT for updating and so on.
Grails includes flexible features that make it easy to create RESTful APIs. Creating a RESTful resource can be as simple as one line of code, as demonstrated in the next section.
The easiest way to create a RESTful API in Grails is to expose a domain class as a REST resource. This can be done by adding the grails.rest.Resource transformation to any domain class:
grails.rest.Resource
import grails.rest.*
@Resource(uri='/books')
class Book {
String title
static constraints = {
title blank:false
}
}
Simply by adding the Resource transformation and specifying a URI, your domain class will automatically be available as a REST resource in either XML or JSON formats. The transformation will automatically register the necessary RESTful URL mapping and create a controller called BookController.
You can try it out by adding some test data to BootStrap.groovy:
BootStrap.groovy
def init = { servletContext ->
new Book(title:"The Stand").save()
new Book(title:"The Shining").save()
}
And then hitting the URL http://localhost:8080/books/1, which will render a response like:
<?xml version="1.0" encoding="UTF-8"?>
<book id="1">
<title>The Stand</title>
</book>
If you change the URL to http://localhost:8080/books/1.json you will get a JSON response such as:
{"id":1,"title":"The Stand"}
If you wish to change the default to return JSON instead of XML, you can do this by setting the formats attribute of the Resource transformation:
import grails.rest.*
@Resource(uri='/books', formats=['json', 'xml'])
class Book {
...
}
With the above example JSON will be prioritized. The list that is passed should contain the names of the formats that the resource should expose. The names of formats are defined in the grails.mime.types setting of application.groovy:
grails.mime.types = [
...
json: ['application/json', 'text/json'],
...
xml: ['text/xml', 'application/xml']
]
See the section on Configuring Mime Types in the user guide for more information.
Instead of using the file extension in the URI, you can also obtain a JSON response using the ACCEPT header. Here’s an example using the Unix curl tool:
$ curl -i -H "Accept: application/json" localhost:8080/books/1
{"id":1,"title":"The Stand"}
This works thanks to Grails' Content Negotiation features.
You can create a new resource by issuing a POST request:
$ curl -i -X POST -H "Content-Type: application/json" -d '{"title":"Along Came A Spider"}' localhost:8080/books
HTTP/1.1 201 Created
Server: Apache-Coyote/1.1
...
Updating can be done with a PUT request:
$ curl -i -X PUT -H "Content-Type: application/json" -d '{"title":"Along Came A Spider"}' localhost:8080/books/1
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
...
Finally a resource can be deleted with DELETE request:
$ curl -i -X DELETE localhost:8080/books/1
HTTP/1.1 204 No Content
Server: Apache-Coyote/1.1
...
As you can see, the Resource transformation enables all of the HTTP method verbs on the resource. You can enable only read-only capabilities by setting the readOnly attribute to true:
import grails.rest.*
@Resource(uri='/books', readOnly=true)
class Book {
...
}
In this case POST, PUT and DELETE requests will be forbidden.
If you prefer to keep the URL mapping declaration in your UrlMappings.groovy file, simply remove the uri attribute of the Resource transformation and add the following line to UrlMappings.groovy:

"/books"(resources:"book")

Extending your API to include more end points then becomes trivial:

"/books"(resources:"book") {
    "/publisher"(resource:"publisher")
}

The above example will expose the URI /books/1/publisher.
A more detailed explanation on creating RESTful URL mappings can be found in the URL Mappings section of the user guide.
The link tag offers an easy way to link to any domain class resource, for example: <g:link resource="${book}">My Link!</g:link>
However, currently you cannot use g:link to link to the DELETE action and most browsers do not support sending the DELETE method directly.
The best way to accomplish this is to use a form submit:
<form action="/book/2" method="post">
<input type="hidden" name="_method" value="DELETE"/>
</form>
Grails supports overriding the request method via the hidden _method parameter. This is for browser compatibility purposes. This is useful when using restful resource mappings to create powerful web interfaces.
To make a link fire this type of event, perhaps capture all click events for links with a data-method attribute and issue a form submit via JavaScript.
A common requirement with a REST API is to expose different versions at the same time. There are a few ways this can be achieved in Grails.
A common approach is to use the URI to version APIs (although this approach is discouraged in favour of Hypermedia). For example, you can define the following URL mappings:
"/books/v1"(resources:"book", namespace:'v1')
"/books/v2"(resources:"book", namespace:'v2')
That will match the following controllers:
package myapp.v1
class BookController {
static namespace = 'v1'
}
package myapp.v2
class BookController {
static namespace = 'v2'
}
This approach has the disadvantage of requiring two different URI namespaces for your API.
As an alternative Grails supports the passing of an Accept-Version header from clients. For example you can define the following URL mappings:
"/books"(version:'1.0', resources:"book", namespace:'v1')
"/books"(version:'2.0', resources:"book", namespace:'v2')
Then in the client simply pass which version you need using the Accept-Version header:
$ curl -i -H "Accept-Version: 1.0" -X GET
Another approach to versioning is to use Mime Type definitions to declare the version of your custom media types (see the section on "Hypermedia as the Engine of Application State" for more information about Hypermedia concepts). For example, in application.groovy you can declare a custom Mime Type for your resource that includes a version parameter (the 'v' parameter):
grails.mime.types = [
all: '*/*',
book: "application/vnd.books.org.book+json;v=1.0",
bookv2: "application/vnd.books.org.book+json;v=2.0",
...
]
Then override the renderer (see the section on "Customizing Response Rendering" for more information on custom renderers) to send back the custom Mime Type in grails-app/conf/spring/resources.groovy:
import grails.rest.render.json.*
import grails.web.mime.*
beans = {
bookRendererV1(JsonRenderer, myapp.v1.Book, new MimeType("application/vnd.books.org.book+json", [v:"1.0"]))
bookRendererV2(JsonRenderer, myapp.v2.Book, new MimeType("application/vnd.books.org.book+json", [v:"2.0"]))
}
Then update the list of acceptable response formats in your controller:
class BookController extends RestfulController {
static responseFormats = ['json', 'xml', 'book', 'bookv2']
// ...
}
Then using the Accept header you can specify which version you need using the Mime Type:
$ curl -i -H "Accept: application/vnd.books.org.book+json;v=1.0" -X GET
The Resource transformation is a quick way to get started, but typically you’ll want to customize the controller logic, the rendering of the response or extend the API to include additional actions.
The easiest way to get started doing so is to create a new controller for your resource that extends the grails.rest.RestfulController super class. For example:
class BookController extends RestfulController<Book> {
static responseFormats = ['json', 'xml']
BookController() {
super(Book)
}
}
To customize any logic you can just override the appropriate action. The following table provides the names of the actions and the URIs they map to:

HTTP Method   URI                  Controller Action
GET           /books               index
GET           /books/create        create
POST          /books               save
GET           /books/${id}         show
GET           /books/${id}/edit    edit
PUT           /books/${id}         update
DELETE        /books/${id}         delete
As an example, if you have a nested resource then you would typically want to query both the parent and the child identifiers. For example, given the following URL mapping:
"/authors"(resources:'author') {
"/books"(resources:'book')
}
You could implement the nested controller as follows:
class BookController extends RestfulController {
static responseFormats = ['json', 'xml']
BookController() {
super(Book)
}
@Override
protected Book queryForResource(Serializable id) {
Book.where {
id == id && author.id == params.authorId
}.find()
}
}
The example above subclasses RestfulController and overrides the protected queryForResource method to customize the query for the resource to take into account the parent resource.
The RestfulController class contains code which does data binding for actions like save and update. The class defines a getObjectToBind() method which returns a value which will be used as the source for data binding. For example, the update action does something like this…
class RestfulController<T> {
def update() {
T instance = // retrieve instance from the database...
instance.properties = getObjectToBind()
// ...
}
// ...
}
By default the getObjectToBind() method returns the request object. When the request object is used as the binding source, if the request has a body then the body will be parsed and its contents will be used to do the data binding, otherwise the request parameters will be used to do the data binding. Subclasses of RestfulController may override the getObjectToBind() method and return anything that is a valid binding source, including a Map or a DataBindingSource. For most use cases binding the request is appropriate but the getObjectToBind() method allows for changing that behavior where desired.
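For example, a subclass could restrict binding to a whitelisted subset of the JSON body. This is only a sketch; the whitelist is hypothetical:

class BookController extends RestfulController<Book> {

    static responseFormats = ['json', 'xml']

    BookController() {
        super(Book)
    }

    @Override
    protected getObjectToBind() {
        // bind only the 'title' entry of the JSON body (hypothetical whitelist)
        Map body = request.JSON as Map
        body.subMap(['title'])
    }
}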
You can also customize the behaviour of the controller that backs the Resource annotation.
The class must provide a constructor that takes a domain class as its argument. The second constructor is required for supporting the Resource annotation with readOnly=true.
This is a template that can be used for subclassed RestfulController classes used in Resource annotations:
class SubclassRestfulController<T> extends RestfulController<T> {
SubclassRestfulController(Class<T> domainClass) {
this(domainClass, false)
}
SubclassRestfulController(Class<T> domainClass, boolean readOnly) {
super(domainClass, readOnly)
}
}
You can specify the super class of the controller that backs the Resource annotation with the superClass attribute.
import grails.rest.*
@Resource(uri='/books', superClass=SubclassRestfulController)
class Book {
String title
static constraints = {
title blank:false
}
}
If you don’t want to take advantage of the features provided by the RestfulController super class, then you can implement each HTTP verb yourself manually. The first step is to create a controller:
$ grails create-controller book
Then add some useful imports and enable readOnly by default:
import grails.gorm.transactions.*
import static org.springframework.http.HttpStatus.*
import static org.springframework.http.HttpMethod.*
@Transactional(readOnly = true)
class BookController {
...
}
Recall that each HTTP verb matches a particular Grails action following the conventions in the table shown earlier: GET maps to index or show, POST to save, PUT to update, and DELETE to delete.
The key to implementing REST actions is the respond method introduced in Grails 2.3. The respond method tries to produce the most appropriate response for the requested content type (JSON, XML, HTML etc.)
For example, to implement the index action, simply call the respond method passing the list of objects to respond with:
def index(Integer max) {
params.max = Math.min(max ?: 10, 100)
respond Book.list(params), model:[bookCount: Book.count()]
}
Note that in the above example we also use the model argument of the respond method to supply the total count. This is only required if you plan to support pagination via some user interface.
The respond method will, using Content Negotiation, attempt to reply with the most appropriate response given the content type requested by the client (via the ACCEPT header or file extension).
If the content type is established to be HTML then a model will be produced such that the action above would be the equivalent of writing:
def index(Integer max) {
params.max = Math.min(max ?: 10, 100)
[bookList: Book.list(params), bookCount: Book.count()]
}
By providing an index.gsp file you can render an appropriate view for the given model. If the content type is something other than HTML then the respond method will attempt to lookup an appropriate grails.rest.render.Renderer instance that is capable of rendering the passed object. This is done by inspecting the grails.rest.render.RendererRegistry.
By default there are already renderers configured for JSON and XML, to find out how to register a custom renderer see the section on "Customizing Response Rendering".
The show action, which is used to display an individual resource by id, can be implemented in one line of Groovy code (excluding the method signature):
def show(Book book) {
respond book
}
By specifying the domain instance as a parameter to the action Grails will automatically attempt to lookup the domain instance using the id parameter of the request. If the domain instance doesn't exist, then null will be passed into the action. The respond method will return a 404 error if null is passed, otherwise once again it will attempt to render an appropriate response. If the format is HTML then an appropriate model will be produced. The following action is functionally equivalent to the above action:
def show(Book book) {
if(book == null) {
render status:404
}
else {
return [book: book]
}
}
The save action creates new resource representations. To start off, simply define an action that accepts a resource as the first argument and mark it as Transactional with the grails.gorm.transactions.Transactional transform:
@Transactional
def save(Book book) {
...
}
Then the first thing to do is check whether the resource has any validation errors and if so respond with the errors:
if(book.hasErrors()) {
respond book.errors, view:'create'
}
else {
...
}
In the case of HTML the 'create' view will be rendered again so the user can correct the invalid input. In the case of other formats (JSON, XML etc.), the errors object itself will be rendered in the appropriate format and a status code of 422 (UNPROCESSABLE_ENTITY) returned.
If there are no errors then the resource can be saved and an appropriate response sent:
book.save flush:true
withFormat {
html {
flash.message = message(code: 'default.created.message', args: [message(code: 'book.label', default: 'Book'), book.id])
redirect book
}
'*' { render status: CREATED }
}
In the case of HTML a redirect is issued to the originating resource and for other formats a status code of 201 (CREATED) is returned.
The update action updates an existing resource representation and is largely similar to the save action. First define the method signature:
@Transactional
def update(Book book) {
...
}
If the resource exists then Grails will load the resource, otherwise null is passed. In the case of null, you should return a 404:
if(book == null) {
render status: NOT_FOUND
}
else {
...
}
Then once again check for validation errors and if so respond with the errors:
if(book.hasErrors()) {
respond book.errors, view:'edit'
}
else {
...
}
In the case of HTML the 'edit' view will be rendered again so the user can correct the invalid input. In the case of other formats (JSON, XML etc.) the errors object itself will be rendered in the appropriate format and a status code of 422 (UNPROCESSABLE_ENTITY) returned.
book.save flush:true
withFormat {
html {
flash.message = message(code: 'default.updated.message', args: [message(code: 'book.label', default: 'Book'), book.id])
redirect book
}
'*' { render status: OK }
}
In the case of HTML a redirect is issued to the originating resource and for other formats a status code of 200 (OK) is returned.
The delete action deletes an existing resource. The implementation is largely similar to the update action, except the delete() method is called instead:
book.delete flush:true
withFormat {
html {
flash.message = message(code: 'default.deleted.message', args: [message(code: 'Book.label', default: 'Book'), book.id])
redirect action:"index", method:"GET"
}
'*'{ render status: NO_CONTENT }
}
Notice that for an HTML response a redirect is issued back to the index action, whilst for other content types a response code 204 (NO_CONTENT) is returned.
To see some of these concepts in action and help you get going, the Scaffolding plugin, version 2.0 and above, can generate a REST ready controller for you, simply run the command:
$ grails generate-controller <<Domain Class Name>>
Calling Grails REST services - as well as third-party services - is very straightforward using the Micronaut HTTP Client. This HTTP client has both a low-level API and a higher level AOP-driven API, making it useful for both simple requests as well as building declarative, type-safe API layers.
To use the Micronaut HTTP client you must have the micronaut-http-client dependency on your classpath. Add the following dependency to your build.gradle file.
compile 'io.micronaut:micronaut-http-client'
The HttpClient interface forms the basis for the low-level API. This interfaces declares methods to help ease executing HTTP requests and receive responses.
The majority of the methods in the HttpClient interface return Reactive Streams Publisher instances, and a sub-interface called RxHttpClient is included that provides a variation of the HttpClient interface that returns RxJava Flowable types. When using HttpClient in a blocking flow, you may wish to call toBlocking() to return an instance of BlockingHttpClient.
There are a few ways by which you can obtain a reference to an HttpClient. The simplest way is to use the create method:
import com.fasterxml.jackson.databind.DeserializationFeature
import com.fasterxml.jackson.databind.ObjectMapper
import io.micronaut.http.HttpRequest
import io.micronaut.http.HttpResponse
import io.micronaut.http.client.BlockingHttpClient
import io.micronaut.http.client.HttpClient

List<Album> searchWithApi(String searchTerm) {
    String baseUrl = ""
    BlockingHttpClient client = HttpClient.create(baseUrl.toURL()).toBlocking() // create a BlockingHttpClient for the base URL
    HttpRequest request = HttpRequest.GET("/search?limit=25&media=music&entity=album&term=${searchTerm}")
    HttpResponse<String> resp = client.exchange(request, String)
    client.close() // close the client once the response has been received
    String json = resp.body()
    ObjectMapper objectMapper = new ObjectMapper() // bind the raw JSON to the SearchResult POGO with Jackson
    objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
    SearchResult searchResult = objectMapper.readValue(json, SearchResult)
    searchResult.results
}
Consult the Http Client section of the Micronaut user guide for more information on using the HttpClient low-level API.
A declarative HTTP client can be written by adding the @Client annotation to any interface or abstract class. Using Micronaut’s AOP support (see the Micronaut user guide section on Introduction Advice), the abstract or interface methods will be implemented for you at compilation time as HTTP calls. Declarative clients can return data-bound POGOs (or POJOs) without requiring special handling from the calling code.
package example.grails
import io.micronaut.http.annotation.Get
import io.micronaut.http.client.annotation.Client
@Client("")
interface GrailsAppForgeClient {
@Get("/{version}/profiles")
List<Map> profiles(String version)
}
Note that HTTP client methods are annotated with the appropriate HTTP method, such as @Get or @Post.
To use a client like the one in the above example, simply inject an instance of the client into any bean using the @Autowired annotation.
@Autowired GrailsAppForgeClient appForgeClient
List<Map> profiles(String grailsVersion) {
respond appForgeClient.profiles(grailsVersion)
}
For more details on writing and using declarative clients, consult the Http Client section of the Micronaut user guide.
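A declarative client can also send request bodies. The sketch below assumes a hypothetical endpoint and uses Micronaut's @Post and @Body annotations:

package example.grails

import io.micronaut.http.annotation.Body
import io.micronaut.http.annotation.Post
import io.micronaut.http.client.annotation.Client

@Client("https://api.example.com") // hypothetical base URL
interface ReviewClient {

    @Post("/books/{id}/reviews")
    Map addReview(Long id, @Body Map review)
}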
Since Grails 3.1, Grails supports a tailored profile for creating REST applications that provides a more focused set of dependencies and commands.
To get started with the REST profile, create an application specifying rest-api as the name of the profile:
$ grails create-app my-api --profile rest-api
This will create a new REST application that provides the following features:
Default set of commands for creating and generating REST endpoints
Defaults to using JSON views for rendering responses (see the next section)
Fewer plugins than the default Grails profile (no GSP, no Asset Pipeline, nothing HTML related)
You will notice, for example, in the grails-app/views directory that there are *.gson files for rendering the default index page as well as any 404 and 500 errors.
If you issue the following set of commands:
$ grails create-domain-class my.api.Book
$ grails generate-all my.api.Book
Instead of a CRUD HTML interface, a REST endpoint is generated that produces JSON responses. In addition, the generated functional and unit tests by default test the REST endpoint.
Since Grails 3.1, Grails supports a profile for creating applications with AngularJS that provides a more focused set of dependencies and commands. The angular profile inherits from the REST profile and therefore has all of the commands and properties that the REST profile has.
To get started with the AngularJS profile, create an application specifying angularjs as the name of the profile:
$ grails create-app my-api --profile angularjs
This will create a new Grails application that provides the following features:
Default set of commands for creating AngularJS artefacts
Gradle plugin to manage client side dependencies
Gradle plugin to execute client side unit tests
Asset Pipeline plugins to ease development
By default the AngularJS profile includes GSP support in order to render the index page. This is necessary because the profile is designed around asset pipeline.
The new commands are:
create-ng-component
create-ng-controller
create-ng-directive
create-ng-domain
create-ng-module
create-ng-service
The AngularJS profile is designed around a specific project structure. The create-ng commands will automatically create modules where they do not exist.
$ grails create-ng-controller foo
This will produce a fooController.js file in grails-app/assets/javascripts/${default package name}/controllers.
The javascripts directory is the default location for generated AngularJS artefacts; it can be changed with the grails.codegen.angular.assetDir configuration setting.
$ grails create-ng-domain foo.bar
This will produce a Bar.js file in grails-app/assets/javascripts/foo/domains. It will also create the "foo" module if it does not already exist.
$ grails create-ng-module foo.bar
This will produce a foo.bar.js file in grails-app/assets/javascripts/foo/bar. Note the naming convention for modules is different than other artefacts.
$ grails create-ng-service foo.bar --type constant
This will produce a bar.js file in grails-app/assets/javascripts/foo/services. It will also create the "foo" module if it does not already exist. The create-ng-service command accepts a flag -type. The types that can be used are:
service
factory (default)
value
provider
constant
Along with the artefacts themselves, the profile will also produce a skeleton unit test file under src/test/javascripts for each create command.
The Gradle Bower Plugin is used to manage dependencies with bower. Visit the plugin documentation to learn how to use the plugin.
The Gradle Karma Plugin is used to execute client side unit tests. All generated tests are written with Jasmine. Visit the plugin documentation to learn how to use the plugin.
The AngularJS profile includes several asset pipeline plugins to make development easier.
JS Closure Wrap Asset Pipeline will wrap your Angular code in immediately invoked function expressions.
Annotate Asset Pipeline will annotate your dependencies to be safe for minification.
Template Asset Pipeline will put your templates into the $templateCache to prevent http requests to retrieve the templates.
Since Grails 3.2.1, Grails supports a profile for creating applications with Angular that provides a more future facing setup.
The biggest change in this profile is that the profile creates a multi project gradle build. This is the first profile to have done so. The Angular profile relies on the Angular CLI to manage the client side application. The server side application is the same as an application created with the rest-api profile.
To get started with the Angular profile, create an application specifying angular as the name of the profile:
$ grails create-app my-app --profile angular
This will create a my-app directory with the following contents:
my-app
client/
gradle/
gradlew
gradlew.bat
server/
settings.gradle
The entire client application lives in the client folder and the entire server application lives in the server folder.
To use this profile, you should have Node, NPM, and the Angular CLI installed. Node should be at least version 5 and NPM should be at least version 3.
The Angular profile is designed to be used with the Angular CLI. The CLI was used to create the client application side of the profile to start with. The CLI provides commands to do most of the things you would want to do with the client application, including creating components or services. Because of that, the profile itself provides no commands to do those same things.
To execute the server side application only, you can execute the bootRun task in the server project:
./gradlew server:bootRun
The same can be done for the client application:
./gradlew client:bootRun
To execute both, you must do so in parallel:
./gradlew bootRun --parallel
The default client application that comes with the profile provides some tests that can be executed. To execute tests in the application:
./gradlew test
The test task will execute unit tests with Karma and Jasmine.
./gradlew integrationTest
The integrationTest task will execute e2e tests with Protractor.
Because the client side and server side will be running on separate ports, CORS configuration is required. By default the profile will configure the server side to allow CORS from all hosts via the following config:

grails:
    cors:
        enabled: true
See the section on CORS in the user guide for information on configuring this feature for your needs.
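If you prefer not to allow every origin, the same configuration block can be narrowed. A sketch assuming the Angular CLI dev server runs on its default port 4200:

grails:
    cors:
        enabled: true
        allowedOrigins:
            - http://localhost:4200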
As mentioned in the previous section the REST profile by default uses JSON views to render JSON responses. These play a similar role to GSP, but instead are optimized for outputing JSON responses instead of HTML.
You can continue to separate your application in terms of MVC, with the logic of your application residing in controllers and services, whilst view related matters are handled by JSON views.
JSON views also provide the flexibility to easily customize the JSON presented to clients without having to resort to relatively complex marshalling libraries like Jackson or Grails' marshaller API.
If you are using the REST or AngularJS profiles then the JSON views plugin will already be included and you can skip the remainder of this section. Otherwise you will need to modify your build.gradle to include the necessary plugin to activate JSON views:
compile 'org.grails.plugins:views-json:1.0.0' // or whatever is the latest version
In order to compile JSON views for production deployment you should also activate the Gradle plugin by first modifying the buildscript block:
buildscript {
...
dependencies {
...
classpath "org.grails.plugins:views-gradle:1.0.0"
}
}
Then apply the org.grails.plugins.views-json Gradle plugin after any Grails core gradle plugins:
...
apply plugin: "org.grails.grails-web"
apply plugin: "org.grails.plugins.views-json"
This will add a compileGsonViews task to Gradle, which is invoked prior to creating the production JAR or WAR file.
JSON views go into the grails-app/views directory and end with the .gson suffix. They are regular Groovy scripts and can be opened in any Groovy editor.
Example JSON view:
json.person {
name "bob"
}
The above JSON view produces:
{"person":{"name":"bob"}}
There is an implicit json variable which is an instance of StreamingJsonBuilder.
Example usages:
json(1,2,3) == "[1,2,3]"
json { name "Bob" } == '{"name":"Bob"}'
json([1,2,3]) { n it } == '[{"n":1},{"n":2},{"n":3}]'
Refer to the API documentation on StreamingJsonBuilder for more information about what is possible.
You can define templates starting with underscore _. For example given the following template called _person.gson:
model {
Person person
}
json {
name person.name
age person.age
}
You can render it with a view as follows:
model {
Family family
}
json {
name family.father.name
age family.father.age
oldestChild g.render(template:"person", model:[person: family.children.max { Person p -> p.age } ])
children g.render(template:"person", collection: family.children, var:'person')
}
Alternatively, for a more concise way to invoke templates, use the tmpl variable:
model {
Family family
}
json {
name family.father.name
age family.father.age
oldestChild tmpl.person( family.children.max { Person p -> p.age } )
children tmpl.person( family.children )
}
You can also pass a closure to g.render to add additional JSON output:
json g.render(book) {
pages 1000
}
There are a few useful conventions you can follow when creating JSON views. For example if you have a domain class called Book, then creating a template located at grails-app/views/book/_book.gson and using the respond method will result in rendering the template:
def show(Long id) {
respond Book.get(id)
}
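For illustration, a minimal sketch of such a grails-app/views/book/_book.gson template (the title property is assumed from the earlier examples) would be:
model {
    Book book
}
json {
    title book.title
}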
In addition if an error occurs during validation by default Grails will try to render a template called grails-app/views/book/_errors.gson, otherwise it will try to render grails-app/views/errors/_errors.gson if the former doesn’t exist.
This is useful because when persisting objects you can respond with validation errors to render these aforementioned templates:
@Transactional
def save(Book book) {
if (book.hasErrors()) {
transactionStatus.setRollbackOnly()
respond book.errors
}
else {
// valid object
}
}
If a validation error occurs in the above example the grails-app/views/book/_errors.gson template will be rendered.
For more information on JSON views (and Markup views), see the JSON Views user guide.
If you are looking for a more low-level API and JSON or Markup views don't suit your needs then you may want to consider implementing a custom renderer.
The default renderers for XML and JSON can be found in the grails.rest.render.xml and grails.rest.render.json packages respectively. These use the Grails converters (grails.converters.XML and grails.converters.JSON) by default for response rendering.
You can easily customize response rendering using these default renderers. A common change you may want to make is to include or exclude certain properties from rendering.
As mentioned previously, Grails maintains a registry of grails.rest.render.Renderer instances. There are some default configured renderers and the ability to register or override renderers for a given domain class or even for a collection of domain classes. To include only certain properties in the rendered output you need to register a custom renderer by defining a bean in grails-app/conf/spring/resources.groovy:
import grails.rest.render.xml.*
beans = {
bookRenderer(XmlRenderer, Book) {
includes = ['title']
}
}
To exclude a property, the excludes property of the XmlRenderer class can be used:
import grails.rest.render.xml.*
beans = {
bookRenderer(XmlRenderer, Book) {
excludes = ['isbn']
}
}
As mentioned previously, the default renderers use the grails.converters package under the covers. In other words, they essentially do the following:
import grails.converters.*
...
render book as XML
// or render book as JSON
Why the separation between converters and renderers? Well, a renderer has more flexibility to use whatever rendering technology you choose. When implementing a custom renderer you could use Jackson, Gson or any Java library to implement the renderer. Converters on the other hand are very much tied to Grails' own marshalling implementation.
If you want even more control of the rendering or prefer to use your own marshalling techniques then you can implement your own Renderer instance. For example below is a simple implementation that customizes the rendering of the Book class:
package myapp
import grails.rest.render.*
import grails.web.mime.MimeType
class BookXmlRenderer extends AbstractRenderer<Book> {
BookXmlRenderer() {
super(Book, [MimeType.XML,MimeType.TEXT_XML] as MimeType[])
}
void render(Book object, RenderContext context) {
context.contentType = MimeType.XML.name
def xml = new groovy.xml.MarkupBuilder(context.writer)
xml.book(id: object.id, title:object.title)
}
}
The AbstractRenderer super class has a constructor that takes the class that it renders and the MimeType(s) that are accepted (via the ACCEPT header or file extension) for the renderer.
To configure this renderer, simply add it as a bean to grails-app/conf/spring/resources.groovy:
beans = {
bookRenderer(myapp.BookXmlRenderer)
}
The result will be that all Book instances will be rendered in the following format:
<book id="1" title="The Stand"/>
A grails.rest.render.ContainerRenderer is a renderer that renders responses for containers of objects (lists, maps, collections etc.). The interface is largely the same as the Renderer interface except for the addition of the getComponentType() method, which should return the "contained" type. For example:
class BookListRenderer implements ContainerRenderer<List, Book> {
Class<List> getTargetType() { List }
Class<Book> getComponentType() { Book }
MimeType[] getMimeTypes() { [ MimeType.XML] as MimeType[] }
void render(List object, RenderContext context) {
....
}
}
You can also customize rendering on a per action basis using Groovy Server Pages (GSP). For example given the show action mentioned previously:
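def show(Long id) {
    respond Book.get(id)
}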
You could supply a show.xml.gsp file to customize the rendering of the XML:
<%@page contentType="application/xml"%>
<book id="${book.id}" title="${book.title}"/>
HATEOAS, an abbreviation for Hypermedia as the Engine of Application State, is a common pattern applied to REST architectures that uses hypermedia and linking to define the REST API.
Hypermedia (also called Mime or Media Types) are used to describe the state of a REST resource, and links tell clients how to transition to the next state. The format of the response is typically JSON or XML, although standard formats such as Atom and/or HAL are frequently used.
HAL is a standard exchange format commonly used when developing REST APIs that follow HATEOAS principles. An example HAL document representing a list of orders can be seen below:
{
"_links": {
"self": { "href": "/orders" },
"next": { "href": "/orders?page=2" },
"find": {
"href": "/orders{?id}",
"templated": true
},
"admin": [{
"href": "/admins/2",
"title": "Fred"
}, {
"href": "/admins/5",
"title": "Kate"
}]
},
"currentlyProcessing": 14,
"shippedToday": 20,
"_embedded": {
"order": [{
"_links": {
"self": { "href": "/orders/123" },
"basket": { "href": "/baskets/98712" },
"customer": { "href": "/customers/7809" }
},
"total": 30.00,
"currency": "USD",
"status": "shipped"
}, {
"_links": {
"self": { "href": "/orders/124" },
"basket": { "href": "/baskets/97213" },
"customer": { "href": "/customers/12369" }
},
"total": 20.00,
"currency": "USD",
"status": "processing"
}]
}
}
To return HAL instead of regular JSON for a resource you can simply override the renderer in grails-app/conf/spring/resources.groovy with an instance of grails.rest.render.hal.HalJsonRenderer (or HalXmlRenderer for the XML variation):
import grails.rest.render.hal.*
beans = {
halBookRenderer(HalJsonRenderer, rest.test.Book)
}
You will also need to update the acceptable response formats for the resource so that the HAL format is included. Not doing so will result in a 406 - Not Acceptable response being returned from the server.
This can be done by setting the formats attribute of the Resource transformation:
import grails.rest.*
@Resource(uri='/books', formats=['json', 'xml', 'hal'])
class Book {
...
}
Or by updating the responseFormats in the controller:
class BookController extends RestfulController {
static responseFormats = ['json', 'xml', 'hal']
// ...
}
With the bean in place requesting the HAL content type will return HAL:
$ curl -i -H "Accept: application/hal+json"
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: application/hal+json;charset=ISO-8859-1
{
"_links": {
"self": {
"href": "",
"hreflang": "en",
"type": "application/hal+json"
}
},
"title": "\"The Stand\""
}
To use HAL XML format simply change the renderer:
import grails.rest.render.hal.*
beans = {
halBookRenderer(HalXmlRenderer, rest.test.Book)
}
To return HAL instead of regular JSON for a list of resources you can simply override the renderer in grails-app/conf/spring/resources.groovy with an instance of grails.rest.render.hal.HalJsonCollectionRenderer:
import grails.rest.render.hal.*
beans = {
halBookCollectionRenderer(HalJsonCollectionRenderer, rest.test.Book)
}
$ curl -i -H "Accept: application/hal+json"
{
    "_embedded": {
"book": [
{
"_links": {
"self": {
"href": "",
"hreflang": "en",
"type": "application/hal+json"
}
},
"title": "The Stand"
},
{
"_links": {
"self": {
"href": "",
"hreflang": "en",
"type": "application/hal+json"
}
},
"title": "Infinite Jest"
},
{
"_links": {
"self": {
"href": "",
"hreflang": "en",
"type": "application/hal+json"
}
},
"title": "Walden"
}
]
}
}
Notice that the key associated with the list of Book objects in the rendered JSON is book which is derived from the type of objects in the collection, namely Book. In order to customize the value of this key assign a value to the collectionName property on the HalJsonCollectionRenderer bean as shown below:
import grails.rest.render.hal.*
beans = {
halBookCollectionRenderer(HalJsonCollectionRenderer, rest.test.Book) {
collectionName = 'publications'
}
}
With that in place the rendered HAL will look like the following:
$ curl -i -H "Accept: application/hal+json"
{
    "_embedded": {
"publications": [
{
"_links": {
"self": {
"href": "",
"hreflang": "en",
"type": "application/hal+json"
}
},
"title": "The Stand"
},
{
"_links": {
"self": {
"href": "",
"hreflang": "en",
"type": "application/hal+json"
}
},
"title": "Infinite Jest"
},
{
"_links": {
"self": {
"href": "",
"hreflang": "en",
"type": "application/hal+json"
}
},
"title": "Walden"
}
]
}
}
If you wish to use a custom Mime Type then you first need to declare the Mime Types in grails-app/conf/application.groovy:
grails.mime.types = [
all: "*/*",
book: "application/vnd.books.org.book+json",
bookList: "application/vnd.books.org.booklist+json",
...
]
Then override the renderer to return HAL using the custom Mime Types:
import grails.rest.render.hal.*
import grails.web.mime.*
beans = {
halBookRenderer(HalJsonRenderer, rest.test.Book, new MimeType("application/vnd.books.org.book+json", [v:"1.0"]))
halBookListRenderer(HalJsonCollectionRenderer, rest.test.Book, new MimeType("application/vnd.books.org.booklist+json", [v:"1.0"]))
}
In the above example the first bean defines a HAL renderer for a single book instance that returns a Mime Type of application/vnd.books.org.book+json. The second bean defines the Mime Type used to render a collection of books (in this case application/vnd.books.org.booklist+json).
With this in place issuing a request for the new Mime Type returns the necessary HAL:
$ curl -i -H "Accept: application/vnd.books.org.book+json"
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: application/vnd.books.org.book+json;charset=ISO-8859-1
{
"_links": {
"self": {
"href": "",
"hreflang": "en",
"type": "application/vnd.books.org.book+json"
}
},
"title": "\"The Stand\""
}
An important aspect of HATEOAS is the usage of links that describe the transitions the client can use to interact with the REST API. By default the HalJsonRenderer will automatically create links for you for associations and to the resource itself (using the "self" relationship).
However you can customize link rendering using the link method that is added to all domain classes annotated with grails.rest.Resource or any class annotated with grails.rest.Linkable. For example, the show action can be modified as follows to provide a new link in the resulting output:
def show(Book book) {
book.link rel:'publisher', href: g.createLink(absolute: true, resource:"publisher", params:[bookId: book.id])
respond book
}
Which will result in output such as:
{
"_links": {
"self": {
"href": "",
"hreflang": "en",
"type": "application/vnd.books.org.book+json"
},
"publisher": {
"href": "",
"hreflang": "en"
}
},
"title": "\"The Stand\""
}
The link method can be passed named arguments that match the properties of the grails.rest.Link class.
Atom is another standard interchange format used to implement REST APIs.
To use Atom rendering again simply define a custom renderer:
import grails.rest.render.atom.*
beans = {
halBookRenderer(AtomRenderer, rest.test.Book)
halBookListRenderer(AtomCollectionRenderer, rest.test.Book)
}
Vnd.Error is a standardised way of expressing an error response.
By default, when a validation error occurs while attempting to POST new resources, the errors object will be sent back along with a 422 response code:
$ curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X POST -d ""
HTTP/1.1 422 Unprocessable Entity
Server: Apache-Coyote/1.1
Content-Type: application/json;charset=ISO-8859-1
{
"errors": [
{
"object": "rest.test.Book",
"field": "title",
"rejected-value": null,
"message": "Property [title] of class [class rest.test.Book] cannot be null"
}
]
}
If you wish to change the format to Vnd.Error then simply register grails.rest.render.errors.VndErrorJsonRenderer bean in grails-app/conf/spring/resources.groovy:
beans = {
vndJsonErrorRenderer(grails.rest.render.errors.VndErrorJsonRenderer)
// for Vnd.Error XML format
vndXmlErrorRenderer(grails.rest.render.errors.VndErrorXmlRenderer)
}
Then if you alter the client request to accept Vnd.Error you get an appropriate response:
$ curl -i -H "Accept: application/vnd.error+json,application/json" -H "Content-Type: application/json" -X POST -d ""
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: application/vnd.error+json;charset=ISO-8859-1
[
{
"logref": "book.nullable",
"message": "Property [title] of class [class rest.test.Book] cannot be null",
"_links": {
"resource": {
"href": ""
}
}
}
]
The framework provides a sophisticated but simple mechanism for binding REST requests to domain objects and command objects. One way to take advantage of this is to bind the request property in a controller to the properties of a domain class. Given the following XML as the body of the request, the createBook action will create a new Book and assign "The Stand" to the title property and "Stephen King" to the authorName property.
<?xml version="1.0" encoding="UTF-8"?>
<book>
<title>The Stand</title>
<authorName>Stephen King</authorName>
</book>
class BookController {
def createBook() {
def book = new Book()
book.properties = request
// ...
}
}
Command objects will automatically be bound with the body of the request:
class BookController {
def createBook(BookCommand book) {
// ...
}
}
class BookCommand {
String title
String authorName
}
If the command object type is a domain class and the root element of the XML document contains an id attribute, the id value will be used to retrieve the corresponding persistent instance from the database and then the rest of the document will be bound to the instance. If no corresponding record is found in the database, the command object reference will be null.
<?xml version="1.0" encoding="UTF-8"?>
<book id="42">
<title>Walden</title>
<authorName>Henry David Thoreau</authorName>
</book>
class BookController {
def updateBook(Book book) {
// The book will have been retrieved from the database and updated
// by doing something like this:
//
// book == Book.get('42')
// if(book != null) {
// book.properties = request
// }
//
// the code above represents what the framework will
// have done. There is no need to write that code.
// ...
}
}
The data binding depends on an instance of the DataBindingSource interface created by an instance of the DataBindingSourceCreator interface. The specific implementation of DataBindingSourceCreator will be selected based on the contentType of the request. Several implementations are provided to handle common content types. The default implementations will be fine for most use cases. The following table lists the content types which are supported by the core framework and which DataBindingSourceCreator implementations are used for each. All of the implementation classes are in the org.grails.databinding.bindingsource package.
Content Type(s)               Bean Name                          Implementation Class
application/xml, text/xml     xmlDataBindingSourceCreator        XmlDataBindingSourceCreator
application/json, text/json   jsonDataBindingSourceCreator       JsonDataBindingSourceCreator
application/hal+json          halJsonDataBindingSourceCreator    HalJsonDataBindingSourceCreator
application/hal+xml           halXmlDataBindingSourceCreator     HalXmlDataBindingSourceCreator
In order to provide your own DataBindingSourceCreator for any of those content types, write a class which implements
DataBindingSourceCreator and register an instance of that class in the Spring application context. If you
are replacing one of the existing helpers, use the corresponding bean name from above. If you are providing a
helper for a content type other than those accounted for by the core framework, the bean name may be anything that
you like but you should take care not to conflict with one of the bean names above.
The DataBindingSourceCreator interface defines just 2 methods:
package org.grails.databinding.bindingsource
import grails.web.mime.MimeType
import grails.databinding.DataBindingSource
/**
* A factory for DataBindingSource instances
*
* @since 2.3
* @see DataBindingSourceRegistry
* @see DataBindingSource
*
*/
interface DataBindingSourceCreator {
/**
* @return All of the {@link MimeType} supported by this helper
*/
MimeType[] getMimeTypes()
/**
* Creates a DataBindingSource suitable for binding bindingSource to bindingTarget
*
* @param mimeType a mime type
* @param bindingTarget the target of the data binding
* @param bindingSource the value being bound
* @return a DataBindingSource
*/
DataBindingSource createDataBindingSource(MimeType mimeType, Object bindingTarget, Object bindingSource)
}
AbstractRequestBodyDataBindingSourceCreator
is an abstract class designed to be extended to simplify writing custom DataBindingSourceCreator classes. Classes which
extend AbstractRequestBodyDataBindingSourceCreator need to implement a method named createBindingSource
which accepts an InputStream as an argument and returns a DataBindingSource as well as implementing the getMimeTypes
method described in the DataBindingSourceCreator interface above. The InputStream argument to createBindingSource
provides access to the body of the request.
The code below shows a simple implementation.
package com.demo.myapp.databinding
import grails.web.mime.MimeType
import grails.databinding.DataBindingSource
import org...databinding.SimpleMapDataBindingSource
import org...databinding.bindingsource.AbstractRequestBodyDataBindingSourceCreator
/**
* A custom DataBindingSourceCreator capable of parsing key value pairs out of
* a request body containing a comma separated list of key:value pairs like:
*
* name:Herman,age:99,town:STL
*
*/
class MyCustomDataBindingSourceCreator extends AbstractRequestBodyDataBindingSourceCreator {
@Override
public MimeType[] getMimeTypes() {
[new MimeType('text/custom+demo+csv')] as MimeType[]
}
@Override
protected DataBindingSource createBindingSource(InputStream inputStream) {
def map = [:]
def reader = new InputStreamReader(inputStream)
// this is an obviously naive parser and is intended
// for demonstration purposes only.
reader.eachLine { line ->
def keyValuePairs = line.split(',')
keyValuePairs.each { keyValuePair ->
if(keyValuePair?.trim()) {
def keyValuePieces = keyValuePair.split(':')
def key = keyValuePieces[0].trim()
def value = keyValuePieces[1].trim()
map[key] = value
}
}
}
// create and return a DataBindingSource which contains the parsed data
new SimpleMapDataBindingSource(map)
}
}
An instance of MyCustomDataBindingSourceCreator needs to be registered in the Spring application context.
beans = {
myCustomCreator com.demo.myapp.databinding.MyCustomDataBindingSourceCreator
// ...
}
With that in place the framework will use the myCustomCreator bean any time a DataBindingSourceCreator is needed
to deal with a request which has a contentType of "text/custom+demo+csv".
No direct support is provided for RSS or Atom within Grails. You could construct RSS or Atom feeds with the render method's XML capability.
Grails validation capability is built on Spring's Validator API and data binding capabilities, and it provides a unified way to define validation "constraints" with its constraints mechanism.
Within a domain class constraints are defined with the constraints property that is assigned a code block:
class User {
String login
String password
String email
Integer age
static constraints = {
...
}
}
You then use method calls that match the property name for which the constraint applies in combination with named parameters to specify constraints:
class User {
...
static constraints = {
login size: 5..15, blank: false, unique: true
password size: 5..15, blank: false
age min: 18
}
}
In this example we’ve declared that the login property must be between 5 and 15 characters long, it cannot be blank and must be unique. We’ve also applied other constraints to the password, email and age properties.
A complete reference for the available constraints can be found in the Quick Reference section under the Constraints heading.
Note that constraints are only evaluated once which may be relevant for a constraint that relies on a value like an instance of java.util.Date.
class User {
...
static constraints = {
// this Date object is created when the constraints are evaluated, not
// each time an instance of the User class is validated.
birthDate max: new Date()
}
}
It’s very easy to attempt to reference instance variables from the static constraints block, but this isn’t legal in Groovy (or Java). If you do so, you will get a MissingPropertyException for your trouble. For example, you may try
class Response {
Survey survey
Answer answer
static constraints = {
survey blank: false
answer blank: false, inList: survey.answers
}
}
See how the inList constraint references the instance property survey? That won’t work. Instead, use a custom validator:
class Response {
...
static constraints = {
survey blank: false
answer blank: false, validator: { val, obj -> val in obj.survey.answers }
}
}
In this example, the obj argument to the custom validator is the domain instance that is being validated, so we can access its survey property and return a boolean to indicate whether the new value for the answer property, val, is valid.
Call the validate method to validate a domain class instance:
def user = new User(params)
if (user.validate()) {
// do something with user
}
else {
user.errors.allErrors.each {
println it
}
}
The errors property on domain classes is an instance of the Spring Errors interface. The Errors interface provides methods to navigate the validation errors and also retrieve the original values.
Within Grails there are two phases of validation, the first one being data binding which occurs when you bind request parameters onto an instance such as:
def user = new User(params)
At this point you may already have errors in the errors property due to type conversion (such as converting Strings to Dates). You can check these and obtain the original input value using the Errors API:
if (user.hasErrors()) {
if (user.errors.hasFieldErrors("login")) {
println user.errors.getFieldError("login").rejectedValue
}
}
The second phase of validation happens when you call validate or save. This is when Grails will validate the bound values against the constraints you defined. For example, by default the save method calls validate before executing, allowing you to write code like:
if (user.save()) {
return user
}
else {
user.errors.allErrors.each {
println it
}
}
A common pattern in Grails is to use Command Objects for validating user-submitted data and then copy the properties of the command object to the relevant domain classes. This often means that your command objects and domain classes share properties and their constraints. You could manually copy and paste the constraints between the two, but that’s a very error-prone approach. Instead, make use of Grails' global constraints and import mechanism.
In addition to defining constraints in domain classes, command objects and other validateable classes, you can also define them in grails-app/conf/runtime.groovy:
grails.gorm.default.constraints = {
'*'(nullable: true, size: 1..20)
myShared(nullable: false, blank: false)
}
These constraints are not attached to any particular classes, but they can be easily referenced from any validateable class:
class User {
...
static constraints = {
login shared: "myShared"
}
}
Note the use of the shared argument, whose value is the name of one of the constraints defined in grails.gorm.default.constraints. Despite the name of the configuration setting, you can reference these shared constraints from any validateable class, such as command objects.
The '*' constraint is a special case: it means that the associated constraints ('nullable' and 'size' in the above example) will be applied to all properties in all validateable classes. These defaults can be overridden by the constraints declared in a validateable class.
Grails 2 introduced an alternative approach to sharing constraints that allows you to import a set of constraints from one class into another.
Let’s say you have a domain class like so:
class User {
String firstName
String lastName
String passwordHash
static constraints = {
firstName blank: false, nullable: false
lastName blank: false, nullable: false
passwordHash blank: false, nullable: false
}
}
You then want to create a command object, UserCommand, that shares some of the properties of the domain class and the corresponding constraints. You do this with the importFrom() method:
class UserCommand {
String firstName
String lastName
String password
String confirmPassword
static constraints = {
importFrom User
password blank: false, nullable: false
confirmPassword blank: false, nullable: false
}
}
This will import all the constraints from the User domain class and apply them to UserCommand. The import will ignore any constraints in the source class (User) that don’t have corresponding properties in the importing class (UserCommand). In the above example, only the 'firstName' and 'lastName' constraints will be imported into UserCommand because those are the only properties shared by the two classes.
If you want more control over which constraints are imported, use the include and exclude arguments. Both of these accept a list of simple or regular expression strings that are matched against the property names in the source constraints. So for example, if you only wanted to import the 'lastName' constraint you would use:
...
static constraints = {
importFrom User, include: ["lastName"]
...
}
or if you wanted all constraints that ended with 'Name':
...
static constraints = {
importFrom User, include: [/.*Name/]
...
}
Of course, exclude does the reverse, specifying which constraints should not be imported.
Typically if you get a validation error you redirect back to the view for rendering. Once there you need some way of displaying errors. Grails supports a rich set of tags for dealing with errors. To render the errors as a list you can use renderErrors:
<g:renderErrors bean="${user}" as="list" />
If you need more control you can use hasErrors and eachError:
<g:hasErrors bean="${user}">
<ul>
<g:eachError bean="${user}" var="err">
<li>${err}</li>
</g:eachError>
</ul>
</g:hasErrors>
It is often useful to highlight a field with a red box or some other indicator when it has been incorrectly input. This can also be done with the hasErrors tag by invoking it as a method. For example:
<div class='value ${hasErrors(bean:user,field:'login','errors')}'>
<input type="text" name="login" value="${fieldValue(bean:user,field:'login')}"/>
</div>
This code checks if the login field of the user bean has any errors and if so it adds an errors CSS class to the div, allowing you to use CSS rules to highlight the div.
Each error is actually an instance of the FieldError class in Spring, which retains the original input value within it. This is useful as you can use the error object to restore the value input by the user using the fieldValue tag:
<input type="text" name="login" value="${fieldValue(bean:user,field:'login')}"/>
This code will check for an existing FieldError in the User bean and if there is obtain the originally input value for the login field.
Another important thing to note about errors in Grails is that error messages are not hard coded anywhere. The FieldError class in Spring resolves messages from message bundles using Grails' i18n support.
The codes themselves are dictated by a convention. For example consider the constraints we looked at earlier:
package com.mycompany.myapp
class User {
...
static constraints = {
login size: 5..15, blank: false, unique: true
password size: 5..15, blank: false
age min: 18
}
}
If a constraint is violated, Grails looks by convention for a message code:
Constraint      Message code
blank           className.propertyName.blank
creditCard      className.propertyName.creditCard.invalid
email           className.propertyName.email.invalid
inList          className.propertyName.not.inList
matches         className.propertyName.matches.invalid
max             className.propertyName.max.exceeded
maxSize         className.propertyName.maxSize.exceeded
min             className.propertyName.min.notmet
minSize         className.propertyName.minSize.notmet
notEqual        className.propertyName.notEqual
nullable        className.propertyName.nullable
range           className.propertyName.range.toosmall or className.propertyName.range.toobig
size            className.propertyName.size.toosmall or className.propertyName.size.toobig
unique          className.propertyName.unique
url             className.propertyName.url.invalid
validator       classname.propertyName. + String returned by Closure
In the case of the blank constraint this would be user.login.blank so you would need a message such as the following in your grails-app/i18n/messages.properties file:
user.login.blank=Your login name must be specified!
The class name is looked for both with and without a package, with the packaged version taking precedence. So for example, com.mycompany.myapp.User.login.blank will be used before user.login.blank. This allows for cases where your domain class message codes clash with a plugin’s.
For a reference on what codes are for which constraints refer to the reference guide for each constraint (e.g. blank).
The renderErrors tag will automatically look up messages for you using the message tag. If you need more control of rendering you can handle this yourself:
<g:hasErrors bean="${user}">
<ul>
<g:eachError bean="${user}" var="err">
<li><g:message error="${err}"/></li>
</g:eachError>
</ul>
</g:hasErrors>
In this example within the body of the eachError tag we use the message tag in combination with its error argument to read the message for the given error.
Domain classes and command objects support validation by default. Other classes can also be made validateable.
Classes which define the static constraints property and implement the Validateable trait will be validateable. Consider this example:
package com.mycompany.myapp
import grails.validation.Validateable
class User implements Validateable {
...
static constraints = {
login size: 5..15, blank: false, unique: true
password size: 5..15, blank: false
age min: 18
}
}
Accessing the constraints on a validateable object is slightly different. You can access a command object’s constraints programmatically in another context by accessing the constraintsMap static property of the class. That property is an instance of Map<String, ConstrainedProperty>
In the example above, accessing User.constraintsMap.login.blank would yield false, while
User.constraintsMap.login.unique would yield true.
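Expressed as code, a minimal sketch of those checks (based directly on the statements above, using the User class shown in this section) would be:
// Programmatic access to the declared constraints of a validateable class.
assert User.constraintsMap.login.blank == false
assert User.constraintsMap.login.unique == true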
Grails defines the notion of a service layer. The Grails team discourages the embedding of core application logic inside controllers, as it does not promote reuse and a clean separation of concerns.
Services in Grails are the place to put the majority of the logic in your application, leaving controllers responsible for handling request flow with redirects and so on.
You can create a Grails service by running the create-service command from the root of your project in a terminal window:
grails create-service helloworld.simple
The above example will create a service at the location grails-app/services/helloworld/SimpleService.groovy. A service’s name ends with the convention Service, other than that a service is a plain Groovy class:
package helloworld
class SimpleService {
}
Services are typically involved with coordinating logic between domain classes, and hence often involved with persistence that spans large operations. Given the nature of services, they frequently require transactional behaviour. You can use programmatic transactions with the withTransaction method, however this is repetitive and doesn’t fully leverage the power of Spring’s underlying transaction abstraction.
Services enable transaction demarcation, which is a declarative way of defining which methods are to be made transactional. To enable transactions on a service use the Transactional transform:
import grails.gorm.transactions.*
@Transactional
class CountryService {
}
The result is that all methods are wrapped in a transaction and automatic rollback occurs if a method throws an exception (both Checked or Runtime exceptions) or an Error. The propagation level of the transaction is by default set to PROPAGATION_REQUIRED.
Note that dependency injection is the only way that declarative transactions work; you will not get a transactional service if you instantiate it yourself with the new operator, such as new BookService().
In versions of Grails prior to Grails 3.1, Grails created Spring proxies and used the transactional property to enable and disable proxy creation. These proxies are disabled by default in applications created with Grails 3.1 and above in favor of the @Transactional transformation.
For versions of Grails 3.1.x and 3.2.x, if you wish to re-enable this feature (not recommended) then you must set grails.spring.transactionManagement to true or remove the configuration in grails-app/conf/application.yml or grails-app/conf/application.groovy.
In Grails 3.3.x Spring proxies for transaction management has been dropped completely, and you must use Grails' AST transforms. In Grails 3.3.x, if you wish to continue to use Spring proxies for transaction management you will have to configure them manually, using the appropriate Spring configuration.
Grails also provides @Transactional and @NotTransactional annotations for cases where you need more fine-grained control over transactions at a per-method level or need to specify an alternative propagation level. For example, the @NotTransactional annotation can be used to mark a particular method to be skipped when a class is annotated with @Transactional.
In this example listBooks uses a read-only transaction, updateBook uses a default read-write transaction, and deleteBook is not transactional (probably not a good idea given its name).
import grails.gorm.transactions.Transactional
class BookService {
@Transactional(readOnly = true)
def listBooks() {
Book.list()
}
@Transactional
def updateBook() {
// ...
}
def deleteBook() {
// ...
}
}
You can also annotate the class to define the default transaction behavior for the whole service, and then override that default per-method:
import grails.gorm.transactions.Transactional
@Transactional
class BookService {
def listBooks() {
Book.list()
}
def updateBook() {
// ...
}
def deleteBook() {
// ...
}
}
This version defaults to all methods being read-write transactional (due to the class-level annotation), but the listBooks method overrides this to use a read-only transaction:
import grails.gorm.transactions.Transactional
@Transactional
class BookService {
@Transactional(readOnly = true)
def listBooks() {
Book.list()
}
def updateBook() {
// ...
}
def deleteBook() {
// ...
}
}
Although updateBook and deleteBook aren’t annotated in this example, they inherit the configuration from the class-level annotation.
For more information refer to the section of the Spring user guide on Using @Transactional.
Unlike Spring you do not need any prior configuration to use Transactional; just specify the annotation as needed and Grails will detect it automatically.
An instance of TransactionStatus is available by default in Grails transactional service methods.
import grails.gorm.transactions.Transactional
@Transactional
class BookService {
def deleteBook() {
transactionStatus.setRollbackOnly()
}
}
Given two domain classes such as:
class Movie {
String title
}
class Book {
String title
static mapping = {
datasource 'books'
}
}
You can supply the desired data source to @Transactional or @ReadOnly annotations.
import grails.gorm.transactions.ReadOnly
import grails.gorm.transactions.Transactional
import groovy.transform.CompileStatic
@CompileStatic
class BookService {
@ReadOnly('books')
List<Book> findAll() {
Book.where {}.findAll()
}
@Transactional('books')
Book save(String title) {
Book book = new Book(title: title)
book.save()
book
}
}
@CompileStatic
class MovieService {
@ReadOnly
List<Movie> findAll() {
Movie.where {}.findAll()
}
}
When using transactions there are important considerations you must take into account with regards to how the underlying persistence session is handled by Hibernate. When a transaction is rolled back the Hibernate session used by GORM is cleared. This means any objects within the session become detached and accessing uninitialized lazy-loaded collections will lead to a LazyInitializationException.
To understand why it is important that the Hibernate session is cleared, consider the following example:
class Author {
String name
Integer age
static hasMany = [books: Book]
}
If you were to save two authors using consecutive transactions as follows:
Author.withTransaction { status ->
new Author(name: "Stephen King", age: 40).save()
status.setRollbackOnly()
}
Author.withTransaction { status ->
new Author(name: "Stephen King", age: 40).save()
}
Only the second author would be saved since the first transaction rolls back the author save() by clearing the Hibernate session. If the Hibernate session were not cleared then both author instances would be persisted and it would lead to very unexpected results.
It can, however, be frustrating to get a LazyInitializationException due to the session being cleared.
For example, consider the following example:
class AuthorService {
void updateAge(id, int age) {
def author = Author.get(id)
author.age = age
if (author.isTooOld()) {
throw new AuthorException("too old", author)
}
}
}
class AuthorController {
def authorService
def updateAge() {
try {
authorService.updateAge(params.id, params.int("age"))
}
catch(e) {
render "Author books ${e.author.books}"
}
}
}
In the above example the transaction will be rolled back if the Author's age exceeds the maximum value defined in the isTooOld() method, because an AuthorException is thrown. The AuthorException references the author, but when the books association is accessed a LazyInitializationException will be thrown because the underlying Hibernate session has been cleared.
To solve this problem you have a number of options. One is to ensure you query eagerly to get the data you will need:
class AuthorService {
...
void updateAge(id, int age) {
def author = Author.findById(id, [fetch:[books:"eager"]])
...
In this example the books association will be queried when retrieving the Author.
Another solution is to redirect the request after a transaction rollback:
class AuthorController {
AuthorService authorService
def updateAge() {
try {
authorService.updateAge(params.id, params.int("age"))
}
catch(e) {
flash.message = "Can't update age"
redirect action:"show", id:params.id
}
}
}
In this case a new request will deal with retrieving the Author again. And, finally a third solution is to retrieve the data for the Author again to make sure the session remains in the correct state:
class AuthorController {
def authorService
def updateAge() {
try {
authorService.updateAge(params.id, params.int("age"))
}
catch(e) {
def author = Author.read(params.id)
render "Author books ${author.books}"
}
}
}
A common use case is to rollback a transaction if there are validation errors. For example consider this service:
import grails.validation.ValidationException
class AuthorService {
void updateAge(id, int age) {
def author = Author.get(id)
author.age = age
if (!author.validate()) {
throw new ValidationException("Author is not valid", author.errors)
}
}
}
To re-render the same view that a transaction was rolled back in you can re-associate the errors with a refreshed instance before rendering:
import grails.validation.ValidationException
class AuthorController {
def authorService
def updateAge() {
try {
authorService.updateAge(params.id, params.int("age"))
}
catch (ValidationException e) {
def author = Author.read(params.id)
author.errors = e.errors
render view: "edit", model: [author:author]
}
}
}
By default, access to service methods is not synchronised, so nothing prevents concurrent execution of those methods. Services also support a number of scopes, including:
flow - In web flows the service will exist for the scope of the flow
conversation - In web flows the service will exist for the scope of the conversation, i.e. a root flow and its sub flows
session - A service is created for the scope of a user session
singleton (default) - Only one instance of the service ever exists
Services scoped to a flow or conversation must implement java.io.Serializable and can only be used in the context of web flows.
To enable one of the scopes, add a static scope property to your class whose value is one of the above, for example
static scope = "flow"
Starting with Grails 2.3, new applications are generated with configuration that defaults the scope of controllers to singleton.
If singleton controllers interact with prototype scoped services, the services effectively behave as per-controller singletons.
If non-singleton services are required, controller scope should be changed as well.
See Controllers and Scopes in the user guide for more information.
You can also configure whether the service is lazily initialized. By default, this is set to true, but you can disable this and make initialization eager with the lazyInit property:
static lazyInit = false
A key aspect of Grails services is the ability to use Spring Framework's dependency injection features. Grails supports "dependency injection by convention". In other words, you can use the property name representation of the class name of a service to automatically inject them into controllers, tag libraries, and so on.
As an example, given a service called BookService, if you define a property called bookService in a controller as follows:
class BookController {
def bookService
...
}
In this case, the Spring container will automatically inject an instance of that service based on its configured scope. All dependency injection is done by name. You can also specify the type as follows:
class AuthorService {
BookService bookService
}
To be consistent with standard JavaBean conventions, if the first 2 letters of the class name are upper case, the property name is the same as the class name. For example, the property name of the JDBCHelperService class would be JDBCHelperService, not jDBCHelperService or jdbcHelperService.
See section 8.8 of the JavaBean specification for more information on de-capitalization rules.
Be careful when injecting the non-default datasources. For example, using this config:
dataSources:
dataSource:
pooled: true
jmxExport: true
.....
secondary:
pooled: true
jmxExport: true
.....
You can inject the primary dataSource like you would expect:
class BookSqlService {
def dataSource
}
But to inject the secondary datasource, you have to use Spring’s Autowired injection or resources.groovy.
class BookSqlSecondaryService {
@Autowired
@Qualifier('dataSource_secondary')
def dataSource2
}
You can inject services in other services with the same technique. If you had an AuthorService that needed to use the BookService, declaring the AuthorService as follows would allow that:
class AuthorService {
def bookService
}
You can even inject services into domain classes and tag libraries, which can aid in the development of rich domain models and views:
class Book {
...
def bookService
def buyBook() {
bookService.buyBook(this)
}
}
The default bean name which is associated with a service can be problematic if there are multiple services with the same name defined in different packages. For example consider the situation where an application defines a service class named com.demo.ReportingService and the application uses a plugin named ReportingUtilities and that plugin provides a service class named com.reporting.util.ReportingService.
The default bean name for each of those would be reportingService so they would conflict with each other. Grails manages this by changing the default bean name for services provided by plugins by prefixing the bean name with the plugin name.
In the scenario described above the reportingService bean would be an instance of the com.demo.ReportingService class defined in the application and the reportingUtilitiesReportingService bean would be an instance of the com.reporting.util.ReportingService class provided by the ReportingUtilities plugin.
For all service beans provided by plugins, if there is no other service with the same name in the application or in any other plugin, a bean alias is created that does not include the plugin name. That alias points to the bean referred to by the name that does include the plugin name prefix.
For example, if the ReportingUtilities plugin provides a service named com.reporting.util.AuthorService and there is no other AuthorService in the application or in any of the plugins that the application is using then there will be a bean named reportingUtilitiesAuthorService which is an instance of this com.reporting.util.AuthorService class and there will be a bean alias defined in the context named authorService which points to that same bean.
Groovy is a dynamic language and by default Groovy uses a dynamic dispatch mechanism to carry out method calls and property access. This dynamic dispatch mechanism provides a lot of flexibility and power to the language. For example, it is possible to dynamically add methods to classes at runtime and it is possible to dynamically replace existing methods at runtime. Features like these are important and provide a lot of power to the language. However, there are times when you may want to disable this dynamic dispatch in favor of a more static dispatch mechanism and Groovy provides a way to do that. The way to tell the Groovy compiler that a particular class should be compiled statically is to mark the class with the groovy.transform.CompileStatic annotation as shown below.
import groovy.transform.CompileStatic
@CompileStatic
class MyClass {
// this class will be statically compiled...
}
See these notes on Groovy static compilation for more details on how CompileStatic works and why you might want to use it.
One limitation of using CompileStatic is that when you use it you give up access to the power and flexibility offered by dynamic dispatch. For example, in Grails you would not be able to invoke a GORM dynamic finder from a class that is marked with CompileStatic because the compiler cannot verify that the dynamic finder method exists, because it doesn’t exist at compile time. It may be that you want to take advantage of Groovy’s static compilation benefits without giving up access to dynamic dispatch for Grails specific things like dynamic finders and this is where grails.compiler.GrailsCompileStatic comes in. GrailsCompileStatic behaves just like CompileStatic but is aware of certain Grails features and allows access to those specific features to be accessed dynamically.
The GrailsCompileStatic annotation may be applied to a class or methods within a class.
import grails.compiler.GrailsCompileStatic
@GrailsCompileStatic
class SomeClass {
// all of the code in this class will be statically compiled
def methodOne() {
// ...
}
def methodTwo() {
// ...
}
def methodThree() {
// ...
}
}
import grails.compiler.GrailsCompileStatic
class SomeClass {
// methodOne and methodThree will be statically compiled
// methodTwo will be dynamically compiled
@GrailsCompileStatic
def methodOne() {
// ...
}
def methodTwo() {
// ...
}
@GrailsCompileStatic
def methodThree() {
// ...
}
}
It is possible to mark a class with GrailsCompileStatic and exclude specific methods by marking them with GrailsCompileStatic and specifying that the type checking should be skipped for that particular method as shown below.
import grails.compiler.GrailsCompileStatic
import groovy.transform.TypeCheckingMode
@GrailsCompileStatic
class SomeClass {
// methodOne and methodThree will be statically compiled
// methodTwo will be dynamically compiled
def methodOne() {
// ...
}
@GrailsCompileStatic(TypeCheckingMode.SKIP)
def methodTwo() {
// ...
}
def methodThree() {
// ...
}
}
Code that is marked with GrailsCompileStatic will all be statically compiled except for Grails specific interactions that cannot be statically compiled but that GrailsCompileStatic can identify as permissible for dynamic dispatch. These include things like invoking dynamic finders and DSL code in configuration blocks like constraints and mapping closures in domain classes.
Care must be taken when deciding to statically compile code. There are benefits associated with static compilation but in order to take advantage of those benefits you are giving up the power and flexibility of dynamic dispatch. For example if code is statically compiled it cannot take advantage of runtime metaprogramming enhancements which may be provided by plugins.
The grails.compiler.GrailsTypeChecked annotation works a lot like the GrailsCompileStatic annotation except that it only enables static type checking, not static compilation. This affords compile time feedback for expressions which cannot be validated statically at compile time while still leaving dynamic dispatch in place for the class.
import grails.compiler.GrailsTypeChecked
@GrailsTypeChecked
class SomeClass {
// all of the code in this class will be statically type
// checked and will be dynamically dispatched at runtime
def methodOne() {
// ...
}
def methodTwo() {
// ...
}
def methodThree() {
// ...
}
}
Automated testing is a key part of Grails. Hence, Grails provides many ways to make testing easier, from low-level unit testing to high-level functional tests. This section details the different capabilities that Grails offers for testing.
The first thing to be aware of is that all of the create-* and generate-* commands create unit or integration tests automatically. For example if you run the create-controller command as follows:
grails create-controller com.acme.app.simple
Grails will create a controller at grails-app/controllers/com/acme/app/SimpleController.groovy, and also a unit test at src/test/groovy/com/acme/app/SimpleControllerSpec.groovy. What Grails won’t do however is populate the logic inside the test! That is left up to you.
Tests are run with the test-app command:
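grails test-app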
The command will produce output such as:
-------------------------------------------------------
Running Unit Tests...
Running test FooTests...FAILURE
Unit Tests Completed in 464ms ...
-------------------------------------------------------
Tests failed: 0 errors, 1 failures
whilst showing the reason for each test failure.
Grails writes both plain text and HTML test reports to the target/test-reports directory, along with the original XML files. The HTML reports are generally the best ones to look at.
Using Grails' interactive mode confers some distinct advantages when executing tests. First, the tests will execute significantly faster on the second and subsequent runs. Second, a shortcut is available to open the HTML reports in your browser.
You can also run your unit tests from within most IDEs.
You can selectively target the test(s) to be run in different ways. To run all tests for a controller named SimpleController you would run:
grails test-app SimpleController
This will run any tests for the class named SimpleController. Wildcards can be used…
grails test-app *Controller
This will test all classes ending in Controller. Package names can optionally be specified…
grails test-app some.org.*Controller
or to run all tests in a package…
grails test-app some.org.*
or to run all tests in a package including subpackages…
grails test-app some.org.**.*
You can also target particular test methods…
grails test-app SimpleController.testLogin
This will run the testLogin test in the SimpleController tests. You can specify as many patterns in combination as you like…
testLogin
grails test-app some.org.* SimpleController.testLogin BookController
grails test-app *.ProductControllerSpec
In order to debug your tests via a remote debugger, you can add --debug-jvm after grails in any commands, like so:
--debug-jvm
grails --debug-jvm test-app
This will open the default Java remote debugging port, 5005, for you to attach a remote debugger from your editor / IDE of choice.
In addition to targeting certain tests, you can also target test phases. By default Grails has two testing phases: unit and integration.
To execute unit tests you can run:
grails test-app -unit
To run integration tests you would run…
grails test-app -integration
Test and phase targeting can be applied at the same time:
grails test-app some.org.**.* -unit
This would run all tests in the unit phase that are in the package some.org or a subpackage.
Unit tests are tests at the "unit" level. In other words you are testing individual methods or blocks of code without consideration for surrounding infrastructure. Unit tests are typically run without the presence of physical resources that involve I/O such as databases, socket connections or files. This is to ensure they run as quickly as possible since quick feedback is important.
Since Grails 3.3, the Grails Testing Support Framework is used for all unit tests. This support provides a set of traits.
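As a minimal sketch of what a trait-based unit test looks like (using the SimpleService created earlier in this guide; the package, class name and assertion are illustrative rather than generated):
import grails.testing.services.ServiceUnitTest
import helloworld.SimpleService
import spock.lang.Specification

// ServiceUnitTest instantiates the class under test and exposes it as 'service'.
class SimpleServiceSpec extends Specification implements ServiceUnitTest<SimpleService> {

    void "the service is available to the test"() {
        expect:
        service != null
    }
}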
For more information on writing tests with Grails Testing Support see the dedicated documentation.
Versions of Grails below 3.2 used the Grails Test Mixin Framework which was based on the @TestMixin AST transformation. This library has been superseded by the simpler and more IDE-friendly trait-based implementation. However you can still use it by adding the following dependency to your Grails application:
testCompile "org.grails:grails-test-mixins:3.3.0"
This may be useful if you are, for example, upgrading an existing application to Grails 3.3.x.
Integration tests differ from unit tests in that you have full access to the Grails environment within the test. You can create an integration test using the create-integration-test command:
$ grails create-integration-test Example
The above command will create a new integration test at the location src/integration-test/groovy/<PACKAGE>/ExampleSpec.groovy.
Grails uses the test environment for integration tests and loads the application prior to the first test run. All tests use the same application state.
Integration test methods run inside their own database transaction by default, which is rolled back at the end of each test method. This means that data saved during a test is not persisted to the database (which is shared across all tests). The default generated integration test template includes the Rollback annotation:
import grails.testing.mixin.integration.Integration
import grails.gorm.transactions.*
import spock.lang.*
@Integration
@Rollback
class ExampleSpec extends Specification {
...
void "test something"() {
expect:"fix me"
true == false
}
}
The Rollback annotation ensures that each test method runs in a transaction that is rolled back. Generally this is desirable because you do not want your tests depending on order or application state.
Rollback
In Grails 3.0 tests rely on grails.gorm.transactions.Rollback annotation to bind the session in integration tests. Though each test method transaction is rolled back, the setup() method uses a separate transaction that is not rolled back.
Data will persist to the database and will need to be cleaned up manually if setup() sets up data and persists them as shown in the below sample:
setup()
import grails.testing.mixin.integration.Integration
import grails.gorm.transactions.*
import spock.lang.*
@Integration
@Rollback
class BookSpec extends Specification {
void setup() {
// Below line would persist and not roll back
new Book(name: 'Grails in Action').save(flush: true)
}
void "test something"() {
expect:
Book.count() == 1
}
}
To automatically roll back setup logic, any persistence operations need to be called from the test method itself so that they are run within the test method’s rolled back transaction. Similar to usage of the setupData() method shown below:
setupData()
import grails.testing.mixin.integration.Integration
import grails.gorm.transactions.*
import spock.lang.*
@Integration
@Rollback
class BookSpec extends Specification {
void setupData() {
// Below line would roll back
new Book(name: 'Grails in Action').save(flush: true)
}
void "test something"() {
given:
setupData()
expect:
Book.count() == 1
}
}
Another transactional approach could be to use Spring’s @Rollback instead.
import grails.testing.mixin.integration.Integration
import org.springframework.test.annotation.Rollback
import spock.lang.*
@Integration
@Rollback
class BookSpec extends Specification {
void setup() {
new Book(name: 'Grails in Action').save(flush: true)
}
void "test something"() {
expect:
Book.count() == 1
}
}
If you do have a series of tests that will share state you can remove the Rollback and the last test in the suite should feature the DirtiesContext annotation which will shutdown the environment and restart it fresh (note that this will have an impact on test run times).
To obtain a reference to a bean you can use the Autowired annotation. For example:
...
import org.springframework.beans.factory.annotation.*
@Integration
@Rollback
class ExampleServiceSpec extends Specification {
@Autowired
ExampleService exampleService
...
void "Test example service"() {
expect:
exampleService.countExamples() == 0
}
}
To integration test controllers it is recommended you use create-functional-test command to create a Geb functional test. See the following section on functional testing for more information.
Functional tests involve making HTTP requests against the running application and verifying the resultant behaviour. This is useful for end-to-end testing scenarios, such as making REST calls against a JSON API.
Grails by default ships with support for writing functional tests using the Geb framework. To create a functional test you can use the create-functional-test command which will create a new functional test:
create-functional-test
$ grails create-functional-test MyFunctional
The above command will create a new Spock spec called MyFunctionalSpec.groovy in the src/integration-test/groovy directory. The test is annotated with the Integration annotation to indicate it is an integration test and extends the GebSpec super class:
MyFunctionalSpec.groovy
src/integration-test/groovy
GebSpec
@Integration
class HomeSpec extends GebSpec {
def setup() {
}
def cleanup() {
}
void "Test the home page renders correctly"() {
when:"The home page is visited"
go '/'
then:"The title is correct"
$('title').text() == "Welcome to Grails"
}
}
When the test is run the application container will be loaded up in the background and you can send requests to the running application using the Geb API.
Note that the application is only loaded once for the entire test run, so functional tests share the state of the application across the whole suite.
In addition the application is loaded in the JVM as the test, this means that the test has full access to the application state and can interact directly with data services such as GORM to setup and cleanup test data.
The Integration annotation supports an optional applicationClass attribute which may be used to specify the application class to use for the functional test. The class must extend GrailsAutoConfiguration.
Integration
applicationClass
@Integration(applicationClass=com.demo.Application)
class HomeSpec extends GebSpec {
// ...
}
If the applicationClass is not specified then the test runtime environment will attempt to locate the application class dynamically which can be problematic in multiproject builds where multiple application classes may be present.
When running the server port by default will be randomly assigned. The Integration annotation adds a property of serverPort to the test class that you can use if you want to know what port the application is running on this isn’t needed if you are extending the GebSpec as shown above but can be useful information.
serverPort
If you want to run the tests on a fixed port (defined by the server.port configuration property), you need to manually annotate your test with @SpringBootTest:
server.port
@SpringBootTest
import grails.testing.mixin.integration.Integration
import org.springframework.boot.test.context.SpringBootTest
import spock.lang.Specification
@Integration
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT)
class MySpec extends Specification {
// ...
}.
The scaffolding includes locale specific labels for domain classes and domain fields. For example, if you have a Book domain class with a title field:.
Grails is first and foremost a web application framework, but it is also a platform. By exposing a number of extension points that let you extend anything from the command line interface to the runtime configuration engine, Grails can be customised to suit almost any needs. To hook into this platform, all you need to do is create a plugin.
Extending the platform may sound complicated, but plugins can range from trivially simple to incredibly powerful. If you know how to build a Grails application, you’ll know how to create a plugin for sharing a data model or some static resources.
Creating a Grails plugin is a simple matter of running the command:
grails create-plugin <<PLUGIN NAME>>
This will create a web-plugin project for the name you specify. For example running grails create-plugin example would create a new web-plugin project called example.
grails create-plugin example
example
In Grails 3.0 you should consider whether the plugin you create requires a web environment or whether the plugin can be used with other profiles. If your plugin does not require a web environment then use the "plugin" profile instead of the default "web-plugin" profile:
grails create-plugin <<PLUGIN NAME>> --profile=plugin
Make sure the plugin name does not contain more than one capital letter in a row, or it won’t work. Camel case is fine, though.
Being a regular Grails project has a number of benefits in that you can immediately test your plugin by running (if the plugin targets the "web" profile):
The structure of a Grails plugin is very nearly the same as a Grails application project’s except that in the src/main/groovy directory under the plugin package structure you will find a plugin descriptor class (a class that ends in "GrailsPlugin"). For example:
import grails.plugins.*
class ExampleGrailsPlugin extends Plugin {
...
}
All plugins must have this class under the src/main/groovy directory, otherwise they are not regarded as a plugin. The plugin class defines metadata about the plugin, and optionally various hooks into plugin extension points (covered shortly).
You can also provide additional information about your plugin using several special properties:
title - short one-sentence description of your plugin
grailsVersion - The version range of Grails that the plugin supports. eg. "1.2 > *" (indicating 1.2 or higher)
grailsVersion
author - plugin author’s name
authorEmail - plugin author’s contact e-mail
authorEmail
developers - Any additional developers beyond the author specified above.
developers
description - full multi-line description of plugin’s features
documentation - URL of the plugin’s documentation
documentation
license - License of the plugin
license
issueManagement - Issue Tracker of the plugin
issueManagement
scm - Source code management location of the plugin
scm
Here is a slimmed down example from the Quartz Grails plugin:
package quartz
@Slf4j
class QuartzGrailsPlugin extends Plugin {
// the version or versions of Grails the plugin is designed for
def grailsVersion = "3.0.0.BUILD-SNAPSHOT > *"
// resources that are excluded from plugin packaging
def pluginExcludes = [
"grails-app/views/error.gsp"
]
def title = "Quartz" // Headline display name of the plugin
def author = "Jeff Brown"
def authorEmail = "[email protected]"
def description = '''\
Adds Quartz job scheduling features
'''
def profiles = ['web']
List loadAfter = ['hibernate3', 'hibernate4', 'hibernate5', 'services']
def documentation = ""
def license = "APACHE"
def issueManagement = [ system: "Github Issues", url: "" ]
def developers = [
[ name: "Joe Dev", email: "[email protected]" ]
]
def scm = [ url: "" ]
Closure doWithSpring()......
To make your plugin available for use in a Grails application run the install command:
install
grails install
This will install the plugin into your local Maven cache. Then to use the plugin within an application declare a dependency on the plugin in your build.gradle file and include mavenLocal() in your repositories hash:
mavenLocal()
...
repositories {
...
mavenLocal()
}
...
compile "org.grails.plugins:quartz:0.1"
If you wish to setup a plugin as part of a multi project build then follow these steps.
Step 1: Create the application and the plugin
Using the grails command create an application and a plugin:
$ grails create-app myapp
$ grails create-plugin myplugin
Step 2: Create a settings.gradle file
In the same directory create a settings.gradle file with the following contents:
settings.gradle
include "myapp", "myplugin"
The directory structure should be as follows:
PROJECT_DIR
- settings.gradle
- myapp
- build.gradle
- myplugin
- build.gradle
Step 3: Declare a project dependency on the plugin
Within the build.gradle of the application declare a dependency on the plugin within the plugins block:
plugins
grails {
plugins {
compile project(':myplugin')
}
}
Step 4: Configure the plugin to enable reloading
In the plugin directory, add or modify the gradle.properties file. A new property exploded=true needs to be set in order for the plugin to add the exploded directories to the classpath.
exploded=true
Step 5: Run the application
Now run the application using the grails run-app command from the root of the application directory, you can use the verbose flag to see the Gradle output:
verbose
$ cd myapp
$ grails run-app -verbose
You will notice from the Gradle output that plugins sources are built and placed on the classpath of your application:
:myplugin:compileAstJava UP-TO-DATE
:myplugin:compileAstGroovy UP-TO-DATE
:myplugin:processAstResources UP-TO-DATE
:myplugin:astClasses UP-TO-DATE
:myplugin:compileJava UP-TO-DATE
:myplugin:configScript UP-TO-DATE
:myplugin:compileGroovy
:myplugin:copyAssets UP-TO-DATE
:myplugin:copyCommands UP-TO-DATE
:myplugin:copyTemplates UP-TO-DATE
:myplugin:processResources
:myapp:compileJava UP-TO-DATE
:myapp:compileGroovy
:myapp:processResources UP-TO-DATE
:myapp:classes
:myapp:findMainClass
:myapp:bootRun
Grails application running at in environment: development
Although the create-plugin command creates certain files for you so that the plugin can be run as a Grails application, not all of these files are included when packaging a plugin. The following is a list of artefacts created, but not included by package-plugin:
grails-app/build.gradle (although it is used to generate dependencies.groovy)
grails-app/build.gradle
dependencies.groovy
grails-app/conf/application.yml (renamed to plugin.yml)
Everything within /src/test/*\*
/src/test/*\*
SCM management files within *\*/.svn/*\* and *\*/CVS/*\*
*\*/.svn/*\*
*\*/CVS/*\*
When developing a plugin you may create test classes and sources that are used during the development and testing of the plugin but should not be exported to the application.
To exclude test sources you need to modify the pluginExcludes property of the plugin descriptor AND exclude the resources inside your build.gradle file. For example say you have some classes under the com.demo package that are in your plugin source tree but should not be packaged in the application. In your plugin descriptor you should exclude these:
pluginExcludes
com.demo
// resources that should be loaded by the plugin once installed in the application
def pluginExcludes = [
'**/com/demo/**'
]
And in your build.gradle you should exclude the compiled classes from the JAR file:
jar {
exclude "com/demo/**/**"
}
In Grails 2.x it was possible to specify inline plugins in BuildConfig, in Grails 3.x this functionality has been replaced by Gradle’s multi-project build feature.
BuildConfig
To set up a multi project build create an appliation and a plugin in a parent directory:
Then create a settings.gradle file in the parent directory specifying the location of your application and plugin:
include 'myapp', 'myplugin'
Finally add a dependency in your application’s build.gradle on the plugin:
compile project(':myplugin')
Using this technique you have achieved the equivalent of inline plugins from Grails 2.x.
The preferred way to distribute plugin is to publish to the official Grails Central Plugin Repository. This will make your plugin visible to the list-plugins command:
grails list-plugins
which lists all plugins that are in the central repository. Your plugin will also be available to the plugin-info command:
grails plugin-info [plugin-name]
which prints extra information about it, such as its description, who wrote, etc.
A plugin can add new commands to the Grails 3.0 interactive shell in one of two ways. First, using the create-script you can create a code generation script which will become available to the application. The create-script command will create the script in the src/main/scripts directory:
create-script
+ src/main/scripts <-- additional scripts here
+ grails-app
+ controllers
+ services
+ etc.
Code generation scripts can be used to create artefacts within the project tree and automate interactions with Gradle.
If you want to create a new shell command that interacts with a loaded Grails application instance then you should use the create-command command:
$ grails create-command MyExampleCommand
This will create a file called grails-app/commands/PACKAGE_PATH/MyExampleCommand.groovy that extends ApplicationCommand:
grails-app/commands/PACKAGE_PATH/MyExampleCommand.groovy
import grails.dev.commands.*
class MyExampleCommand implements ApplicationCommand {
boolean handle(ExecutionContext ctx) {
println "Hello World"
return true
}
}
An ApplicationCommand has access to the GrailsApplication instance and is subject to autowiring like any other Spring bean.
ApplicationCommand
You can also inform Grails to skip the execution of Bootstrap.groovy files with a simple property in your command:
Bootstrap.groovy
class MyExampleCommand implements ApplicationCommand {
boolean skipBootstrap = true
boolean handle(ExecutionContext ctx) {
...
}
}
For each ApplicationCommand present Grails will create a shell command and a Gradle task to invoke the ApplicationCommand. In the above example you can invoke the MyExampleCommand class using either:
MyExampleCommand
$ grails my-example
Or
$ gradle myExample
The Grails version is all lower case hyphen separated and excludes the "Command" suffix.
The main difference between code generation scripts and ApplicationCommand instances is that the latter has full access to the Grails application state and hence can be used to perform tasks that interactive with the database, call into GORM etc.
In Grails 2.x Gant scripts could be used to perform both these tasks, in Grails 3.x code generation and interacting with runtime application state has been cleanly separated.
A plugin can add new artifacts by creating the relevant file within the grails-app tree.
+ grails-app
+ controllers <-- additional controllers here
+ services <-- additional services here
+ etc. <-- additional XXX here
When a plugin provides a controller it may also provide default views to be rendered. This is an excellent way to modularize your application through plugins. Grails' view resolution mechanism will first look for the view in the application it is installed into and if that fails will attempt to look for the view within the plugin. This means that you can override views provided by a plugin by creating corresponding GSPs in the application’s grails-app/views directory.
For example, consider a controller called BookController that’s provided by an 'amazon' plugin. If the action being executed is list, Grails will first look for a view called grails-app/views/book/list.gsp then if that fails it will look for the same view relative to the plugin.
grails-app/views/book/list.gsp
However if the view uses templates that are also provided by the plugin then the following syntax may be necessary:
<g:render
Note the usage of the plugin attribute, which contains the name of the plugin where the template resides. If this is not specified then Grails will look for the template relative to the application.
By default Grails excludes the following files during the packaging process:
plugin.yml
The default UrlMappings.groovy file is not excluded, so remove any mappings that are not required for the plugin to work. You are also free to add a UrlMappings definition under a different name which will be included. For example a file called grails-app/controllers/BlogUrlMappings.groovy is fine.
grails-app/controllers/BlogUrlMappings.groovy
The list of excludes is extensible with the pluginExcludes property:
// resources that are excluded from plugin packaging
def pluginExcludes = [
"grails-app/views/error.gsp"
]
This is useful for example to include demo or test resources in the plugin repository, but not include them in the final distribution.
Before looking at providing runtime configuration based on conventions you first need to understand how to evaluate those conventions from a plugin. Every plugin has an implicit application variable which is an instance of the GrailsApplication interface.
The GrailsApplication interface provides methods to evaluate the conventions within the project and internally stores references to all artifact classes within your application.
Artifacts implement the GrailsClass interface, which represents a Grails resource such as a controller or a tag library. For example to get all GrailsClass instances you can do:
GrailsClass
for (grailsClass in application.allClasses) {
println grailsClass.name
}
GrailsApplication has a few "magic" properties to narrow the type of artefact you are interested in. For example to access controllers you can use:
for (controllerClass in application.controllerClasses) {
println controllerClass.name
}
The dynamic method conventions are as follows:
*Classes - Retrieves all the classes for a particular artefact name. For example application.controllerClasses.
*Classes
application.controllerClasses
get*Class - Retrieves a named class for a particular artefact. For example application.getControllerClass("PersonController")
get*Class
application.getControllerClass("PersonController")
is*Class - Returns true if the given class is of the given artefact type. For example application.isControllerClass(PersonController)
is*Class
application.isControllerClass(PersonController)
The GrailsClass interface has a number of useful methods that let you further evaluate and work with the conventions. These include:
getPropertyValue - Gets the initial value of the given property on the class
hasProperty - Returns true if the class has the specified property
hasProperty
newInstance - Creates a new instance of this class.
newInstance
getName - Returns the logical name of the class in the application without the trailing convention part if applicable
getName
getShortName - Returns the short name of the class without package prefix
getShortName
getFullName - Returns the full name of the class in the application with the trailing convention part and with the package name
getFullName
getPropertyName - Returns the name of the class as a property name
getPropertyName
getLogicalPropertyName - Returns the logical property name of the class in the application without the trailing convention part if applicable
getLogicalPropertyName
getNaturalName - Returns the name of the property in natural terms (e.g. 'lastName' becomes 'Last Name')
getNaturalName
getPackageName - Returns the package name
getPackageName
For a full reference refer to the javadoc API.
Grails provides a number of hooks to leverage the different parts of the system and perform runtime configuration by convention.
First, you can hook in Grails runtime configuration overriding the doWithSpring method from the Plugin class and returning a closure that defines additional beans. For example the following snippet is from one of the core Grails plugins that provides i18n support:
import org.springframework.web.servlet.i18n.CookieLocaleResolver
import org.springframework.web.servlet.i18n.LocaleChangeInterceptor
import org.springframework.context.support.ReloadableResourceBundleMessageSource
import grails.plugins.*
class I18nGrailsPlugin extends Plugin {
def version = "0.1"
Closure doWithSpring() {{->
messageSource(ReloadableResourceBundleMessageSource) {
basename = "WEB-INF/grails-app/i18n/messages"
}
localeChangeInterceptor(LocaleChangeInterceptor) {
paramName = "lang"
}
localeResolver(CookieLocaleResolver)
}}
}
This plugin configures the Grails messageSource bean and a couple of other beans to manage Locale resolution and switching. It using the Spring Bean Builder syntax to do so.
In previous versions of Grails it was possible to dynamically modify the generated web.xml. In Grails 3.x there is no web.xml file and it is not possible to programmatically modify the web.xml file anymore.
web.xml
However, it is possible to perform the most commons tasks of modifying the Servlet environment in Grails 3.x.
If you want to add a new Servlet instance the simplest way is simply to define a new Spring bean in the doWithSpring method:
Closure doWithSpring() {{->
myServlet(MyServlet)
}}
If you need to customize the servlet you can use Spring Boot’s ServletRegistrationBean:
Closure doWithSpring() {{->
myServlet(ServletRegistrationBean, new MyServlet(), "/myServlet/*") {
loadOnStartup = 2
}
}}
Just like Servlets, the simplest way to configure a new filter is to simply define a Spring bean:
Closure doWithSpring() {{->
myFilter(MyFilter)
}}
However, if you want to control the order of filter registrations you will need to use Spring Boot’s FilterRegistrationBean:
myFilter(FilterRegistrationBean) {
filter = bean(MyFilter)
urlPatterns = ['/*']
order = Ordered.HIGHEST_PRECEDENCE
}
GrailsWebRequestFilter
HiddenHttpMethodFilter
Sometimes it is useful to be able do some runtime configuration after the Spring ApplicationContext has been built. In this case you can define a doWithApplicationContext closure property.
class SimplePlugin extends Plugin{
def name = "simple"
def version = "1.1"
@Override
void doWithApplicationContext() {
def sessionFactory = applicationContext.sessionFactory
// do something here with session factory
}
}
Grails 3.0 makes it easy to add new traits to existing artefact types from a plugin. For example say you wanted to add methods for manipulating dates to controllers. This can be done by defining a trait in src/main/groovy:
package myplugin
@Enhances("Controller")
trait DateTrait {
Date currentDate() {
return new Date()
}
}
The @Enhances annotation defines the types of artefacts that the trait should be applied to.
@Enhances
As an alternative to using the @Enhances annotation above, you can implement a TraitInjector to tell Grails which artefacts you want to inject the trait into at compile time:
package myplugin
@CompileStatic
class ControllerTraitInjector implements TraitInjector {
@Override
Class getTrait() {
SomeTrait
}
@Override
String[] getArtefactTypes() {
['Controller'] as String[]
}
}
The above TraitInjector will add the SomeTrait to all controllers. The getArtefactTypes method defines the types of artefacts that the trait should be applied to.
TraitInjector
SomeTrait
getArtefactTypes
A TraitInjector implementation can also implement the SupportsClassNode interface to apply traits to only those artefacts which satisfy a custom requirement.
For example, if a trait should only be applied if the target artefact class has a specific annotation, it can be done as below
package myplugin
@CompileStatic
class AnnotationBasedTraitInjector implements TraitInjector, SupportsClassNode {
@Override
Class getTrait() {
SomeTrait
}
@Override
String[] getArtefactTypes() {
['Controller'] as String[]
}
boolean supports(ClassNode classNode) {
return GrailsASTUtils.hasAnnotation(classNode, SomeAnnotation)
}
}
Above TraitInjector will add the SomeTrait to only those controllers which has the SomeAnnotation declared.
SomeAnnotation
The framework discovers trait injectors by way of a META-INF/grails.factories descriptor that is in the .jar file. This descriptor is automatically generated. The descriptor generated for the code shown above would look like this:
META-INF/grails.factories
#Grails Factories File
grails.compiler.traits.TraitInjector=
myplugin.ControllerTraitInjector,myplugin.DateTraitTraitInjector
That file is generated automatically and added to the .jar file at build time. If for any reason the application defines its own grails.factories file at src/main/resources/META-INF/grails.factories, it is important that the trait injectors be explicitly defined in that file. The auto-generated metadata is only reliable if the application does not define its own src/main/resources/META-INF/grails.factores file.
grails.factories
src/main/resources/META-INF/grails.factories
src/main/resources/META-INF/grails.factores
Grails plugins let you register dynamic methods with any Grails-managed or other class at runtime. This work is done in a doWithDynamicMethods method.
doWithDynamicMethods
class ExamplePlugin extends Plugin {
void doWithDynamicMethods() {
for (controllerClass in grailsApplication.controllerClasses) {
controllerClass.metaClass.myNewMethod = {-> println "hello world" }
}
}
}
In this case we use the implicit application object to get a reference to all of the controller classes' MetaClass instances and add a new method called myNewMethod to each controller. If you know beforehand the class you wish the add a method to you can simply reference its metaClass property.
myNewMethod
metaClass
For example we can add a new method swapCase to java.lang.String:
swapCase
java.lang.String
class ExamplePlugin extends Plugin {
@Override
void doWithDynamicMethods() {
String.metaClass.swapCase = {->
def sb = new StringBuilder()
delegate.each {
sb << (Character.isUpperCase(it as char) ?
Character.toLowerCase(it as char) :
Character.toUpperCase(it as char))
}
sb.toString()
}
assert "UpAndDown" == "uPaNDdOWN".swapCase()
}
}
The doWithDynamicMethods closure gets passed the Spring ApplicationContext instance. This is useful as it lets you interact with objects within it. For example if you were implementing a method to interact with Hibernate you could use the SessionFactory instance in combination with a HibernateTemplate:
SessionFactory
HibernateTemplate
import org.springframework.orm.hibernate3.HibernateTemplate
class ExampleHibernatePlugin extends Plugin{
void doWithDynamicMethods() {
for (domainClass in grailsApplication.domainClasses) {
domainClass.metaClass.static.load = { Long id->
def sf = applicationContext.sessionFactory
def template = new HibernateTemplate(sf)
template.load(delegate, id)
}
}
}
}
Also because of the autowiring and dependency injection capability of the Spring container you can implement more powerful dynamic constructors that use the application context to wire dependencies into your object at runtime:
class MyConstructorPlugin {
void doWithDynamicMethods()
for (domainClass in grailsApplication.domainClasses) {
domainClass.metaClass.constructor = {->
return applicationContext.getBean(domainClass.name)
}
}
}
}
Here we actually replace the default constructor with one that looks up prototyped Spring beans instead!
Often it is valuable to monitor resources for changes and perform some action when they occur. This is how Grails implements advanced reloading of application state at runtime. For example, consider this simplified snippet from the Grails ServicesPlugin:
ServicesPlugin
class ServicesGrailsPlugin extends Plugin {
...
def watchedResources = "file:./grails-app/services/**/*Service.groovy"
...
void onChange( Map<String, Object> event) {
if (event.source) {
def serviceClass = grailsApplication.addServiceClass(event.source)
def serviceName = "${serviceClass.propertyName}"
beans {
"$serviceName"(serviceClass.getClazz()) { bean ->
bean.autowire = true
}
}
}
}
}
First it defines watchedResources as either a String or a List of strings that contain either the references or patterns of the resources to watch. If the watched resources specify a Groovy file, when it is changed it will automatically be reloaded and passed into the onChange closure in the event object.
watchedResources
onChange
event
The event object defines a number of useful properties:
event.source - The source of the event, either the reloaded Class or a Spring Resource
event.source
event.ctx - The Spring ApplicationContext instance
event.ctx
< | http://docs.grails.org/4.0.4/guide/single.html | 2021-01-15T18:46:52 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.grails.org |
Last modified: May 13, 2020
Overview
Due to networking requirements, you cannot run an IPv6-only cPanel & WHM server. You must have at least one IPv4 address.
This interface adds or deletes an IPv6 address range from the server. Use this interface to add at least one IPv6 address range before you assign dedicated IPv6 addresses to a user in WHM’s Assign IPv6 Address interface (WHM >> Home >> IP Functions >> Assign IPv6 Address).
An IPv6 address range cannot contain a server’s shared IP address. The system will not allow you to set a shared IP address that exists within any configured IPv6 address range.
Getting started
Before you use this interface, you must ensure that IPv6 functions properly on your server:
For IPv6 to function on a cPanel & WHM server, the
cpsrvd daemonmust listen on IPv6 addresses. To enable this functionality, select On for the Listen on IPv6 Addresses setting in the System section of WHM’s Tweak Settings interface (WHM >> Home >> Server Configuration >> Tweak Settings).
Use the steps in our Guide to IPv6 documentation to check whether IPv6 functions properly.
Add Range
To add a range of IPv6 addresses, perform the following steps:
Click Add Range.
Enter a name for the IPv6 address range in the Range Name text box.Important:
-. This description must contain 256 characters or fewer.
Click Add Range to add the IPv6 address range, or click Cancel to close the form.
IPv6 Address Ranges table
The IPv6 Address Ranges table lists information about each IPv6 address range on the server.
IP Range Details
This column lists the following information for each IPv6 address range on the server:
The range name.
The address range.
The range notes.
RESERVED, if you selected Reserved as the address range’s type.
Actions
To delete an IPv6 address range, click Delete Range for that range in the IPv6 Address Ranges table, and then click OK.
You cannot delete a range that currently belongs to a user on your server. To remove an IPv6 address range from an account, use WHM’s Assign IPv6 Address interface (WHM >> Home >> IP Functions >> Assign IPv6 Address). | https://docs.cpanel.net/whm/ip-functions/ipv6-ranges/ | 2021-01-15T17:41:04 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.cpanel.net |
jax.numpy.sort¶
jax.numpy.
sort(a, axis=-1, kind='quicksort', order=None)[source]¶
Return a sorted copy of an array.
LAX-backend implementation of
sort(). Original docstring below.
- Parameters
a (array_like) – Array to be sorted.
axis (int or None, optional) – Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis.
kind ({'quicksort', 'mergesort', 'heapsort', 'stable'}, optional) – Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort or radix sort under the covers and, in general, the actual implementation will vary with data type. The ‘mergesort’ option is retained for backwards compatibility.
order (str
sorted_array – Array of the same type and shape as a.
- Return type
- four. introsort. When sorting does not make enough progress it switches to heapsort. This implementation makes quicksort O(n*log(n)) in the worst case.
‘stable’ automatically chooses the best stable sorting algorithm for the data type being sorted. It, along with ‘mergesort’ is currently mapped to timsort or radix sort depending on the data type. timsort details, refer to CPython listsort.txt. ‘mergesort’ and ‘stable’ are mapped to radix sort for integer data types. Radix sort is an O(n) sort instead of O(n log n).
Changed in version 1.18.0.
NaT now sorts to the end of arrays for consistency with NaN.')]) | https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.sort.html | 2021-01-15T17:45:01 | CC-MAIN-2021-04 | 1610703495936.3 | [] | jax.readthedocs.io |
QtMultimedia.audiooverview
Audio Features
Qt Multimedia offers a range of audio classes, covering both low and high level approaches to audio input, output and processing. In addition to traditional audio usage, the Qt Audio Engine QML types offer high level 3D positional audio for QML applications. See that documentation for more information.
Audio Implementation Details
Playing Compressed Audio
For playing media or audio files that are not simple, uncompressed audio, you can use the QMediaPlayer C++ class, or the Audio and MediaPlayer QML types. The QMediaPlayer class and associated QML types are also capable of playing video, if required. The compressed audio formats supported does depend on the operating system environment, and also what media plugins the user may have installed.
Here is how you play a local file using C++:
player = new QMediaPlayer; // ... player->setMedia(QUrl::fromLocalFile("/Users/me/Music/coolsong.mp3")); player->setVolume(50); player->play();
You can also put files (even remote URLs) into a playlist:
player = new QMediaPlayer; playlist = new QMediaPlaylist(player); playlist->addMedia(QUrl("")); playlist->addMedia(QUrl("")); // ... playlist->setCurrentIndex(1); player->play();
Recording Audio to a File
For recording audio to a file, the QAudioRecorder class allows you to compress audio data from an input device and record it.")); audioRecorder->record();
Low Latency Sound Effects
In addition to the raw access to sound devices described above, the QSoundEffect class (and SoundEffect QML type) offers a slightly higher level way to play sounds. These classes allow you to specify a WAV format file which can then be played with low latency when necessary. Both QSoundEffect and SoundEffect have essentially the same API.
You can adjust the number of loops a sound effect is played, as well as the volume (or muting) of the effect.
For older, Qt 4.x based applications QSound is also available. Applications are recommended to use QSoundEffect where possible.
Monitoring Audio Data During Playback or Recording
The QAudioProbe class allows you to monitor audio data being played or recorded in the higher level classes like QMediaPlayer, QCamera and QAudioRecorder. After creating your high level class, you can simply set the source of the probe to your class, and receive audio buffers as they are processed. This is useful for several audio processing tasks, particularly for visualization or adjusting gain. You cannot modify the buffers, and they may arrive at a slightly different time than the media pipeline processes them.
Here's an example of installing a probe during recording:
Low Level Audio Playback and RecordingAudioOutput class offers raw audio data output, while QAudioInput offers raw audio data input. Both classes have adjustable buffers and latency, so they are suitable for both low latency use cases (like games or VOIP) and high latency (like music playback). The available hardware determines what audio outputs and inputs are available.
Push and Pull
The low level audio classes can operate in two modes -
push and
pull. In
pull mode, the audio device is started by giving it a QIODevice. For an output device, the QAudioOutput class will pull data from the QIODevice (using QIODevice::read()) when more audio data is required. Conversely, for
pull mode with QAudioInput,.setCodec("audio/x-raw"); desiredFormat.setSampleType(QAudioFormat::UnSignedInt); desiredFormat.setSampleRate(48000); desiredFormat.setSampleSize(16); QAudioDecoder *decoder = new QAudioDecoder(this); decoder->setAudioFormat(desiredFormat); decoder->setSourceFilename("level1.mp3"); connect(decoder, SIGNAL(bufferReady()), this, SLOT(readBuffer())); decoder->start(); // Now wait for bufferReady() signal and call decoder->read()
Examples
There are both C++ and QML examples available. | https://phone.docs.ubuntu.com/en/apps/api-qml-current/QtMultimedia.audiooverview | 2021-01-15T18:00:14 | CC-MAIN-2021-04 | 1610703495936.3 | [] | phone.docs.ubuntu.com |
Example cases
- Exchange documents with business partner (B2B scenario)
- Group discussion
- Meeting
- Approval
-)
Mongodb questions
- How to work with update commands? Write side? Is our idea valid?
- How to test scalability?
- How to deal with relations?
- How to handle schema updates in the code?<< | http://docs.codehaus.org/pages/viewpage.action?pageId=208830570 | 2014-04-16T08:20:16 | CC-MAIN-2014-15 | 1397609521558.37 | [array(['/download/attachments/196640952/NewExplorerLayout.png?version=1&modificationDate=1300186324970&api=v2',
None], dtype=object)
array(['/download/attachments/196640952/NewExplorerToDo.png?version=1&modificationDate=1300186362446&api=v2',
None], dtype=object)
array(['/download/attachments/196640952/NewExplorerHome.png?version=1&modificationDate=1299583572030&api=v2',
None], dtype=object)
array(['/download/attachments/196640952/NewExplorerMyFlows.png?version=1&modificationDate=1299583572083&api=v2',
None], dtype=object)
array(['http://lifehacking.nl/wp-content/uploads/Screen-shot-2011-03-01-at-20.41.15.png',
None], dtype=object)
array(['http://images.apple.com/euro/macosx/lion/images/overview_mail20110222.jpg',
None], dtype=object) ] | docs.codehaus.org |
THE EMPORIA GAOETTE
June 8, 1939
Dear Miss LeHand:
I am sending the president a book that ± think he will
enjoy. It is not a serious book. It is a book of verse, gay,
beautiful and sometimes satirical, and humorous. Man, a grand
old man who died a quarter of a century ago writes it. Enclosed
I am sending a letter to go with the book and I am doing so because
at one time or another the President has suggested this way as
a sure way to get to him.
Respectfully yours,
W.A White | http://docs.fdrlibrary.marist.edu/psf/box32/t304gg02.html | 2014-04-16T08:31:22 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.fdrlibrary.marist.edu |
About location services
When you turn on location services, you can use the BlackBerry Protect website to view the approximate location of your device on a map. This feature can help you find your device if it is ever lost.
Your device sends its location information only when you request it through the BlackBerry Protect website or when the battery power level is low; your device doesn't report its location on a regular basis. However, if you don't want your device to send location information to the BlackBerry Protect website, you can turn off location services.
If location services is turned on and you share your BlackBerry Protect account with another person, that person can log in to the BlackBerry Protect website at any time to view the approximate location of your device.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/49499/als1343677000571.jsp | 2014-04-16T08:03:45 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.blackberry.com |
Technical Documentation¶
The use of computational simulations in many areas of science has proven to be reliable, faster and cheaper than experimental campaigns. However, the parametric analysis needs a large amount of simulations which is not feasible when using huge codes that are time and resources consuming. An efficient solution to overcome this issue is to construct models that are able to estimate correctly the responds of the codes. These models, called Surroagte Models, require a realistic amount of evaluation of the codes and the general procedure to construct them consists in:
- Generating a sample space:
- Produce a set of data from which to run the code. The points contained in this set all called snapshot.
- Learning the link between the input the output data:
- From the previously generated set of data, we can compute a model, which is build using gaussian process [Rasmussen2006] or polynomial chaos expansion [Najm2009].
- Predictng solutions from a new set of input data:
- The model can finaly be used to interpolate a new snapshot from a new set of input data.
Warning
The model cannot be used for extrapolation. Indeed, it has been constructed using a sampling of the space of parameters. If we want to predict a point which is not contained within this space, the error is not contained as the point is not balanced by points surrounding it. As a famous catastrophe, an extrapolation of the physical properties of an o-ring of the Challenger space shuttle lead to an explosion during lift-off [Draper1995].
Both Proper Orthogonal Decomposition (POD) and Kriging (PC, RBF, etc.) are techniques that can interpolate data using snapshots. The main difference being that POD compresses the data it uses to use only the relevant modes whereas Kriging method doesn’t reduce the size of the used snapshots. On the other hand, POD cannot reconstruct data from a domain missing ones [Gunes2006]. Thus, the strategy used by BATMAN consists in:
As a reference, here is some bibliography:
Content of the package¶
The BATMAN package includes:
doccontains the documentation,
batmancontains the module implementation,
test_casescontains some example.
General functionment¶
The package is composed of several python modules which are self contained within the directory
batman.
Following is a quick reference:
ui: command line interface,
space: defines the (re)sampling space,
surrogate: constructs the surrogate model,
uq: uncertainty quantification,
visualization: uncertainty visualization,
pod: constructs the POD,
driver: contains the main functions,
tasks: defines the context to compute each snapshot from,
functions: defines usefull test functions,
misc: defines the logging configuration and the settings schema.
Using it¶
After BATMAN has been installed,
batman is available as a command line tool or it can be imported in python. The CLI is defined in
ui. The module imports the package and use the function defined in
driver.
Thus BATMAN is launched using:
batman settings.json
An
output directory is created and it contains the results of the computation splited across the following folders:
snapshots,
surrogate,
- [
predictions],
- [
uq].
Content of
test_cases¶
This folder contains ready to launch examples:
Basic_functionis a simple 1-input_parameter function,
Michalewiczis a 2-input_parameters non-linear function,
Ishigamiis a 3-input_parameters,
G_Functionis a 4-input_parameters,
Channel_Flowis a 2-input_parameters with a functionnal output,
Mascaretmake use of MASCARET open source software (not included).
In every case folder, there is
README.rst file that summarizes and explains it. | https://batman.readthedocs.io/en/1.7.2-lucius/technical.html | 2021-09-16T16:21:12 | CC-MAIN-2021-39 | 1631780053657.29 | [] | batman.readthedocs.io |
Kong's Kubernetes Ingress Controller v2.0.0 is currently in beta. Check out the beta documentation and try out v2.0.0 for yourself.
Expose an external application
This example shows how we can expose a service located outside the Kubernetes cluster using an Ingress..
Create a Kubernetes service
First we need to create a Kubernetes Service type=ExternalName using the hostname of the application we want to expose.
echo " kind: Service apiVersion: v1 metadata: name: proxy-to-httpbin spec: ports: - protocol: TCP port: 80 type: ExternalName externalName: httpbin.org " | kubectl create -f -
Create an Ingress to expose the service at the path
/foo
echo ' apiVersion: extensions/v1beta1 kind: Ingress metadata: name: proxy-from-k8s-to-httpbin annotations: konghq.com/strip-path: "true" kubernetes.io/ingress.class: kong spec: rules: - http: paths: - path: /foo backend: serviceName: proxy-to-httpbin servicePort: 80 ' | kubectl create -f -
Test the service
$ curl -i $PROXY_IP/foo | https://docs.konghq.com/kubernetes-ingress-controller/1.3.x/guides/using-external-service/ | 2021-09-16T15:58:52 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.konghq.com |
LPoint3f¶
- class LPoint3f¶
Bases:
LVecBase3f
This is a three-component point in space (as opposed to a three-component vector, which represents a direction and a distance). Some of the methods are slightly different between
LPoint3and
LVector3; in particular, subtraction of two points yields a vector, while addition of a vector and a point yields a point.
Inheritance diagram
- LPoint3f(LVecBase3f const ©)¶
- LPoint3f(LVecBase2f const ©, float z)¶
- LPoint3f cross(LVecBase3f const &other) const¶
- static TypeHandle get_class_type(void)¶
- LPoint2f get_xy(void) const¶
Returns a 2-component vector that shares just the first two components of this vector.
- LPoint2f get_xz(void) const¶
Returns a 2-component vector that shares just the first and last components of this vector.
- LPoint2f get_yz(void) const¶
Returns a 2-component vector that shares just the last two components of this vector.
- LPoint3f normalized(void) const¶
Normalizes the vector and returns the normalized vector as a copy. If the vector was a zero-length vector, a zero length vector will be returned.
- LPoint3f const &origin(CoordinateSystem cs = ::CS_default)¶
Returns the origin of the indicated coordinate system. This is always 0, 0, 0 with all of our existing coordinate systems; it’s hard to imagine it ever being different.
- LPoint3f project(LVecBase3f const &onto) const¶
Returns a new vector representing the projection of this vector onto another one. The resulting vector will be a scalar multiple of onto.
- LPoint3f rfu(float right, float fwd, float up, CoordinateSystem cs = ::CS_default)¶
Returns a point described by right, forward, up displacements from the origin, wherever that maps to in the given coordinate system. | https://docs.panda3d.org/1.10/cpp/reference/panda3d.core.LPoint3f | 2021-09-16T16:22:31 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.panda3d.org |
Exchanges¶
Scalaris's utility token, SCA, can be purchased and sold in several ways, each with different advantages and disadvantages. This list is provided for informational purposes only. Services listed here have not been evaluated or endorsed by Scalaris. Please exercise discretion when using third-party services.
Centralized Exchanges¶
Centralized exchanges are one of the most popular ways to trade cryptocurrency. Some serve different markets, some are in direct competition, some have cheaper fees, and some are subject to more or less strict regulatory requirements. They are operated by a single company, which may be obliged by the laws of the jurisdiction in which it operates to collect data on its customers.The following options are available:
Decentralized Exchanges¶
Decentralized exchanges are purely peer-to-peer instead of requiring a trusted entity to manage the transaction. This is the most secure form of trading. The following options are available:
Note: For trade at Scalaris DX is require fixed Taker fee 0.01 at SCA, for Place orders no fee.
How to buy SCA at Scalaris DX - you must create placer order at market SCA-"Token Asset" with need ammount SCA and price at "Token Asset", then wait before some trader take your order.
If you already have some SCA ammount for pay Taker fee to service nodes , you can simple take any selling orders of SCA.
Instant Exchanges¶. The following options are available: | https://docs.scalaris.info/project/exchanges/ | 2021-09-16T15:28:21 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.scalaris.info |
Incorta Security Guide
This guide summarizes the Incorta security model and optional security configurations. It also describes the common considerations for securing the Incorta Direct Data Platform.
Architecture Reference
An Incorta Cluster may consist of one or more host machines. Each host may run one or more applications and services. The following diagram describes a three host Incorta Cluster that requires Shared Storage.
As the diagram is for illustrative purposes, it does not show all applications and services. In addition, certain applications and services are optional such as the Notebook Add-on service. An enterprise cluster topology for Incorta typically supports high availability and disaster recovery. To learn more about configurations for high availability and disaster recovery, please review Configure High Availability.
Here is a detailed list of the applications and configurations for each host in the diagram:
Security for Hosts, Applications, and Services
Incorta encourages administrators to implement security for all hosts, applications, and services in an Incorta Cluster wherever possible.
Secure Access to Linux Host
To access a host in an Incorta Cluster, only use Secure Shell access (SSH) that requires a .PEM or .PPK key.
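For illustration only, the following Python sketch uses the third-party paramiko library to open a key-based SSH session and explicitly disables password and agent fallback. The host name, user name, and key path are assumptions and should be replaced with your own values.

```python
import paramiko

# Hypothetical connection details; replace with your own.
HOST = "incorta-node-1.example.com"
USER = "incorta"                       # dedicated Incorta Linux user, not root
KEY_FILE = "/home/admin/keys/incorta.pem"

def open_ssh_session() -> paramiko.SSHClient:
    """Open an SSH session that authenticates with the .PEM key only."""
    key = paramiko.RSAKey.from_private_key_file(KEY_FILE)
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.RejectPolicy())  # no blind trust
    client.connect(
        hostname=HOST,
        username=USER,
        pkey=key,
        look_for_keys=False,   # do not scan for other keys
        allow_agent=False,     # do not fall back to an SSH agent
    )
    return client

if __name__ == "__main__":
    ssh = open_ssh_session()
    _, stdout, _ = ssh.exec_command("whoami")
    print(stdout.read().decode().strip())
    ssh.close()
```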
Secure Linux Host Users
Create only the required Linux host users. Incorta does not recommend using the root user. Instead, create an Incorta user with the required permissions and access. Consider restricting the Bash Shell commands for the Incorta user.
Access to Shared Storage
Create a shared mount that only the Incorta Linux user has access to. The Incorta Linux user requires Read, Write, and Execute permissions.
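The sketch below is one way to verify that a shared mount is owned by the Incorta Linux user and grants that user read, write, and execute permissions. The mount path and user name are assumptions for illustration.

```python
import os
import pwd
import stat

# Hypothetical values; adjust to your environment.
SHARED_MOUNT = "/incorta/shared"
INCORTA_USER = "incorta"

def check_shared_mount(path: str, expected_user: str) -> bool:
    """Return True if the path is owned by the expected user with rwx access."""
    info = os.stat(path)
    owner = pwd.getpwuid(info.st_uid).pw_name
    if owner != expected_user:
        print(f"{path} is owned by {owner}, expected {expected_user}")
        return False
    required = stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR   # owner rwx
    if info.st_mode & required != required:
        print(f"{path} does not grant read, write, and execute to its owner")
        return False
    print(f"{path} is owned by {expected_user} with rwx access")
    return True

if __name__ == "__main__":
    check_shared_mount(SHARED_MOUNT, INCORTA_USER)
```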
Secure Apache ZooKeeper
For more details about how to best secure Apache ZooKeeper, please review Secure ZooKeeper.
Secure Apache Tomcat
For more details about how to best secure Apache Tomcat, please review Secure Tomcat with TLS.
Secure Apache Spark
By default, security in Apache Spark is off. To learn more about security and Apache Spark, please review the Apache Spark security documentation.
Secure MySQL 5.6
To learn more about securing MySQL 5.6, please review the security section of the MySQL 5.6 Reference Manual.
Required Host Ports
Incorta strongly encourages only exposing the required ports for an Incorta Cluster and limiting ports to a whitelist of Private or Public IPs. Please be aware that applications such as Apache Spark often require a range of ports for distributed processes such as broadcast and shuffle operations. Here is the list of common ports that an Incorta Cluster requires:
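The exact port list depends on the services deployed on each host, so the sketch below only illustrates how an administrator might confirm that the expected ports answer from a whitelisted host. Every host name and port number in it is a placeholder assumption, not an official Incorta default.

```python
import socket

# Placeholder host and port pairs; substitute the ports your cluster actually uses.
CHECKS = {
    "Analytics Service (Tomcat)": ("incorta-node-1.example.com", 8080),
    "Cluster Management Console": ("incorta-node-1.example.com", 6060),
    "ZooKeeper client port":      ("incorta-node-2.example.com", 2181),
    "Spark master":               ("incorta-node-3.example.com", 7077),
}

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection and report whether it succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in CHECKS.items():
        state = "open" if port_is_open(host, port) else "closed or filtered"
        print(f"{name:32s} {host}:{port} -> {state}")
```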
Incorta Security
Each application and service that comprises the Incorta Direct Data Platform requires security configuration and administration. Security configuration and administration cover the following functional areas:
- Secure Communications
- Authentication, Authorization, and Access
- Audit Access
- Data Security
Secure Communications
To ensure secure communications between users and Incorta, administrators must implement HTTPS. One way to enable HTTPS is to configure NGINX Web Server and to use this server as Web Proxy for Incorta. A common approach is to configure HTTPS with Let’s Encrypt SSL.
Let’s Encrypt is a Certificate Authority (CA) that provides free TLS/SSL certificates to enable HTTPS on web servers. Let’s Encrypt provides a Certbot client that automates most of the steps required to obtain a certificate and to configure it within the NGINX web server. Incorta recommends that organizations use their own Trusted Certificates.
To learn more about HTTPS with Nginx, please review Configuring HTTPS Servers.
To learn more about Let's Encrypt, please visit the Let's Encrypt website at https://letsencrypt.org.
It is possible to configure a load balancer in front of Incorta with a public URL that redirects traffic to Incorta. This prevents direct access to the Incorta Analytics Service URL by normal users. For an example of how to configure Apache Web Server, please visit Configure an Apache Web Server.
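Once HTTPS is in place, one quick check is to complete a TLS handshake against the public endpoint and confirm that a trusted certificate is presented, as in the sketch below. The host name is an assumption for illustration.

```python
import socket
import ssl

# Hypothetical public endpoint fronting Incorta; replace with your own.
HOST = "analytics.example.com"
PORT = 443

def describe_certificate(host: str, port: int) -> None:
    """Complete a TLS handshake and print basic certificate details."""
    context = ssl.create_default_context()   # verifies the chain and host name
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("TLS version :", tls.version())
            print("Issuer      :", dict(item[0] for item in cert["issuer"]))
            print("Expires     :", cert["notAfter"])

if __name__ == "__main__":
    describe_certificate(HOST, PORT)
```

If the handshake fails or the certificate does not validate, revisit the NGINX or load balancer configuration before exposing the URL to users.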
Node Agent Communications
As a web application, the Cluster Management Console communicates to Node Agents in order to help start and stop services. These communications are Protocol Buffer messages that are encoded and decoded. Incorta does not send sensitive data using the NIO channel.
Public and Private Key Management
Currently, Incorta does not share public or private keys between hosts in an Incorta Cluster. Incorta uses the same 128-bit Advanced Encryption Standard (AES-128) cipher to encrypt passwords and secret keys for data sources.
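For context, the sketch below shows what 128-bit AES encryption of a data source secret looks like using the widely available cryptography package. It only illustrates the cipher strength mentioned above; it is not a description of Incorta's internal key handling, and the GCM mode shown here is an assumption made for the example.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_secret(plaintext: str, key: bytes) -> tuple[bytes, bytes]:
    """Encrypt a secret with AES-128-GCM; returns (nonce, ciphertext)."""
    nonce = os.urandom(12)                      # unique per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    return nonce, ciphertext

def decrypt_secret(nonce: bytes, ciphertext: bytes, key: bytes) -> str:
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=128)   # a 128-bit key, as in AES-128
    nonce, blob = encrypt_secret("data-source-password", key)
    assert decrypt_secret(nonce, blob, key) == "data-source-password"
    print("round trip ok, ciphertext length:", len(blob))
```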
Authentication, Authorization, and Access Configuration
There are two ways for a user to access data stored in the Incorta Direct Data Platform:
- Sign in to the Incorta Analytics Service web application.
- Connect to the Incorta Analytics Service with the SQLi interface using the PostgreSQL protocol.
In both cases, Incorta will authenticate the user using a tenant name, username, and password. Authentication is tenant specific in Incorta, and as such, is a Tenant Configuration in the Cluster Management Console.
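Because the SQLi interface speaks the PostgreSQL wire protocol, a standard PostgreSQL client can authenticate with the tenant, username, and password. The sketch below uses psycopg2; the host, port, and the way the tenant is supplied are assumptions for illustration, so confirm them against your own cluster configuration.

```python
import psycopg2

# Hypothetical connection details; confirm host, port, and user format
# against your own Incorta cluster configuration.
conn = psycopg2.connect(
    host="incorta-node-1.example.com",
    port=5436,                     # assumed SQLi port
    dbname="demo_tenant",          # tenant name (assumed mapping)
    user="analyst1",
    password="********",
)

with conn.cursor() as cur:
    cur.execute("SELECT 1")        # simple connectivity and authentication check
    print(cur.fetchone())

conn.close()
```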
Metadata that describes objects in Incorta such as tenants, schemas, business schema, session variables, and dashboards are accessible using the Incorta Command Line Interface (CLI). The Incorta CLI only allows for the import and export of metadata and does not expose data stored in Incorta.
Authentication Types
Incorta supports various types of user authentication for a given tenant. To learn more, review Tenant Security Configurations.
Incorta supports various authentication types for the Incorta Analytics Service as a tenant configuration. In Incorta 4.9, the SQLi interface supports only mixed mode authentication for both the Incorta Analytics Service and the SQLi interface.
Incorta Authentication
Incorta authentication consists of a username and password. For a given tenant, a CMC administrator can configure the password policy for Incorta Authentication. This includes the following policy properties:
- Minimum Password Length
- Password Cannot Include Username
- Require Lower Case Letters
- Require Upper Case Letters
- Require Digits
- Require Special Characters
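As a concrete reading of these policy properties, the following sketch validates a candidate password against one possible configuration. The specific thresholds are assumptions chosen for illustration, not Incorta defaults.

```python
import re

# Example policy values; assumptions for illustration only.
POLICY = {
    "min_length": 12,
    "exclude_username": True,
    "require_lower": True,
    "require_upper": True,
    "require_digit": True,
    "require_special": True,
}

def violations(password: str, username: str, policy: dict) -> list[str]:
    """Return the list of policy rules that the password fails."""
    problems = []
    if len(password) < policy["min_length"]:
        problems.append(f"shorter than {policy['min_length']} characters")
    if policy["exclude_username"] and username.lower() in password.lower():
        problems.append("contains the username")
    if policy["require_lower"] and not re.search(r"[a-z]", password):
        problems.append("missing a lower case letter")
    if policy["require_upper"] and not re.search(r"[A-Z]", password):
        problems.append("missing an upper case letter")
    if policy["require_digit"] and not re.search(r"\d", password):
        problems.append("missing a digit")
    if policy["require_special"] and not re.search(r"[^A-Za-z0-9]", password):
        problems.append("missing a special character")
    return problems

if __name__ == "__main__":
    print(violations("analyst1pass", "analyst1", POLICY))
    print(violations("Str0ng!Passphrase", "analyst1", POLICY))
```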
SSO (Single Sign On)
The Incorta Analytics Service supports Security Assertion Markup Language 2.0 (SAML2) for Single Sign On.
LDAP (Lightweight Directory Access Protocol)
Incorta Analytics supports the Lightweight Directory Access Protocol (LDAP). You can also use SSO with LDAP.
To learn more, visit Configure LDAP in the Incorta documentation.
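Before wiring LDAP into a tenant, it can help to confirm that the directory accepts a bind from the service account, as sketched below with the ldap3 package. The server URL, bind DN, and password are assumptions for illustration.

```python
from ldap3 import ALL, Connection, Server

# Hypothetical directory details; replace with your own.
LDAP_URL = "ldaps://ldap.example.com:636"
BIND_DN = "cn=svc-incorta,ou=service,dc=example,dc=com"
BIND_PASSWORD = "********"

def ldap_bind_ok() -> bool:
    """Return True if the service account can bind to the directory."""
    server = Server(LDAP_URL, get_info=ALL)
    try:
        with Connection(server, user=BIND_DN, password=BIND_PASSWORD,
                        auto_bind=True) as conn:
            return conn.bound
    except Exception as exc:          # ldap3 raises LDAPException subclasses
        print("bind failed:", exc)
        return False

if __name__ == "__main__":
    print("LDAP bind:", "ok" if ldap_bind_ok() else "failed")
```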
Authorization and Access
Incorta’s security model is optimistic, meaning that Incorta enforces the least restrictive role permissions and access rights. The Incorta security model is based on two common approaches to enterprise security:
- Role Based Access Control (RBAC)
- Discretionary Access Control (DAC)
Role Based Access Control
Role Based Access Control (RBAC) enforces access to certain features and functionality within the Incorta Analytics Service. The Incorta Loader Service is not accessible. The Incorta Cluster Management Console is a separate web interface and allows for a single administrator user.
There is no direct way to assign a Role to a user, with two exceptions:
- All users inherit the User role
- A tenant administrator inherits the SuperRole role unless otherwise configured for the tenant
In Incorta, a user belongs to zero or more Groups, and a Group is assigned to zero or more Roles. Roles are immutable. You cannot create, edit, or delete a Role. Here are the available Roles in Incorta:
- Analyze User
Manages folders and dashboards and has access to the Analyzer screen. This role creates Dashboards with shared and personal (requires Schema Manager) schemas. This role can also share Dashboards with the Share option, share them through email, or schedule Dashboards for sharing by email.
- Individual Analyzer
Creates new dashboards using shared or personal schemas (requires Schema Manager). This role cannot share or send dashboards via email.
- Dashboard Analyzer
In addition to viewing and sharing the dashboards available to the user role, this role will also be able to personalize the dashboards shared with them.
- Privileged User
Shares and schedules sending dashboards using emails.
- Schema Manager
Creates schemas and data sources and loads the data into the schemas. This role also shares the schemas with other users so they can create dashboards.
- SuperRole
Manages users, groups, and roles. Can create users and groups. This role also creates schemas and dashboards without requiring any additional roles. This is the master Admin role.
- User
The default role assigned to an end user who belongs to a group. This role views any dashboard shared with them. This role can apply filters but cannot change the underlying metadata.
- User Manager
Creates and manages groups and users. Creates groups and adds roles. Adds users to groups.
The following table describes the Access Rights to feature and functionality for each Role.
Discretionary Access Control (DAC)
With Discretionary Access Control (DAC), a user who owns an object — schema, business schema, session variable, or dashboard — is able to control the access to the object. In other words, the object owner defines the Access Control List (ACL) associated with the object. An ACL is a list of users and groups. For each user and group, the owner can set and revoke the access rights. Only the owner of an object can delete the object.
For an object in Incorta, there are three possible access rights:
- Can View: Has view (read) access
- Can Share: Has view (read) and share access
- Can Edit/Manage: Has view (read), share, and edit access
Example
Luke is the owner of a dashboard. Luke shares the dashboard with Jake, granting View access to Jake. Luke also gives Share access to the Analyst group and Edit access to Niki. Paul belongs to the Analyst group, and for that reason can both view and share the dashboard. Paul shares the dashboard to the Business group. Niki, who has Edit access to the dashboard, changes the access rights for the Business group, giving that group Edit access to the dashboard. Rachel belongs to the Business group and attempts to delete the dashboard. As Luke is the owner of the dashboard, Incorta prevents the deletion. Luke changes the access rights for the Analyst and Business groups from Edit to Share.
To learn more about best practice for DAC and RBAC in Incorta, please review Managing dashboards and folders.
It is also possible to review object permission history in Incorta. Incorta captures access right assignments in the Incorta Metadata database. The Permissions Dashboard provides a view to permission grants and revocations for all objects in Incorta.
Audit Access
Incorta tracks all user activities for a given tenant in an Audit.csv file. User activities include when a user:
- Creates, edits, shares, and deletes an object such as a dashboard
- Loads data for a schema or a table
- Downloads data for an insight in a dashboard
- Signs out
Incorta writes a log of all user activities for a given tenant to an Audit.csv file. To learn more about Incorta’s Audit capabilities, please review SOX Compliance.
All update and delete actions performed by users against Incorta objects are also captured and stored in Incorta’s metadata database in the Action table. Incorta provides an Audit Action dashboard for tracking this history.
Data Security
As a Unified Data Analytics Platform, Incorta ingests data from external data sources and allows users to upload local files and folders. In this regard, Incorta mirrors and copies data from the specified source systems. Incorta users are unable to modify or edit data that Incorta ingests.
By default, Incorta encrypts sensitive data such as user passwords, data source passwords, data source security tokens, and data on disk using AES-128. For more information about AES encryption please review Advanced Encryption Standard (AES).
Incorta stores the secret key internally within application code. This key is not exposed and cannot be modified.
For data ingested into Incorta, there are several security considerations:
- Encryption of data source credentials
- Encryption of data at rest
- Encryption of defined table columns
- Row Level Security (RLS)
Encryption of data source credentials
A data source such as a MySQL database requires a username and password. Incorta encrypts the password that Incorta stores for the connection. When making a data source connection, Incorta decrypts the encrypted value.
Encryption of data at rest
Incorta ingests data from external data sources and allows users to upload local files and folders.
Local files and folders
A user may upload one or more files and one or more folders to Incorta. Incorta copies the files from the local machine and stores the files in the Tenants directory in Shared Storage. The default path to data directory is:
/home/incorta/IncortaAnalytics/Tenants/<Tenant_Name>/data
Incorta does not encrypt uploaded local files and folders. Incorta does support using password protected MS Excel Files.
External Data Source
Some data source connectors natively encrypt ingested data. For example, an Apache Kafka data source automatically encrypts data that Incorta consumes from the specified Kafka topic. The encrypted data is in CSV format.
Other data source connectors that ingest data from an application or service may not encrypt data. Oracle Fusion, Google Drive, Box, and DropBox are examples of data source connectors that currently do not encrypt ingested data.
To find out more about encryption support for ingested data, please review Supported Data Source Connectors and connect with Incorta support at [email protected].
Encryption of defined table columns
Using the Table Editor for a table in a given schema, a schema developer can explicitly define an Incorta column for encryption.
Incorta suggests storing all sensitive data using the Column Encrypt property for a column in a table in a schema.
When loading data, Incorta encrypts these columns using 128-Bit AES encryption and stores the data in encrypted form on disk in Shared Storage. Shared Storage consists of Direct Data Mapping (DDM snapshot) files and Apache Parquet files. Only when reading the encrypted data does Incorta decrypt the data.
Row Level Security (RLS)
Using the Table Editor for a table in a given schema, a schema developer can implement a Runtime Security Filter. With a runtime security filter, it is possible to implement Row Level Security (RLS) with Incorta. Row level security typically determines which user or group of users has view access to one or more rows. To learn more about runtime security filters and their practical application for RLS, please review the following documentation and community article:
- Runtime Security Filters
-
Additional Security Considerations
Incorta is a Java Virtual Machine (JVM) web application that ingests potentially sensitive data and stores data in-memory and in shared storage. As a Unified Data Analytics Platform that potentially stores sensitive data, there are several additional security concerns that Incorta endeavors to address. In general, these security concerns are:
- Injection attacks
- Materialized views and Apache Spark
- Sensitive data in Heap Dumps
- User impersonation
Injection attacks
An injection attack is an attacker’s attempt to send data to an application in a way that will change the meaning of commands being sent to an interpreter. Every interpreter has a parser. An injection attack attempts to trick a parser into interpreting data as command.
For example, consider the following change to SQL where an attacker appends “OR 1=1” to the predicate of an SQL query as in “WHERE ID=101 OR 1=1”. Now, instead of just one result being returned, all results are returned. The SQL parser interprets the untrusted data as a part of the SQL query command.
The injection context describes when an application uses untrusted data as part of a command, document, or other data structure. The goal of an injection attack is to:
- Break out of the command, document, or structure’s context
- Modify the meaning of the command, document, or structure so as to cause harm
As a web application whose purpose is to process enterprise application data for modern analytics, Incorta is subject to a variety of injection attacks based on the injection context such as:
- SQL queries
- LDAP queries (if configured)
- Operating system command interpreters
- XML documents (Incorta stores metadata as XML)
- HTML documents (Incorta renders HTML)
- JSON structures (Incorta ingests JSON, for example, using Kafka, as well as renders JSON for data visualizations)
- HTTP headers
- File paths
- URLs
- JavaScript and other expression languages
Incorta employs various types of preventive measures to thwart a variety of injection attacks.
Reference Injection
In this type of injection attack, a reference can be a database key, a URL, a filename, or some other kind of lookup index. Incorta prevents the injection and ensures that this type of injection does not allow for command execution.
Command Injection
Incorta embeds the grammar of the guest languages, for example SQL, into that of the application language, in this case, Java. In doing so, Incorta automatically generates code that maps the embedded language to constructs in the host language. The result is that Incorta reconstructs the embedded command adding escaping functions where appropriate.
HTML Document Injection
Incorta prevents the insertion of untrusted data into the HTML Document Object Model. Incorta properly employs output escaping and HTML encoding in order to prevent Cross-Site Scripting (XSS) attacks. Escaping is a standard technique to ensure that characters are treated as data, not as characters that are relevant to the interpreter’s parser.
Cross-Site Scripting (XSS)
Cross-site scripting (XSS) is a type of computer security vulnerability typically found in web applications. XSS attacks enable an attacker to inject client-side scripts into web pages viewed by other users. Attackers exploit a cross-site scripting vulnerability so as to bypass access controls such as the same-origin policy.
In order to help prevent XSS attacks, Incorta sanitizes rendered data using common techniques for output escaping and input encoding. In addition, Incorta recommends the following:
- Enable HTTPS
- Enable Tomcat’s Secure Flag
Cross-Site Request Forgery (XSRF)
Unlike cross-site scripting (XSS), which exploits the trust a user has for a particular site, Cross-Site Request Forgery (XSRF), also known as one-click attack or session riding, is a type of malicious exploit of a website where unauthorized commands are transmitted from a user that the web application trusts. XSRF commonly has the following characteristics:
- It involves sites that rely on a user’s identity.
- It exploits the site’s trust in that identity.
- It tricks the user’s browser into sending HTTP requests to a target site.
When a user signs into Incorta with a username and password, Incorta creates a JSESSIONID and an XSRF-TOKEN cookie. The JSESSIONID cookie contains information that the Incorta Direct Data Platform uses for authentication and authorization. The XSRF-TOKEN cookie contains an encrypted XSRF token for client-side JavaScript calls. JSESSIONID and XSRF-TOKEN are the only cookies that Incorta uses.
To fully protect from XSRF attacks, Incorta recommends that administrators implement HTTPS and set the Secure flag to true in Apache Tomcat to avoid sending the cookie value over HTTP. By default, Incorta does not set the secure flag on the JSESSIONID and XSRF-TOKEN cookies.
The Secure flag is a Tomcat configuration setting. The flag informs a web browser that a cookie should only be sent to the web server application using HTTPS. Even with HTTPS enabled, if the Secure flag is not set to true in Tomcat, a web browser will send the cookie value over HTTP. To learn more about how to set the Secure flag for Tomcat, please review the Incorta documentation.
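As a general illustration only (this snippet is not taken from the Incorta documentation, and the exact file Incorta expects you to edit may differ by version), the standard Servlet mechanism for marking session cookies Secure in a Tomcat web application is the session-config element in web.xml:

<!-- Illustrative Servlet 3.0+ configuration; verify against the Incorta documentation before applying. -->
<session-config>
  <cookie-config>
    <http-only>true</http-only>
    <secure>true</secure>
  </cookie-config>
</session-config>

With secure set to true, Tomcat adds the Secure attribute to the session cookie so that browsers only send it over HTTPS.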
SQL Injection
There are several places where a user that inherits the Schema Manager role can specify a SQL statement.
To help prevent SQL injection, Incorta only accepts SELECT statements. A table that supports an incremental load uses a parameterized syntax. Incorta bounds the parameter value using a bound parameter in the Java Database Connection (JDBC) itself. Bound parameters in JDBC protect against SQL Injection.
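As a generic illustration of the bound-parameter pattern (this is not Incorta source code, and the table and column names are hypothetical), a JDBC query with a bound parameter looks like this:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public final class BoundParameterExample {
    // The parameter value is bound by the driver and never concatenated into the SQL text,
    // so input such as "101 OR 1=1" cannot change the meaning of the query.
    static void printCustomerNames(Connection connection, long requestedId) throws Exception {
        String sql = "SELECT id, name FROM customers WHERE id = ?";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setLong(1, requestedId);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}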
As additional protection, Incorta always recommends that data source connectors specify user credentials with read only access to the source data.
Materialized Views and Apache Spark
In Incorta, a materialized view is a type of table in a schema that requires Apache Spark for data materialization and enrichment. In Incorta 4.6, Incorta supports two programming languages for materialized views: Spark SQL and PySpark.
Spark SQL is a declarative language. Incorta only supports SELECT statements. However, because PySpark is Python for Apache Spark, there is the potential for direct command and reference injection within a materialized view using Python code. For example, in Python, a programmer can import the os library and, in doing so, view all the contents of a directory.
import os
os.system('ls /home/incorta/incorta/ -ls')
To limit this exposure, Incorta recommends the following:
- Assign the Schema Manager role sparingly to groups as this role allows users to create and edit schemas.
- Regularly analyze PySpark code within a materialized view
- For a Linux host that manages an Incorta Node that runs the Incorta Analytics Service and/or the Notebook Add-On service, restrict the available Bash Shell commands for the Incorta user. For example, remove the Secure Copy command, scp, from the Incorta user bash.
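For illustration, and assuming a standard Linux host (commands and paths will vary by distribution and by how the Incorta service account was created), the restriction above could be approached in either of the following ways:

# Give the incorta service account a restricted shell so arbitrary commands cannot be run interactively.
sudo usermod -s /bin/rbash incorta
# Or remove the scp binary's execute permission for users other than its owner
# (note: this affects every non-root user on the host, not just the incorta user).
sudo chmod o-x /usr/bin/scp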
Sensitive data in Heap Dumps
A heap dump is a snapshot of all the objects in the Java Virtual Machine (JVM) heap at the time of collection. The JVM allocates memory for objects from the heap for all class instances and arrays. The garbage collector reclaims the heap memory when an object is no longer needed and there are no references to the object.
The Incorta Cluster Management Console offers On Heap and Off Heap configuration settings for both the Loader and Analytics Service.
By examining the heap, it is possible to locate created objects and their references in the source. Tools such as Eclipse Heap Analyzer, Eclipse Memory Analyzer, or Java VisualVM can help you view the objects in the heap using a heap dump file.
For this reason, depending when the heap is generated, it is possible to reveal an object’s attribute value using many popular heap analysis tools.
Remote Monitoring
JVM applications allow for both local and remote monitoring using Java Management Extensions (JMX).
A common approach is for administrators to query data from Managed Beans (MBeans) exposed on a JVM port. Many administrators are familiar with this type of monitoring using jconsole, a JMX-compliant graphical tool for monitoring a Java virtual machine. It can monitor both local and remote JVMs, and it can also monitor and manage an application. Tools such as jconsole have the ability to perform a heap dump and to read a heap dump remotely.
By default, Incorta does not supply the start parameters for JMX monitoring. Security administrators should regularly monitor the Java processes for the Incorta Loader and Incorta Analytics services to determine the presence of unwanted monitoring parameters.
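A simple way to spot-check for unwanted JMX start parameters on a Linux host (an illustrative command, not an Incorta utility) is to inspect the running Java processes:

# Lists any Java processes started with JMX remote monitoring flags.
ps aux | grep java | grep -- '-Dcom.sun.management.jmxremote'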
User Impersonation
A user that inherits the SuperRole has the ability to impersonate a user. An impersonated user receives an email notifying them of their impersonation. However, this requires SMTP configuration for the Incorta Cluster.
To limit the possibility of unwanted user impersonation, Incorta strongly encourages that security administrators limit the number of users that inherit the SuperRole as well as configure SMTP for the Incorta Cluster.
To learn more about SMTP configuration, please review Email Configuration. | https://docs.incorta.com/5.0/g-incorta-security-guide/ | 2021-09-16T16:12:39 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['/static/2f3a49baea32ea3b03baf8f1691966e7/a2510/ArchitectureDiagram.jpg',
'ArchitectureDiagram ArchitectureDiagram'], dtype=object)
array(['/c17d8bd237d66f1a59b64786073eec68/PermissionDashboard.gif', None],
dtype=object)
array(['/ddd25d02871ed12b1d68c60765f2fa72/AuditAccess.gif', None],
dtype=object) ] | docs.incorta.com |
ServiceNow event handler
ServiceNow provides service management software with a comprehensive managed workflow that supports features such as real-time communication, collaboration, and resource sharing. Configure Kapacitor to send alert messages to ServiceNow.
Configuration
Configuration and default option values for the ServiceNow event handler are set in your kapacitor.conf.
The example below shows the default configuration:
[servicenow]
  # Configure ServiceNow.
  enabled = false
  # The ServiceNow URL for the target table (Alert or Event). Replace this instance with your hostname.
  url = ""
  # Default source identification.
  source = "Kapacitor"
  # Username for HTTP BASIC authentication
  username = ""
  # Password for HTTP BASIC authentication
  password = ""
enabled
Set to true to enable the ServiceNow event handler.
url
The ServiceNow instance address.
source
Default “Kapacitor” source.
username
Username to use for basic HTTP authentication.
password
Password to use for basic HTTP authentication.
Options
The following ServiceNow event handler options can be set in a handler file or when using .serviceNow() in a TICKscript. These options set corresponding fields in the ServiceNow alert or event. For information about ServiceNow alerts, see Manually create an alert.
All the handler options above support templates with the following variables: ID, Name, TaskName, Fields, and Tags, the same as in the AlertNode.message.
By default, the handler maps the Kapacitor values below to the ServiceNow Alert or Event fields as follows:
TICKscript examples
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_user" > 90)
        .stateChangesOnly()
        .message('Hey, check your CPU')
        .serviceNow()
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_user" > 90)
        .message('Hey, check your CPU')
        .serviceNow()
        .node('{{ index .Tags "host" }}')
        .type('CPU')
        .resource('CPU-Total')
        .metricName('usage_user')
        .messageKey('Alert: {{ .ID }}')
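For reference, the second TICKscript above could also be expressed as a topic handler file. The sketch below is an assumption based on the TICKscript property names; check the exact kind value and option key spellings against the handler file documentation for your Kapacitor version:

id: servicenow-cpu
topic: cpu
kind: servicenow
options:
  node: '{{ index .Tags "host" }}'
  type: CPU
  resource: CPU-Total
  metric-name: usage_user
  message-key: 'Alert: {{ .ID }}'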
Support and feedback
Thank you for being part of our community! We welcome and encourage your feedback and bug reports for Kapacitor and this documentation. To find support, the following resources are available: | https://docs.influxdata.com/kapacitor/v1.6/event_handlers/servicenow/ | 2021-09-16T16:28:33 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.influxdata.com |
DoubleTappedRoutedEventArgs.Handled Property
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Gets or sets a value that marks the routed event as handled. A true value for Handled prevents most handlers along the event route from handling the same event again.
Equivalent WinUI property: Microsoft.UI.Xaml.Input.DoubleTappedRoutedEventArgs.Handled.
public: property bool Handled { bool get(); void set(bool value); };
bool Handled(); void Handled(bool value);
public bool Handled { get; set; }
var boolean = doubleTappedRoutedEventArgs.handled; doubleTappedRoutedEventArgs.handled = boolean;
Public Property Handled As Boolean
Property Value
true to mark the routed event handled. false to leave the routed event unhandled, which permits the event to potentially route further and be acted on by other handlers. The default is false. | https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.input.doubletappedroutedeventargs.handled?view=winrt-20348 | 2021-09-16T17:37:56 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.microsoft.com |
events
The cfy events command is used to view events of a specific execution.
Optional flags
These commands support the common CLI flags.
Commands
list
Usage
cfy events list [OPTIONS]
Display events for an execution
-e, --execution-id TEXT - The unique identifier for the execution. Mandatory.
Optional flags
--include-logs / --no-logs - Include logs in returned events. [default: True]
--json - Output events in a consumable JSON format.
--tail - Tail the events of the specified execution until it ends.
-t, --tenant-name TEXT - The name of the tenant on which the execution occurred. If unspecified, the current tenant is used.
-o, --pagination-offset INTEGER - The number of resources to skip; --pagination-offset=1 skips the first resource [default: 0]
-s, --pagination-size INTEGER - The max number of results to retrieve per page [default: 1000]
Example
$ cfy events list -e dcf2dc2f-dc4f-4036-85a6-e693196e6331
...
Listing events for execution id dcf2dc2f-dc4f-4036-85a6-e693196e6331 [include_logs=True]
2017-03-30 10:26:12.723  CFY <cloudify-nodecellar-example> Starting 'update' workflow execution
2017-03-30 10:26:13.201  CFY <cloudify-nodecellar-example> 'update' workflow execution succeeded
Total events: 2
...
delete
Usage
cfy events delete [OPTIONS] EXECUTION_ID
Delete events attached to a deployment.
EXECUTION_ID is the ID of the execution events to delete.
Optional flags
--include-logs / --no-logs - Include logs in returned events [default: True]
-t, --tenant-name TEXT - The name of the tenant on which the execution occurred. If unspecified, the current tenant is used.
Example
$ cfy events delete cloudify-nodecellar-example
...
Deleting events for deployment id cloudify-nodecellar-example [include_logs=True]
Deleted 344 events
...
Development Data Partnership Documentation¶
A partnership between international organizations and the private sector for the use of third-party data with shared principles, resources, and best practices.
🌟 Welcome to our Community and Documentation Hub!
Learn more and get involved!
Throughout this document, you will find information about:
🙌 Community : collaborate with and learn from the community
📖 Documentation : consult documentation, code snippets and examples
💡 Projects & News : get inspired by projects and stories from the community | https://docs.datapartnership.org/pages/index.html | 2021-09-16T16:43:05 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.datapartnership.org |
Market Analysis Report
This call returns a Morningstar market analysis for the specified company. Please note not all companies have reports.
Update History
No modifications have been made to this data layer since the initial release.
Report Request
IMPORTANT: Due to a known issue, please do not include request values for SubmittingOfficeID.
Overview
When requesting this feature, a valid D-U-N-S Number for a company, a product (format) preference and the appropriate product code will be required. For improved performance, include a country code of "US".
A trade up option exists to request data for a headquarters location when the requested organization is a branch.
NOTE: The trade up option is currently not available for this data layer.
Report sample(s):
- Market Analysis Report (PDF)
This report is delivered in Adobe Acrobat (PDF) format.
Global Availability
The MKT_ANL_RP report is entitled as "D&B Business Information Report (BIR) & Other D&B Reports". The following is a sample SOAP request for this report:
<rep:OrderCompanyReportRequest>
  <TransactionDetail>
    <ApplicationTransactionID>Sample</ApplicationTransactionID>
    <TransactionTimestamp>2011-06-17T07:32:12.111+05:30</TransactionTimestamp>
    <SubmittingOfficeID>123</SubmittingOfficeID>
  </TransactionDetail>
  <OrderCompanyReportRequestDetail>
    <InquiryDetail>
      <DUNSNumber>799901301</DUNSNumber>
      <CountryISOAlpha2Code>US</CountryISOAlpha2Code>
      <OrganizationIdentificationNumberDetail>
        <OrganizationIdentificationNumber>20</OrganizationIdentificationNumber>
        <OrganizationIdentificationNumberTypeCode>10</OrganizationIdentificationNumberTypeCode>
      </OrganizationIdentificationNumberDetail>
    </InquiryDetail>
    <ProductSpecification>
      <DNBProductID>MKT_ANL_RP</DNBProductID>
      <ProductFormatPreferenceCode>13204</ProductFormatPreferenceCode>
      <CharacterSetPreferenceCode>123</CharacterSetPreferenceCode>
      <LanguagePreferenceCode>39</LanguagePreferenceCode>
      <OrderReasonCode>6332</OrderReasonCode>
      <TradeUpIndicator>false</TradeUpIndicator>
      <ReturnOnlyInDateDataIndicator>true</ReturnOnlyInDateDataIndicator>
      <IncludeAttachmentIndicator>true</IncludeAttachmentIndicator>
    </ProductSpecification>
    <ArchiveDetail>
      <ArchiveProductOptOutIndicator>false</ArchiveProductOptOutIndicator>
      <ExtendArchivePeriodIndicator>true</ExtendArchivePeriodIndicator>
      <PortfolioAssetContainerID>1</PortfolioAssetContainerID>
    </ArchiveDetail>
    <InquiryReferenceDetail>
      <CustomerReferenceText>Sample</CustomerReferenceText>
    </InquiryReferenceDetail>
  </OrderCompanyReportRequestDetail>
</rep:OrderCompanyReportRequest>
</soapenv:Body>
</soapenv:Envelope>
Endpoint
Use the following endpoint for requesting this report. The {version} is dependent on the underlying service delivering the report.
NOTE: While "organizations" is part of this endpoint, there is no service by this name. Many D&B Direct calls have a similar structure; however, the {version} component is based on the SERVICE to which a given product is associated.
Testing
The following parameters may be used for D&B Direct 2.0 developer sandbox requests to retrieve successful responses. The data returned from sandbox requests may not represent actual values that this feature will deliver.
Report Response
Specification
Text reports are returned in a Base64 encoded format within the Report Content/Data tags. All other formats are returned using the Message Transmission Optimization Mechanism (MTOM) method. An option is available to return all report formats as MTOM attachments.
NOTE: The D-U-N-S Number returned in the response will be a nine-digit zero-padded, numeric value.
<![CDATA[ ]]>
Response Codes & Error Handling
Successful service requests will return a CM000 response code in the TransactionResult ResultID field. Otherwise, one of the D&B Direct standard response codes will be returned.
This operation may return the following response codes: CM001-CM005, CM007, CM008, CM011, CM012, PD001-PD006, and SC001-SC008.
Report Notes
The Product Availability Lookup feature may be utilized to determine if a particular report is available for a given D-U-N-S Number. | https://docs.dnb.com/direct/2.0/en-US/report/latest/ordercompanyreport/mktanl-soap-API | 2021-09-16T15:01:58 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.dnb.com |
Date: Sun, 26 Apr 2020 08:57:05 +0200 From: Ralf Mardorf <[email protected]> To: [email protected] Subject: Re: FreeBSD live USB stick Message-ID: <20200426085705.0838e5c9@archlinux> In-Reply-To: <503ac059-c4a5-d618-9b85-e154339e1f36@holgerdanske> <[email protected]> <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Sat, 25 Apr 2020 13:52:06 -0700, David Christensen wrote: >After the 128 GB Ultra Fit failed, I shopped around for high-endurance=20 >USB flash drives. These are hard to find, especially in larger >capacties. > > >I did find one OEM that makes industrial flash devices in various=20 >capacities and form factors, including USB: > > > > >Unfortunately, Cactus Technologies is an OEM and does not sell into=20 >retail channels. I contacted them, and they offered to sell me two 16=20 >GB drives (USB 2.0?) for $39 plus shipping with a lead time of 5 weeks=20 >ARO (if not in stock). > > >STFW 'industrial usb flash' there are a few other manufacturers and/or=20 >distributors. > > >The MacBook Pro has an SD Card slot. SanDisk high-endurance microSD=20 >cards are readily available, so I went with that: > > ce-uhs-i-microsd#SDSQQNR-032G-AN6IA > > >SanDisk also makes a "max endurance" model: > > e-uhs-i-microsd#SDSQQVR-032G-AN6IA > > >STFW I see that some people put these into USB adapters and use them >as live drives. If you run embedded systems with SD/ microSD slots=20 >(Rasperry Pi, etc.), this might be a better way to go. Hi David, in a German forum consensus is SD + USB adapter over USB stick, too [1]. For testing purpose I'll not cancel the order of the Toshiba 32 GB USB stick [2], read 150 MB/s, 11.50 =E2=82=AC including shipping costs [3]. Regards, Ralf [1] [2] 10729.html?utm_source=3Didealo&utm_medium=3Dcpc&utm_campaign=3DPreisverglei= ch&ref=3D109 [3] ;)
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=0+0+archive/2020/freebsd-questions/20200503.freebsd-questions | 2021-09-16T15:38:44 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.freebsd.org |
Date: Wed, 4 Dec 2013 18:35:03 +0100 From: Fleuriot Damien <[email protected]> To: "FreeBSD [email protected]" <[email protected]> Subject: Re: pkg repo not creating repo.txz on 8.4-STABLE Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Dec 4, 2013, at 6:07 PM, Fleuriot Damien <[email protected]> wrote: > Hello list, >=20 >=20 >=20 > I've got this tiny problem where issuing `pkg repo /tmp/repo/All` = won't yield a repo.txz file, anywhere at all. >=20 > /tmp/repo/All contains a single, very small python package (this is a = real, valid port), on purpose to minimize the size of ktrace dumps. >=20 >=20 > # pkg version > root@bsd8:/ # pkg -v > 1.2.1 >=20 > # Contents of /tmp/repo/All/ > root@bsd8:/ # ls -l /tmp/repo/All/ > total 4 > -rw-r--r-- 1 root wheel 2312 Dec 4 15:57 python2-2_1.txz >=20 > # Command used to create the repo > root@bsd8:/ # pkg repo /tmp/repo/All/ > Generating repository catalog in /tmp/repo/All/: done! >=20 > # Resulting files , note that digests and packagesite were generated = successfully > root@bsd8:/ # ls -l /tmp/repo/All/ > total 12 > -rw-r--r-- 1 root wheel 256 Dec 4 16:46 digests.txz > -rw-r--r-- 1 root wheel 712 Dec 4 16:46 packagesite.txz > -rw-r--r-- 1 root wheel 2312 Dec 4 15:57 python2-2_1.txz >=20 >=20 >=20 > I've tried gleaning info from both truss and ktrace, with the = following results from kdump: >=20 > # Actual ktrace > ktrace pkg repo -q /tmp/repo/All/ >=20 > # grep repo.txz during kdump > root@bsd8:/ # kdump | grep -C 8 repo.txz > 2254 initial thread CALL munmap(0x802400000,0xe00000) > 2254 initial thread RET munmap 0 > 2254 initial thread CALL close(0x3) > 2254 initial thread RET close 0 > 2254 initial thread CALL unlink(0x7fffffffe580) > 2254 initial thread NAMI "/tmp/repo/All//digests" > 2254 initial thread RET unlink 0 > 2254 initial thread CALL stat(0x7fffffffe170,0x7fffffffe0c0) > 2254 initial thread NAMI "/tmp/repo/All//repo.txz" > 2254 initial thread RET stat -1 errno 2 No such file or directory > 2254 initial thread CALL = sigprocmask(SIG_BLOCK,0x7fffffffe990,0x802004298) > 2254 initial thread RET sigprocmask 0 > 2254 initial thread CALL sigprocmask(SIG_SETMASK,0x802004298,0) > 2254 initial thread RET sigprocmask 0 > 2254 initial thread CALL = sigprocmask(SIG_BLOCK,0x7fffffffe950,0x802004298) > 2254 initial thread RET sigprocmask 0 > 2254 initial thread CALL sigprocmask(SIG_SETMASK,0x802004298,0) >=20 >=20 > Meh, what gives, no such file ? >=20 >=20 >=20 >=20 > Out of curiosity and to prove I'm not trying to get anyone else to do = my homework, I've taken the liberty of grabbing an earlier version of = `pkg` at: > = ar.xz >=20 > Building and using pkg-static from these sources does yield the = correct repo file : >=20 > root@bsd8:/tmp/pkg/pkg-1.0-rc6/pkg-static # ./pkg-static repo = /tmp/repo/ > Generating repo.sqlite in /tmp/repo/: done! > root@bsd8:/tmp/pkg/pkg-1.0-rc6/pkg-static # ls -l /tmp/repo/ > total 8 > -rw-r--r-- 1 root wheel 2316 Dec 4 16:52 python-2.7_1,2.txz > -rw-r--r-- 1 root wheel 1636 Dec 4 16:59 repo.txz >=20 >=20 >=20 >=20 > I'm going to look up other versions of pkg and try to narrow down the = one that borks things up for me. >=20 >=20 > -- > Dam >=20 OK got some more input. I've tried older `pkg` versions from the mastersite listed in = /usr/ports/ports-mgt/pkg/Makefile : Test results are as follows, every `pkg` was compiled from source: 1.0rc6: OK, creates repo.txz 1.1.4: OK, creates repo.txz + packagesite.txz + digests.txz 1.2.0b1: NOK, doesn't create repo.txz 1.2.0b2: NOK, doesn't create repo.txz 1.2.0rc1: NOK, doesn't create repo.txz 1.2.0: NOK, doesn't create repo.txz 1.2.1: NOK, doesn't create repo.txz Since there are many 1.1.4 bugfix releases of `pkg`, I'll try getting my = hands on each and test them in turn. 
Obviously, something broke for me (and possibly others ?) somewhere = between 1.1.4 and 1.2.0b1
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=325883+0+archive/2013/freebsd-questions/20131208.freebsd-questions | 2021-09-16T16:57:11 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.freebsd.org |
Configure ERX Router
This guide will walk you through configuring a Ubiquiti EdgeRouter X.
Required Hardware
- ERX Router and power cable
- Ethernet cable
- Computer
- USB Ethernet adapter (if computer doesn't have ethernet port)
Setup Steps
Set static IP on computer
See ./static-ip
Wire up ERX
- Plug the ERX into its power cable, and plug the power cable into an outlet.
- Connect the eth0 port of the ERX to your computer with an Ethernet cable, using the USB Ethernet adapter if you don't have an Ethernet port.
Configure ERX
- Download the ERX config file
- Navigate to the ERX portal (by default, https://192.168.1.1) in your browser
Log into the portal with username ubnt, password ubnt.
On the Use wizard? prompt, press no.
Press the System tab on the bottom of the page.
Under the Restore Config section, press Upload a file and select the ERX config file you downloaded.
The ERX will reboot using the new configuration.
- That's it! If you need to do more configuration, you can log back into the portal using the username
pcwadmin, and a password that you can get from the project maintainers. | https://docs.phillycommunitywireless.org/en/latest/guides/configure-erx/ | 2021-09-16T15:36:11 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['../../assets/images/erx/hardware.jpg', 'Hardware'], dtype=object)
array(['../../assets/images/erx/wiring.jpeg', 'Ports'], dtype=object)
array(['../../assets/images/erx/eth0.jpeg', 'Ports'], dtype=object)] | docs.phillycommunitywireless.org |
All PutEvent modifiers must be used after the Stream driver has initiated and before it has terminated. TD_Unavailable is returned when a modifier cannot be used.
The following table lists modifiers that are used with the PutEvent function of the Connection object to modify the Rate and Periodicity values of the Stream driver in the middle of the stream job. | https://docs.teradata.com/r/JjNnUlFK6_12aVpI0c06XA/eOVWQx5aWrbuj9Bx3HJX~Q | 2021-09-16T16:43:06 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.teradata.com |
Modify the default MariaDB administrator password.
Change the MariaDB root password
You can modify the MariaDB password using the following command at the shell prompt:
$ installdir/mariadb/bin/mysqladmin -p -u root password NEW_PASSWORD
Reset the MariaDB root password
NOTE: Depending on the version you have installed, you may find the MariaDB files at installdir/mysql
If you don’t remember your MariaDB root password, you can follow the steps below to reset it to a new value:
Create a file in /tmp/mysql-init with the content shown below (replace NEW_PASSWORD with the password you wish to use):
UPDATE mysql.user SET Password=PASSWORD('NEW_PASSWORD') WHERE User='root';
FLUSH PRIVILEGES;
Stop the MariaDB server:
$ sudo installdir/ctlscript.sh stop mariadb
Start MariaDB with the following command:
For Bitnami installations following Approach A (using Linux system packages):
$ sudo installdir/mariadb/bin/mysqld_safe --defaults-file=installdir/mariadb/conf/my.cnf --init-file=/tmp/mysql-init 2> /dev/null &
For Bitnami installations following Approach B (self-contained installations):
$ sudo installdir/mariadb/bin/mysqld_safe --defaults-file=installdir/mariadb/my.cnf --init-file=/tmp/mysql-init 2> /dev/null &
Restart MariaDB:
$ sudo installdir/ctlscript.sh restart mariadb
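At this point you can optionally confirm that the new password works by opening a client session (the client path below assumes the same installdir layout used in the commands above):

$ installdir/mariadb/bin/mysql -u root -p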
Remove the init script:
$ rm /tmp/mysql-init | https://docs.bitnami.com/installer/apps/redmine/administration/change-reset-password-mariadb/ | 2021-09-16T16:43:06 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.bitnami.com |
Components Fields Field Groups/de
From Joomla! Documentation
Description
This is the back-end screen where you can add and edit Field Groups. A batch process can be applied to the field groups marked with a check-mark in the corresponding check-mark boxes. To use it, select one or more field groups from the table of field groups, then:
- Select one or more field groups.
- When all of the settings are entered, click on Process to perform the changes. A message "Batch process completed successfully." will show.
Note that nothing will happen if you don't have any items selected or if you have not set a new language or access level.
- To manage Fields: Article Manager Fields
- To manage Featured Articles: Article Manager: Featured Articles | https://docs.joomla.org/Help39:Components_Fields_Field_Groups/de | 2021-09-16T15:08:04 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['/images/thumb/f/fe/Help30-Article-Manager-columns-field-Groups-en.png/800px-Help30-Article-Manager-columns-field-Groups-en.png',
'Article Fields Columns Header'], dtype=object) ] | docs.joomla.org |
Size of the Market
The long-term market we are aiming for is:
SNS (Social Network Sites): Based on publicly available data up to 25 January 2021, at least 17 social media platforms have 300 million or more monthly active users
Facebook has 2.740 billion monthly active users.
YouTube’s potential advertising reach is 2.291 billion
WhatsApp has around 2 billion monthly active users
Facebook Messenger has around 1.3 billion monthly active users
Instagram’s potential advertising reach is 1.221 billion
WeChat (inc. Weixin 微信) has 1.213 billion monthly active users
TikTok has 689 million monthly active users
QQ (腾讯QQ) has 617 million monthly active users
Douyin (抖音) has 600 million daily active users (note: monthly active users may be higher)
Sina Weibo has 511 million monthly active users
Telegram has 500 million monthly active users
Snapchat’s potential advertising reach is 498 million
Kuaishou (快手) has 481 million monthly active users
Reddit has around 430 million monthly active users
Pinterest has 442 million monthly active users
Twitter’s potential advertising reach is roughly 353 million
Quora has around 300 million monthly active users
Archiving license for stills and videos: estimated at US $3.4 billion (global stills market will exceed US $6 billion by 2023)
Social advertising spend: US $25.98 billion
Influencer marketing, currently worth US$9.7 billion, will reach US$13.8 billion in 2021.
Users of e-commerce marketing
In the latest statistics released by Facebook, it is obvious that this social network is losing its attraction to teenagers and young people, which has significantly hindered its growth. On the other hand, newer platforms like Instagram and Snapchat see double-digit growth for this same demographic, and rising star TikTok has also been catching up with the older, established platforms. This shows that young users favor more trendy and intuitive communication platforms. We believe our crypto reward system puts us in a unique position to engage this audience.
The young generation's social network
Text-based social networks such as Facebook and Twitter no longer attract young people, while photo and video social networks have become more and more popular. The younger generation is also the dynamic generation that uses e-commerce to shop.
Paying for advertising through social networks to reach users has become indispensable. This type of advertising consumes a growing share of business advertising budgets each year, with significant growth figures.
Reflection 2008 doesn't support EXTRA! macros and COM applications that contain certain methods and properties from the classes listed in this section.
In this section
ExtraAreaChangedWait Class
ExtraFTPOptions Class
ExtraHostOptions Class
ExtraKermitOptions Class
ExtraMenuEdit Class
ExtraModemSetup Class
ExtraOIA Class
ExtraQuickPad Class
ExtraQuickPads Class
ExtraScreen Class
ExtraSerialSetup Class
ExtraSession Class
ExtraSessions Class
ExtraSystem Class
ExtraTelnetSetup Class
ExtraToolbar Class
ExtraToolbars Class
ExtraWaits Class
ExtraZmodemOptions Class | https://docs.attachmate.com/Reflection/2008/R1SP1/Guide_VBA/14287.htm | 2021-09-16T16:07:41 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.attachmate.com |
Date: Sat, 3 Feb 2007 10:33:47 +0100 From: Pieter de Goeje <[email protected]> To: [email protected] Cc: "Belanger, Benoit" <[email protected]>, [email protected] Subject: Re: Help with autostarting Apache Message-ID: <[email protected]> In-Reply-To: <000001c74742$612951e0$237bf5a0$@[email protected]> References: <000001c74742$612951e0$237bf5a0$@[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Saturday 03 February 2007 04:21, Belanger, Benoit wrote: > Hi, > > > > It seems that I am unable to autostart apache 2.2.4 at boot time with > FreeBSD 6.2. Apache is working fine once loaded manually but I really need > it to run by itself at system startup since this system will be left > unattended for long periods. > > > > It seems that adding apache2_enable="YES" in rc.conf does not produce the > desired result. Can anybody tell me what I am doing wrong? Change it to apache22_enable="YES" and you'll be fine :) (Also see pkg-message in $PORTSDIR/www/apache22) > > > > Otherwise, this version of FreeBSD is working just fine (almost > flawlessly). > > > > Thanks in advance. > > > > Benoit Belanger > > Securidata > > 514.748.4838 (Bureau) > > 514.945.3647 (Mobile) > > [email protected] > > <> Cheers, Pieter de Goeje
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1580055+0+archive/2007/freebsd-questions/20070204.freebsd-questions | 2021-09-16T16:57:45 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.freebsd.org |
Working with Long Characteristic Values
Description
According to the Bluetooth specification, the maximum size of any attribute is 512 bytes. However, the Maximum Transmission Unit (MTU), i.e., the largest amount of data that can be exchanged in a single GATT operation is usually less than this value. As a result, some characteristics may not be read/written with a single GATT operation. If the data to be read or written is larger than the MTU, a long read/write operation must be used. This example demonstrates how to implement this. The attached application handles reading and writing a characteristic of 512 bytes.
Reading
When reading a user characteristic longer than MTU, multiple gatt_server_user_read_request events will be generated on the server side, each containing the offset from the beginning of the characteristic. The application code must use the offset parameter to send the correct chunk of data.
Writing
Characteristics can be written by calling gecko_cmd_gatt_write_characteristic_value. If the characteristic data fits within MTU – 3 bytes, a single operation is used. Otherwise, the write long procedure is used. The write long procedure consists of a prepare write request operation and an execute write request operation. A maximum of MTU – 5 bytes can be sent in a single prepare_value_write operation. The application can also access these operations directly by calling gecko_cmd_gatt_prepare_characteristic_value_write() and gecko_cmd_gatt_execute_characteristic_value_write(). This is a useful approach if the size of the characteristic is greater than 255 bytes.
Notifying/Indicating
Notifications and indications are limited to MTU – 3 bytes. Since all notifications and indications must fit within a single GATT operation, the application does not demonstrate them.
Setting up
Get started by creating an SoC-Empty application for your chosen device in Simplicity Studio.
After that’s complete, take the attached app.c and copy it to your project folder.
Open app.h and change the definition of DEBUG_LEVEL from 0 to 1 to enable printing.
Open the .isc file for your project and import the gatt.xml file attached by clicking the import icon on the right side of the GATT Configurator.
When the file has been imported, Save, click Generate, and build the project.
Flash the application onto two evaluation boards and open a terminal window, such as Teraterm or Simplicity Studio console for each. One will be the master and the other one will be the slave.
Usage
The attached application can operate in either slave or master mode. The application starts in slave mode. To switch to master mode, press PB0 on the WSTK.
Master
As soon as the device is switched to master mode, it begins scanning for a device advertising a service with the following UUID: cdb5433c-d716-4b02-87f5-c49263182377. When a device advertising this service is found, a connection is formed. The gecko_evt_gatt_mtu_exchanged event saves the MTU size for the connection, which is needed for writing the long characteristic later.
The master now discovers service and characteristic handles. After the long_data characteristic is found, the master performs a read of this characteristic by calling gecko_cmd_gatt_read_characteristic_value(). The size of this characteristic is 512 bytes so the read long procedure is always used.
After this process is complete, you’ll see a message indicating that the read has finished and to press PB1 to write a block of test data to the slave. Pressing PB1 on the WSTK triggers a write of an array of test data to this long characteristic. This action is handled by a helper function called write_characteristic(), which in turn uses a helper function called queue_characteristic_chunk. This function can handle any data size up to 512 bytes. Writing the characteristic data is handled by queuing data with as many calls to gecko_cmd_gatt_prepare_characteristic_value_write() as necessary. After all data is queued up, it is written with a call to gecko_cmd_gatt_execute_characteristic_value_write(). Because only one GATT operation can take place at a time for a given connection, the gatt_procedure_completed event is used to drive the process of queuing writes. To get the process started, queue_characteristic_chunk() is called directly from write_characteristic(). After that queue_characteristic_chunk() is called from the gatt_procedure_completed event. This ensures that the previous procedure is finished before attempting to start another. The master displays messages indicating how many bytes are written to the slave in each operation and the message “exec_result = 0x00” when complete.
Slave
Upon startup, the slave begins advertising the service mentioned above. This service contains a single user-type characteristic of 512 bytes. The gecko_evt_gatt_server_user_read_request event handler handles read requests from the master. Because the characteristic is larger than an MTU, this event handler uses the connection mtu size and offset parameters passed to the event to send the correct portion of the array to the master. This event will be generated as many times as necessary to allow reading the entire characteristic.
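The following sketch shows the general shape of that handler. It assumes the Gecko SDK 2.x BGAPI naming used elsewhere in this example (gecko_evt_*/gecko_cmd_*), plus application variables long_data (the 512-byte array) and connection_mtu (saved when the MTU was exchanged); verify the exact structure and field names against your SDK version, and note that error handling is omitted.

case gecko_evt_gatt_server_user_read_request_id: {
    uint16_t offset = evt->data.evt_gatt_server_user_read_request.offset;
    uint16_t remaining = sizeof(long_data) - offset;
    /* An ATT read response carries at most MTU - 1 bytes of data. */
    uint16_t chunk = (remaining > connection_mtu - 1) ? (connection_mtu - 1) : remaining;
    gecko_cmd_gatt_server_send_user_read_response(
        evt->data.evt_gatt_server_user_read_request.connection,
        evt->data.evt_gatt_server_user_read_request.characteristic,
        0, /* 0 = success ATT error code */
        chunk,
        &long_data[offset]);
    break;
}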
A gatt_server_attribute_value event is generated for each queued write performed by the master and a gatt_server_execute_write_completed event is generated when all of the queued writes have been completed. The result parameter indicates whether an error has occurred. A user_write_request response must be sent by the application for each queued write, which is handled in the user_write_request event handler. | https://docs.silabs.com/bluetooth/2.13/code-examples/stack-features/gatt-protocol/working-with-long-characteristic-values | 2021-09-16T16:45:09 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.silabs.com |
Provide' => ($config->get('vote_score_max')), ), ); } } | https://docs.backdropcms.org/api/backdrop/core%21modules%21node%21node.api.php/function/hook_ranking/1 | 2021-09-16T15:47:23 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.backdropcms.org |
Incoming Lync To Phone calls aren't transferred to Exchange Online Unified Messaging voice mail
Problem
Consider the following scenario. You are set up for Office 365, and you have valid Skype for Business Online (formerly Lync Online) and Exchange Online licenses. Lync To Phone and Exchange Online voice mail are set up according to the documentation. You can answer calls that are made to the Lync To Phone number, and you can make calls to other numbers.
In this scenario, if an incoming call isn't answered, the call isn't transferred to Exchange Online Unified Messaging voice mail as expected. Instead, the person who is calling you hears a fast busy tone. The call never is transferred to voice mail.
Solution
To resolve this issue, use the Lync Remote Connectivity Analyzer to make sure that the DNS records for your domain are set up correctly. To do this, follow these steps:
In a web browser, go to Lync Remote Connectivity Analyzer.
Select Office365 Vanity/Custom Domain Name Settings Test for Lync, and then click Next.
Type the sign-in address that you use for Skype for Business Online, type the CAPTCHA text, click to select the check box to agree to the terms, and then click Perform Test.
Review the results, and make sure that the _sipfederationtls._tcp.<domain>.com DNS SRV record resolves successfully and is configured correctly.
The following is an example screenshot of the results of the test:
More Information
For calls to be transferred to voice mail, all DNS records must be configured correctly. Be aware that the _sipfederationtls._tcp.domain.com DNS SRV record must be added, even if you do not federate with any other domains. This is because of how the Skype for Business Online servers communicate with the Exchange Online servers.
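As a quick command-line spot check (replace contoso.com with your own domain), you can also query the record directly; for Skype for Business Online the record typically points to sipfed.online.lync.com on port 5061:

nslookup -type=SRV _sipfederationtls._tcp.contoso.com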
Still need help? Go to Microsoft Community. | https://docs.microsoft.com/en-us/skypeforbusiness/troubleshoot/online-phone-system/incoming-lync-to-phone-calls-not-transferred | 2021-09-16T15:31:01 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.microsoft.com |
dropdb
dropdb
Removes a database.
Synopsis
dropdb [connection-option ...] [-e] [-i] dbname dropdb -? | --help dropdb -V | - in order to drop the target database. If not specified, the postgres database will be used; if that does not exist (or if it is the name of the database being dropped), template1 will be used. | https://docs.greenplum.org/6-13/utility_guide/ref/dropdb.html | 2021-04-10T14:22:12 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.greenplum.org |
The below Security Policy applies only to customers with an existing New Relic agreement in place that explicitly references this Security Policy applying to the Service purchased in an Order. Capitalized terms not defined below shall take on the meaning set forth in such New Relic agreement.
New Relic Security Policy
1. Data Security
1.1. New Relic shall establish and maintain data security procedures and other safeguards designed to protect against the loss, theft or other unauthorized access or alteration of Customer Data in the possession or under the control of New Relic or to which New Relic has access, which are no less rigorous than accepted security standards in the industry.
1.2. New Relic shall maintain an information security policy that outlines a definition of information security and its overall objectives; a framework for setting control objectives and controls, including the structure of risk assessment and risk management; a brief explanation of the compliance requirements, and procedures for managing information security incidents.
2. Data Access
2.1. Access to Customer Data stored on New Relic’s systems shall not be granted to members of New Relic unless they have been uniquely identified and have sufficient credentials.
2.2. Access permissions shall be established in a manner that allows for the minimum access level(s) required for each employee.
2.3. Access to Customer Data shall be logged with sufficient information to determine the nature and scope of any inappropriate access.
3. Server Security
3.1. New Relic shall establish and follow reasonable server configuration guidelines and processes to prevent unauthorized access to Customer Data.
3.2. New Relic shall establish and follow reasonable configuration change management procedures for its servers containing Customer Data.
4. Network Security
4.1. New Relic network architecture shall be designed to limit site access and restrict the availability of information services that are considered to be vulnerable to attack.
4.2. New Relic shall utilize SSL certificates for all Internet activity. By default, Customer Data transmitted to and from the New Relic network shall be sent over encrypted medium or an encrypted format.
4.3. New Relic network shall use IDS technologies for network intrusion detection.
4.4. Access to New Relic systems containing Customer Data shall be restricted to authorized personnel.
5. Security Audits
5.1. New Relic shall conduct at least annually a SOC 2 or industry equivalent audit. New Relic shall provide to Customer audit results upon request, and shall explain and provide remediation plans to correct any problems to the extent reasonably possible.
6. Security and Incident Response
6.1. New Relic shall maintain an Information Security Incident Response plan, and make that plan available to Customer if requested.
6.2. In the event of an actual theft, loss, or unauthorized access of Customer Data by New Relic’s personnel and/or any unauthorized individual or entity, New Relic shall: (a) investigate such breach, (b) attempt to cure such breach, and (c) provide notification to Customer that describes such breach.
7. Disaster Recovery
7.1. New Relic shall have in effect a disaster recovery plan designed to respond to both a component failure of New Relic equipment within its data center and a catastrophic loss of service. This plan shall include documented policies and procedures to restore service in the event of either type of failure.
7.2. New Relic shall establish and follow backup and restore procedures for servers containing Customer Data.
8. Copies and Removal
8.1. In addition to any obligations of New Relic in the Agreement, upon expiration or termination of this Agreement for any reason: (a) New Relic shall, and shall cause its personnel, to cease and desist all access and use of any Customer Data, (b) New Relic shall delete all copies of Customer Data within ninety (90) days.
9. Disclosure by Law
9.1. In the event the New Relic is required by law, regulation, or legal process to disclose any Customer Data, New Relic shall (a) give Customer, to the extent possible, reasonable advance notice prior to disclosure so Customer may contest the disclosure or seek a protective order, and (b) reasonably limit the disclosure to the minimum amount that is legally required to be disclosed.. | https://docs.newrelic.com/docs/licenses/license-information/referenced-policies/security-policy/ | 2021-04-10T14:48:05 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.newrelic.com |
Sidra API module: Ingestion
It contains endpoints to provide asset ingestion into the platform.
Inference
The metadata inference is a process by which the metadata of an asset is inferred based on some information about the asset; depending on the type of information provided, one endpoint or the other will be used. The information can be:
1. The result of a SQL query
The API endpoint used for that is /api/inference/dbinference. It returns the list of entities and attributes inferred. The endpoint requires the following parameters:
2. A list of data types
The API endpoint used is /api/inference/dbextractinference. It returns the list of entities and attributes inferred. The endpoint requires the following parameters:
Naming convention
The entities have a field named RegEx that contains a regular expression that identifies which assets will be associated with the entity. When inferring an entity from a table, the RegEx field is populated with a regular expression composed of three parts:
- prefix will be the name of the schema of the table in lower case followed by underscore. If it is a Transact SQL source and the schema is the default dbo, the prefix will be omitted.
- tableName will be the name of the table in lower case.
- suffix will depend on the fileNameStyle selected. The options can be seen in the following table:
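For illustration, using hypothetical names: for a table Orders in a Sales schema, the generated expression would begin with the prefix and table name sales_orders, followed by whichever suffix pattern corresponds to the selected fileNameStyle.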
Database sources¶
The
QuerySource identifies the database source of the SQL query:
Database Types¶
The
dbTypes is an array with the information of the metadata of an entity in a format that can be easily populated with information extracted from the data source database. The JSON structure is the following:
Ingestion¶
It allows to register a new asset in the platform. More information is available in the Data Ingestion overview section. | https://docs.sidra.dev/Sidra-Data-Platform/Sidra-Core/Sidra-api/Ingestion-api-module/ | 2021-04-10T14:05:49 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.sidra.dev |
Set up Google Analytics tracking on your campaigns
To make your campaigns traceable in Google Analytics, you need to enter an UTM code in Rule. "UTM" stands for urchin tracking module and is a format from Google used to track links in Google Analytics. Below we describe how to do it.
1. Start by taking a look at the following inställningar
- Make sure you've entered your domains in your account settings as shown in the example below:
2. To get the right tracking on your campaign, then set up an UTM code. This is done before you send your campaign, in Summary & Schedule. Here you have the option to specify two types; "Utm_campaign" and "own Utm_term". Specify both to optimize the results of your analyses.
- utm_campaign is the name of your campaign, such as the name of your campaign. "sommarrea_v28."
- utm_term is a compilation of your keywords/keywords, such as"campaign+shoes". The UTM term provides additional ability to identify keywords, but usually it is enough to enter utm-campaign. However, utm term should be specified if you previously marked paid keyword campaigns, then enter the same keywords here. (For more information on setting UTM, you can also visit google's site, here).
- Rule then automatically adds, utm_source = rule and utm_medium = email.
3. If you want a unique identification on a link, you can add it under Utm content (see picture below) when creating links in the email builder. Otherwise, each link that goes to the domains is tagged with utm_source=rule&utm_medium=email& utm_campaign=xxx (the campaign name you specified), when you send out the campaign.
4. Once the settings are made and the promotion is sent, you can under the statistics of the campaign, click on the "Google" tab, log in with your Google Analytics account and see statistics. See the results in the image example below:
Do you have questions about setting up Google Analytics tracking on your campaigns? Describe your case to us on [email protected] and you will receive quick help.
Good luck!
/Team Rule | https://en.docs.rule.se/article/101-tracking-av-kampanj-via-google-analytics | 2021-04-10T14:47:24 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/575878229033607a88240260/images/598dc27d2c7d3a73488be897/file-iaBudMpyZ8.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/575878229033607a88240260/images/5995abcb042863033a1c11cb/file-jX0IXTeJwR.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/575878229033607a88240260/images/5811f8c19033604deb0eb838/file-bifNnSjqMX.png',
None], dtype=object) ] | en.docs.rule.se |
The Add Static IPv4 Address widget allows you to quickly add IP addresses to a specific network. You can configure a widget for each network to which you will be adding IPv4 address.
To add a static IPv4 address:
- From the Quick Actions widget that has the appropriate configuration and view configured, click Add Static are not displayed.
- In the Zone field, type or choose the zone to which the IPv4. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Adding-a-static-IPv4-address-using-the-Quick-Action-widget/8.3.0 | 2021-04-10T14:02:31 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.bluecatnetworks.com |
Address Manager can have multiple IP addresses and thus multiple interfaces assigned to its Ethernet connection. These interfaces can be used to provide extra functionality such as DNS views, xHA, and NAT traversal.
Beyond the default or physical interface used on each server, there are two other kinds of interfaces:
- Virtual interfaces are used and managed by Address Manager to represent DNS and to address Address Manager appliances in an xHA configuration.
- Published interfaces are used to associate addresses that may be on the other side of a NAT gateway or a firewall. These interfaces associate data sent to their IP address back to the appropriate managed server. Published interfaces can be added for single servers and xHA clusters.
During an import, Address Manager always assigns the deployment role to the network interface, even if the server is on the other side of an NAT device. As a result, you need to manually create a published interface for any Windows servers behind a NAT device and assign the deployment role to that published interface. For more details on managing Windows servers in Address Manager, refer to BlueCat Address Manager for Windows Server. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Configuring-Server-Interfaces/9.0.0 | 2021-04-10T14:19:30 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.bluecatnetworks.com |
murlin
- type:
- userplugin
- userplugin:
- murlin
- description:
- Adds URL Monitoring to Cacti
- version:
- 0.2.4
- pia:
- v2+
- cacti:
- 0.8.8b
-
- date:
- 2014-07-04
- plugins:
- url,
- website
Download
This plugin can be download here:
Purpose
Adds URL Monitoring capabilities to Cacti.
Features
- Graphs total download time of webpage
- Graph download size
- Shows last HTTP Code returned
- Proxy Support
- Proxy Authentication
- Regex Text Match on sites
- NEW - Availability Graphs
- NEW - Stacked detailed download time graphs
Prerequisites
None
Installation
Unzip the mURLin-x.x.zip file into the Cacti plugins dir.
Browse to plugin management and enable/install the plugin.
Add the user permissions to the required users and the mURLin tab should appear.
Usage
See above for instructions on use.
Contributors
Vincent Geannin - Proxy authentication and various bugfixes
Additional Help?
If you need additional help, please go to or reply to
Possible Bugs?
Issue with bulk importer not indexing correctly
Issue with availability showing incorrectly - Fixed in 0.2.3
When used with jQuerySkin/CactiEZ mURLin causes an error to be displayed - Fixed in 0.2.2
Only supports URL's of 256 chars or less - Fixed in 0.2.2
Slow responding sites will show incorrect values (1000ms) for all queries above 1 second (only affects version 0.2.0, if you are using this version then please update to 0.2.1) - fixed in 0.2.1
Regex test fails in the GUI only when using a proxy - fixed in 0.2.1
Site fails to load correcly in the GUI when using a proxy (graphs will be rendered correctly) - fixed in 0.2.0
mURLin fails to auto update - fixed in 0.1.7
Proxy failed to work when graphing even though the data would show in a preview - fixed in 0.1.7
mURLin fails to automatically follow redirections - fixed in 0.1.7
When previewing a site with multiple GET variables, the site doesn't display correctly - fixed in 0.1.6
When deleting the last URL from a host the data query will not correctly refresh until a new URL is added. This does not affect performance of the plugin.
This seems to be a quirk in the way a data query works in cacti, I believe if there are no indexes returned by a DQ cacti assumes that the DQ has not responded correctly.
This will be fixed in the next version. - Fixed in 0.1.5
Scrollbars are not correctly utilized on the Add URL menu - Fixed in 0.1.4
Validation of the URL add form doesn't work - Fixed in 0.1.4 | https://docs.cacti.net/userplugin:murlin | 2021-04-10T14:55:05 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.cacti.net |
dbForge Schema Compare for Oracle supports some SQL * Plus commands in SQL code. Use these SQL* Plus commands just like usual SQL statements. These commands are supported in both full and short forms.
The list of supported SQL*Plus commands includes:
If you try to execute script, containing unsupported SQL*Plus commands, warning messages will appear in the Error List and these commands will be ignored.
@, @@
Runs the specified script. When debugging SQL scripts, you can step into scripts, called with @ or @@ commands. The Call Stack window will show stack of the documents. dbForge Schema Compare for Oracle does not support running scripts from URLs. You should use filename or filename with a path.
ACCEPT
Stores the input value in a given substitution variable. When dbForge Schema Compare for Oracle executes the ACCEPT command, it shows a dialog to enter a variable value. If you have specified a prompt, it will be displayed as the title of the dialog. Otherwise, default title *Assign value to variable
CLEAR
Clears the Output window.
CONNECT
Connects to the Oracle server. If the connection with entered connection parameters already exists, the existing connection will be opened. Otherwise, a new connection is created. If you have not specified the password, the connection dialog will appear. The CONNECT command in the form CONNECT user/password@host:port/sid creates the Direct connection.
DEFINE
Specifies a user or predefined variable and assigns a CHAR value to it, or lists the value of a single variable or all variables. If you use it to list variable value, it will be displayed in the Data Window. Variables, declared with DEFINE can not be viewed in the Watches window when debugging.
DESCRIBE
Lists the column definitions for the specified table, view or synonym, or the parameter specifications for the specified function or procedure. The result is displayed in the Data window.
EXECUTE
Executes a single PL/SQL statement.
PROMPT
Displays specified text in the General pane of the Output window.
REMARK
Begins a comment.
VARIABLE
Declares a bind variable that can be referenced in PL/SQL or shows variable or all variables with their data types. The result is displayed in the Data window. Bind variables, declared with VARIABLE command, can be viewed in the Watches window when debugging. | https://docs.devart.com/schema-compare-for-oracle/writing-and-executing-sql-statements/sql-plus-command-support.html | 2021-04-10T14:35:20 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.devart.com |
Nested name plus
_attributes=.
You can use this method directly, or more commonly the name of the method can
be an attribute in the updates for the base class, in which case
Mongoid will call the appropriate setter under the covers.
Note that this will work with any attribute based setter method in Mongoid. This includes:
update_attributes,
update_attributes! and
attributes=. | https://docs.mongodb.com/mongoid/master/tutorials/mongoid-nested-attributes/ | 2021-04-10T15:30:25 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.mongodb.com |
A common security technique for detecting domains that may be suspicious or be associated with bad actors such as hosting malware, phishing or botnet command and control, is to investigate domains that haven’t been seen before, i.e. are newly observed.
Deciding whether a domain is truly a new domain would involve deterministic methods, such as maintaining a database of all domains ever seen, and comparing all domain lookups against that database. Such a mechanism would not be scalable in a recursor, and so is best suited to offline analysis. However, determining candidate domains for such an offline service is a problem that can be solved in the recursor, given that sending all domain lookups to such an offline service would still be prohibitely costly, and given that the true number of newly observed domains is likely to be relatively small in a given time period.
A simple method to determine a candidate domain would simply be to check if the domain was not in the recursor cache; indeed this is a method used by many security researchers. However, while that does produce a smaller list of candidate domains, cache misses are still relatively common, particularly in deployments where techniques such as EDNS client-subnet are used.
Therefore, a feature has been developed for the recursor which uses probablistic data structures (specifically a Stable Bloom Filter (SBF): []). This recursor feature is named “Newly Observed Domain” or “NOD” for short.
The use of a probablistic data structure means that the memory and CPU usage for the NOD feature is minimal, however it does mean that there can be false positives (a domain flagged as new when it is not), and false negatives (a domain that is new is not detected). The size of the SBF data structure can be tuned to reduce the FP/FN rate, although it is created with a default size (67108864 cells) that should provide a reasonably low FP/FN rate. To configure a different size use the
new-domain-db-size setting to specify a higher or lower cell count. Each cell consumes 1-bit of RAM (per recursor thread) and 1-byte of disk space.
NOD is disabled by default, and must be enabled through the use of the following setting in recursor.conf:
new-domain-tracking=yes
Once enabled the recursor will keep track of previously seen domains using the SBF data structure, which is periodically persisted to the directory specified in the
new-domain-history-dir, which defaults to /var/lib/pdns-recursor/nod.
Administrators may wish to prevent certain domains or subdomains from ever triggering the NOD algorithm, in which case those domains must be added to the
new-domain-whitelist setting as a comma separated list. No domain (or subdomain of a domain) listed will be considered a newly observed domain.
There are several ways to receive the information about newly observed domains:
The setting
new-domain-log is enabled by default once the NOD feature is enabled, and will log the newly observed domain to the recursor logfile.
The setting
new-domain-lookup=<base domain> will cause the recursor to isse a DNS A record lookup to
<newly observed domain>.<base domain>. This can be a suitable method to send NOD data to an offsite or remote partner, however care should be taken to ensure that data is not leaked inadvertently.
If both NOD and protobuf logging are enabled, then the
newlyObservedDomain field of the protobuf message emitted by the recursor will be set to true. Additionally newly observed domains will be tagged in the protobuf stream using the tag
pdns-nod by default. The setting
new-domain-pb-tag=<tag> can be used to alter the tag.
A similar feature to NOD is Unique Domain Response (UDR). This feature uses the same probablistic data structures as NOD to store information about unique responses for a given lookup domain. Determining if a particular response is unique for a given lookup domain is extremly useful for determining potential security issues such as:
This is because well-behaved domains tend to return fairly stable results to DNS record lookups, and thus domains which don’t exhibit this behaviour may be suspicious or may indicate a domain under attack.
UDR is disabled by default - to enable it, set
unique-response-tracking=yes in recursor.conf.
The data is persisted to /var/log/pdns-recursor/udr by default, which can be changed with the setting
unique-response-history-dir=<new directory>.
The SBF (which is maintained separately per recursor thread) cell size defaults to 67108864, which can be changed using the setting
unique-response-db-size. The same caveats regarding FPs/FNs apply as for NOD.
Similarly to NOD, unique domain responses can be tracked using several mechanisms:
The setting
unique-response-log is enabled by default once the NOD feature is enabled, and will log the newly observed domain to the recursor logfile. | https://docs.powerdns.com/recursor/nod_udr.html | 2020-03-28T12:47:04 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.powerdns.com |
Software Development Description
- Description for debuggingdevices
- Description for App release
- Description for codes such as permissions, full screen, acquisition system versions, etc.
- Print and customer displaydescription
- External interfacedescription
- Description for camera codescanning
- Docking documentation for custom volume key
Sunmi has provided a simple & easy demo in (General Function
Modules ) to show how to use Print , Scan QR code ,
Use the functions of secondary screen, Face Pay , etc
Demo
Customer Screen Developer Documentation | https://docs.sunmi.com/en/documentation/desktop-products/d1s/ | 2022-06-25T08:05:48 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.sunmi.com |
Setting up a basic visual for aggregates
World Life Expectancydataset. In subsequent topics, this visual is used to demonstrate how the various analytic functions for aggregates work.
- Open a new visual.
- In the VISUALS menu, select the Lines visual type.
- Populate the shelves of the visual from the fields listed in the Data menu:
- X Axis
Add the field
year. Order it in ascending order.
- Y Axis
Add the field
population.
- Colors
Add the filed
country. Order it in ascending order.
- Filters
Add the field
year, and set it to the interval of 1950 through 2010.
Add the field
un_subregion, and set it to Northern Africa.
- Click REFRESH VISUAL to see the new line visual.
- Name the visual Basic Lines.
- Click SAVE. | https://docs.cloudera.com/data-visualization/7/using-analytic-functions/topics/viz-analytic-functions-basic.html | 2022-06-25T08:26:40 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.cloudera.com |
Branding issues that may occur when upgrading to SharePoint 2013
APPLIES TO:
2013
2016
2019
Subscription Edition
SharePoint in Microsoft 365
SharePoint 2013 introduces a new user interface that is lightweight, fast, and fluid. This UI is built by using new CSS styles, themes, and master pages. To get this new experience, you must upgrade to the new UI. But the significant changes that were made to support the new UI may break the upgrade story for some scenarios where you use custom branding.
In SharePoint 2010 Products, you may have branded your site in one of several different ways:
Applying a custom style sheet to your site that overrides the SharePoint default styles.
Applying a custom theme (THMX file) to your site.
Copying and changing a master page that is included with SharePoint 2013. create your custom branding again. This requires you to use.
For this reason, if your site collection contains custom branding, we recommend that, before you upgrade, you first create an evaluation site collection where you can test and re-create your custom branding in a SharePoint 2013 environment. For more information about an evaluation site collection, see Upgrade a site collection.
The following sections list branding issues that may occur when you upgrade to SharePoint 2013.
Custom CSS
The most common way to apply custom branding to a SharePoint 2010 Products site is to create a CSS file that contains styles that override the default SharePoint styles.
To make the new UI faster and more fluid, SharePoint 2013 introduced have to override. Create a CSS file for these styles, and then apply that CSS to your upgraded site.
Custom theme
In SharePoint 2010 Products, you can use an Office program such as PowerPoint 2010 to create a THMX file. Then you can upload that theme file to SharePoint 2010 Products and apply the theme to your site.
In SharePoint 2013, the theming engine was.
But there is no support to upgrade a THMX file from SharePoint 2010 Products to SharePoint 2013. If you applied a custom theme to the SharePoint 2010 Products site, when you upgrade to SharePoint 2013, the theme files remain in place. But the theme is no longer applied to the site, and the site reverts to the default theme.
To resolve this, you should first create an evaluation site collection and then use the new theming features in SharePoint 2013 to create the theme again. For more information about the new themes, see the following articles on MSDN:
Themes overview in SharePoint 2013
How to: Deploy a custom theme in SharePoint 2013 are available. If themes don't work for your scenario or you must have more extensive branding, we recommend that you use a publishing site together with Design Manager. But understand that if you invest in building custom master pages and page layouts, you may have to rework or update your design files during and after each SharePoint upgrade.
Copy and change a master page that ships with SharePoint 2013
In SharePoint 2010 Products, a common way to make minor customizations to the UI is to copy and change a master page that ships with SharePoint 2010 Products. For example, you might change the master page to remove or hide capabilities from users.
When you upgrade a SharePoint 2010 Products site to SharePoint 2013, the master page is reset to use the default master page in SharePoint 2013. Therefore, after upgrade, your site will display its custom branding. The custom master page that was created in SharePoint 2010 Products still lives in the site, but you should not apply the old master page to the new site because the new site will not display as expected.
To support the new UI in SharePoint 2013, changes were made to the default master pages. For this reason, you cannot apply a master page that was created in SharePoint 2010 Products to a site in SharePoint 2013.
To resolve this, you should first create an evaluation site collection, and then create the master page agained solution or by uploading the file to the master page gallery.
Important
SharePoint Foundation 2013 does not support publishing sites. You'll need SharePoint 2013 to use publishing sites.
Custom master page in a publishing site
If you want a fully branded site such as a corporate communications intranet site, you use a publishing site that has a fully custom master page and custom page layouts that are attached to the custom master page.
When you upgrade a SharePoint 2010 Products site to SharePoint 2013, the master page is reset to use the default master page in SharePoint 2013. Therefore, after upgrade, your site will not display its custom branding. The custom master page and page layouts created in SharePoint 2010 Products still live in the site, but you should not apply the old master page to the new site because the new site will not display as expected.
To resolve this issue, you should first create an evaluation site collection that is a publishing site, and then create the master page again in the SharePoint 2013 site. After you verify that the new master page works as expected, complete the following steps:
Export the master page as part of a design package.
Import the design package into the new site collection,
Apply the new master page to the site.
Custom content placeholders on a custom master page
Important
If your custom master page contains a custom content placeholder, and if custom page layouts also contain this custom content placeholder, an error may prevent the home page of your site from rendering at all after upgrade. Instead, after upgrade, you may see the error message: "An unexpected error has occurred."
To determine whether you have this issue, you can create an evaluation site collection that is also a publishing site, and then set the master page to the master page that ships with SharePoint 2013. If the site still displays, you don't have this issue. If the site doesn't display and you get an "unexpected error" with a correlation ID, you likely have this issue.
To resolve this issue, do the following:
Create an evaluation site collection that is a publishing site collection.
Create a SharePoint 2013 master page.
Add the custom content placeholder to the 2013 master page.
Apply the new master page to the site.
Create a page layout that does not contain the custom content placeholder.
The page layout will be associated with the new master page that was applied to the site.
Change all the pages that use the old page layout to use the new page layout.
You can manually edit each page individually in the browser and use the option on the Ribbon, or you can use the client-side object model for SharePoint to update pages programmatically.
Delete the old page layout that contains the custom content placeholder.
We recommend that you do not add custom content placeholders to your custom master page or page layouts.
See also
Other Resources
Troubleshoot site collection upgrade issues in SharePoint 2013
Review site collections upgraded to SharePoint 2013
Upgrade a site collection to SharePoint 2013
Run site collection health checks in SharePoint 2013
Overview of Design Manager in SharePoint 2013 | https://docs.microsoft.com/en-us/SharePoint/upgrade-and-update/branding-issues-that-may-occur-when-upgrading-to-sharepoint-2013?redirectedfrom=MSDN | 2022-06-25T08:44:56 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.microsoft.com |
Independent streamfwd (HEC) tests - TCP/UDP aggregation
This page provides performance test results for the
streamfwd binary sending data to indexers using HTTP Event Collector (HEC) and TCP/UDP aggregate
This documentation applies to the following versions of Splunk Stream™: 7.1.0, 7.1.1
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/StreamApp/7.1.1/DeployStreamApp/IndependentForwarderTests4 | 2022-06-25T08:28:19 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Using
webassets in standalone mode¶
You don’t need to use one of the frameworks into which
webassets can
integrate. Using the underlying facilities directly is almost as easy.
And depending on what libraries you use, there may still be some things webassets can help you with, see Integration with other libraries.
Quick Start¶
First, create an environment instance:
from webassets import Environment my_env = Environment( directory='../static/media', url='/media')
As you can see, the environment requires two arguments:
- the path in which your media files are located
- the url prefix under which the media directory is available. This prefix will be used when generating output urls.
Next, you need to define your assets, in the form of so called bundles, and register them with the environment. The easiest way to do it is directly in code:
from webassets import Bundle js = Bundle('common/jquery.js', 'site/base.js', 'site/widgets.js', filters='jsmin', output='gen/packed.js') my_env.register('js_all', js)
However, if you prefer, you can of course just as well define your assets
in an external config file, and read them from there.
webassets
includes a number of helper classes for some popular
formats like YAML.
Using the bundles¶
Now with your assets properly defined, you want to merge and minify them, and include a link to the compressed result in your web page. How you do this depends a bit on how your site is rendered.
>>> my_env['js_all'].urls() ('/media/gen/packed.js?9ae572c',)
This will always work. You can call your bundle’s
urls() method, which
will automatically merge and compress the source files, and return the
url to the final output file. Or, in debug mode, it would return the urls
of each source file:
>>> my_env.debug = True >>> my_env['js_all'].urls() ('/media/common/jquery.js', '/media/site/base.js', '/media/site/widgets.js',)
Take these urls, pass them to your templates, or otherwise ensure they’ll be used on your website when linking to your Javascript and CSS files.
For some templating languages,
webassets provides extensions to access
your bundles directly within the template. See Integration with other libraries for
more information.
Using the Command Line Interface¶
See Command Line Interface. | https://webassets.readthedocs.io/en/latest/generic/index.html | 2022-06-25T08:59:27 | CC-MAIN-2022-27 | 1656103034877.9 | [] | webassets.readthedocs.io |
Add events to experiments
This topic covers how to create and add an event key to your experiment in the Optimizely app.
For more information on how to integrate events into your application code with the SDK, see Track events.
Create an event in the Optimizely app
To create an event that tracks user actions in the Optimizely app:
- Navigate to the Events dashboard and click New Event.
- Define an event key, a unique identifier that you will.
Add events to your experiment
To add an event to your experiment in the Optimizely web app:
- Navigate to Experiment>Metrics.
- Click the event you created, or click Create New Event.
Track events in your application code
For more information, see Track events.
Updated 3 months ago
Did this page help you? | https://docs.developers.optimizely.com/experimentation/v3.1.0-full-stack/docs/create-events | 2022-06-25T08:39:59 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['https://files.readme.io/d9b3a43-events_1.png', 'events_1.png'],
dtype=object)
array(['https://files.readme.io/d9b3a43-events_1.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/ddc74fb-new_event_1.png',
'new_event_1.png'], dtype=object)
array(['https://files.readme.io/ddc74fb-new_event_1.png',
'Click to close...'], dtype=object) ] | docs.developers.optimizely.com |
Hi,
I am trying to establish whether or not we need to enable global VNet peering between two Hub networks in 2 separate Azure regions when designing a DR solution for Expressroute.
The design pattern we are following is straightforward enough and is documented here :
As you can see in the above article the two Hubs (in different regions) are not shown as being peered. However, there is another article on Microsoft Docs which does show 2 Hubs in 2 separate regions where the Hub networks are peered using Global VNet peering :
Which is the preferred approach here ?
Are there any disadvantages to enabling Global VNet peering in the above scenario (apart from the ingress/egress costs) ?
Regards,
Paul Lynch | https://docs.microsoft.com/en-us/answers/questions/307731/global-vnet-peering-with-expressroute-for-dr.html | 2022-06-25T09:09:19 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.microsoft.com |
Jinsoku LC-40 additional user guide
Laser Engraving Simplified!
For more engraving tips, please contact us. ([email protected])
What can I do with LC-40?
With a laser engraver that prioritizes ease of use and engraving power like the Jinsoku LC-40, you can easily add a touch of personalization to any ordinary products or those one-of-a-kind gifts making them even more memorable.
| https://docs.sainsmart.com/article/czkbax25no-jinsoku-lc-40-additional-engraving-guide | 2022-06-25T08:18:09 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['https://files.helpdocs.io/hj2i3yt73y/articles/czkbax25no/1652775413050/101-60-lc-40-06.jpg',
None], dtype=object) ] | docs.sainsmart.com |
EOL: SentryOne Test reached its end of life date on June 15, 2022. See the Solarwinds End of Life Policy for more information.
Data Management actions are responsible for gathering and / or manipulating data. Each action can be used to fulfil a unique role within a test:
- They can prepare tests by ensuring the expected results are up to date.
- Clean-up an environment before or after a test execution.
- Summarize data in preparation for asserts.
- Generate test data for usage within a test.
Data Generation
Required Assets
- Connection
- Data Generation Solution
Once the editor has loaded, you need to input the following properties:
Execute Query Command
Execute Query Command Editor
Execute Query Grid
Execute Query Grid Editor
Execute Query Scalar
Execute Query Scalar Editor
Extract Grid Checksum
Extract Grid Checksum
Filter Grid
Filter Grid Editor
Get Grid Row Count
Get Grid Row Count Editor
Load Excel Scalar
Load Excel Data Grid
Load Flat File Data
Load Flat File Data Editor
Load Grid From Asset
Load Grid Editor
Load Tabular File Table
Table Row Count
| https://docs.sentryone.com/help/sentryone-test-data-management-actions | 2022-06-25T07:27:38 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c630ea28e121c8b13778493/n/s1-test-data-generation-properties-20185.png',
'SentryOne Test Data Generation Properties Version 2018.5 SentryOne Test Data Generation Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c630f99ec161cb501683c83/n/s1-test-execute-query-command-properties-20185.png',
'SentryOne Test Execute Query Command Properties Version 2018.5 SentryOne Test Execute Query Command Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c63104d8e121c9413778462/n/s1-test-execute-query-grid-properties-20185.png',
'SentryOne Test Execute Query Grid Properties Version 2018.5 SentryOne TestExecute Query Grid Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c6311bbec161cad01683ca7/n/s1-test-execute-query-scalar-properties-20185.png',
'SentryOne Test Execute Query Scalar Properties Version 2018.5 SentryOne Test Execute Query Scalar Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c63138a6e121ce87f15e107/n/s1-test-extract-grid-checksum-properties-20185.png',
'SentryOne Test Extract Grid Checksum Properties Version 2018.5 SentryOne Test Extract Grid Checksum Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c631464ec161c567c683d75/n/s1-test-filter-grid-element-editor-20185.png',
'SentryOne Test Filter Grid Element Editor Version 2018.5 SentryOne Test Filter Grid Element Editor'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c631557ec161cb201683cc6/n/s1-test-get-grid-row-count-properties-20185.png',
'SentryOne Test Get Grid Row Count Properties Version 2018.5 SentryOne Test Get Grid Row Count Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c6318598e121c9813778497/n/s1-test-load-excell-cell-scalar-properties-20185.png',
'SentryOne Test Load Excel Cell Scalar Properties Version 2018.5 SentryOne Test Load Excel Cell Scalar Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c88133f6e121c816d1864ed/n/s1-test-load-excel-data-grid-element-editor-20185.png',
'SentryOne Test Load Excel Data Grid Element Editor Version 2018.5 SentryOne Test Load Excel Data Grid Element Editor'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c8813048e121c2b14445fe6/n/s1-test-load-flat-file-data-element-editor-20185.png',
'SentryOne Test Load Flat File Data Element Editor Version 2018.5 SentryOne Test Load Flat File Data Element Editor'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c631b5c8e121c2e1a778446/n/s1-test-load-grid-from-asset-properties-20185.png',
'SentryOne Test Load Grid from Asset Properties Version 2018.5 SentryOne Test Load Grid from Asset Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c88122aad121ccb48442920/n/s1-test-load-tabular-file-table-element-editor-20185.png',
'SentryOne Test Load Tabular File Table Element Editor Version 2018.5 SentryOne Test Load Tabular File Table Element Editor'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c8812d88e121c7914445fcd/n/s1-test-table-row-count-element-editor-20185.png',
'SentyrOne Test Table Row Count Element Editor Version 2018.5 SentryOne Test Table Row Count Element Editor'],
dtype=object) ] | docs.sentryone.com |
These guidelines can help you lower your application risk in Veracode Software Composition Analysis.
- Download the latest version, or least-vulnerable version of the component.Note: The latest version of the component is not always the least vulnerable.
- Replace the vulnerable component with a different component with similar functionality.
- Use environmental controls to suppress application risk. If you are using the vulnerable portion of the component, try a workaround.
- Mitigate the functionality of the vulnerability or license in the component.
- Build your own secure component. | https://docs.veracode.com/r/About_Veracode_SCA_Remediation_Guidance | 2022-06-25T07:16:06 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.veracode.com |