content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Getting Started
After installation you should see two modules in Host-Extensions: “avt.SearchBoost.Input” and “avt.SearchBoost.Results”.
To start using Search Boost, you need to follow a few simple steps:
- Add SearchBoost Input module to a page;
- Add the SearchBoost Results module to the same page or another. Afterwards, you’ll want to link the input with the results module. This and actually all settings are done only from the input module:
As you will see in the settings page we have generally the same settings (and some particular new ones) as in version 2, but the interface is much cleaner.
Let’s start with the appearance of the search box and results items look. We have reduced the number of supported input templates to just six simple beautiful options: default, default-white, expandable, material-design, material-design-purple and orange-wide. But in case you’ve just upgraded from SearchBoost v2, you will see that we have kept the old templates - just in case you have some nice custom template. However, we will not be able to support the old templates.
Ok, now let’s dig into the fun part: indexing and searching. You can directly click the Index button but maybe you will first want to see what is set to be indexed. Check this in Document Search -> File Types: you should see some file types checked. This is self-explanatory; if you can’t check, then go to the documentation for file indexing. Then you should check the Search Sources section: there you select what pages + modules or folders + documents you want indexed.
For file indexing just remember: in order for SearchBoost to know about the files, DNN has to know about them first. So you have to make sure that the files are available by checking Admin->File Management.
Now let’s hit that Index button. At first you will see that Items in queue go up. This is mainly for debugging and represents any indexable item (module, document, url, custom database rule, etc). You should worry about this number only if it doesn’t go back to 0 after some time (~ 5 minutes).
We have implemented partial real-time indexing: when you add a new file or modify the view rights for the folder containing that file, we will have our triggers fire up and indexing the new files or according with the new folder rights, so will not have to worry about reindexing. But in case of changing the settings yourself, let’s say adding a new file type for indexing, you will have to reindex (by reindex we mean Clear Index + Index). You have this option displayed after every save:
Schedulers
Just a heads-up notice for what’s needed for SearchBoost in order to have automatic indexing: we have added two schedulers you can see in the Host->Schedule page. They should look like below. The main thing to check is that the Indexer item is fired often enough - don’t worry about performance penalty, if there is no new indexable content, it will finish in less than a second and the whole thing is running in a separate thread.
With the release of Search Boost 3.2 we have added an item and module count in the Portals section:
| https://docs.dnnsharp.com/search-boost/getting-started/getting_started.html | 2019-02-15T20:49:39 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['/search-boost/getting-started/Assets/5.png', None], dtype=object)
array(['/search-boost/getting-started/Assets/6.png', None], dtype=object)] | docs.dnnsharp.com |
Updating an install
By default, LibreNMS is set to automatically update. If you have disabled this feature then you can perform a manual update.
Manual update
If you would like to perform a manual update then you can do this by running the following command
as the
librenms user:
./daily.sh
This will update both the core LibreNMS files but also update the database structure if updates are available.
Advanced users
If you absolutely must update manually without using
./daily.sh then you can do so by running the following commands:
cd /opt/librenms git pull composer install --no-dev ./build-base.php ./validate.php
You should continue to run daily.sh. This does database cleanup and other processes in addition to updating. You can disable the daily.sh update process as described below.
Disabling automatic updates
LibreNMS by default performs updates on a daily basis. This can be disabled by setting:
$config['update'] = 0; | https://docs.librenms.org/General/Updating/ | 2019-02-15T21:57:33 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.librenms.org |
Periodic task API¶
Related ticket(s):
Problem Statement¶
SSSD contains several periodic tasks that implements custom periodic API. These APIs are more or less sophisticated but it does the same thing.
Current periodic tasks are:
- Enumeration
- Dynamic DNS updates
- SUDO - full and smart refresh
- Refresh of expired NSS entries
We want to replace these individual implementation with one back end wise API.
Implementation details¶
New error code: - ERR_STOP_PERIODIC_TASK struct be_ptask; typedef struct tevent_req * (*be_ptask_send_t)(TALLOC_CTX *mem_ctx, struct be_ctx *be_ctx, struct be_ptask *be_ptask, void *pvt); typedef errno_t (*be_ptask_recv_t)(struct tevent_req *req); enum be_ptask_offline { BE_PTASK_OFFLINE_SKIP, BE_PTASK_OFFLINE_DISABLE, BE_PTASK_OFFLINE_EXECUTE }; errno_t be_ptask_create(TALLOC_CTX *mem_ctx, struct be_ctx *be_ctx, time_t period, time_t first_delay, time_t enabled_delay, time_t timeout, enum be_ptask_offline offline, be_ptask_send_t send, be_ptask_recv_t recv, void *pvt, const char *name, struct be_ptask **_task); void be_ptask_enable(struct be_ptask *task); void be_ptask_disable(struct be_ptask *task); void be_ptask_destroy(struct be_ptask **task);
Terminology¶
- task: object of type be_ptask
- request: tevent request that is fired periodically and is managed by task
API¶
- struct be_ptask_task is encapsulated.
- be_ptask_create() creates and starts new periodic task
- be_ptask_enable(task) enable task and schedule next execution enabled_delay from now
- be_ptask_disable(task) disable task, cancel current timer and wait until it is enabled again
- be_ptask_destroy(task) destroys task and sets it to NULL
Schedule rules¶
- the first execution is scheduled first_delay seconds after the task is created
- if request returns EOK, it will be scheduled again to ‘last_execution_time + period’
- if request returns ERR_STOP_PERIODIC_TASK, the task will be terminated
- if request returns other error code (i.e. non fatal failure), it will be rescheduled to ‘now + period’
- if request does not complete in timeout seconds, it will be cancelled and rescheduled to ‘now + period’
- if the task is reenabled, it will be scheduled again to ‘now + enabled_delay’
When offline¶
Offline behaviour is controlled by offline parameter.
- If offline is BE_PTASK_OFFLINE_EXECUTE and back end is offline, current request will be executed as planned.
- If offline is BE_PTASK_OFFLINE_SKIP and back end is offline, current request will be skipped and rescheduled to ‘now + period’.
- If offline is BE_PTASK_OFFLINE_DISABLE, an offline and online callback is registered. The task is disabled immediately when back end goes offline and then enabled again when back end goes back online. | https://docs.pagure.org/SSSD.sssd/design_pages/periodic_tasks.html | 2019-02-15T21:28:19 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.pagure.org |
Package postaladdress
Overview ▹
Overview ▾
Package postaladdress is a generated protocol buffer package.
It is generated from these files:
google/type/postal_address.proto
It has these top-level messages: PostalAddress ¶:
type PostalAddress struct { // The schema revision of the `PostalAddress`. // All new revisions **must** be backward compatible with old revisions. Revision int32 `protobuf:"varint,1,opt,name=revision" json:"revision,omitempty"` // Required. CLDR region code of the country/region of the address. This // is never inferred and it is up to the user to ensure the value is // correct. See and // // for details. Example: "CH" for Switzerland. RegionCode string `protobuf:"bytes,2,opt,name=region_code,json=regionCode" json:"region_code,omitempty"` // Optional. BCP-47 language code of the contents of this address (if // known). This is often the UI language of the input form or is expected // to match one of the languages used in the address' country/region, or their // transliterated equivalents. // This can affect formatting in certain countries, but is not critical // to the correctness of the data and will never affect any validation or // other non-formatting related operations. // // If this value is not known, it should be omitted (rather than specifying a // possibly incorrect default). // // Examples: "zh-Hant", "ja", "ja-Latn", "en". LanguageCode string `protobuf:"bytes,3,opt,name=language_code,json=languageCode" json:"language_code,omitempty"` // Optional. Postal code of the address. Not all countries use or require // postal codes to be present, but where they are used, they may trigger // additional validation with other parts of the address (e.g. state/zip // validation in the U.S.A.). PostalCode string `protobuf:"bytes,4,opt,name=postal_code,json=postalCode" json:"postal_code,omitempty"` // Optional. Additional, country-specific, sorting code. This is not used // in most regions. Where it is used, the value is either a string like // "CEDEX", optionally followed by a number (e.g. "CEDEX 7"), or just a number // alone, representing the "sector code" (Jamaica), "delivery area indicator" // (Malawi) or "post office indicator" (e.g. Côte d'Ivoire). SortingCode string `protobuf:"bytes,5,opt,name=sorting_code,json=sortingCode" json:"sorting_code,omitempty"` // Optional. Highest administrative subdivision which is used for postal // addresses of a country or region. // For example, this can be a state, a province, an oblast, or a prefecture. // Specifically, for Spain this is the province and not the autonomous // community (e.g. "Barcelona" and not "Catalonia"). // Many countries don't use an administrative area in postal addresses. E.g. // in Switzerland this should be left unpopulated. AdministrativeArea string `protobuf:"bytes,6,opt,name=administrative_area,json=administrativeArea" json:"administrative_area,omitempty"` // Optional. Generally refers to the city/town portion of the address. // Examples: US city, IT comune, UK post town. // In regions of the world where localities are not well defined or do not fit // into this structure well, leave locality empty and use address_lines. Locality string `protobuf:"bytes,7,opt,name=locality" json:"locality,omitempty"` // Optional. Sublocality of the address. // For example, this can be neighborhoods, boroughs, districts. Sublocality string `protobuf:"bytes,8,opt,name=sublocality" json:"sublocality,omitempty"` // Unstructured address lines describing the lower levels of an address. // // Because values in address_lines do not have type information and may // sometimes contain multiple values in a single field (e.g. // "Austin, TX"), it is important that the line order is clear. The order of // address lines should be "envelope order" for the country/region of the // address. In places where this can vary (e.g. Japan), address_language is // used to make it explicit (e.g. "ja" for large-to-small ordering and // "ja-Latn" or "en" for small-to-large). This way, the most specific line of // an address can be selected based on the language. // // The minimum permitted structural representation of an address consists // of a region_code with all remaining information placed in the // address_lines. It would be possible to format such an address very // approximately without geocoding, but no semantic reasoning could be // made about any of the address components until it was at least // partially resolved. // // Creating an address only containing a region_code and address_lines, and // then geocoding is the recommended way to handle completely unstructured // addresses (as opposed to guessing which parts of the address should be // localities or administrative areas). AddressLines []string `protobuf:"bytes,9,rep,name=address_lines,json=addressLines" json:"address_lines,omitempty"` // Optional. The recipient at the address. // This field may, under certain circumstances, contain multiline information. // For example, it might contain "care of" information. Recipients []string `protobuf:"bytes,10,rep,name=recipients" json:"recipients,omitempty"` // Optional. The name of the organization at the address. Organization string `protobuf:"bytes,11,opt,name=organization" json:"organization,omitempty"` }
func (*PostalAddress) Descriptor ¶
func (*PostalAddress) Descriptor() ([]byte, []int)
func (*PostalAddress) GetAddressLines ¶
func (m *PostalAddress) GetAddressLines() []string
func (*PostalAddress) GetAdministrativeArea ¶
func (m *PostalAddress) GetAdministrativeArea() string
func (*PostalAddress) GetLanguageCode ¶
func (m *PostalAddress) GetLanguageCode() string
func (*PostalAddress) GetLocality ¶
func (m *PostalAddress) GetLocality() string
func (*PostalAddress) GetOrganization ¶
func (m *PostalAddress) GetOrganization() string
func (*PostalAddress) GetPostalCode ¶
func (m *PostalAddress) GetPostalCode() string
func (*PostalAddress) GetRecipients ¶
func (m *PostalAddress) GetRecipients() []string
func (*PostalAddress) GetRegionCode ¶
func (m *PostalAddress) GetRegionCode() string
func (*PostalAddress) GetRevision ¶
func (m *PostalAddress) GetRevision() int32
func (*PostalAddress) GetSortingCode ¶
func (m *PostalAddress) GetSortingCode() string
func (*PostalAddress) GetSublocality ¶
func (m *PostalAddress) GetSublocality() string
func (*PostalAddress) ProtoMessage ¶
func (*PostalAddress) ProtoMessage()
func (*PostalAddress) Reset ¶
func (m *PostalAddress) Reset()
func (*PostalAddress) String ¶
func (m *PostalAddress) String() string | http://docs.activestate.com/activego/1.8/pkg/google.golang.org/genproto/googleapis/type/postaladdress/index.html | 2019-02-15T21:40:37 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.activestate.com |
An instance of a button in a TCategoryButtons control.
TButtonItem = class(TBaseButtonItem);
class TButtonItem : public TBaseButtonItem;
You add button items to the category by using the Items Editor. The Items Editor can be invoked from the Items property in the Object Inspector, only when the Categories Editor is displayed and a category is selected. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/CategoryButtons_TButtonItem.html | 2019-02-15T21:58:44 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.embarcadero.com |
Kendall’s Notation¶
Kendall’s notation is used as shorthand to denote single node queueing systems [WS09].
A queue is characterised by:
\[A/B/C/X/Y/Z\]
where:
- \(A\) denotes the distribution of inter-arrival times
- \(B\) denotes the distribution of service times
- \(C\) denotes the number of servers
- \(X\) denotes the queueing capacity
- \(Y\) denotes the size of the population of customers
- \(Z\) denotes the queueing discipline
For the parameters \(A\) and \(B\), a number of shorthand notation is available. For example:
- \(M\): Markovian or Exponential distribution
- \(E\): Erlang distribution (a special case of the Gamma distribution)
- \(C_k\): Coxian distribution of order \(k\)
- \(D\): Deterministic distribution
- \(G\) / \(GI\): General / General independent distribution
The parameters \(X\), \(Y\) and \(Z\) are optional, and are assumed to be \(\infty\), \(\infty\), and First In First Out (FIFO) respectively. Other options for the queueing schedule \(Z\) may be SIRO (Service In Random Order), LIFO (Last In first Out), and PS (Process Sharing).
Some examples:
- \(M/M/1\):
- Exponential inter-arrival times
- Exponential service times
- 1 server
- Infinite queueing capacity
- Infinite population
- First in first out
- \(M/D/\infty/\infty/1000\):
- Exponential inter-arrival times
- Deterministic service times
- Infinite servers
- Infinite queueing capacity
- Population of 1000 customers
- First in first out
- \(G/G/1/\infty/\infty/\text{SIRO}\):
- General distribution for inter-arrival times
- General distribution for service times
- 1 server
- Infinite queueing capacity
- Infinite population
- Service in random order
- \(M/M/4/5\):
- Exponential inter-arrival times
- Exponential service times
- 4 servers
- Queueing capacity of 5
- Infinite population
- First in first out | http://ciw.readthedocs.io/en/latest/Background/kendall.html | 2017-06-22T14:04:51 | CC-MAIN-2017-26 | 1498128319575.19 | [] | ciw.readthedocs.io |
What comes first, the word “interactive” or the word “documentary“?
Or, in other words, what is more important, the “technique” or the “subject matter”?
Image 1. Cinematographe by Lumière brothers
As suggested Alvelos and Almeida (2011):
“As a rule, all film literature starts with a reference to the Lumière brothers and their 1895 cinématographe, but when it comes to interactive documentary, should it begin with Lumière’s invention or Babbage’s Analytical Engine in 1830, the starting point of the modern computer? Probably both, and we might use this question as a pretext for a discussion on the current state of the art of a new “type” of documentary that lays between film and interaction: the interactive documentary.” (Alvelos and Almeida, 2011:123)
Image 2. Analytical engine by Charles Babbage
Image 3. Diferential engine by Charles Babbage
The interactive documentary is made up from two basic ingredients and is the result of conception between two genres: the “interactivity”, which began in the early nineteenth century with the Babbage’s invention and digital media, and the “documentary genre”, which began in the late nineteenth century with the Lumière’s invention and others. This leads us to believe that, although what many theorists call that the basis of the interactive documentary are moving images and film documentary, this is debatable because without interactivity, the genre would not exist as such.
Image 4. Lumière brothers
It is clear that without documentary film it wouldn’t exist either, but interactivity is the key factor that difference, gives autonomy and characterizes this genre. It is for these reasons that we believe that the terms “documentary” and “interactivity” are on the same level on a scale that starts to decant into “interactivity” by its own weight nowadays. However, it is also clear that, in the scenario that “documentary” equals to the “subject” and “interactivity” to the “technology”, the subject should always prevail, and that is precisely what levels this balance and make us conclude that these two aspects deserve the same consideration and are inseparable from each other.
Image 5. Charles Babbage
Within the universe of the interactive documentary there is room for all types of discourses and narrative, and the power offered by interactive technologies and web 2.0 has to be seized to provide new forms of communication and interactivity, but in any case, the technology (technique, the “how”) cannot replace an approach, progress and outcome of a good script, a good story and a good speech (“what”is said, not “as it is said.”). In short, what prevails is the story, but let’s not forget the approach, which would be halfway “how to” tell it. In Uvic (Universitat de Vic), when students develop a creative multimedia project as a Final Degree Project (that it sould be an I-doc or a short interactive film), we always start and end advising them with the same sentence: first think about a good “what” and then the rest will come (“how”). Think of a good idea and then how to develop it (there is an eastern proverb that says “it is not the same to point to the moon than the moon itself”).
We believe this is one of the current problems that some theorists of this genre describe accurately: we are paying less attention to the documentary issues trying to implement interaction everywhere, because that is a new and exciting way, so we bring cognitve saturation to interactors, one of the major hipertext problems, and ultimately, we lose them (their interest in the work). That’s why, in this article, we decided to show the simplicity of the “Honkytonk formula”, with interesting issues without too much depth and complexity in the interaction, intuitive and easy, as it should be.
Image 7. Honkytonk Films logo
In this regard, Almeida and Alvelos (2011) continue pointing:
“Formal arrangements shouldn’t compel interactive documentary into structural rigidity, it’s important to find a solid but flexible structure that allows some degree of freedom. There’s also no need to make it too clickable, too interactive: if linear documentary has zero interaction points and is a successful model, why would we make an interactive one a “clickable extravaganza”? Seek for continuity first, interaction last. And, of course, its always better a good idea made simple.” (Almeida and Alvelos, 2011:125)
Perspective and user’s participation, key issues
Following this line of argument, we should ask ourselves why we are so concerned about the integration of the documentary film with the interactive media. If they really are so contradictory, why not let them lead their separate lives in their own media? Personally we think this is because they need each other. And if they need each other, it is then a question of resolving how to reconcile the differences between the two media. As it is well known, many films find opportunities to extend their life by making use of interactive media, based on the creation of websites, which do not only act as information centers for the film, but also as a resource for additional content and surplus material that was not used in the final montage of the film (as in the case of “The challenge”, a case study that we will analyze in the second post of this series). This method retains a high degree of authorship, while maintaining the introduction of interactive elements more suitable for the Internet. According to Britain (2009:9), although this may be considered by some to be a very limited definition of the interactive documentary, it at least shows that there is no risk of overlapping between the interactive medium and the documentary itself, and that this convergence may provide grounds for optimism for a promising future for the interactive documentary film. The two media can coexist without the emergence of one leading to the marginalization or elimination of the other.
Image 7. Perspective
Image 8. User participation (digital natives)
By combining the power of the film medium to provide perspective and the ability of interactivity to improve the user’s participation with the material, the interactive documentary film may be able to offer more significant documentaries. The idea that interactive media can reduce the distance between the producer and the user is promising for any documentary filmmaker seeking to increase participation in their stories. However, at the other end of the scale, if this difference is diminished by too great an extent, the documentary may lose value and interest, particularly because of the lack of a strong narrative voice and a specific narrative program (this is precisely the fear of most traditional authors).
Arnau Gifreu Castells
Researcher, Professor and Producer
Universitat Ramón Llull / Universitat de Vic
References
Almeida, A.; Alvelos, H. (2010), “An Interactive Documentary Manifesto“. ICIDS’10Proceedings of the Third joint conference on Interactive digital storytelling. Heidelberg: Springer-Verlag Berlin. Conference Proceedings, pp 123-128. ISBN 3-642-16637-7978-3-642-16637-2.
Britain, C. (2009), Raising Reality to the Mythic on the Web: The Future of Interactive Documentary Film. North Carolina: Elon University.
Gifreu, A. (2011), “The Interactive Documentary. Definition Proposal and Characterization of the New Emerging Genre” . Hipertext 9, 2011. Digidoc Research Group. Communication Department. Universitat Pompeu Fabra.
— post on i-docs.org Portal: where we come from introduction and initial ingredients to build a correct taxonomic proposal
Honkytonk Films (company) : | http://i-docs.org/2012/06/10/idea-or-technique-interesting-topics-and-technological-simplicity-honkytonk-projects-journey-to-the-end-of-the-coal-the-big-issue-and-the-challenge/ | 2017-06-22T14:20:16 | CC-MAIN-2017-26 | 1498128319575.19 | [array(['http://i1.wp.com/i-docs.org/wp-content/uploads/2012/06/ma_diferencial_babagge.jpg?resize=111%2C177',
'ma_diferencial_babagge'], dtype=object)
array(['http://i2.wp.com/i-docs.org/wp-content/uploads/2012/06/logo_honkytonk.jpg?resize=175%2C34',
'logo_honkytonk'], dtype=object) ] | i-docs.org |
Claims Mode Forms Authentication Zone
Create a new webapp or extend your existing Claims Mode, Windows Authenticated webapp. The new webapp should only have the “Enable Forms Based Authentication” checked. Here you must specify the membership and role provider names. You can choose any value here. If you choose values that are different from “smp” and/or “srp”, you will need to make adjustments in the web.configs of webapp, STS, and Central Administration.
Select “Custom Sign In Page “ and use the value
/_layouts/ASFSLoginPage.aspx as shown below.
Once created, you will need to create a root site collection. This is important if you want Client Integration to work properly. See kb2590564.
Now you are ready to install and configure Shibboleth SP. | https://docs.9starinc.com/asfs2010v5-4/claims-mode-forms-authentication-zone/ | 2017-06-22T14:00:58 | CC-MAIN-2017-26 | 1498128319575.19 | [array(['/assets/authenticationproviders.png',
'Authentication Providers authentication providers'], dtype=object)
array(['/assets/customsigninpage.png',
'Custom Sign In Page custom sign in page'], dtype=object)] | docs.9starinc.com |
RT 4.4.1 Documentation
RT::Action::ExtractSubjectTag
NAME
RT::Action::ExtractSubjectTag
DESCRIPTION
ExtractSubjectTag is a ScripAction which allows ticket bonding between two RT instances or between RT and other Ticket systems like Siebel or Remedy.
By default this ScripAction is set up to run on every transaction on every Correspondence.
One can configure this ScripActions behaviour by changing the global
$ExtractSubjectTagMatch in
RT_Config.pm.
If a transaction's subject matches this regexp, we append the match tag to the ticket's current subject. This helps ensure that further communication on the ticket will include the remote system's subject tag.
If you modify this code, be careful not to remove the code where it ensures that it only examines remote systems' tags.
EXAMPLE
As an example, Siebel will set their subject tag to something like:
B<[SR ID:1-554]>
To record this tag in the local ticket's subject, we need to change ExtractSubjectTagMatch to something like:
← Back to index← Back to index
Set($ExtractSubjectTagMatch, qr/\[[^\]]+[#:][0-9-]+\]/); | https://docs.bestpractical.com/rt/4.4.1/RT/Action/ExtractSubjectTag.html | 2017-06-22T14:03:08 | CC-MAIN-2017-26 | 1498128319575.19 | [] | docs.bestpractical.com |
This topic describes DirectQuery mode for Analysis Services tabular models at the 1200 and higher compatibility levels . DirectQuery mode can be turned on for models you're designing in SSDT, or for tabular models that have already been deployed, you can change to DirectQuery mode in SSMS. Before choosing DirectQuery mode, it's important to understand both the benefits and restrictions.
Benefits.
DirectQuery can take advantage of provider-side query acceleration, such as that provided by xVelocity memory optimized column indexes.
Security can be enforced by the back-end database , using row-level security features from the database (alternatively, you can use row-level security in the model via DAX).
If the model contains complex formulas that might require multiple queries, Analysis Services can perform optimization to ensure that the query plan for the query executed against the back-end database will be as efficient as possible.
Restrictions
Tabular models in DirectQuery mode have some restrictions. Before switching modes, it's important to determine whether the advantages of query execution on the backend server outweigh any reduction in functionality.
If you change the mode of an existing model in SQL Server Data Tools, the model designer will notify you of any features in your model that are incompatible with DirectQuery mode.
The following list summarizes the main feature restrictions to keep in mind:
Data sources supported for DirectQuery
DirectQuery tabular models at compatibility level 1200 and higher are compatible with the following data sources and providers:
Connecting to a data source
When designing a DirectQuery model in SSDT, connecting to a data source and selecting the tables and fields to include in your model is much the same as with in-memory models.
If you've already turned on DirectQuery but haven't yet connected to a data source, you can use the Table Import Wizard to connect to your data source, select tables and fields, specify a SQL query, and so on. The difference will be when you finish, no data is actually imported to the in-memory cache.
If you've already used Table Import Wizard to import data, but haven't yet turned on DirectQuery mode, when you do, the in-memory cache will be cleared.
Additional topics in this section
Enable DirectQuery mode in SSDT
Enable DirectQuery mode in SSMS
Add sample data to a DirectQuery model in Design Mode
Define partitions in DirectQuery models
Test a model in DirectQuery mode
DAX Formula Compatibility in DirectQuery Mode | https://docs.microsoft.com/en-us/sql/analysis-services/tabular-models/directquery-mode-ssas-tabular | 2017-06-22T15:07:00 | CC-MAIN-2017-26 | 1498128319575.19 | [array(['media/directquery-import-success.png',
'DirectQuery import success'], dtype=object)] | docs.microsoft.com |
Exam Center Registration
Exam Center Registration
Introduction
All enrolled students are required to register an exam center for the final exams. Students are required to check the rules and regulations mentioned here before registering an exam center.
Finding Suitable Exam Center
Existing Center Registration
- See all IOU-approved exam centers in your country/city of residence, here.
- If you have found a suitable exam center in your area:
Contact the exam center via phone or email and verify the following final exam arrangements with them. See if:
- they are available for the upcoming final exams.
- they have time slots for the normal and late exam periods (ask them if they would be available on weekends, weekdays, or both).
- they charge a fee and how much. If they do, then you will be required to pay it yourself.
- they have arrangements for your gender.
- they have computers available; otherwise, you will have to take your own laptop, etc.
- they have an internet connection; otherwise, you will have to use your own 3G.
Based on your satisfaction and center’s availability, register the center as explained in ‘ How to Register’ section below.
New Center Registration
If you have not found a suitable exam center in your area. You can suggest a new center of your choice.
- The requirements for a new exam center can be verified here.
- You are required to suggest the new center here.
The relevant department will verify your request and email you regarding the status of the request.
- Once the center is approved, proceed to register the exam center as described below.
How to Register Your Center Choice
Non-Resident Center Registration
If you are not in your country of residence during the exam period due to vacation or travel for work, family reasons, etc., you will need to register an exam center in your temporary place of residence and have it approved by the IOU centers management by providing your reasons for travel. You can do so here. After your request has been approved you will be able to register in a non-resident country.
Late Registration
- The due dates for registering the new exam center or the already approved center are indicated here.
- If a student misses the registration due date, s/he will be required to pay a fine of $10 US. This fine applies to those who did not register at an already approved exam center and those who did not submit a new center for approval by the due dates.
Please follow the steps below to register an exam center after the due date:
1. Contact an IOU approved exam center and make arrangements with them if they are willing to accommodate you. For more details, see the above text under the “Finding Suitable Exam Center” paragraph.
2. Once the center agrees to accommodate you, you can proceed to register at the exam center. Furthermore, 10 USD late center registration fees will required to be paid. You can register for the center either after paying the late exam center registration fee or register before paying the late exam center registration fee as it can be paid along with your next semester's fee.
Please check the payment guidelines here. | http://docs.islamiconlineuniversity.com/article/940-exam-center-registration | 2017-06-22T14:14:56 | CC-MAIN-2017-26 | 1498128319575.19 | [] | docs.islamiconlineuniversity.com |
My interest in interactive documentaries (i-docs) and transmedia storytelling has immensely grew since the past two years. Part of this interest came from the new opportunities offered to me – as photojournalist – to tell stories in a different way. I needed to put more context into the stories and issues I was covering. I needed to“.
The story takes its roots in the 2010 Citizen United vs. F.E.C. Supreme Court ruling and analyses its consequences on U.S elections and on the American population.
What are the legal & political implications of this decision? Is this the beginning of a new era in which corporations will shape the political arena as they shape their businesses ? The project focuses on the rise of Super PACs and their affiliated organisations the 501c4, and documents how these organisations influence the political debate and American voters during the 2012 presidential campaign and beyond through political advertising.
Moneyocracy relies on an interactive documentary (i-doc), a documentary and a comic book, each of which explore a specific issue, allowing the viewers to discover the real influence of corporate money in contemporary elections. The journey takes the audience from Washington D.C to Chicago at the Obama campaign headquarters, to Tampa, FL to Charlotte, NC, during the GOP and Democratic National Conventions. Along the way, the authors interview some of the main actors of the campaign, including lobbyists, unionists, activists and lawyers, all of whom reveal the true consequences of the Supreme Court’s historic 2010 decision.. | http://i-docs.org/2012/03/18/project-presentation-moneyocracy-the-doc-the-idoc-and-the-issue/ | 2017-06-22T14:17:43 | CC-MAIN-2017-26 | 1498128319575.19 | [array(['http://i2.wp.com/i-docs.org/wp-content/uploads/2012/03/MC_FB_Timelinebanner.jpg?resize=536%2C199',
'Moneyocracy Facebook Timeline cover The Moneyocracy project a transmedia documentary by Gerald Holubowicz and Jean Nicholas guillo'],
dtype=object)
array(['http://i0.wp.com/i-docs.org/wp-content/uploads/2012/03/MCscreen9a.jpg?resize=300%2C168',
'MCscreen9a'], dtype=object) ] | i-docs.org |
Demo projects¶
There is a demo project showing stories (custom model Article) with a TextFieldWithInlines, so that the user may insert inline media content in the text.
Find the code of the example sites here.
Demo sites setup¶
Run the demo sites in a virtualenv for this app. Create the virtualenv, clone the code and cd into any of the demo sites. Then do as follow.
$ cd django-inline-media/example/demo $ python manage.py syncdb –noinput $ python manage.py collectstatic $ python manage.py runserver
Admin user/pwd:
admin/admin.
Demo project structure¶
The home page shows a link to an article list. The article list contains six example articles. Each of which contains pictures or picture sets located at different positions in the text. Take a look at the articles and click on the media. Pictures and picture sets are clickable by default. When clicking on a picture, the prettyPhoto jquery plugin overlays the picture on the current page. When clicking on a picture set, the plugin overlays a gallery view of all the pictures in the picture set.
The demo site uses django-inline-media with a custom articles app. The articles app defines the Article model. django-inlines-media provides 4 models: InlineType, License, Picture and PictureSet:
The Article model has a body field of type TextFieldWithInlines. The field uses its own widget TextareaWithInlines that renders an extra control to insert inline media in the textarea. The inline media content can be placed at different positions and with different size.
Positions can be left, right, or center. The size can be mini (80px width), small (150px width), medium (200px width), large (250px width) and full. Pictures at the center are in full size, and picturesets in the center render at a default size of 380x280 pixels. All sizes are customizables using the setting
INLINE_MEDIA_CUSTOM_SIZES.
Example articles¶
Let’s see how articles in the demo site look like. Following you can see example articles one, two and five. Article views are combined with their body fields in the admin UI so that you can get an idea of how inline elements look like in the textarea and what’s the effect in the final rendered article.
Example article one¶
Article one is made of four text paragraphs with a picture. The picture and its description float at the right hand side of the first paragraph.
The code highlighted in blue inserts the Ubuntu logo at the top right side of the article’s text. It’s been added using the control Inlines below the body’s textarea.
The attribute type in the <inline> corresponds with the InlineType instance of the content_type Picture. The attribute id is the object id of the picture, and class represents the CSS class applied when rendering the inline with the template
templates/inline_media/inline_media_picture.html.
Example article two¶
Yet another four paragraphs example article with two pictures, both floating at the right hand side, the first one on the first paragraph and the second on the second paragraph.
The Python logo uses the CSS class inline_large_right while the Django logo uses inline_medium_right. Both are clickable and both contain a description with an anchor element.
The change picture view for the first image, the Python one, looks like this:
Removing the tick of the box Show as link avoids making the image clickable. As an alternative you can also rewrite the template
inline_media/inline_media_picture.html using the attributes at will. Take a look at the Article 4 to see an example with an inline non-clickable picture.
Example article five¶
Three paragraphs with an inline picture set. The picture set float at the right side using the inline_medium_right CSS class.
An inline picture set has different looks:
- As an inline: the picture set shows only the croped version of the cover picture.
- On mouseover: A croped version of the 2/3 first pictures of the set are fanned out.
- On click: The picture set is overlaid in a gallery view showing complete pictures.
The overlaid gallery view of the picture set of article five:
| http://django-inline-media.readthedocs.io/en/latest/example.html | 2017-07-20T16:33:15 | CC-MAIN-2017-30 | 1500549423269.5 | [array(['_images/demo_admin.png', '_images/demo_admin.png'], dtype=object)
array(['_images/demo_article_1.png', '_images/demo_article_1.png'],
dtype=object)
array(['_images/demo_article_2.png', '_images/demo_article_2.png'],
dtype=object)
array(['_images/demo_pic_change_view_python.png',
'_images/demo_pic_change_view_python.png'], dtype=object)
array(['_images/demo_article_5.png', '_images/demo_article_5.png'],
dtype=object)
array(['_images/demo_article_5_gallery.png',
'_images/demo_article_5_gallery.png'], dtype=object)] | django-inline-media.readthedocs.io |
You can install vCenter Server and the Platform Services Controller on different virtual machines or physical servers.
You can separate the Platform Services Controller and vCenter Server and have them installed on different virtual machines or physical servers. First install the Platform Services Controller, then install vCenter Server and the vCenter Server components on another virtual or physical machine, and connect vCenter Server to the Platform Services Controller. You can connect many vCenter Server instances to one Platform Services Controller.
Concurrent installations of vCenter Server instances and Platform Services Controllers are not supported. You must install the Platform Services Controllers and vCenter Server instances in a sequence.
After you deploy vCenter Server with an embedded Platform Services Controller, you can reconfigure your topology and switch to vCenter Server with an external Platform Services Controller. This is a one-way process after which you cannot switch back to vCenter Server with an embedded Platform Services Controller. You can repoint the vCenter Server instance only to an external Platform Services Controller that is configured to replicate the infrastructure data within the same domain.
Before installing vCenter Server with an external Platform Services Controller, synchronize the clocks on the vSphere network. Time skew on the virtual machines or physical servers on which you install the Platform Services Controller and vCenter Server might cause deployment failure. For instructions about synchronizing the clocks on your vSphere network, see Synchronizing Clocks on the vSphere Network. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.install.doc/GUID-A97DA7BD-8745-42B1-8C84-118A76C17323.html | 2017-07-20T16:31:47 | CC-MAIN-2017-30 | 1500549423269.5 | [array(['images/GUID-E9592B28-47F7-4C10-8955-75C3369FD138-high.png',
'The Platform Services Controller is installed on one virtual machine or physical server and the vCenter Server instances registered with the Platform Services Controller are installed on other virtual machines or physical servers.'],
dtype=object) ] | docs.vmware.com |
Auto Deploy can assign a host profile to one or more hosts. The host profile might include information about storage configuration, network configuration, or other characteristics of the host. If you add a host to a cluster, that cluster's host profile is used.
Before you begin
Install vSphere PowerCLI and all prerequisite software. For information see vSphere Installation and Setup.
Export the host profile that you want to use.
About this task
In many cases, you assign a host to a cluster instead of specifying a host profile explicitly. The host uses the host profile of the cluster.
Procedure
- Run the Connect-VIServer vSphere PowerCLI cmdlet to connect to the vCenter Server system that Auto Deploy is registered with.
Connect-VIServer 192.XXX.X.XX
The cmdlet might return a server certificate warning. In a production environment, make sure no server certificate warnings result. In a development environment, you can ignore the warning.
- Using the vSphere Web Client, set up a host with the settings you want to use and create a host profile from that host.
- Find the name of the host profile by running Get-VMhostProfile vSphere PowerCLI cmdlet, passing in the ESXi host from which you create a host profile.
- At the vSphere PowerCLI prompt, define a rule in which host profiles are assigned to hosts with certain attributes, for example a range of IP addresses.
New-DeployRule -Name "testrule2" -Item my_host_profile -Pattern "vendor=Acme,Zven", "ipv4=192.XXX.1.10-192.XXX.1.20"
The specified item is assigned to all hosts with the specified attributes. This example specifies a rule named testrule2. The rule assigns the specified host profile my_host_profile to all hosts with an IP address inside the specified range and with a manufacturer of Acme or Zven.
- Add the rule to the rule set.
Add-DeployRule testrule Auto Deploy to the new host profile by performing compliance test and repair operations on those hosts. For more information, see Test and Repair Rule Compliance.
Power on unprovisioned hosts to provision them with the host profile. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.upgrade.doc/GUID-7EDD1093-D1F2-4798-9C51-71D6ABC1485A.html | 2017-07-20T16:31:11 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.vmware.com |
When you create, clone, or relocate a virtual machine, Storage DRS generates only one placement recommendation.
Problem.
Results
Accept the single recommendation. To obtain multiple recommendations, choose a destination host that does not specify that the virtual machine swap file location is on a datastore that is in the target datastore cluster. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.troubleshooting.doc/GUID-657623DA-5121-4419-8F81-BCBE08395393.html | 2017-07-20T16:30:36 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.vmware.com |
To create a vCloud Air endpoint, you must provide vRealize Automation with the required vCloud Air region and the management URL.
About this task
The vCloud Air management URL is also the URL of the vCloud Director server used to manage a specific virtual data center (vDC). You can use the region information and the management URL to configure your vCloud Air endpoint.
Locate the Management URL for each region vDC from the vCloud Air Console.
Procedure
- Log in to vCloud Air console with administrative privileges.
- From the vCloud Air dashboard, select your virtual data center.
- Click the link to display a URL for the virtual data center for use in API commands.
For example:.
The Management URL that you need to provide to vRealize Automation is the host and port portion of the API command URL, and the region is the portion of the URL that follows cloud/org/. In the example provided, the Management URL is, and the region is vCloudAutomation. | https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vra.config.doc/GUID-3E8226A3-C185-4074-B2AD-AA4A89128855.html | 2017-07-20T16:29:53 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.vmware.com |
Sets the file I/O errorint sm_fio_error_set(int new_error);
-
new_error
- The error code to set, one of the file I/O error codes shown in the Returns section below.
- The value returned by the last call to a file I/O function, one of the following:
- -1
SMFIO_INVALID_HANDLE: Invalid file handle.
- -2
SMFIO_HANDLE_CLOSE: Handle points to closed file.
- -3
SMFIO_EOF: Already at end of file.
- -4
SMFIO_IO_ERROR: Standard I/O error. Check the value in system variable
errnoto determine the nature of the error.
- -5
SMFIO_INVALID_MODE: Invalid mode specified for open operation.
- -6
SMFIO_NO_HANDLES: All available file handles currently in use.
- -7
SMFIO_OPEN_ERROR: Unable to open the file—for example, because it does not exist or is protected.
- -8
SMFIO_FIELD_ERROR: Nonexistent field.
- -9
SMFIO_FILE_TRUNCATE: Array not large enough to accept all file data; partial read was successful.
- -10
SMFIO_LINE_BREAK: One or more lines in the file were too long and wrapped to the next occurrence.
- -11
SMFIO_NO_EDITOR: Panther behavior variable
SMEDITORis undefined; no editor is available to handle the operation.
- -12
SMFIO_PUTFIELD: Unable to write to the field.
- -13
SMFIO_GETFIELD: Unable to read the field's contents.
sm_fio_error_setsets the error code for Panther's file I/O processing functions. Use this function to clear the last-reported error.
For an example of this function, refer to sm_fio_error. | http://docs.prolifics.com/panther/html/prg_html/libfu121.htm | 2017-07-20T16:44:01 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.prolifics.com |
Setup for HTTPS Users Using Git Credentials
The simplest way to set up connections to AWS CodeCommit repositories is to configure Git credentials for AWS AWS CodeCommit, you must edit your .gitconfig file to remove the credential helper information from the file before you can use Git credentials. If your local computer is running macOS, you might need to clear cached credentials from Keychain Access.
Step 1: Initial Configuration for AWS CodeCommit
Follow these steps to set up an AWS account, create an IAM user, and configure access to AWS CodeCommit. AWS KMS and Encryption.
In the IAM console, in the navigation pane, choose Users, and then choose the IAM user you want to configure for AWS CodeCommit access.
On the Permissions tab, choose Add Permissions.
In Grant permissions, choose Attach existing policies directly..
If you want to use AWS CLI commands with AWS CodeCommit, install the AWS CLI. For more information, see Command Line Reference.: Create Git Credentials for HTTPS Connections to AWS CodeCommit
After you have installed Git, create Git credentials for your IAM user in IAM. For more information, see Use Git Credentials and HTTPS with AWS CodeCommit in the IAM User Guide.
To set up HTTPS Git Credentials for AWS CodeCommit
Make sure to sign in as the IAM user who will create and use the Git credentials for connections to AWS CodeCommit.
In the IAM console, in the navigation pane, choose Users, and from the list of users, choose your IAM user.
On the user details page, choose the Security Credentials tab, and in HTTPS Git credentials for AWS CodeCommit, choose Generate.
Note
You cannot choose your own user name or password for Git credentials. For more information, see Use Git Credentials and HTTPS with AWS CodeCommit.
Copy the user name and password that IAM generated for you, either by showing, copying, and pasting this information into a secure file on your local computer, or by choosing Download credentials to download this information as a .CSV file. You will need this information to connect to AWS.
Step 4: Connect to the AWS CodeCommit Console and Clone the Repository
If an administrator has already sent you the name and connection details for the AWS CodeCommit repository, you can skip this step and clone the repository directly.
To connect to an AWS CodeCommit repository
Open the AWS CodeCommit console at.
In the region selector, choose the AWS CodeCommit Tutorial tutorial.
Copy the HTTPS:Copy
git clone my-demo-repo
The first time you connect, you will be prompted to provide the user name and password for the repository. Depending on the configuration of your local computer, this prompt will either originate from a credential management system for the operating system (for example, Keychain Access for macOS), a credential manager utility for your version of Git (for example, the Git Credential Manager included in Git for Windows), your IDE, or Git itself. Provide the user name and password generated for Git credentials in IAM (the ones you created in Step 3: Create Git Credentials for HTTPS Connections to AWS. | http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html | 2017-07-20T16:31:13 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.aws.amazon.com |
For all Joomla 3+ templates built using the Zen Grid Framework v4 (any theme after October 2014) please refer to the Zen Grid Framework v4 documentation.
The Zen Grid framework version 2.4 was released in November 2012. This version represents a number of improvements which mainly effect the templateDetails.xml of previous Zen Grid Framework templates but in some cases layout overrides and other files are also effected. This document is for users who have customised their template and wish to upgrade to the new version without losing their changes.
In most instances depending on the customisation this means simply uoploading specific files but the changes and new additions are noted below for developers who want to dig deeper.
Upgrading to v2.4 of the framework is not mandatory and most of the changes are improvements and not bug fixes. You can see a full list of the additions via the framework changelog page.
Unfortunately due to the nature of this release of this version of the framework, your template needs to be made specifically compatible with v2.4 of the framework.
We will update the specific changelog for templates that are made compatible with v2.4 with details regarding the update process.
If you install the 2.4 version of the framework on templates named v2.3 and less then your template will stop working.
If you do install v2.4 of the framework on an incompatible template then don't fear, because you can simply reinstall the previous version of the framework that is compatible with your template and it will reinstate the template.
At this stage all templates except for Shop Ignition will be updated shortly after the release of 2.4. Shop Ignition was built using legacy mode which is now deprecated and will be updated to use the non-legacy layer in the near future.
The naming convention we use for our templates uses the following convention:
JB_Template Name_Joomla Version_FrameworkVersion number.Template increment number.
as an example:
JB_Lifestyle_J1.5_J2.5_v2.4.2
Any template with anything less than 2.4.0 in the version number is not compatible with Zen Grid Framework v2.4.
It is not possible to use the upgrade method using Joomlabamboo templates because we have disabled this functionality in the template installer. We decided that it's safer for our users if we disable this because installing new template files over old template files will also remove any customisations added to the template.
One of the challenges faced with adding new functionality to old template's, centers around how best to add new css to the current core theme.css file. In the past we have been reluctant to make too many changes to the theme.css file as this generally adds unneeded complexity to new and even experienced users.
As of v2.4, all new patches, upgrades and fixes will be added to a css file located in the user folder of the template. All files in this folder are automatically included in the template on load so we will be using the new patch file to load any new css that is required for the template. This patch file will include code additions for new functionality as well as css overrides for bug fixes.
The css files will be named according to the following naming convention:
So as an example an update for the Lifestyle template with a new patch file would be found here
As a part of the code improvements we have removed all references to the tamplet getParameters and moved the template parameters into a global object.
So now rather than retrieving a template parameter like this:{codecitation}
echo $this->params->get("hilite");{/codecitation}
or this:{codecitation}
$hilite = $this->params->get("hilite");
echo $hilite;{/codecitation}
it is possible to simply use this:{codecitation}
echo $zen->hilite;{/codecitation}
Changing to this convention has improved the framework's performance and reduced the codebase significantly.
The bulk of the framework jQuery effects and fucntionality is now included in the media/zengridframework/js/zen.min.js file. There is an uncompressed version of this fiel also available in the same folder.
A number of javascript features that were using across templates - such as the select navigation for small devices - have now been transformed into jQuery functions that can be used in the templates and in particular the js/template.js file.
The new jQuery functions include:
Function that drives the inbuilt tabbed layouts.
Example{codecitation} jQuery("#right-tabs li").zentabs(); {/codecitation}
The example above shows the jQuery used to fire the tabs in the sidebar of the template. The object of the function is the list element that will be used to trigger the tabs.
The html required for this is included below:{codecitation}
Used to toggle elements on the page.{codecitation} jQuery("#mytrigger").zentoggle({activeclass: 'active',display: true}); {/codecitation}
Functions:
activeclass: the class applied to the trigger when the item is open.
display: false by default but when set to true the content is displayed on page load.
Implementation:
When the code block above is added to the template.js file the following markup becomes a show / hide toggle.{codecitation}
<div id="mytrigger">Toggle</div>
<div id= "mycontent">Content here in the next div.</div>{/codecitation}
The selector nominated in the function above eg #togglemenutrigger when clicked toggles the display of the following div.
This is similar to zentoggle except that the toggle state is saved to a cookie and recalled on page load.
eg{codecitation}
jQuery('.moduletable-slide .moduleTitle>h3').zencookietoggle();{/codecitation}
This snippet can be used to add a class to a specified div when it gets to a certain size. This is useful for conditions when you want to display a condensed format for a specific div that reduces in size that may or may not be affected by the window size.{codecitation}
jQuery('#rightCol,#leftCol,#jbtabbedArea').zenwidthcheck({width: "300",targetclass: "narrow"});{/codecitation}
Functions
After adding the selector you want to target in the code block above you can then specify:
width: Determines the width the element needs to be in order to toggle the class.
Class: The class that you want to add to the element when the width is reached.
Can be used to create a new panel instance on the page.
eg{codecitation}
jQuery('#mypanel').zenpanel({
trigger: '#mypanelopen,#mypanelclose,#myoverlay',
overlay: '#myoverlay',
type: 'opacity'
});{/codecitation}
The Object
The object of this function is the panel element itself eg The content you want to show when the panel is created.
Functions:
The zenaccordion(); function refers to the old panelmenu functionality and has not been made a global object yet.
We have also included some other 3rd party scripts taht are used extensively throughout our templates.
ImagesLoaded plugin: Created by Dave Desandro
We use this to trigger javascript once images have loaded. eg for masonry and equal heights layouts
Breakpoint.js: created by XOXCO
We use this to run javascript and control some layout elements when the user moves through various screen breakpoints.
jQuery Cookie: Created by carhartl
Used to recall states of toggles and menus based on user interaction
Version 2.4 of the framework also adds the following new features to version 2 compatible templates.
You can now add tabbed layouts above the left and right columns by publishing modules to left-a, left-b, left-c,left-d for the left column and right-a, right-b, right-c and right-d for the right column.
Users who run websites in the EU can now simply enable the cookie accept functionality from their template backend.
We used the cookiecuttr script for this funtionality.
The framework now includes a reset css file for the Virtuemart templat which can be toggled on or off from the template admin. This css file is designed to reset some of the core display characteristics of Virtuemart to make the default Virtuemart template more integrated into our templates.
This file can be overidden by simply loading a file named virtuemart.css into the css folder of your template.
The core functions/vars.php file is now half it's previous size.
In addition to this much of the logic in that file has been simplified and optimised.
Removed legacy mode: This only applies to the Shop Ignition template which at this stage will not be made compatible with 2.4 of the framework.
Removed debug mode: This was a smart idea but just added to code size in a lot of respects. We will revisit this in future updates to see how it can be re-implemented with a lighter footprint.
Removed unneed jLibrary imports.
Added League Gothic as a stand alone webfont. Not sure why it was removed from google fonts but we like this font alot.
Reorganised framework admin files.
Removed all legacy files
Updated layout/assets files
Added noscript message for users running their browser without javascript enabled. This is a configurable message. | http://docs.joomlabamboo.com/zen-grid-framework-v2/developer/upgrading-templates-to-v24-framework | 2017-07-20T16:31:57 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.joomlabamboo.com |
Reference - Definitions
SPMeta2 Definitions
Essentially, definitions are c# POCO objects provided by SPMeta2 library. SPMeta2 introduces a domain model providing set of definitions for most of the SharePoint artifacts.
Before you begin, make sure you are familiar with the following concepts:
Domain model
SPMeta2 introduces a domain of c# POCO objects, then it maps every single POCO object on SharePoint artifacts.
We use the following name convention to map SharePoint objects to SPMeta2 definition:
- "SP" prefix gets removed
- "Definition" postfix gets added
For instance, here are a few definitions for most common SharePoint artifacts:
- SPWeb => WebDefinition
- SPField => FieldDefinition
- SPContentType => ContentTypeDefinition
We have different editions of SharePoint, we have a split up between 'foundation' and 'standard' definitions across SPMeta2 library.
SharePoint Foundation definitions
SharePoint Foundation definitions could be found in SPMeta2.dll under the following namespaces:
- SPMeta2.Definitions.* - contains SharePoint Foundation definitions
- SPMeta2.Definitions.ContentTypes.* - contains definitions for content type operations
- SPMeta2.Definitions.Fields.* - contains typed field definitions
- SPMeta2.Definitions.Webparts.* - contains typed web part definitions
SharePoint Standard definitions
SharePoint standard definitions could be found in SPMeta2.Standard.dll under the following namespaces:
- SPMeta2.Standard.Definitions.* - contains SharePoint Standard definitions
- SPMeta2.Standard.Definitions.DisplayTemplates.* - contains various display template definitions
- SPMeta2.Standard.Definitions.Fields.* - contains typed field definitions
- SPMeta2.Standard.Definitions.Taxonomy.* - contains taxonomy related definitions
- SPMeta2.Standard.Definitions.Webparts.* - contains typed web part definitions | http://docs.subpointsolutions.com/spmeta2/reference/definitions | 2017-07-20T16:38:55 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.subpointsolutions.com |
Securing communication among Infrastructure Management components
By default, TrueSight Infrastructure Management and its associated components use Transport Layer Security (TLS) versions earlier than TLS 1.2 to communicate with each other. You can upgrade the security in your enterprise environment by using TLS 1.2 to communicate with TrueSight Infrastructure Management components. Following installation of the TrueSight Infrastructure Management components, you can switch from the default inter-component security configuration to TLS 1.2 configuration.
To enable TLS communication among Infrastructure Management components
Before configuring TLS 1.2, ensure that you have completed creating and importing security certificates among the components. The following workflow diagram explains the steps involved to achieve the same.
The following table describes the steps and the relevant links: | https://docs.bmc.com/docs/TSOperations/110/securing-communication-among-infrastructure-management-components-743794566.html | 2019-11-12T04:59:03 | CC-MAIN-2019-47 | 1573496664567.4 | [array(['/docs/TSOperations/110/files/743794566/744456242/1/1502383873562/tls1.2_workflow.png',
'tls1.2_workflow'], dtype=object) ] | docs.bmc.com |
Multimedia DN
A single directory number that can receive simultaneous interactions of more than one media type.
Glossary
Contents
Capacity Planning
How Stat Server Handles Invalid Rules
If a capacity rule is invalid or missing, then the capacity rule for the next object in the chain of precedence prevails. If no rule applies at all to an object (because of invalidity or because no rule was assigned to the object), then the inherent Default capacity rule applies to the object. Refer to Capacity Rule Inheritance to learn which links contribute to the chain of precedence.
<tabber>
New Agent-Place Model=
Capacity Planning for New Agent-Place Model
Stat Server 8.1.0 and lower releases (8.1.0-) always checked which place a DN belonged to as part of its algorithm for determining capacity. With this information, Stat Server 8.1.0- was then able to link an agent, who was logged in to the DN, to the place. In a SIP Cluster environment, however, you do not create DN objects in Configuration Server (where Stat Server looks for DNs). Instead, DNs are managed within the SIP Cluster solution. Given this DN-less model and the independence that Agent and Place objects gained in the 8.1.2 and higher releases (8.1.2+), there is no theoretically correct way that the model used by Stat Server 8.1.0- can associate an agent with a place. And, with multiple and different types of clients (multiple types of T-Server, SIP Server, and Interaction Server) - each potentially reporting different kinds of associations between agents and devices - there is no single entity in which Stat Server 8.1.0- can maintain a one-to-one constraint of agent to place. So, the model in Stat Server 8.1.2+ was improved primarily for scalability but also to address this SIP Cluster environment.
Stat Server 8.1.2+ does not require that a DN be assigned to a particular place for the purposes of agent or agent-group reporting. In the 8.1.2+ release, when Stat Server receives the EventAgentLogin TEvent, Stat Server tries instead to identify the agent in configuration by matching the AgentID attribute of the TEvent with the EmployeeID attribute of an agent (Person object) in configuration. If Stat Server succeeds in finding a match, Stat Server then is able to link the configured agent to the DN. Any agent-state changes or interactions occurring at the DN can then be attributed to the agent for determining capacity. In this model, there is no association at all between agent and place.
Capacity Rule Inheritance
Capacity rule inheritance in the Stat Server 8.1.2+ release also differs from that of the Stat Server 8.1.0 release.
In the 8.1.0 release, Stat Server is able to create an implicit relationship between an agent and a place as described above. You can assign capacity rules to both Agent and Place objects in configuration (to Tenant object as well), and Stat Server will determine which rule prevails:
- If the agent has an explicitly defined and valid capacity rule, then that rule overrides the capacity rule that might be assigned to the place.
- If no capacity rule is assigned to the agent, the capacity rule explicitly assigned to the place prevails.
- If no capacity rule is assigned to the agent or the place, then the capacity rule explicitly assigned to the tenant prevails.
- If no capacity rule is assigned to the agent, the place, or the tenant, the inherent default capacity rule of no more than one voice interaction prevails.
Stat Server 8.1.2+ does not associate agents with places as it did in prior releases. The reasoning behind this improved logic enables advanced configurations where:
- Two (or more) DNs could be associated with one place. This configuration allows two independent agents to log in to these DNs.
- One agent might be logged in to more than one DN. This configuration allows DNs to belong to different places.
In both cases, Stat Server aggregates information without having to determine which one agent should be associated with the one place.
The Capacity Vector
The Universal Routing Server (URS) requests four statistics from Stat Server in order to determine which contact center object has the capacity to receive a routed interaction. The statistics all share the same statistical category (CurrentTargetState), but they each differ in object:
Agent, Place, GroupAgents, GroupPlaces
If an interaction appears on a DN, Stat Server 8.1.0- reflects this information in the capacities of the corresponding Place and GroupPlaces objects. And, by association, this interaction would also be reflected in the capacities of corresponding Agent and GroupAgents objects. Stat Server 8.1.0- returns one capacity vector only to URS when all four CurrentTargetState statistics are opened for the associated object.
In the 8.1.2+ release, however, Stat Server must send two capacity vectors - one vector for Place and GroupPlaces objects, the other for Agent and GroupAgents objects - because agents and places are not linked in this release.
|-| Voice Interactions=
Capacity Planning for Voice Interactions
Because many different types of telephony devices are continuously being developed, attention should be paid during capacity planning to some peculiarities in the ways that Genesys software routes voice interactions. Genesys software continually evolves to meet new capabilities. Whereas the initial release of capacity planning limited the number of voice interactions that the Genesys router could direct to a particular Genesys place to one, now the Genesys router can route more than one voice interaction to a Genesys place - under certain circumstances.
Multi-DN places or multiline DNs are devices that have more than one directory number ascribed to them.
As a general rule, in regard to Genesys environments that uses a nonSIP (Sessions Initiated Protocol) Server, Genesys capacity planning enables routing to single- or multi-DN places where an agent performs a login to each DN. When DNs exist at a place to which the agent does not log in, Genesys software does not consider those DNs in its calculation of resource capacity.
Routing of more than one call to a multiline DN within a Genesys environment is currently unavailable in this release. Meridian and Meridian Link Symposium phones, for instance, present special examples of multiline DNs that occupy two DNs for each Genesys place - a Position DN and an Extension DN. For this particular configuration, the Genesys Stat Server logs DN status against the Position DN, but only the Extension DN is accessible to the Genesys router for directing calls. Ensure that you consider this when incorporating capacity planning into your routing strategies. IVR phones present additional examples of multiline DNs for which the Genesys capacity model is unable to direct more than one voice interaction at a time.
There are two exceptions to this general rule:
- The Genesys router can route calls to voice treatment port DNs (IVR DNs) without requiring that agent logins be simulated on these DN types.
- In a Meridian 1/Meridian Link Symposium model, it is the Position DN that the agent logs in to, not the Extension DN. However, the Genesys router routes calls - one at a time - to the Extension DN, given an agent login on the Position DN.
|-| Multimedia DNs=
Capacity Planning for Multimedia DNs
Prior to the 7.6 release, Genesys capacity planning, from the routing perspective, treated multiline DN devices (such as IP phones that have simultaneous video and multiline DN capabilities) as single-line, voice DNs. Genesys software permitted the routing of one and only one voice interaction, for instance, to these DNs, provided that no other voice interaction was already occurring at the DN. The presence of an interaction on a DN prevented the routing of another interaction to that DN, even though the DN was capable of handling an additional interaction of a different media type. Furthermore, each DN was associated with a single media type.
The Stat Server 7.6 release introduced support for the multimedia DN - a DN type that is controlled by SIP Server.
To specify the physical capacity of a multimedia DN, the voice configuration option was introduced in the Framework 7.6 release for Extension-type DN objects. Setting this option to true enables you to specify whether capacity for voice interactions applies to the multimedia DN. Multimedia DN capacity settings propagate to capacity rules for supported resources. Refer to the “Configuring DNs to handle instant messaging” procedure in the Genesys Instant Messaging Solution Guide for more information on this topic.
Unlike a single-media DN, a multimedia DN supports multiple media (for example, voice and chat) on each DN. Introduction of this multimedia DN to a Genesys contact-center environment enhances the resource capacity model by enabling the Genesys Universal Router Server (URS) to route multiple interactions to a single instance of such a DN. The model now enables URS to route one or more of chat and/or voice interactions to the same DN if the DN is configured as multimedia, its type is Extension, and the servicing T-Server is SIP-compliant.
To support this new functionality, the semantics of actions that are generated by Stat Server for statistic and status calculation of multimedia DNs have been updated. In addition to prior classifications, actions can now be media-dependent or media-independent. Furthermore, media-dependent actions can be further classified as media-unique or media-common.
An action is media-unique if only one action of a particular media type can exist on a device. For example, the LoggedIn action is media-independent. Generation of this action does not rely on the media channel to which an agent registers. Contrarily, the CallInbound action is media-dependent, because inbound interactions in the current capacity model are always associated with only one media type. Furthermore, the CallInbound action is media-common. In fact, all call-related actions are media-dependent and media-common. The WaitForNextCall, NotReadyForNextCall, and AfterCallWork actions that occur on multimedia DNs are classified as media-dependent and media-unique.
Prior classifications of actions ran along the following lines and are still applicable to multimedia DNs:
- Actions are either durable or instantaneous.
- Actions are either related to interactions or not.
- Actions are generated for either mediation DNs or regular (or multimedia) DNs.
Refer to the Framework Stat Server User's Guide for a complete listing of how Stat Server–generated actions are categorized.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/RTME/8.5.0/Capacity/CapacityPlanning | 2020-03-28T20:13:32 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.genesys.com |
setsourcefilter function
The setsourcefilter inline function sets the multicast filter state for an IPv4 or IPv6 socket.
Syntax
int setsourcefilter( SOCKET Socket, ULONG Interface, const SOCKADDR *Group, int GroupLength, MULTICAST_MODE_TYPE FilterMode, ULONG SourceCount, const SOCKADDR_STORAGE *SourceList );
Parameters
Socket
A descriptor that identifies a multicast socket.
Interface
The interface index of the multicast interface.
Group
A pointer to the socket address of the multicast group.
GroupLength
The length, in bytes, of the socket address pointed to by the Group parameter.
FilterMode
The multicast filter mode for the multicast group address.
SourceCount
The number of source addresses in the buffer pointed to by the SourceList parameter.
SourceList
A pointer to a buffer with the IP addresses to associate with the multicast filter.
Return value
On success, setsourcefilter returns NO_ERROR (0). Any nonzero return value indicates failure and a specific error code can be retrieved by calling WSAGetLastError.
Remarks
The setsourcefilter inline function is used to set the multicast filter state for an IPv4 or IPv6 socket.
This function is part of socket interface extensions for multicast source filters defined in RFC 3678. An app can use these functions to retrieve and set the multicast source address filters associated with a socket.
Windows Phone 8: This function is supported for Windows Phone Store apps on Windows Phone 8 and later.
Windows 8.1 and Windows Server 2012 R2: This function is supported for Windows Store apps on Windows 8.1, Windows Server 2012 R2, and later. | https://docs.microsoft.com/en-us/windows/win32/api/ws2tcpip/nf-ws2tcpip-setsourcefilter | 2020-03-28T20:37:04 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
4. How does Qubole access data in my Cloud object store?¶
QDS accesses data in your Cloud storage account using credentials you configure when setting up your QDS account. In addition, QDS accesses data in the following ways:
- For Hive queries, Pig scripts, Hadoop jobs, and Presto queries, QDS runs a Hadoop cluster on instances that Qubole rents for you. The Hadoop cluster reads, processes data, and writes the results back to your storage buckets.
- When you browse or download results from Qubole’s website (UI or the API), Qubole servers read the results from your object store and provide them to you.
- When you run data import or export commands, a Qubole server transfers the data. | https://docs.qubole.com/en/latest/faqs/general-questions/qubole-access-data-buckets.html | 2020-03-28T21:19:42 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.qubole.com |
Software Download Directory
Live Forms v8.0 is no longer supported. Please visit Live Forms Latest
for our current Cloud Release. Earlier documentation is available too.
Live Forms is a multi-tenant application. Tenants allow you to segregate groups of users and roles. Users from one tenant cannot access users in any other tenant. Note: this does not apply to public forms/flow which do not require login access to a tenant.
The Live Forms.
Live Forms trial tenants in the cloud are initially configured with the frevvo Default security Manager. Once you have purchased your Live Forms license, you can switch the Security Manager of your tenant and retain existing forms/flows, users, roles and submissions.
Tenants using the Default Security Manager can be migrated to:
Tenants using the LDAP Security Manager can migrate to:
If you want to switch the security manager of your tenant, cloud customers should contact [email protected] to initiate the procedure.
On this page:
The superuser for Live Forms in-house customers can add new tenants to your Live Forms server using the Manage Tenants page.
You cannot remove or copy the d (Default tenant).
The superuser for in-house customers can use the Tenant page to add a new Live Forms Live Forms. Live Forms.
Live forms offers a user interface to specify credentials to external secure web services that are accessed by the forms/flows. Live Forms Live Forms to continue designing forms and users filling forms will have to get new form instance and re-enter the values. The tenant admin can override the default session timeout with the value that is entered into the Session Timeout field. Live Forms Live Forms. | https://docs.frevvo.com/d/pages/viewpage.action?pageId=21528832 | 2020-03-28T20:09:40 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['/d/images/icons/linkext7.gif', None], dtype=object)] | docs.frevvo.com |
KB2775511 deployment for the SCCM Admin | https://docs.microsoft.com/en-us/archive/blogs/michaelgriswold/kb2775511-deployment-for-the-sccm-admin | 2020-03-28T21:33:06 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
# Mobile Sensing
movisensXS has the ability to do mobile sensing. Mobile sensing is "Using the sensors of a mobile device (i.e. smartphone or tablet computer) to acquire data from the environment." Modern Smartphones are equipped with the sensors to monitor a diverse range of human activities and commonly encountered contexts.
This is particularly useful for the research field of ambulatory assessment.
Mobile sensing data is store in the Unisens format. Read more about the file format here.
WARNING
The mobile sensing features are only available on request!
# Features Library version 6510
Log App Usage Beta
This logs the used applications.
Log Battery Level Beta
This logs the battery level to a unisens log.
Log Nearby Devices Beta
This logs nearby devices in an unsisens log file. It will send a bluetooth advertisement through IBeacon standard on supported devices and will protocol all other devices which are broadcasting with a beacon standard.
Log Device Running Beta
This logs device running to a unisens log.
Log Location Beta
This logs the location in an energy efficient way.
It uses GPS, WLAN and Cell to balance the battery usage with precision.
While tracking there are continuous location updates at a maximum rate of every 5s and every 20m. The tracking transitions to a stationary state when the participant remains within 100m of a central position or no location update occurs for 120s. Then more tracking will be done until the participant leaves the 100m radius.
*Location accuracy is the estimated horizontal accuracy of this location, radial, in meters. We define horizontal accuracy as the radius of 68% confidence. In other words, if you draw a circle centered at this location's latitude and longitude, and with a radius equal to the accuracy, then there is a 68% probability that the true location is inside the circle. Furthermore problems can occur with the location tracking using WLAN if the router is moving. This can happen with WLAN in trains for example.
Log Music Listening Beta
This logs the currently listened music metadata and the current playback state.
- Tested and working: Google Android Player, Amazon, Spotify
- Not tested but should work: Google Android player, HTC Music, Apollo, Miui, Real, Sonyericsson, Rdio, Samsung Music Player, PowerAmp, Last.fm, Rhapsody, PlayerPro Music Player, Rocket Player, doubleTwist Music Player, Pandora, Winamp, 8tracks playlist radio, jetAudio HD Music Player, Spotify, Soundcloud
- Tested and not working: Apple Music
Log Notifications Beta
This Logs all incoming notifications in an eventbased manner.
Log Phone Call Beta
This logs the phone activity in an anonymized way (Hashing of phone numbers).
By hashing the phone number, first an individual code will be generated. This code together with the phone number will be processed using the SHA1 hash algorithm. Because of that the hash number can not be assigned to the phone number later on. Furthermore it is not possible to compare the hashs of different participants. How a phone call can be displayed in the results is shown in this example:
442637,Call,type=Outgoing|number={"ONE_WAY_HASH":"1c3d3074811843e1e133a3cba16d506ecb7e8593"}|duration=121|time=13:44:53|date=2018-02-01
Log Physical Activity Beta
This logs the physical activity of the user (IN_VEHICLE: 0, ON_BICYCLE: 1, ON_FOOT: 2, STILL: 3, UNKNOWN: 4, TILTING: 5). This implementation is based on Google Play Services which detection algorithm could change even during a running study.
Activity confidence is).
WARNING
Beginning with Android 5, activities may be received less frequently than minutely if the device is in power save mode and the screen is off. To conserve battery, activity reporting may stop when the device is 'STILL' for an extended period of time. It will resume once the device moves again. This only happens on devices that support the Sensor.TYPE_SIGNIFICANT_MOTION hardware.
WARNING
Physical Activity cannot be measured accurately with a Smartphone. Smartphone accelerometers are not very accurate, differ between devices and the wearing position of the Smartphone varies. To accurately measure it please use a dedicated activity sensor like the movisens Move4.
Log SMS Beta
This logs the SMS activity in an anonymized way (Hashing of phone numbers).
Log Steps Beta
This logs the steps.
WARNING
Steps cannot be measured accurately with a Smartphone. This is just an estimation based on 5 seconds of measurement of the Smartphone accelerometer in a minute. Smartphone accelerometers are not very accurate, differ between devices and the wearing position of the Smartphone varies. To accurately measure it please use a dedicated activity sensor like the movisens Move4.
Log Traffic Beta
This action logs the traffic usage.
Log Display On/Off Beta
This condition is true if the display is on. | https://docs.movisens.com/movisensXS/mobile_sensing/ | 2020-03-28T21:32:49 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.movisens.com |
SQL Authorization through Ranger in Presto¶
Introduction¶
QDS Presto supports authorization using Apache Ranger. In addition to providing column-level authorization, Ranger provides additional features, including row-level filtering and data masking. Ranger authorization works for the Hive connector in QDS Presto. This section describes the Ranger features supported in QDS Presto and how to configure Ranger with Presto.
Note
Currently, Apache Ranger is only supported from Presto version 0.208.
The following sections are:
Supported Ranger Features¶
Hive Authorization¶
All forms of Hive Authorization are supported with the Ranger plugin: Schema level, Table level, and Column level.
All Ranger policy constructs for authorization are also supported: Allow-Conditions, Deny-Conditions, Resource-Inclusion, and Resource-Exclusion. User Groups defined in Ranger are also supported.
Row-level Filtering¶
In Ranger, admins can set policies to hide certain rows from particular users or groups. Assume, for example, that an organization wants to prevent access by non-managers in the Finance department to certain details from the Salary table (empId, designation, salary) for executive staff. In this case, the admin adds all non-managers in the Finance department into a group, say
finEmps, and adds a row filter on the table Salary with the filter defined as
designation != 'executive'. The results of queries run by users in the
finEmps group will not include any rows where the value of the designation column is
executive.
Click here to learn more about row-level filters in Ranger (see use case #2).
Data Masking Policies¶
For datasets that include sensitive information, admins can configure a policy with Mask Conditions to mask the sensitive data for a particular set of users. Ranger provides several options for masking data, including Redact, Hash, Nullify, Custom, and more. The Ranger plugin in QDS Presto, however, currently supports only the Custom masking option. Support for other masking policies will be available in future releases.
Click here to learn more about data masking in Ranger (see use case #3).
Configuring Presto to use Ranger for Authorization¶
Before you begin configuring your Ranger plugin, note down the following information:
- Ranger URL. This is the endpoint of the Ranger Admin that serves your authorization policies. We will use as our sample Ranger URL. Connectivity between the Presto master instance and the Ranger Admin instance should be ensured by opening up the necessary ports. In the sample URL, for example, port 6080 is used, so the network should be configured such that the Presto master instance can communicate with port 6080 of the Ranger Admin instance.
- Credentials. The credentials provide access to the Ranger Admin. They are used to communicate with Ranger Admin to fetch policies and user-group information. We will use admin and password as sample credentials for the username and password, respectively.
- Service Name. This is the name of the service in Ranger Admin that holds the policies you want to apply to your catalog. We will use DepartmentBasedHiveAuth as our sample service name.
These configurations, along with others, are provided in the configurations files as described in the following sections.
Ranger Plugin Configuration Files¶
Two files must be configured for Ranger:
access-control.properties and the Ranger configuration file for catalogs. This section describes both these files in detail.
System Level configuration file¶
System level configurations are defined in the
access-control.properties file. The following configurations must be provided in this file:
access-control.name=ranger-access-control: This configures Presto to use Ranger for authorization.
ranger.usernameand
ranger.password: Credentials to communicate with Ranger Admin.
ranger.<catalog>.audit-config-xml: This is the location of the audit configuration file. Solr is the only audit store that is currently supported with Presto. An example of the
ranger.<catalog>.audit-config-xmlis provided below.
<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <property> <name>xasecure.audit.is.enabled</name> <value>true</value> </property> <property> <name>xasecure.audit.solr.is.enabled</name> <value>true</value> </property> <property> <name>xasecure.audit.solr.async.max.queue.size</name> <value>1</value> </property> <property> <name>xasecure.audit.solr.async.max.flush.interval.ms</name> <value>1000</value> </property> <property> <name>xasecure.audit.solr.solr_url</name> <value><Solr_ranger_audit_url></value> </property> </configuration>
Note
In the
ranger.<catalog>.audit-config-xmlfile,
xasecure.audit.solr.async.max.queue.sizeand
xasecure.audit.solr.async.max.flush.interval.msare optional configuration properties.
One or both of the following:
ranger.<Catalog>.security-config-xml: Location of the catalog-configuration XML file to use for particular catalog.
ranger.<Catalog>.config-file: Location of the catalog-configuration file containing key-value pairs to use for particular catalog’s Ranger configuration.
When both these configs are provided, configs from both are used and, in case of conflicting config keys, configuration from
ranger.<Catalog>.security-config-xmlare given preference over
ranger.<Catalog>.config-file. The contents of the file configured here are described in the next section, Ranger Configuration file(s) for Catalogs.
Optionally, the following config can also be provided:
ranger.user-group-cache-expiry-ttl: This governs the refresh interval in seconds of the Users-Groups cache, which is populated with information from Ranger. The default refresh interval is 30 seconds.
Ranger Configuration file(s) for Catalogs¶
This file defines Ranger configurations for the catalog. There must be one such file for each catalog for which you use
Ranger. This is the file whose location is configured in the
access-control.properties file, as described above.
The file can either be a Hadoop configuration style XML file from your existing Hive deployment, or in key-value pairs
of configName-configValue. As described in the section System Level configuration file above, either a
security-config-xml
or a config-file config can be used to point to the location of the Ranger configuration file.
The following must be defined in catalog configuration files. How they are defined depends on the file format you use, that is, either Hadoop configuration style XML or key-value pairs. Let us see examples of configuration in both styles in the next section:
ranger.plugin.<CatalogName>.service.name: Service Name in Ranger for the catalog.
ranger.plugin.<CatalogName>.policy.rest.url: Ranger endpoint from which to fetch policies.
ranger.service.store.rest.url: Endpoint to fetch Users-Groups information. This would be the same as the Ranger endpoint when using Ranger as the source of User-Group information.
Additionally, the following can be provided:
ranger.plugin.<CatalogName>.policy.pollIntervalMs: Polling interval for policies. The default is 30000 (30 seconds).
- Any other Ranger client configs that have been used with your Hive installation.
Sample Ranger Plugin Configurations in QDS Presto¶
Hive tables in QDS show up under the catalog name “hive.” We will look at the Ranger configuration with
hive as the
catalog name. If you have configured your QDS Presto cluster to use a different catalog name, replace the name
hive
with your catalog name in the configs described below.
We will use the following sample Ranger configurations:
- Ranger URL:
- Ranger Credentials:
- username:
admin
- Service Name for the Hive Catalog:
DepartmentBasedHiveAuth
Sample XML based Ranger Configuration file¶
The following sample values are shown in the screen shot that follows:
access-control.properties: access-control.name=ranger-access-control ranger.username=admin ranger.password=password ranger.hive.security-config-xml=/usr/lib/presto/etc/hive_ranger.xml hive_ranger.xml: <configuration> <property> <name>ranger.plugin.hive.service.name</name> <value>DepartmentBasedHiveAuth</value> </property> <property> <name>ranger.plugin.hive.policy.pollIntervalMs</name> <value>5000</value> </property> <property> <name>ranger.service.store.rest.url</name> <value></value> </property> <property> <name>ranger.plugin.hive.policy.rest.url</name> <value></value> </property> </configuration>
The screenshot above is from the QDS Cluster Edit page, and shows the Presto configuration overrides. Through these configs,
Presto is configured to use
ranger-access-control via
access-control.properties.
/usr/lib/presto/etc/hive_ranger.xml
is configured as the xml based Ranger configuration file for the
hive catalog.
Sample Key-Value based Ranger Configuration File¶
A second way to configure the Ranger plugin, as mentioned above, is using key-value pairs, as shown in this section. The following sample values are shown in the screen shot that follows:
access-control.name=ranger-access-control ranger.username=admin ranger.password=password ranger.hive.config-file=/usr/lib/presto/etc/hive_ranger hive_ranger: ranger.plugin.hive.service.name=DepartmentBasedHiveAuth ranger.plugin.hive.policy.pollIntervalMs=5000 ranger.service.store.rest.url= ranger.plugin.hive.policy.rest.url=
Limitations¶
- HTTPS mode of Ranger Admin is not yet supported. This will be supported in a future release.
- Row filters and data masking policies with non ANSI SQL constructs are not supported, and will lead to query failures. For example, Strings are delimited by single quotes in Presto, while Hive uses both double quotes and single quotes for them. If existing Hive policies are used with Presto, admins should convert any double quotes to single quotes, as supported by Presto.
- Row filters and data masking policies with functions not defined in Presto are not supported, and will lead to query failures. It is recommended that admins either define new policies for Presto using only functions supported by Presto or, if they want to use existing Hive policies in Presto, they write the UDFS for the functions not defined in Presto.
- Some SQL constructs have different semantics in Presto than in Hive. As a result, row filter policies defined for Hive might not give the same results when used in Presto. Admins are advised to either use a new policy defined for Presto, or test the existing Hive policies thoroughly before using them.
Examples¶
Let us walk through several examples of the Ranger plugin capabilities in this section.
Users and Groups¶
First let us see the users and groups configuration. We have two groups defined:
- Admins
- Analysts
There are three users who belong to one of these groups:
- Sakshi: username sakshia and belongs to Admins group
- Shubham: username stagra and belongs to Analyst group
- Sumit: username sumitm and belongs to Analyst group
Here are the above users as they appear in the Ranger Users/Groups page:
Table Access Example¶
We add a policy in Ranger to allow access to the customer table to only stagra:
With this, Shubham should be able to run any queries on the customer table:
But any other user, say Sumit, should be unable to access this table:
As seen above, the query by [email protected] fails with an AccessDeniedException.
Column Level Authorization¶
Next, let us configure a policy the the table nation. We give user Shubham full access to this table, and exclude user Sumit from access to the table:
For user Sumit, we provide access to all but n_regionkey column in this table. Notice the exclusion of this column in the policy below:
With this setup, user Shubham should be able to access all columns of the table nation:
But trying to select n_regionkey by user Sumit should throw an access denied exception:
But, since we have allowed access to other columns of this table for user Sumit, queries on those columns work fine:
Row Filters¶
Next, we setup a Row Filter Policy to hide the row with n_name as INDIA from user Shubham:
With this policy, the user Shubham does not see any rows where the column n_name has the value INDIA:
But since the policy does not filter any rows for user Sumit, he should be able to see the rows with n_name having value INDIA:
Data Masking¶
To demonstrate Data Masking capability, we define a policy for user Shubham to see the n_comment values in table nation truncated to only the first five characters:
Selecting n_comment by user Shubham returns the truncated data as follows:
Groups-based Policy¶
Finally, we will see an example where we set up policy on a group rather than at the user level. In this example, full access is provided to the table promotion to the Admins group. The rest of the users do not have access to it.
This means that only user Sakshi, who is a part of the Admins group, should be able to access this table:
User Shubham should not be able to access the promotion table:
User Sumit should also be unable to access the promotion table:
| https://docs.qubole.com/en/latest/security-guide/data-ranger-for-presto.html | 2020-03-28T21:48:27 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['../_images/ranger_masking_options.png',
'../_images/ranger_masking_options.png'], dtype=object)
array(['../_images/ranger_config_overrides.png',
'../_images/ranger_config_overrides.png'], dtype=object)
array(['../_images/ranger_config_with_key-value_pairs.png',
'../_images/ranger_config_with_key-value_pairs.png'], dtype=object)
array(['../_images/01_Users_and_Groups.png',
'../_images/01_Users_and_Groups.png'], dtype=object)
array(['../_images/02_customer_access_to_shubham.png',
'../_images/02_customer_access_to_shubham.png'], dtype=object)
array(['../_images/03_Select_customer_by_shubham_works.png',
'../_images/03_Select_customer_by_shubham_works.png'], dtype=object)
array(['../_images/04_Select_customer_by_sumit_fails.png',
'../_images/04_Select_customer_by_sumit_fails.png'], dtype=object)
array(['../_images/05a_nation_table_access_to_shubham_not_sumit.png',
'../_images/05a_nation_table_access_to_shubham_not_sumit.png'],
dtype=object)
array(['../_images/05b_nation_table_access_to_sumit_excluding_n_regionkey.png',
'../_images/05b_nation_table_access_to_sumit_excluding_n_regionkey.png'],
dtype=object)
array(['../_images/06_select_nation_by_shubham_works.png',
'../_images/06_select_nation_by_shubham_works.png'], dtype=object)
array(['../_images/07a_select_all_on_nation_by_sumit_fails.png',
'../_images/07a_select_all_on_nation_by_sumit_fails.png'],
dtype=object)
array(['../_images/07b_select_n_name_on_nation_by_sumit_works.png',
'../_images/07b_select_n_name_on_nation_by_sumit_works.png'],
dtype=object)
array(['../_images/08_row_filter_on_nation_hide_INDIA_row_for_shubham.png',
'../_images/08_row_filter_on_nation_hide_INDIA_row_for_shubham.png'],
dtype=object)
array(['../_images/09a_nothing_returned_for_INDIA_for_shubham.png',
'../_images/09a_nothing_returned_for_INDIA_for_shubham.png'],
dtype=object)
array(['../_images/09b_row_returned_for_INDIA_for_sumit.png',
'../_images/09b_row_returned_for_INDIA_for_sumit.png'],
dtype=object)
array(['../_images/10_Data_mask_on_nation_to_show_only_5_characters_in_comment.png',
'../_images/10_Data_mask_on_nation_to_show_only_5_characters_in_comment.png'],
dtype=object)
array(['../_images/11_Data_mask_in_action.png',
'../_images/11_Data_mask_in_action.png'], dtype=object)
array(['../_images/12_policy_to_give_access_to_promotion_to_admins.png',
'../_images/12_policy_to_give_access_to_promotion_to_admins.png'],
dtype=object)
array(['../_images/15_select_on_promotion_by_sakshi_passes.png',
'../_images/15_select_on_promotion_by_sakshi_passes.png'],
dtype=object)
array(['../_images/13_select_on_promotion_by_shubham_fail.png',
'../_images/13_select_on_promotion_by_shubham_fail.png'],
dtype=object)
array(['../_images/14_select_on_promotion_by_sumit_fail.png',
'../_images/14_select_on_promotion_by_sumit_fail.png'],
dtype=object) ] | docs.qubole.com |
Jupyter Notebooks¶
Qubole provides JupyterLab interface, which is the next generation user interface for Jupyter. Jupyter notebooks are supported on Spark 2.2 and later versions.
Note
JupyterLab interface is a Beta feature, and is not enabled for all users by default. Contact your account executive or customer success manager to enable this feature in your account.
The following topics help you understand how to use the JupyterLab notebook interface to create and manage Jupyter notebooks:
- Accessing JupyterLab Interface with R58
- Accessing JupyterLab Interface in Earlier Versions
- Access Control in Jupyter Notebooks
- Folders in JupyterLab Interface
- Creating Jupyter Notebooks
- Scheduling Jupyter Notebooks
- Managing Jupyter Notebooks
- Version Control Systems for Jupyter Notebooks
- Exploring Data in Jupyter Notebooks
- Interpreter Modes for Jupyter Notebooks
- Configuring Spark Settings for Jupyter Notebooks
- Viewing Spark Application Status
- Converting Zeppelin Notebooks to Jupyter Notebooks
- Adding Packages to Jupyter Notebooks
- Changing the JupyterLab Interface Theme
- Known Limitations | https://docs.qubole.com/en/latest/user-guide/notebooks-and-dashboards/notebooks/jupyter-notebooks/index.html | 2020-03-28T21:45:17 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.qubole.com |
Storage encryption¶
OpenLMI supports Linux Unified Key Setup (LUKS) to encrypt block devices. This means any device can be formatted with LUKS, which destroys all data on the device and allows for encryption of the device future content. The block device then contains encrypted data. To see unencrypted (clear-text) data, the LUKS format must be opened. This operation creates new block device, which contains the clear-text data. This device is just regular block device and can be formatted with any filesystem. All write operations are automatically encrypted and stored in the LUKS format data.
To hide the clear-text data, the clear text device must be closed. This destroys the clear-text device, preserving only encrypted content in the LUKS format data.
The data are encrypted by a key, which is accessible using a pass phrase. There can be up to 8 different pass phrases per LUKS format. Any of them can be used to open the format and to unencrypt the data.
Note
There is currently no way how to specify which algorithm, key or key size will be used to actually encrypt the data. cryptsetup defaults are applied.
CIM_StorageExtent can be recognized by LMI_LUKSFormat resides on it.
If the LMI_LUKSFormat is opened, the new clear-text device is created as LMI_LUKSStorageExtent, which has BasedOn association to the original CIM_StorageExtent.
All operations with LUKS format can be done using LMI_ExtentEncryptionConfigurationService.
Following instance diagram shows one encrypted partition. The LUKS is not opened, which means that there is no clear-text device on the system.
Following instance diagram shows one encrypted partition with opened LUKS. That means any data written to /dev/mapper/cleartext are automatically encrypted and stored on the partition.
Useful methods¶
- CreateEncryptionFormat
- Formats a StorageExtent with LUKS format. All data on the device are destroyed.
- OpenEncryptionFormat
- Opens given LUKS format and shows its clear-text in LMI_LUKSStorageExtent.
- CloseEncryptionFormat
- Closes given LUKS format and destroys its previously opened LMI_LUKSStorageExtent.
- AddPassphrase, DeletePassphrase
- Manage pass phrases for given LUKS format.
Use cases¶
Note
All example scripts expect properly initialized lmishell.
Create encrypted file system.¶
Use CreateEncryptionFormat to create LUKS format, open it and create ext3 filesystem on it:
encryption_service = ns.LMI_ExtentEncryptionConfigurationService.first_instance() filesystem_service = ns.LMI_FileSystemConfigurationService.first_instance() # Find the /dev/sda1 device sda1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sdb1"}) # Format it (ret, outparams, err) = encryption_service.SyncCreateEncryptionFormat( InExtent=sda1, Passphrase="opensesame") luks_format = outparams['Format'].to_instance() # 'Open' it as /dev/mapper/secret_data (ret, outparams, err) = encryption_service.SyncOpenEncryptionFormat( Format=luks_format, Passphrase="opensesame", ElementName="secret_data") clear_text_extent = outparams['Extent'].to_instance() # Format the newly created clear-text device (ret, outparams, err) = filesystem_service.SyncLMI_CreateFileSystem( FileSystemType=filesystem_service.LMI_CreateFileSystem.FileSystemTypeValues.EXT3, InExtents=[clear_text_extent])
The resulting situation is the same as shown in the second diagram above.
Close opened LUKS format¶
CloseEncryptionFormat can be used to destroy the clear-text device so only encrypted data is available. The clear-text device must be unmounted first!
encryption_service = ns.LMI_ExtentEncryptionConfigurationService.first_instance() # Find the LUKS format sda1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sdb1"}) luks_format = sda1.first_associator(AssocClass="LMI_ResidesOnExtent") # Close it (ret, outparams, err) = encryption_service.SyncCloseEncryptionFormat( Format=luks_format)
The resulting situation is the same as shown in the first diagram above.
Pass phrase management¶
Pass phrases can be added or deleted using AddPassphrase and DeletePassphrase methods.
Following code can be used to replace weak ‘opensesame’ password with something stronger:
# Find the LUKS format sda1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sdb1"}) luks_format = sda1.first_associator(AssocClass="LMI_ResidesOnExtent") # Add a pass phrase (ret, outparams, err) = encryption_service.AddPassphrase( Format=luks_format, Passphrase="opensesame", NewPassphrase="o1mcW+O27F") # Remove the old weak one (ret, outparams, err) = encryption_service.DeletePassphrase( Format=luks_format, Passphrase="opensesame")
There are 8 so called key slots, which means each LUKS formats supports up to 8 different pass phrases. Any of the pass phrases can be used to open the LUKS format. Status of these key slots can be found in LMI_LUKSFormat.SlotStatus property. | https://openlmi.readthedocs.io/en/latest/openlmi-storage/usage-luks.html | 2020-03-28T21:38:20 | CC-MAIN-2020-16 | 1585370493120.15 | [] | openlmi.readthedocs.io |
General Usage
$ ponzu [flags] command <params>
Commands¶
new¶
generate, gen, g¶ { item.Item Title string `json:"title"` Body string `json:"body"` Rating int `json:"rating"` Tags []string `json:"tags"` }:
Generate Content References
It's also possible to generate all of the code needed to create references between your content types. The syntax to do so is below, but refer to the documentation for more details:
$ ponzu gen c author name:string genre:string:select $ ponzu gen c book title:string author:@author,name,genre
The commands above will generate a
Book Content type with a reference to an
Author item, by also generating a
reference.Select
as the view for the
author field.
build¶
From within your Ponzu project directory, running build will copy and move the necessary files from your workspace into the vendored directory, and will build/compile the project to then be run.
Optional flags:
-
--gocmd sets the binary used when executing
go build within
ponzu build step
Example:
$ ponzu build (or) $ ponzu build --gocmd=go1.8rc1 # useful for testing
Errors will be reported, but successful build commands return nothing.
run¶:
--bindsets the address for ponzu to bind the HTTP(S) server
-)
--docsruns a local documentation server in case of no network connection
--docs-portsets the port on which the docs server listens for HTTP requests [defaults to 1234]
Example:
$ ponzu run (or) $ ponzu run --bind=0.0.0.0 ¶
Will backup your own custom project code (like content, addons,¶
Downloads an addon to GOPATH/src and copies it to the current Ponzu project's
/addons directory.
Example:
$ ponzu add github.com/bosssauce/fbscheduler
Errors will be reported, but successful add commands return nothing.
version, v¶¶
- Make code changes
- Test changes to ponzu-dev branch
- make a commit to ponzu-dev
- to manually test, you will need to use a new copy (ponzu new path/to/code), but pass the
--devflag so that ponzu generates a new copy from the
ponzu-devbranch, not master by default (i.e.
$ponzu new --dev /path/to/code)
- build and run with
$ ponzu build --dev new | http://docs.ponzu-cms.org/CLI/General-Usage/ | 2020-03-28T20:02:10 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.ponzu-cms.org |
GENEVE¶
GENEVE supports all of the capabilities of VXLAN, NVGRE, and STT and was designed to overcome their perceived limitations. Many believe GENEVE could eventually replace these earlier formats entirely. - A technique for composing network fabrics larger than a single switch while maintaining non-blocking bandwidth across connection points. ECMP is used to divide traffic across the multiple links and switches that constitute the fabric. Sometimes termed “leaf and spine” or “fat tree” topologies.
Geneve Header:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |Ver| Opt Len |O|C| Rsvd. | Protocol Type | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Virtual Network Identifier (VNI) | Reserved | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Variable Length Options | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Configuration¶
Configure interface <interface> with one or more interface addresses.
address can be specified multiple times as IPv4 and/or IPv6 address, e.g. 192.0.2.1/24 and/or 2001:db8::1/64
Example:
set interfaces geneve gnv0 address 192.0.2.1/24 set interfaces geneve gnv0 address 192.0.2.2/24 set interfaces geneve gnv0 address 2001:db8::ffff/64 set interfaces geneve gnv0 address 2001:db8:100::ffff/64 | https://docs.vyos.io/en/latest/interfaces/geneve.html | 2020-03-28T21:27:27 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.vyos.io |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::Rekognition::Types::KinesisVideoStream
- Defined in:
- (unknown)
Overview
Note:
When passing KinesisVideoStream as input to an Aws::Client method, you can use a vanilla Hash:
{ arn: "KinesisVideoArn", }
Kinesis video stream stream that provides the source streaming video for a Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Returned by:
Instance Attribute Summary collapse
- #arn ⇒ String
ARN of the Kinesis video stream stream that streams the source video.
Instance Attribute Details
#arn ⇒ String
ARN of the Kinesis video stream stream that streams the source video. | https://docs.amazonaws.cn/sdk-for-ruby/v2/api/Aws/Rekognition/Types/KinesisVideoStream.html | 2020-03-28T20:31:49 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.amazonaws.cn |
So Many Cool Announcements, Where to Start?
Visual Studio 2010 and the .NET Framework 4.0 week on Channel 9
The week of November 10th is Visual Studio 2010 and the .NET Framework 4.0 week on Channel 9! They'll have 12 videos going live this week featuring interviews with various members of the Visual Studio and the .NET Framework product teams, including several screen-cast demonstrations of the latest bits. Stay tuned to for all of the action...
TFS 2008 Power Tools - October 2008 Release is here!
So many cool new features, as well as many of the usual suspects Brian alluded to in the beginning of October (Team Members, Windows Shell Extension, Power Shell support, and Custom component download, tfpt unshelve /undo and BPA improvements). Brian as always did a fantastic job of summing it up so I won't bother repeating all the details, you can read for yourself here:
RC1 of the VSTS 2008 Database Edition Tools GDR is Out!
I know, the what of the what? GDR is a general developer release of some add-ons for the 2008 DB Pro edition of Team System. Think of it as power tools on steroids, and only targeting that one edition. This awesome new set of features includes all of the previous power tools, support for SQL Server 2008 database projects, distinct Build and Deploy phases, Static Code Analysis and improved integration with SQL CLR projects.
Database Edition ALSO no longer requires a Design Database. Therefore, it is no longer necessary to install an instance of SQL Express or SQL Server prior to using Database Edition. How awesome is that? Find out more and download it today at the following location:
Technorati Tags: TFS 2008,Power Tools,Database professional Edition,GDR,.NET Framework 4.0,VS 2010,Channel 9 | https://docs.microsoft.com/en-us/archive/blogs/angelab/so-many-cool-announcements-where-to-start | 2020-03-28T22:28:27 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
DataGridAutomationPeer.GetPattern Method
Microsoft Silverlight will reach end of support after October 2021. Learn more.
Gets an object that supports the requested pattern, based on the patterns supported by this DataGridAutomationPeer.
Namespace: System.Windows.Automation.Peers
Assembly: System.Windows.Controls.Data (in System.Windows.Controls.Data.dll)
Syntax
'Declaration Public Overrides Function GetPattern ( _ patternInterface As PatternInterface _ ) As Object
public override Object GetPattern( PatternInterface patternInterface )
Parameters
- patternInterface
Type: System.Windows.Automation.Peers.PatternInterface
One of the enumeration values.
Return Value
Type: System.Object
The object that implements the pattern interface, or nulla null reference (Nothing in Visual Basic) if the specified pattern interface is not implemented by this peer.
Remarks
This method returns a non-null result for the Grid, Selection and Table patterns. It also returns a non-null result for the Scroll pattern, so long as the element in question is scrollable.
Version Information
Silverlight
Supported in: 5, 4, 3
Platforms
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
See Also | https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/cc838653%28v%3Dvs.95%29 | 2020-03-28T22:07:51 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
You can install the Template Service Broker to gain access to the template applications that it provides.
Install the service catalog
The Template Service Broker gives the service catalog visibility into the default Instant App and Quickstart templates that have shipped with OKD since its initial release. The Template Service Broker can also make available as a service anything for which an OKD.
The Template Service Broker is not installed by default in OKD 4.
You have installed the service catalog.
The following procedure installs the Template Service Broker Operator using the web console.
Create a namespace.
Navigate in the web console to Administration → Namespaces and click Create Namespace.
Enter
openshift-template-service-broker in the Name field and... | https://docs.okd.io/latest/applications/service_brokers/installing-template-service-broker.html | 2020-03-28T21:15:05 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.okd.io |
11. I have my data in RDS. Can I use Hive to process the data?¶
Yes. You can use Data Import from the Analyze page of the QDS to fetch data from RDS, and many other non RDBMS databases, into a Hive table. Then normal Hive queries can be run on the Hive table.
As a final step you can use Data Export on the processed data to export it back to a database of your choice for visualization purposes. | https://docs.qubole.com/en/latest/faqs/hive/data-rds-can-use-hive-process-data.html | 2020-03-28T21:42:43 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.qubole.com |
Dynamic DNS¶
VyOS is able to update a remote DNS record when an interface gets a new IP address. In order to do so, VyOS includes ddclient, a Perl script written for this only one purpose.
ddclient uses two methods to update a DNS record. The first one will send updates directly to the DNS daemon, in compliance with RFC 2136. The second one involves a third party service, like DynDNS.com or any other similar website. This method uses HTTP requests to transmit the new IP address. You can configure both in VyOS.
Configuration¶
RFC 2136 Based¶
Example¶
- Register DNS record
example.vyos.ioon DNS server
ns1.vyos.io
- Use auth key file at
/config/auth/my.key
- Set TTL to 300 seconds
[email protected]# show service dns dynamic interface eth0.7 { rfc2136 VyOS-DNS { key /config/auth/my.key record example.vyos.io server ns1.vyos.io ttl 300 zone vyos.io } }
This will render the following ddclient configuration entry:
# # ddclient configuration for interface "eth0.7": # use=if, if=eth0.7 # RFC2136 dynamic DNS configuration for example.vyos.io.vyos.io server=ns1.vyos.io protocol=nsupdate password=/config/auth/my.key ttl=300 zone=vyos.io example.vyos.io
Note
You can also keep different DNS zone updated. Just create a new
config node:
set service dns dynamic interface <interface> rfc2136
<other-service-name>
HTTP based services¶
VyOS is also able to use any service relying on protocols supported by ddclient.
To use such a service, one must define a login, password, one or multiple hostnames, protocol and server.
customDynDNS provider is used the protocol used for communicating to the provider must be specified under <protocol>. See the embedded completion helper for available protocols.
customDynDNS provider is used the <server> where update requests are being sent to must be specified.
Example:¶
Use DynDNS as your preferred provider:
set service dns dynamic interface eth0 service dyndns set service dns dynamic interface eth0 service dyndns login my-login set service dns dynamic interface eth0 service dyndns password my-password set service dns dynamic interface eth0 service dyndns host-name my-dyndns-hostname
Note
Multiple services can be used per interface. Just specify as many serives per interface as you like!
Running Behind NAT¶
By default, ddclient will update a dynamic dns record using the IP address directly attached to the interface. If your VyOS instance is behind NAT, your record will be updated to point to your internal IP.
ddclient has another way to determine the WAN IP address. This is controlled by: | http://docs.vyos.io/en/latest/services/dynamic-dns.html | 2020-03-28T21:12:12 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.vyos.io |
How to create Google “reCAPTCHA” API key?
This service is provided by a third party company. We don’t take any responsibility for any changes in how this service operates or works. Any issue with this service you need to discuss directly to company which provide it.
- In order to protect your website against spam we use free Google reCAPTCHA service.
- To create new API key please visit this page
- Click “Admin Console” button in the top right corner of the page. You need to be logged in to access this page.
- In the top right corner of the Admin Console page click plus button “Create”.
- Enter your label, choose ‘reCAPTCHA V3’ option, enter your domain name, accept Terms of Service and press “Submit” button.
- Now you should be able to see your API keys please make a note of it, you will need it in the next step.
- Once the process described above is completed login to, go to Contact tab (left side) -> Integration -> click Reset Keys -> add Site and Secret keys and Save
- Here you can find out more about Google anti spam protection
- Please see the screenshots below for step by step instructions. | https://docs.easytaxioffice.com/getting-started/google-recaptcha-api-key/ | 2020-03-28T20:35:16 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.easytaxioffice.com |
- TeamForge 18.2 supports Review Board 2.5.6.1 on RHEL/CentOS 6.10 and 7.5.
- This procedure is for those who have Review Board already and are upgrading Review Board to a latest build on RHEL/CentOS 6.10 or 7.5.
-.
Back up the Review Board Data Directory
The default Review Board data directory has been changed from
/opt/collabnet/reviewboard/data to
/opt/collabnet/teamforge/var/reviewboard/data in TeamForge 17.4.
Are You Upgrading from TeamForge 17.1 or Earlier to TeamForge 17.4 or Later?
If you are upgrading from TeamForge 17.1 or earlier to TeamForge 17.4 or later, regardless of whether you upgrade Review Board on the same or new hardware, you must back up your Review Board data directory from
/opt/collabnet/reviewboard/data and restore it to
/opt/collabnet/teamforge/var/reviewboard/data.
- Back up the Review Board data directory.
cd /opt/collabnet tar -zcvf /tmp/reviewboard_data.tgz reviewboard
- Copy the
/tmp/reviewboard_data.tgzfile to the
/tmpdirectory of the new server if you are upgrading Review Board on a new hardware.
Are You Upgrading from TeamForge 17.4 to TeamForge 17.8 or Later?
- Back up the Review Board data directory.
cd /opt/collabnet/teamforge/var tar -zcvf /tmp/reviewboard_data.tgz reviewboardTip: If you are upgrading from TeamForge 17.4 (or later), the
/opt/collabnet/teamforge/vardirectory would have been backed up already as part of your TeamForge upgrade process, in which case you can skip backing up the
/opt/collabnet/teamforge/vardirectory again.
- Copy the
/tmp/reviewboard_data.tgzfile to the
/tmpdirectory of the new server. cvs
Restore the Review Board data.
The default Review Board data directory has been changed from
/opt/collabnet/reviewboard/datato
/opt/collabnet/teamforge/var/reviewboard/datain TeamForge 17.4. If you are upgrading from TeamForge 17.1 or earlier to TeamForge 17.4 or later, regardless of whether you upgrade Review Board on the same or new hardware, you must back up your Review Board data directory from
/opt/collabnet/reviewboard/dataand restore it to
/opt/collabnet/teamforge/var/reviewboard/data.
If you are upgrading from TeamForge 17.4 (or later), the
/opt/collabnet/teamforge/vardirectory would have been restored already as part of your TeamForge upgrade process, in which case you can skip restoring the
/opt/collabnet/teamforge/vardirectory again.
If you are upgrading on a new hardware, ensure that you have already copied the backup of the Review Board data directory to the
/tmpdirectory of the new server.
cd /opt/collabnet/teamforge/var/ tar -zxvf /tmp/reviewboard_data.tgz
- If SCM is installed on a separate box, run the following script to authenticate a scmviewer user against a TeamForge Subversion repository for creating a new review request.
python ./svn-auth.py --repo-path=https://<scm_domain>/svn/repos/<repo_dir_name>
Post Upgrade Tasks
- Add Review Board to Projects
- Users are not getting email notifications for review requests and reviews. | http://docs.collab.net/teamforge182/upgrade_review_board.html | 2020-03-28T20:32:20 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.collab.net |
csv command.
This documentation applies to the following versions of Splunk Cloud™: 7.0.11, 8.0.2001, 7.0.13, 7.1.6, 7.2.4, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.1.3
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/SplunkCloud/7.0.11/SearchReference/Inputcsv | 2020-03-28T19:56:25 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Important: #87518 - Use prepared statements for pdo_mysql per default¶
See Issue #87518
Description¶
Before this adaption, the
pdo_mysql driver used emulated prepared statements per default.
With that, all returned values of a query were strings.
With this change the behavior changes to use the actual prepared statements, which return native data types. Thus, if a column is defined as INTEGER, the returned value in PHP will also be an INTEGER.
It is possible to deactivate this feature as follows:
You need to “overwrite” the option to set
PDO::ATTR_EMULATE_PREPARES
(reference:) in your database connection:
'Connections' => [ 'Default' => [ 'dbname' => 'some_database_name', 'driver' => 'pdo_mysql', 'driverOptions' => [ \PDO::ATTR_EMULATE_PREPARES => true ], 'password' => 's0meS3curePW!', 'user' => 'someUser', ], ], | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.5.x/Important-87518-UsePreparedStatementsForPdo_mysqlPerDefault.html | 2020-03-28T21:47:56 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.typo3.org |
- Shipping
- Misc Shipping Documents
- Gliding Eagle DTC China
Gliding Eagle - DTC China
Setup
Gliding Eagle Shipping Type Codes
Placing A Gliding Eagle Order
Gliding Eagle FAQs
Setup
1. Contact Gliding Eagle to signup and register for an account.
Gliding Eagle Inc.
120 Tower Road, Suite 1
American Canyon, CA 94503
Phone: 707-939-5576
[email protected]
2. Email [email protected] when you have a registered Gliding Eagle account to have the features enabled.
3. Adjust your shipping strategy for Gliding Eagle.
- Add Shipping Zone(s) for the Chinese provinces
- Add Shipping Type with correct code. See Shipping Type Codes >
Placing A Gliding Eagle Order
1. Add your items to your POS cart.
2. Click Shipping.
3. Select if the order is being shipped to the customer's billing address information or an alternate shipping address.
4. Select China from the Country drop down.
5. The text address field text will now appear both in English and in Mandrine. Click the Globe icon on the keyboard to enabled the iPad's Chinese Keyboard. Have the customer or tasting room staff member enter the correct address information. Click Apply.
6. Select the Gliding Eagle shipping type.
7. Finish completing the order by checking out as you would with any other POS order.
Gliding Eagle FAQs
- Will this program work for wineries in the USA, Canada, Australia, etc.?
- What are the shipping rates?
- Are the shipping rates integrated into WineDirect?
- Can I place Gliding Eagle shipping orders on the website, admin panel, and POS?
- Is there a bottle minimum?
- How do I get my orders to Gliding Eagle?
- Is there extra costs to use Gliding Eagle?
- Do I have to be on WineDirect Plus to use Gliding Eagle?
Will this program work for wineries in the USA, Canada, Australia, etc.?
This program is for wineries in the USA only. Any winery from the USA can use this program but it is up to the winery to get the package to Gliding Eagle at the cost of the winery.
What are the shipping rates?
The rates that you are charged for shipping will be handled directly with Gliding Eagle. For more information on the rates please contact Gliding Eagle:
Gliding Eagle Inc.
120 Tower Road, Suite 1
American Canyon, CA 94503
Phone: 707-939-5576
[email protected]
Are the shipping rates integrated into WineDirect?
No. Gliding Eagle will charge the winery set rates for shipping the wines. Shipping price is dependent on the selling price of the wine and the full service price is inclusive of duties & taxes. It is up to the winery to determine what they will charge their customers for shipping.
WineDirect suggests using a flat rate shipping. For example if it is $25 dollars for the first 6 bottles then have your ranges list 1-6 bottles = $25. If a customer does not have to pay more for shipping additional bottles they are more likely to increase the number of bottles they purchase.
Can I place Gliding Eagle shipping orders on the website, admin panel, and POS?
Gliding Eagle orders can only be placed on the POS at launch. Additional channels may come in future.
Is there a bottle minimum?
No. There is no bottle minimum; you can ship 1 bottle if you'd like. This would not affect any minimum fee Gliding Eagle may have in their rates.
How do I get my orders to Gliding Eagle?
Depends where you are located. Gliding Eagle will be notified automatically as soon as you place an Order to be shipped to China. If you are located in the Napa/Sonoma area, a courier for Gliding Eagle will pick up your orders. If you are outside of the pickup area, you can ship your wine to the Gliding Eagle fulfillment facility for processing. Regardless of your location, if you have wines stored at Wine Direct, you can also have a Gliding Eagle courier to pick up wines there via Will Call.
Is there extra costs to use Gliding Eagle?
No. There is no additional cost from WineDirect, only the shipping costs you have with Gliding Eagle.
Do I have to be on WineDirect Plus to use Gliding Eagle?
Yes. Gliding Eagle DTC China fulfillment is exclusive to WineDirect Plus. Want Plus? Learn More > | https://docs.winedirect.com/Shipping/Misc-Shipping-Documents/Gliding-Eagle-DTC-China | 2020-03-28T20:41:19 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['https://documentation.vin65.com/assets/client/Image/Store/Shipping/POS-Select-China.jpg',
None], dtype=object)
array(['https://documentation.vin65.com/assets/client/Image/Settings/Website-Settings/POS-Chinese-Keyboard.jpg',
None], dtype=object) ] | docs.winedirect.com |
update
Updates the role object in Kuzzle.
Unlike a regular document update, this method will replace the whole role definition under the indexes node with the
updateContent parameter.
In other words, you always need to provide the complete role definition.
This method has the same effect as calling setContent followed by the save method.
To get more information about Kuzzle permissions, please refer to our permissions guide.
update(content, [options], [callback])
Options
Return Value
Returns the
Role object to allow chaining.
Callback Response
Returns the updated version of this object.
Usage
<?php use Kuzzle\Security\Role; // ... $roleDefinition = [ 'controllers' => [ 'document' => [ 'actions' => [ 'get' => true ] ] ] ]; /* * @var $role Role */ try { $role = $role->update($roleDefinition); // $role instanceof Role } catch (ErrorException $e) { // error occured } | https://docs.kuzzle.io/sdk/php/3/core-classes/role/update/ | 2020-03-28T20:49:35 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.kuzzle.io |
Introduction to Panda3D¶
Panda3D Basics¶
P.
Error tolerance is about the fact that all game developers create bugs. When you do, you want your engine to give you a clear error message and help you find the mistake. Too many engines will just crash if you pass the wrong value to a function. Panda3D almost never crashes, and much code is dedicated to the problem of tracking and isolating errors.
Finally, to come back to power and speed:¶
To successfully use Panda3D, you must be a skilled programmer. If you do not know what an “API” is, or if you don’t know what a “tree” is, you will probably find Panda3D overwhelming. This is no point-and-click game-maker: this is a tool for professionals. While it is important to point that out so you have accurate expectations, it’s also relevant to be aware that Panda3D is one of the easiest and most powerful engines you will ever use, and we welcome your participation.
If you are just getting started with programming, we suggest that your best option is to start with a class on programming. Alternately, you could try teaching yourself using a training tool like Alice,¶
Since version 1.5.3, Panda3D has been released under the so-called “Modified BSD license,” which is a free software license with very few restrictions on usage. In versions 1.5.2 and before, it used a proprietary license which was very similar in intention to the BSD and MIT licenses, though there was some disagreement about the freeness of two of the clauses. The old license can still be accessed here.
Although the engine itself is completely free, it comes with various third-party libraries that are not free software. Some of them (like FMOD) even restrict you from using¶¶
This introductory chapter of the manual is designed to walk you through some of the basics of using Panda3D. This chapter is structured as a tutorial, not as a reference work.
- Installing Panda3D in Windows
- Installing Panda3D in Linux
- General Preparation
- Running your Program
- A Panda3D Hello World Tutorial | https://docs.panda3d.org/1.10/cpp/introduction/index | 2020-03-28T21:48:38 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.panda3d.org |
General information about the ESP32 port¶
ESP32 is a popular WiFi-enabled System-on-Chip (SoC) by Espressif Systems.
Multitude of boards¶
There is a multitude of modules and boards from different sources which carry the ESP32 a datasheet, schematics and other reference materials for your board handy to look up various aspects of your board functioning.
To make a generic ESP32 port and support as many boards as possible, the following design and implementation decision were made:
- GPIO pin numbering is based on ESP32 chip numbering, not some “logical” numbering of a particular board. Please have the manual/pin diagram of your board at hand to find correspondence between your board pins and actual ESP3232 deepsleep mode.
Technical specifications and SoC datasheets¶
The datasheets and other reference material for ESP32 chip are available from the vendor site: . They are the primary reference for the chip technical specifications, capabilities, operating modes, internal functioning, etc.
For your convenience, some of technical specifications are provided below:
- Architecture: Xtensa Dual-Core 32-bit LX6
- CPU frequency: upto 240MHz
- Total RAM available: 528KB (part of it reserved for system)
- BootROM: 448KB
- Internal FlashROM: None
- External FlashROM: code and data, via SPI Flash. Normal sizes 4MB
- GPIO: 34 (GPIOs are multiplexed with other functions, including external FlashROM, UART, deep sleep wake-up, etc.)
- UART: 3 RX/TX UART (no hardware handshaking), one TX-only UART
- SPI: 4 SPI interfaces (one used for FlashROM)
- I2C: 2 I2C (bitbang implementation available on any pins)
- I2S: 2
- ADC: 12-bit SAR ADC up to 18 channels
- DAC: 2 8-bit DAC
- Programming: using BootROM bootloader from UART - due to external FlashROM and always-available BootROM bootloader, the ESP32 is not brickable
For more information see the ESP32 datasheet:
MicroPython is implemented on top of the ESP-IDF, Espressif’s development framework for the ESP32. See the ESP-IDF Programming Guide for details. | https://micropython-docs-esp32.readthedocs.io/en/esp32_doc/esp32/general.html | 2020-03-28T20:07:32 | CC-MAIN-2020-16 | 1585370493120.15 | [] | micropython-docs-esp32.readthedocs.io |
Configure Lightweight Directory Access Protocol (LDAP) a standards-compliant Lightweight Directory Access Protocol (LDAP) tool for authentication. The Sensu LDAP authentication provider is tested with OpenLDAP. If you’re using AD, head to the AD section.
For general information about configuring authentication providers, see Use an authentication provider." } }
Example LDAP configuration: Use
memberOf attribute instead of
group_search
If your LDAP server is configured to return a
memberOf attribute when you perform a query, you can use
memberOf in your Sensu LDAP implementation instead of
group_search.
The
memberOf attribute contains the user’s group membership, which effectively removes the requirement to look up the user’s groups.
To use the
memberOf attribute in your LDAP implementation, remove the
group_search object from your LDAP config:
--- type: ldap api_version: authentication/v2 metadata: name: openldap spec: servers: host: 127.0.0.1 user_search: base_dn: dc=acme,dc=org
{ "type": "ldap", "api_version": "authentication/v2", "spec": { "servers": [ { "host": "127.0.0.1", "user_search": { "base_dn": "dc=acme,dc=org" } } ] }, "metadata": { "name": "openldap" } }
After you configure LDAP][19] [...] | https://docs-preview.sensuapp.org/sensu-go/6.3/operations/control-access/ldap-auth/ | 2021-04-11T00:44:42 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs-preview.sensuapp.org |
Evidence Bucket, which you can access by clicking the paper clip icon at the top right corner of the application. Learn more about the Evidence Bucket below.
- In the Hunting area, click the Add to investigation button after performing a search. In this case, elements will be also added to the Evidence Bucket. Learn more about this in the Threat Hunting article.
Evidence Bucket
Before creating a new investigation or updating an old one with elements added from the Triage and Hunting areas, there is always an intermediate step. All the elements that you add to an investigation from those areas go to the Evidence Bucket, where you can review and manage all the alerts and entities before defining the investigation. To access the Evidence Bucket, just click the paper clip icon that you can find at the top right corner of the application. The number next to the icon indicates the current number of alert, entities and queries in the bucket.
Using the Evidence Bucket, you can review all the elements added from the Triage and Hunting areas together, and check if any other evidence is needed before finally creating or updating an investigation. Before defining the investigation, you can delete the alerts or entities that you don't need by clicking the trash bin icon next to them. You can also click the Clean button to delete all the elements in the bucket.
You can also add enrichments to entities before opening an investigation. To do it, click the + button at the bottom of each alert, choose the entities you want to enrich, and select the required enrichments. The application will suggest you some enrichments for the selected entities, but you can mark the ones you need. Finally, click Run enrichment to add them.
To delete an enrichment from an entity, click it, select the - icon that appears, and click OK in the confirmation dialog window.
Once you have all the required elements in the bucket, you can create a new investigation or update an existing one. To decide it, use the toggle at the right part of the bucket window.
- With the toggle in the New investigation position, just click the Create investigation button. You will be redirected to the investigation parameters window, where you can set all the details of the new investigation. Learn more about these settings below.
- With the toggle in the Add to investigation position, choose the investigation to be updated from the dropdown list and click Add to investigation. You will be redirected to the investigation parameters window. Change any parameter if required and save it. Learn more about these settings below.
Investigation parameters
In all the cases, you will be prompted to enter the details of the new investigation or edit the information on the investigation you decided to modify. The information of an investigation is divided into three different categories:
Saving, downloading and closing investigations
Remember to click the Save button at the top right corner of the area after performing any modification in an investigation, or creating a new one.
You can download a report with the investigation contents and close it by clicking the ellipsis icon next to the Save button.
Details
This is the basic information of your investigation and is located in the left panel of the New investigation screen.
Evidence
This is the main section of the investigation, where users can check the alerts or hunting queries that have initiated the investigation. The alerts are stored in specific fields depending on the type.
Investigation Timeline
Users can check all the modifications or edits made to the investigation, and when they were made. The timeline at the top shows all the alerts involved so that you can compare incidences. You can display or hide any of each type of alerts using the buttons under the timeline. In the bottom area, you can check the events that occurred during the investigation, user comments, and when the alerts were thrown.
Filter investigations
You can use the filters at the top of the Investigations area to filter specific. | https://docs.devo.com/confluence/ndt/applications/devo-security-operations/investigations | 2021-04-11T01:54:58 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.devo.com |
Final
Specifies whether this class is final (cannot have subclasses).
Usage
To specify that a class is final, use the following syntax:
Class MyApp.Exam As %Persistent [ Final ] { //class members }
Copy code to clipboard
Otherwise, omit this keyword or place the word Not immediately before the keyword.
Details
If a class is final, it cannot have subclasses.
Also, if a class is final, the class compiler may take advantage of certain code generation optimizations (related to the fact that instances of a final class cannot be used polymorphically).
Default
If you omit this keyword, the class definition is not final.
See Also
“Class Definitions” in this book
“Defining and Compiling Classes” in Defining and Using Classes
“Introduction to Compiler Keywords” in Defining and Using Classes | https://docs.intersystems.com/irisforhealthlatest/csp/docbook/Doc.View.cls?KEY=ROBJ_class_final | 2021-04-11T02:24:07 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.intersystems.com |
Appendix¶
Abbreviations¶
- PBAC
- Prime Base AutomationClient
- PBAS
- Prime Base ApplicationServer
- PBT
- Prime Base Talk
- TDES
- Team Drive Enterprise Server
- TDNS
- Team Drive Name Service
- TDRS
- Team Drive Registration Server
- TDSV
- Same as SAKH, but for TeamDrive 3.0 Clients: Team Drive Server
- TSHS
- Team Drive Scalable Hosting Storage. | https://docs.teamdrive.net/HostServer/3.0.013.12/html/TeamDrive-Host-Server-Admin-Guide-en/Appendix.html | 2021-04-11T01:40:21 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.teamdrive.net |
Xano's Jumpstart creates the first API endpoints during the Jumpstart process. These endpoints are the CRUD operations for your database tables. Let's have a look at the API endpoints that were made for the user table.
Create: This API creates a new record in the database.
Read(single user): This API gets a single record in the database using the defined input, in this example, we can use the primary ID.
Read(all users): This API gets all records in the database.
Update: This API updates a single record that is found by a certain input. In this example, we can use the primary ID.
Delete: This API deletes a record in the database by using a certain input. In this example, we can use the primary ID. | https://docs.xano.com/api/crud-operations | 2021-04-11T00:58:32 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.xano.com |
Copy a chart
You can copy a chart from a dashboard or protocol page and then save the copied chart to a dashboard. Copied widgets are always placed into a new region on the dashboard, which you can later modify.
- Log into the Discover or Command appliance and then click Dashboard at the top of the page.
- Select a dashboard that contains the chart or widget that you want to copy.
- Click the title.
- Hover over Copy to… to expand a drop-down list and then make one of the following selections:
- stepsThe chart is copied into a new region on the dashboard that is in Edit Layout mode. You can now edit your dashboard or chart in the following ways:
Thank you for your feedback. Can we contact you to ask follow up questions? | https://docs.extrahop.com/7.5/copy-chart/ | 2021-04-11T01:20:54 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.extrahop.com |
You're reading the documentation for a development version. For the latest released version, please have a look at Foxy.
Logging.
Logging directory configuration¶
The logging directory can be configured.
For example, to set the logging directory to
~/my_logs:
export ROS_LOG_DIR=~/my_logs ros2 run logging_demo logging_demo_main
export ROS_LOG_DIR=~/my_logs ros2 run logging_demo logging_demo_main
set "ROS_LOG_DIR=~/my_logs" ros2 run logging_demo logging_demo_main
You will then find the logs under
~/my_logs/.
Alternatively, you can set
ROS_HOME and the logging directory will be relative to it (
$ROS_HOME/log).
ROS_HOME is intended to be used by anything that needs a base directory.
Note that
ROS_LOG_DIR has to be either unset or empty.
For example, with
ROS_HOME set to
~/my_ros_home:
export ROS_HOME=~/my_ros_home ros2 run logging_demo logging_demo_main
export ROS_HOME=~/my_ros_home ros2 run logging_demo logging_demo_main
set "ROS_HOME=~/my_ros_home" ros2 run logging_demo logging_demo_main
You will then find the logs under
~/my_ros_home/log/.
This configures the default severity for any unset logger to the debug severity level. You should see debug output from loggers from the demo itself and from the ROS 2 core.
As of the Galactic ROS 2 release, the severity level for individual loggers can be configured from the command-line. Restart the demo including the following command line arguments:
ros2 run logging_demo logging_demo_main --ros-args --log-level logger_usage_demo:=debug
Console output formatting¶
If you would like more or less verbose formatting, you can use})"
export RCUTILS_CONSOLE_OUTPUT_FORMAT="[{severity} {time}] [{name}]: {message} ({function_name}() at {file_name}:{line_number})"
#
export RCUTILS_COLORIZED_OUTPUT=0 # 1 for forcing it
#.
Default stream for console output¶
In Foxy and later, the output from all debug levels goes to stderr by default. It is possible to force all output to go to stdout by setting the
RCUTILS_LOGGING_USE_STDOUT environment variable to
1.
For example:
export RCUTILS_LOGGING_USE_STDOUT=1
export RCUTILS_LOGGING_USE_STDOUT=1
set "RCUTILS_LOGGING_USE_STDOUT=1"
Line buffered console output¶
By default, all logging output is unbuffered.
You can force it to be buffered by setting the
RCUTILS_LOGGING_BUFFERED_STREAM environment variable to 1.
For example:
export RCUTILS_LOGGING_BUFFERED_STREAM=1
export RCUTILS_LOGGING_BUFFERED_STREAM=1
set "RCUTILS_LOGGING_BUFFERED_STREAM=1"
Then run:
ros2 run logging_demo logging_demo_main | https://docs.ros.org/en/rolling/Tutorials/Logging-and-logger-configuration.html | 2021-04-11T00:28:33 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.ros.org |
If you choose to use external Audit tables, a new datastore with the name "AuditPlus" will be created in the target environment. Also, dataviews will be created for each AuditPlus transaction.
The information in the new datastore and information in dataviews will be used to navigate audited data and also, it will be used to create the audit database triggers.
In order to create audit triggers in the database, AuditPlus will consider the following information:
Database connection can also be customized using the procedure 'AuditPlusGetDatastoreParms' .
In this procedure, the user can decide which database and which schema will be used for each datastore and table.
This can be used to create and deploy audit reorganization programs for different deployments.
You should consider the following restrictions: | https://docs.workwithplus.com/servlet/com.wiki.wiki?1571,External+tables+requirements, | 2021-04-11T00:32:41 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.workwithplus.com |
Migrating Atrium Integrator jobs and transformations from one computer to another
You can migrate jobs from one computer to another using one of the following:
Migrating jobs using the Export Configuration tool
The Export configuration tool is available with BMC Atrium Core 8.1.02 and later versions. Export Configuration tool allows you to migrate Atrium Integrator jobs from one computer to another wherein, you can:
- Migrate all or specific Atrium Integrator jobs.
- Migrate Atrium Integrator job schedules.
- Migrate 8.0 and later Atrium Integrator jobs to a computer where the higher version of Atrium Integrator is installed. You will have to run the utility only from a computer where you have BMC Atrium Core 8.1.02 or later version installed.
- The target version must always be equal to or greater than the source.
- Migrate job connections. Only the connections specific to the job are migrated.
Note
Unlike the Development-to-Production command line utility, when you migrate jobs using the Export Configuration tool, you don't need a backup folder (staging area) for copying the .kjb and .ktr files.
To migrate jobs using the Export Configuration tool
- Launch the Atrium Integrator console.
- In the tool bar, click Export Configuration tool. The Export Configuration tool dialog box is displayed.
- Enter the source server details.
- Server Name: Is auto populated
- Server Port: Is auto populated
- AI User: User name
- Password: Password
- Authentication: Client-provided authentication string, such as the domain.
- Click Fetch Jobs.
- All the jobs on the source server are displayed in Job List table.
- Only the parent jobs are displayed in the Job List table. However, all the associated sub jobs and transformations are also migrated.
- For all the jobs that have a schedule, the Schedule check box is selected.
-
- Enter the target server details.
- Server Name
- Server Port
- AI Admin User: Remedy Application Service
- Password: Remedy application service user password
- Authentication: Client-provided authentication string, such as the domain.
- Click Test Target. Connection to the target server is tested.
-
- By default all the jobs are selected to be exported.
- To export selected jobs:
- Uncheck the check box on the Job List table header.
- Select the jobs you want to export.
- Similarly, you can also export specific schedules.
- Click Export Jobs.
- The job export status is displayed in the Export Status column.
- To view the details of a failed job, click the Failed status link in the Export Status column. The export job status dialog box is displayed.
- In case of errors, fix the errors and re-export only the updated job.
- All the successfully exported jobs are displayed in the Atrium Integartor console.
Migrating jobs using Development-to-Production command line utility
When using the Development-to-Production command utility, consider the following:
- Migrate jobs and transformations, only between the same version of Atrium Integrator 8.1 or later.
- The schedules are not migrated.
The development-to-production utility available at the following location:
- Microsoft Windows:
<AR Installed Directory>\diserver\data-integration\ngie\bin\ devtoprod.cmd
- UNIX:
<AR Installed Directory>\diserver\data-integration\ngie\bin\ devtoprod.sh
When migrating jobs, you can run the development-to-production utility from a development computer, a production computer, or even from a computer where Atrium Integrator client is installed. Ensure that you add the location of the folder from which you are running the utility in the AISrcAndTargerARDetails.properties file.
When you migrate jobs and transformations from one computer to another, only the Run History is modified.
To migrate jobs from one computer to another
- Open the AISrcAndTargerARDetails.properties file from
<ARInstallationDirectory>\diserver\data-integration\ngie\conf
- Provide the source BMC Remedy AR System Application Services credentials.
- Provide the target BMC Remedy AR System Application Services credentials.
Provide the backup folder location for all the .kjb and .ktrfiles that are to be exported.
Note
On Windows, use double slash when you specify the backup folder location for all the .kjb and .ktr files that will be exported.
Following is a snippet of the AISrcAndTargerARDetails.properties file:
# Fill Source AR Credentials
AR.SRC.SERVERNAME =
AR.SRC.TCPPORT =
AR.SRC.RPCPORT =
# Fill Source AR Remedy Application Services password
AR.SRC.PASSWORD =
# Fill Target AR Credentials
AR.TARGET.SERVERNAME =
AR.TARGET.TCPPORT =
AR.TARGET.RPCPORT =
# Fill Target AR Remedy Application Services password
AR.TARGET.PASSWORD =
# Enter the Backup Folder Location FOR All the .KJB AND .KTR file WHICH WILL get Exported
# For Windows use
for path separator
AI.SRC.REPO.BACKUP.FOLDER=
- Save the changes.
- Run the development-to-production utility.
- Verify if the jobs are migrated on the target computer.
Hi BMC, Can you double here. ''•Only the parent jobs are displayed in the Job List table. However, all the associated sub jobs and transformations are also migrated.''
because migrating the jobs from one server to another using the export congiguration Tool, the transformation are not migrated.
Hi Mahamadou,
Thank you for your comment. Will check with the SME and respond at the earliest.
Regards,
Maithili
Hi Mahamadou,
We tried this feature in our environment and the sub jobs and transformations are migrating using the export configuration tool. If it is not working properly for you, we recommend you reach out to BMC Support.
Feel free to reach out to us for any other queries.
Regards,
Maithili
Hi Maithili,
I raised a case, BMC responded by as design. They told me only the jobs and subjob are exported not the transformation. What version are you in?
Hi Mahamadou,
It was the version 9.1.00 that we checked. However, let me share your feedback with the SME.
Thanks!
Regards,
Maithili | https://docs.bmc.com/docs/ac91/migrating-atrium-integrator-jobs-and-transformations-from-one-computer-to-another-609847107.html | 2021-04-11T00:41:11 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.bmc.com |
Renaming Panels
T-SBFND-005-009
You can rename panels one at a time using the Panel view or rename multiple panels using the Rename Panel command. But before you can rename panels, you must use the Preferences dialog box to enable the renaming of panel names.
- Do one of the following:
- Select Edit > Preferences (Windows) or Storyboard Pro 20 > Preferences (macOS).
- Press Ctrl + U (Windows/Linux) or ⌘ + U (macOS).
- In the Preferences dialog box, select the Naming tab.
- In the Panel section, select the Allow Custom Panel Names option.
- In the Thumbnails view, select a panel to rename.
- In the Panel view, type a new name in the Panel field and press Enter.
The panel is renamed.
- In the Thumbnails view, select a panel to rename.
- Select Storyboard > Rename Panel.
The Rename Panel dialog box opens.
- Type a new name in the New name field.
- You can use the Renaming Rule for Subsequent Panel menu to determine if the next scenes should be renamed:
- Current Panel Only: Renames only the selected panel.
- Renumber Panels: Renumbers the current panel, as well as all panels that follow.
- Renumber Selected Panels: Renumbers the first selected panel of a multiselection, as well as all following panels that are part of the multiselection.
- Renumber Prefix Only: Renumbers the panels’ numerical prefixes beginning at the selected scene. The new name must be a numerical value.
The Renumbered Panel Names section displays a list of the panels that will be renamed, their old names and the new names.
| https://docs.toonboom.com/help/storyboard-pro-20/storyboard/structure/rename-panel.html | 2021-04-11T00:27:45 | CC-MAIN-2021-17 | 1618038060603.10 | [array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_23_RenamePanel_02.png',
None], dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_23_RenamePanel_03.png',
None], dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_23_RenamePanel_04.png',
None], dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_23_RenamePanel_02.png',
None], dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_28_RenamePanel.png', None],
dtype=object) ] | docs.toonboom.com |
How can I get a refund?
You can use the automatic refund option on the Billing page by clicking on the Get refunded option as shown below:
A form will pop up and:
- You can choose to be directly refunded by following the short next steps and that's it, your order will be automatically refunded.
- You can also choose to ask for help before getting a refund.
The link is displayed only for orders under the 14-days refund policy.
You can also ask for a refund using this contact form and choosing Refund as the Subject. | https://docs.wp-rocket.me/article/1513-how-can-i-get-a-refund | 2021-04-11T00:42:27 | CC-MAIN-2021-17 | 1618038060603.10 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5415e7bfe4b01e2a68fe8243/images/605e4b14c44f5d025f4488db/file-F5c4pwY1JL.png',
None], dtype=object) ] | docs.wp-rocket.me |
Quick preview GUIs¶
Preview GUI¶
In order to be sure that the parameters in configuration file are correct, and before launching the algorithm that will filter the data on-site (and thus mess with them if parameters are wrong), one can use the preview GUI. To do so, simply do:
>> spyking-circus path/mydata.extension -p
The GUI will display you the electrode mapping, and the first second of the data, filtered, with the detection thresholds as dashed dotted lines. You can then be sure that the value of spike_thresh used in the parameter file is correct for your own data.
A snapshot of the preview GUI. You can click/select one or multiple electrodes, and see the 1s of the activity, filtered, on top with the detection threshold
Once you are happy with the way data are loaded, you can launch the algorithm.
Note
You can write down the value of the threshold to the configuration file by pressing the button
Write thresh to file
Result GUI¶
In order to quickly visualize the results of the algorithm, and get a qualitative feeling of the reconstruction, you can see use a python GUI, similar to the previous one, showing the filtered traces superimposed with the reconstruction provided by the algorithm. To do so, simply do:
>> spyking-circus path/mydata.extension -r
A snapshot of the result GUI. You can click/select one or multiple electrodes, and see the the activity, filtered, on top with the reconstruction provided by the template matching algorithm (in black)
Warning
If results are not there yet, the GUI will only show you the filtered traces
Note
You can show the residuals, i.e. the differences between the raw data and the reconstruction by ticking the button
Show residuals
Meta-Merging GUI¶
See the devoted section on Meta-Merging (see Automatic Merging) | https://spyking-circus.readthedocs.io/en/0.9.6/GUI/python.html | 2021-04-11T01:12:20 | CC-MAIN-2021-17 | 1618038060603.10 | [array(['../_images/preview.png', '../_images/preview.png'], dtype=object)
array(['../_images/result.png', '../_images/result.png'], dtype=object)] | spyking-circus.readthedocs.io |
November 16
We’re happy to announce the release of the Sprint 104 edition of Quamotion. The version number is 0.104.64.
In this release, we’ve improved the stability and reliability of the WebDriver.
Stability and reliability improvements
We’ve:
- Fixed an issue where the WebDriver would crash, when you stop the WebDriver but a remote screen session is still active.
- Fixed an issue where starting an application on Android may fail, if that application has an embedded copy of a popular HTTP server.
- Fixed an issue where certain requests may fail if your PC is using a proxy server.
- Fixed an issue where the last known screenshot of any device would be saved to the trace folder.
PowerShell improvements
We’ve:
- Added a DoubleClick-Element function to our PowerShell client.
- Added a Get-AppVersion function to our PowerShell client, which allows you to identify which version of an application is installed on your device.
- The Start-App command is now supported on Android.
Last modified October 25, 2019: Move docs to docs/ (519bf39) | http://docs.quamotion.mobi/docs/release-notes/2018/2018-11-16/ | 2021-04-11T00:14:10 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.quamotion.mobi |
Freshworks CRM documentation: API Guide, Authentication
A. Set up a Freshworks CRM connection
Start establishing a connection to Freshworks CR Freshworks CRM.
The Create connection pane opens with required and advanced settings.
B. Supply required Freshworks CRM account information
At this point, you’re presented with a series of options for providing Freshworks CR.
Subdomain (required): Enter your Freshworks CRM subdomain, where you sign in to your account. For example, if is the URL, then type test for this setting.
API key (required): Paste in the API key from your Freshworks CRM account. Multiple layers of protection are in place, including AES 256 encryption, to keep this value safe. When editing this form later, you must enter this value again; it is stored only for a saved connection.
- From your profile avatar at the top right, select Settings.
- Navigate to the API settings tab.
- Copy the API key.
C. Edit advanced Freshworks CRM settings
Before continuing, you have the opportunity to provide additional configuration information, if needed, for the Freshworks CRM connection.
D. Test the connection
Once you have configured the Freshworks CRM. | https://docs.celigo.com/hc/en-us/articles/360058252431-Set-up-a-connection-to-Freshworks-CRM | 2021-04-11T01:57:23 | CC-MAIN-2021-17 | 1618038060603.10 | [array(['/hc/article_attachments/360088810932/freshworks.png', None],
dtype=object)
array(['/hc/article_attachments/360088876031/freshworks-1.png', None],
dtype=object)
array(['/hc/article_attachments/360088838412/freshworks-3.png', None],
dtype=object)
array(['/hc/article_attachments/360088838592/freshworks-2.png', None],
dtype=object)
array(['/hc/article_attachments/360088839292/joor-3.png', None],
dtype=object)
array(['/hc/article_attachments/360088839392/amazon-redshift-confirm.png',
None], dtype=object) ] | docs.celigo.com |
Kubernetes is a solution by Google for automating application deployment, scaling, and management.
Xano leverages Kubernetes to manage the entire release cycle of all the Docker containers. Kubernetes is constantly monitoring the environment and has the ability to auto-scale based on a variety of environmental factors. Here’s how Kubernetes fits into a typical software development environment: | https://docs.xano.com/technology/kubernetes | 2021-04-11T01:32:29 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.xano.com |
How to get the code¶
The code is currently hosted on github, in a public repository, relying on Git, at. The following explanations are only for those that want to get a copy of the git folder, with a cutting-edge version of the software.
Note
The code can be installed automatically to its latest release using
pip or
conda (see How to install).
Cloning the source¶
Create a folder called
spyking-circus, and simply do:
>> git clone spyking-circus
The advantages of that is that you can simply update the code, if changes have been made, by doing:
>> git pull
Without git¶
If you do not have git installed, and want to get the source, then one way to proceed is:
1. Download and install SourceTree 2. 3. Click on the
Clone in SourceTreebutton, and use the following link with SourceTree 4. In SourceTree you just need to click on the
Pullbutton to get the latest version of the software.
Download the archive¶
All released versions of the code can now be downloaded in the
Download section of the github project, as
.tar.gz files (pip install)
To know more about how to install the sofware, (see How to install) | https://spyking-circus.readthedocs.io/en/1.0.0/introduction/download.html | 2021-04-11T01:14:28 | CC-MAIN-2021-17 | 1618038060603.10 | [] | spyking-circus.readthedocs.io |
Development Guide
Local Navigation
Element: <address>
The <address> element specifies the PIN of the BlackBerry® in the request.
In a cancellation request or a status query request, the <address> element is optional. If you do not include an <address> element, the request affects all of the BlackBerry devices that are associated with the corresponding request.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/25167/PAP_ref_address_603618_11.jsp | 2014-03-07T11:20:49 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.blackberry.com |
User Guide
Local Navigation
My display picture isn't animating
If your display picture isn't animating in BlackBerry Messenger,).
- Verify that the dimensions of the picture are less than 333 x 333 pixels.
If necessary, you can try editing the file using a computer to adjust the size or dimensions.
Previous topic: Troubleshooting: BBM
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/41228/Display_pictures_arent_animating_61_1912989_11.jsp | 2014-03-07T11:22:41 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.blackberry.com |
Ivy is a dependency management system for Ant. As Gant uses Ant tasks, Gant can make use of Ivy very straightforwardly. Installing Gant installs the Ivy jar, so there are no missing dependencies (Gant 1.2.0 installs Ivy 2.0.0-beta2). Gant provides an Ivy tool that is used to work with Ivy, so you need the statement:
to access the features. This creates an object called Ivy which can then be used to invoke all the features of the Ivy system, for example:
The Ivy distribution has an example in it, that installs Ivy, ensures the local presence of the Commons-Lang jar, compiles and runs a small test program. A Gantified version of part of this example is provided in the Gant source. The Gantfile for this example shows how simple and straightforward using Ivy can be:
For full details of the Ivy features, it is best to check the Ivy manual pages for the version of Ivy associated with your Gant installation. To replicate the information here is probably just a bad idea, as well as being a lot of work
. | http://docs.codehaus.org/display/GANT/Ivy+Tool | 2014-03-07T11:22:04 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.codehaus.org |
Mo.
You may want to add @required and @readonly, but I dropped them to keep it short.
Dependencies for Components
When developing with the above mentioned commonly used components you will
want to add the following dependencies to your pom.xml:
Creating and resolving an artifact
To create an artifact, use an ArtifactFactory.
An artifact doesn't help you much until it's resolved (using an ArtifactResolver). This resolving process is not transitive!
Now you get the artifact file (in the local repository):.
Accessing the Plexus container
If you want access to the container, for doing your own component lookups, implement the org.codehaus.plexus.personality.plexus.lifecycle.phase.Contextualizable interface | http://docs.codehaus.org/pages/viewpage.action?pageId=67627 | 2014-03-07T11:17:46 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.codehaus.org |
This article provides instructions to configure and register Samsung Smart Signage Platform (SSSP) 6.0 Tizen v4 devices with Appspace App.
Prerequisites
- The device must meet the manufacturer’s minimum hardware and technical specifications. Please refer to Supported Devices & Operating Systems.
- Relearese Samsung SSP
Follow the instructions below to setup the Samsung SSP device:
- On the SSSP device, click Home on the remote control, and select URL Launcher.
- Paste the following URL path in the Install Web App field, and click OK to save changes. Samsung SSP URL:
For deployments that do not have access to the internet, you may download the Appspace App for Samsung client (AppspaceSSSP6.zip) package from your cloud account, by navigating to System > Downloads from the Appspace menu.
Once downloaded, extract the AppspaceSSSP6.zip file, and upload the following files to your web server, and copy the URL link to be pasted in step 3.
- Appspace.wgt
- SSSP_CONFIG.xml
- Once installed, click the Appspace URL Launcher icon from the menu bar to launch Appspace App.
- Once Appspace App is launched, proceed to Register your device.NoteUSB keyboards are not supported. Please use a remote control or the on-screen keyboard. | https://docs.appspace.com/latest/how-to/install-appspace-app-on-samsung-ssp-tizen/ | 2020-09-18T17:02:23 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.appspace.com |
Form to create and modify approval processes
The AP:Process Definition form opens when you click View or Create on the Process tab of the AP:Administration form. Process administrators use this form to create and modify approval processes. See Using the Process tab on AP-Administration.
You can also open this form using Quick Links > Approval Administration Console.
Basic tab
AP:Process Definition form — Basic tab
(Click the image to expand it.)
Fields on the AP:Process Definition form — Basic tab
Request Owner field
The setting of this field is crucial for:
- The execution of Self Approval rules — The value of this field is compared with the current user's name, and if they match, the rule is executed, otherwise it is skipped.
- Finding the first approval in the approval chain — In the Parent-Child, Level, and Rule-Based process types, the first approver in the chain is completely dependent on the name of the person stored in the field mapped to AP:Process Definition > Request Owner Field. The Request Owner field must contain a valid entry in the approval lookup form (for example, AP-Sample:Signature Authority is the lookup form for the Lunch Scheduler sample application). To set an appropriate value for Request Owner field, a process administrator must consider the following:
- Does this field store the name of the person defined in the approval lookup form?
- Is the organizational structure for this user defined in the approval lookup form? The value of Request Owner field is not considered when finding the first approver in an ad hoc process, because the requester is responsible for specifying all the required approvers.
Full name form
If your application does not contain a User form that contains the full name of a person, you should create a custom form that provides this information. The custom form should contain the following character fields, with their input lengths set to 0:
- The field with the ID 10001 contains the login name.
- The field with the ID 10002 captures the full name, which can be generated by any means.
Create a filter on this form, which runs on a service action. This filter uses the data in the first field (10001) as input to generate the corresponding full name for the second field (10002).
Configuration tab
AP:Process Definition form — Configuration tab
(Click the image to expand it.)
Fields on AP:Process Definition — Configuration tab
Signature Escalations tabs
AP:Process Definition form — Signature Escalations (Normal) tab
(Click the image to expand it.)
The three tabs (Normal, Urgent, and Low) on the Signature Escalation tab contain identical fields.
Fields on the AP:Process Definition form — Signature Escalations tabs
More Information Escalations tab
You can use the fields on this tab to send a notification to the addressee of the More Information request.
AP:Process Definition form — More Information Escalations tab
(Click the image to expand it.)
Fields on the AP:Process Definition form — More Information Escalations tab
Administrative info tab
For more information about the Administrative Information tab, see Administrative Information tab.
Related topic
Working with If Multiple Approvers field | https://docs.bmc.com/docs/ars1808/form-to-create-and-modify-approval-processes-820497461.html | 2020-09-18T17:12:06 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.bmc.com |
-
-
-
-
-
-
Troublesh!
Troubleshoot
As a first step in troubleshooting any issue that you or your users experience, follow these basic steps:
- If you are using XenDesktop 7, start troubleshooting in Citrix Director. This console displays properties of profiles that can help you diagnose and correct problems.
- Use UPMConfigCheck. It is a PowerShell script that examines a live Profile Management deployment and determines whether it is optimally configured. For more information on this tool, see Knowledge Center article CTX132805.
- If a Profile Management .ini file is in use, check its configuration on the affected computer.
- Check the settings in Group Policy (GP) against the recommended configurations that are described in Decide on a configuration. To deactivate any Profile Management policy that you enter as lists (for example, exclusion lists and inclusion lists), set the policy to Disabled. Do not set the policy to Not Configured.
- Check the
HKEY_LOCAL_MACHINE\SOFTWARE\Policiesregistry entry on the affected computer to see if there are any stale policies due to GP tattooing issues, and delete them. Tattooing occurs when policies are deleted from GP but remain in the registry.
- Check the file UPMSettings.ini, which contains all the Profile Management settings that have been applied for each user. This file is located in the root folder of each Citrix user profile in the user store.
Troublesh. | https://docs.citrix.com/en-us/profile-management/current-release/troubleshoot.html | 2020-09-18T17:19:19 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.citrix.com |
Photobox User Guide
A how-to guide with helpful information about
Photobox installation, capabilities, features and options.
Note: This documentation applies to Version 1.4 and above only.
If you have an older version of Photobox than 1.4, We would recommend you to download the latest version of Photobox
Installation
After successful UNZIP of the file Photobox_UNZIP you will find the module zip file and the ReadMe.txt file inside the folder.
- For Joomla 3.x install : mod_photobox3.x_vX.x.zip
Go to Extensions > Extension Manager and click on the Upload Package File Tab.
Click on browse to choose the correct installation file and click on "Upload and Install"
Module Settings
- Gallery Type : You can choose the gallery type to responsive or fixed width. If the option is set to fixed width then you can fill the max width of the gallery in pixels or percentage.
- Gallery Width : You can mention the width of the gallery in px or %. Example : 900px or 80%
- Start Slideshow : Option to start the slideshow of the image gallery on clicking the image or on pageload.
- Allow Loop : Option to allow looping of the gallery. If set to OFF the image slideshow will display once.
- Show Counter : Option to show or hide the image counter.
- Auto Play : Option to enable or disable the autoplay of the slideshow.
- Auto Play Interval : The time between each image transition. Example: 100 which means 100 ms (milliseconds)
- Control Color : Select the control colors using the color picker.
- Control Opacity : The opacity of the controls can be adjusted. 10 is transparent and 100 is opaque.
- Active Close Button : Select the color of active close button.
Image Settings
- Image Source : Option to fetch the images from folder or through slides.
- Image Folder Path : Image folder path from which the gallery will fetch all the images. Example : images/gallery/ . It is recommended to use the default joomla images folder/media manager.
- Show Thumbnail : Option to show or hide thumbnail of the gallery slideshow.
- Show Thumbnail Images by : Option to resize the thumbnail images by Height, Width or Both.
- Thumb Image Height : Mention the height of the thumb images. Ex : 80 which means 80 pixels. Please don't use px in the field.
- Thumb Image Width: Mention the width of the thumb images. Ex : 80 which means 80 pixels. Please don't use px in the field.
- Show Images : Option to show the images sequentially or randomly by image name.
- Sort Image By: If you choose to show the images sequentially, you can sort it by ascending or descending order.
Slide Settings
If the Image Source is set to Slides, the title and caption for the images can be controlled.
- Show Caption : Option to show/hide the image captions.
- Text Font : Option to set the font to user defined or a Google font.
- Font : If the font is set to google font then choose a font from Google Fonts and enter the name of the font into this field. Example : Open Sans, Raleway.
- Text Color : Select the font color of the text.
- Text Font Size : Enter the font size of the sidebar text in pixels. Example : 15 which means 15 pixels
- Text Font Weight : You can set the font weight to normal or bold.
Slide Settings
Click on the (+) to add new slide. You can add unlimited slides and images for the module and drag to sort it.
- Select an Image: Select the image you would like to display in the module.
- Caption : Mention the caption for the image. Leave it blank if you don't want to add any caption.
Advanced
- Include JQuery Files : Incase of any jQuery related issues it is recommended to load the JS in body instead of head.
- Module Suffix : Add a suffix class to the module to add extra css and customize the module. | https://docs.infyways.com/index.php?option=com_extensions_guide&view=guide&id=60&Itemid=101 | 2020-09-18T17:53:48 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.infyways.com |
R:
testthat Format¶
Ottr tests rely on the testthat library to run tests on students’ submissions. Because of this, ottr’s tests look like unit tests with a couple of important caveats. The main different is that a global
test_metadata variable is required, which is a multiline YAML-formatted string that contains configurations for each test case (e.g. point values, whether the test case is hidden). Each call to
testthat::test_that in the test file must have a corresponding entry in the test metadata and these are linked together by the description of the test. An example test file would be:
library(testthat) test_metadata = " name: q1 cases: - name: q1a points: 1 hidden: false - name: q1b hidden: true - name: q1c " test_that("q1a", { expect_equal(a, 102.34) }) test_that("q1a", { expected = c(1, 2, 3, 4) expect_equal(some_vec, expected) }) test_that("q1a", { coeffs = model$coefficeints expected = c("(Intercept)" = 1.25, "Sepal.Length" = -3.40) expect_equal(coeffs, expected) })
Each call to
testthat::test_that has a description that matches the
name of an element in the
cases list in the test metadata. If a student passes that test case, they are awarded the points in
case[points] (defaults to 1). The
hidden key of each case is used on Gradescope and determines whether a test case result is visible to students; this key defaults to
false. The example above is worth 3 points in total, 1 for each test case, and the only visible test case is
q1a. | https://otter-grader.readthedocs.io/en/latest/test_files/testthat_format.html | 2020-09-18T17:21:17 | CC-MAIN-2020-40 | 1600400188049.8 | [] | otter-grader.readthedocs.io |
Grey mould Botrytis cinerea
Botrytis cinerea is a necrotrophic fungus that affects many plant species, although its most notable hosts may beaj. spore masses. The fungus is usually referred to by its anamorph (asexual form) name, because the sexual phase is rarely observed. The teleomorph (sexual form) is an ascomycete, Botryotinia cinerea.
Table of contents:Table of contents: | http://docs.metos.at/Grey+mould+Botrytis+cinerea?structure=Disease+model_en | 2020-09-18T18:48:02 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.metos.at |
Commercially grown apples have to be free of scab. Scabbed apples will only be sold for processing. Therefore it is the aim of all plant protection activities in conventional as well as in organic growing to have scab free fruits. Models, which show the apple scab ascospore discharge and ascospore /conidia infection are very important tools to reach this goal.
Two basic types of fungicides against apple scab are used in conventional growing systems: a) Preventative products like Captan, Mancozeb, Dithianone and Strobilurins or b) curative products like Cyprodinil (Chorus) or Pyrimethanil (Scala) or for application in the later warmer periods of the season the DMI fungicides. Actually most of the growers follow a preventative strategy. Nevertheless, a practical preventative strategy is not able to protect apple trees completely because the apple tree grows and develop blossoms, fruits and leaves. Therefore the preventative spray only protects for a period of 4 to 7 days in dependence of the actual growth of the tree. Such narrow spray intervals are not manageable, therefore growers will integrate their experience on the local climate, weather forecast and apple scab models in their spray management. They will schedule the preventative sprays on base of their experience and weather forecast. The apple scab infection models will show them the exact date of infection (weak, moderate and severe) as well as the ascospore/conidia discharge model and with their experience they are able to estimate the importance of an infection. This gives the possibility to act with a curative product if an apple scab infection was too long after the last preventative spray.
In organic apple production lime sulfur pointed out to be the most effective control agent against scab. The optimal control can be achieved if it is sprayed short before infection or into the beginning of infection. This has to be planned on the base of the weather forecast. Sometime we will miss this optimum period and we have to spray into the wet leaves of an nearly complete scab infection. This will still give a good efficacy. The apple scab models are helping to decide if an emergence spray into a nearly complete infection is needed.
| http://docs.metos.at/Practical+use+of+the+model | 2020-09-18T16:16:04 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['http://www.metos.at/tiki/img/wiki_up/image/applescab_lf3.jpg',
'Image'], dtype=object)
array(['http://www.metos.at/tiki/img/wiki_up/image/appscabft5.jpg',
'Image'], dtype=object)
array(['http://www.metos.at/tiki/img/wiki_up/image/scabsprayschedule1_1.jpg',
'Image'], dtype=object) ] | docs.metos.at |
With the help of looping functionality, now you may execute multiple iterations of a test case. To setup a loop, please follow the steps below:
- First create a test case which you think, will be part of loop. Skip this step, if test case is already created.
- Simply go to Details sub-tab of this test case and provide value for loop source field. More information regarding loop source is provided below.
- Once you submit the loop source then a loop indicator will appear on the header of the test case.
Loop Source:
Loop source can be any of the following:
- Number Value
If the loop source is a number value then the test case inside loop will be executed specified number of times.
- Variable Name
- If the variable's value is a number then the test case inside loop will be executed n (n is the variable's value) number of times.
- If the variable's value is an array then the test case inside loop will be executed n (n is the length of array) number of times.
- Otherwise the variable's value will be treated as boolean
- if the variable's value is truthy then test case inside loop will be executed.
- Otherwise test case inside loop will loop will not be executed.
- Boolean Value
- If the value is false then test case inside loop will not be executed.
- Otherwise test case inside loop will be executed.
Loop Index:
Test Case inside loop can access the current loop index with the help of a special variable "$" (without double quotes). Loop index can be accessed in the following ways:
- Using variable notation {{$}}. Wherever this variable is specified that will be replaced with the current loop index starting from zero.
- If the loop source contains variable name (say {{data}}) which value is an array then individual array item can be accessed using $ as below:
{{data.$.xyz}}
Usage Scenarios:
Data Driven Testing:
Few Caveats:
Request Method cannot be defined as dynamic for the loop test case.
- Different set of request query parameters for each iteration cannot be defined for the loop test case. However if you are using same set of query parameters then values can be dynamic for each iteration.
- Similarly different set of assertions cannot be defined for the loop test case. However if you are using same set of assertions then values can be dynamic for each iteration. | https://docs.optimizory.com/display/vrest/Looping | 2020-09-18T17:25:03 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.optimizory.com |
This post will explain, how you can schedule test cases written in vREST. vREST do does not provide direct support for scheduling test cases on its own server. But you can easily schedule test cases written in vREST with the help of any external schedular scheduler like cron .
Here is a video tutorial for the same:
For Linux
For Windows
Prerequisite
Prerequisite for the machine, on which you want to schedule test cases.
- Download vrunner binary
- For download and setup vrunner, please follow Setup / install vrunner.
Let's take a sample application (Contacts Application) and step by step guide on how we can schedule vREST test cases.
Note: You can find the source code of sample application at Github.
...
Step 3: Add the above vrunner command in any external schedular
Now, you can add this vrunner command in any external schedular like cron. That's it.
For example, if you want to schedule vREST test cases daily at mid night, then you may write the cron job like this:
Note:
- In VREST_TEST_CASE_LIST_URL, every % symbol must be escaped with backslash (\).
- In your environment, path to vrunner can be found with the following commands (in Linux / Mac):
- which vrunner
in Linux/Mac and Windows Task Scheduler on Windows.
Please follow the following guides according to the Operating system of the machine on which you want to schedule the test cases: | https://docs.optimizory.com/pages/diffpagesbyversion.action?pageId=11768256&selectedPageVersions=12&selectedPageVersions=13 | 2020-09-18T16:27:37 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.optimizory.com |
Filter: Everything Submit Search Adding Harmony Binaries to the $PATH Environment Variable on macOS You can add the path to the Harmony binary files to the $PATH environment variable. This will allow you to run Harmony and its applications and utilities from a terminal by typing the name of the executable files, without having to type their full path. How to add Harmony binaries to the $PATH environment variable on macOS Double-click on Configuration Assistant. In the Welcome screen, check the Register console applications in the path option and uncheck all other options. Click on Continue. If you only want to add the Harmony binaries to the $PATH variable for your own user, select Register Path for my user only. If you want the Harmony binaries to be in the $PATH variable regardless of who is logged in, select Register Path for all users. NOTE You need an account with administrator privileges to register the path for all users. Click on Create. If you selected Register Path for all users, you will be prompted to enter an administrator's username and password to authorize the change. To verify that the change has been applied, open a new Terminal window and type the following command: $ echo $PATH The path to the Harmony bin folder should be included in the output, separated by other paths with a colon. | https://docs.toonboom.com/help/harmony-16/premium/installation/basic/mac/add-to-path-mac.html | 2020-09-18T16:32:17 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.toonboom.com |
.
Optionally, the leftmost input port of a Multi-Points Constraint node can be connected to a Constraint-Switch, which can be used to reduce the effect of the Multi-Points Constraint node on the element it is connected by setting its Active parameter between 0 and 100.
| https://docs.toonboom.com/help/harmony-17/premium/reference/node/constraint/multi-point-constraint-node.html | 2020-09-18T17:28:56 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['../../../Resources/Images/HAR/Stage/Breakdown/Constraints/multi-point-constraint-example.png',
None], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Breakdown/Constraints/multi-point-constraint.png',
None], dtype=object)
array(['../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Breakdown/Constraints/multi-point-constraint-editor.png',
None], dtype=object) ] | docs.toonboom.com |
Overlooking Axon Frameworks architecture, you can notice that in general systems using the framework are "Message Driven", "Responsive", "Resilient" and "Elastic". According to Reactive Manifesto, the same holds for Reactive Systems in general.
Although we can state that Axon Framework is a type of reactive system, we can't say that it is fully reactive. I/O processing: the clients are notified of new data instead of asking for it, which frees the client to do other work while waiting for these notifications.
By their nature, a reactive API and Axon are a great fit, as most of framework's operations are async and non-blocking. Providing a dedicated extension for this was thus a logical step to take. To that end, we chose to use Pivotal’s Project Reactor to build this extension. Reactor builds on top of the Reactive Streams specification and is the de-facto standard for Java enterprise and Spring applications. As such, we feel it to be a great fit to provide an extension in, making Axon more reactive.
Not all Axon components offer a reactive API, yet. We will incrementally introduce more "reactiveness" to this extension, giving priority to components where users can benefit the most.
To use the Axon Reactor Extension, make sure that
axon-reactor module is available on the classpath. | https://docs.axoniq.io/reference-guide/extensions/introduction | 2020-09-18T16:48:45 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.axoniq.io |
Gets or sets the simulation position of the navmesh agent.
The position vector is in world space coordinates and units.
The nextPosition is coupled to Transform.position. In the default case the navmesh agent's Transform position will match the internal simulation position at the time the script Update function is called. This coupling can be turned on and off by setting updatePosition.
When updatePosition is true, the Transform.position reflects the simulated position, when false the position of the transform and the navmesh agent is not synchronized, and you'll see a difference between the two in general. When updatePosition is turned back on, the Transform.position will be immediately move to match nextPosition.
By setting nextPosition you can directly control where the internal agent position should be. The agent will be moved towards the position, but is constrained by the navmesh connectivity and boundaries. As such it will be useful only if the positions are continuously updated and assessed. See Also: Warp for teleporting a navmesh agent.
using UnityEngine; using UnityEngine.AI; using System.Collections;
public class ExampleClass : MonoBehaviour { void Start() { // Update the transform position explicitly in the OnAnimatorMove callback GetComponent<NavMeshAgent>().updatePosition = false; }
void OnAnimatorMove() { transform.position = GetComponent<NavMeshAgent>().nextPosition; } }
Additionally it can be useful to control the agent position directly - especially if the GO transform is controlled by something else - e.g. animator, physics, scripted or input.
using UnityEngine; using UnityEngine.AI; using System.Collections;
public class ExampleClass : MonoBehaviour { public bool agentIsControlledByOther; void Update() { var agent = GetComponent<NavMeshAgent>(); agent.updatePosition = !agentIsControlledByOther; if (agentIsControlledByOther) { GetComponent<NavMeshAgent>().nextPosition = transform.position; } } } | https://docs.unity3d.com/kr/2017.2/ScriptReference/AI.NavMeshAgent-nextPosition.html | 2020-09-18T18:31:43 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.unity3d.com |
Early Blight of Potato and Tomato
Randall C. Rowe, Sally A. Miller, Richard M. Riedel, Ohio State University Extension Service~-coloured.
Pathogen.. USING TOMCAST: Tomatoes grown within 10 miles of a reporting station should benefit from the disease management function of TOMCAST to help forecast early blight, Septoria, and Anthracnose. If you decide to try TOMCAST this season please keep in mind three very important concepts.
One: If this is your first time using the system, it is recommended that only part of your acreage be put into the program to see how it fits with your quality standards and operational style.
Two: Use TOMCAST as a guide to help better time fungicide applications, realizing in some seasons you may actually apply more product than a set schedule program might require.
Three: The further a tomato. | http://docs.metos.at/TomCast+Alternaria+Model+for+Tomato?structure=Disease+model_en | 2020-09-18T17:33:51 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.metos.at |
Article by Asen Atanasov, Jaroslav Křivánek, Vladimir Koylazov, Alexander Wilkie
Abstract:
Texturing is a ubiquitous technique used to enrich surface appearance with fine detail. While standard filtering approaches, such as mipmapping or summed area tables, are available for rendering diffuse reflectance textures at different levels of detail, no widely accepted filtering solution exists for multiresolution rendering of surfaces with fine specular normal maps. The current state of the art offers accurate filtering solutions for specular reflection at the cost of very high memory requirements and expensive 4D queries. We propose a novel normal map filtering solution for specular surfaces which supports data pre-filtering, and with an valuation speed that is roughly independent of the filtering footprint size. Its memory usage and evaluation speed are significantly more favorable than for existing methods. Our solution is based on high-resolution binning in the half-vector domain, which allows us to quickly build a very memory efficient data structure.
Download the full article: | https://docs.chaosgroup.com/display/RESEARCH/Efficient+Multiscale+Rendering+of+Specular+Microstructure | 2020-09-18T17:57:43 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.chaosgroup.com |
If you are using eStore Theme and want to import the demo content for your site, then you are at the right place. With just one click, you will easily be able to import eStore demos.
To Import a Demo in One Click
- Install and Activate eStore Theme
- Once you’ve installed and activated eStore, a notice will appear on your dashboard prompting you to get started with the theme.
- Clicking on Get started with eStore will automatically set up the ThemeGrill Demo Importer plugin and redirect you to the demos page. Here, you can check out the available demos and import the one you like. To do that, just hover the mouse cursor over the demo and click on Import.
If you want to know more about importing demos. View in Detail
To Import a Demo Manually (Alternative Way)
If you missed the welcome notice, you can still get started with importing demos using this method.
- Install and Activate eStore Theme
- From your WordPress Dashboard, go to Plugins > Add New and search for ThemeGrill Demo Importer. Now just install and activate it.
- Now go to Appearance > Demo Importer and take a look at the available demos.
- Hover the mouse cursor over the demo you want and click on Import.
The Importing Process is Shown in the Images Below:
- Install the ThemeGrill Demo Importer manually or using the one click method.
- The one click method will redirect you automatically to the demos page. If you chose the manual method, then follow step 3 to view the demos page.
- Clicking on Import will reveal a popup message shown below:
- If you are sure about importing then you can click Confirm to proceed. That’s all there is to it. Your demo content will then be ready.
Can we Import Starter Sites on Our Existing Website?
Importing starter sites doesn’t delete your previous content. It just adds new settings and content to your site. While it only adds new content, the previous design of your site might seem to be broken. Thus, we highly recommend you to import starter sites on a fresh WordPress install.
Importing starter sites on an existing site which already has content is not recommended. | https://docs.themegrill.com/estore/how-to-get-starter-sites-with-estore-demos/ | 2020-09-18T16:32:33 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['https://docs.themegrill.com/estore/wp-content/uploads/sites/2/2020/03/demo_importer.png',
None], dtype=object)
array(['https://docs.themegrill.com/estore/wp-content/uploads/sites/2/2020/03/demoimporter_1azy8dz.png',
None], dtype=object)
array(['https://docs.themegrill.com/estore/wp-content/uploads/sites/2/2020/03/manual_import.png',
None], dtype=object)
array(['https://docs.themegrill.com/estore/wp-content/uploads/sites/2/2020/03/import_confirmation.png',
None], dtype=object) ] | docs.themegrill.com |
Specifies preconditions, postconditions, and assertions for functions.
The expression in a contract attribute, contextually converted to
bool, is called its predicate. Evaluation of the predicate must not have any side effects other than modification of non-volatile objects whose lifetimes begin and end within that evaluation; otherwise the behavior is undefined. If the evaluation of a predicate exits via an exception,
std::terminate is called.
During constant expression evaluation, only predicates of checked contracts are evaluated. In all other contexts, it is unspecified whether the predicate of a contract that is not checked is evaluated; the behavior is undefined if it would evaluate to
false.
Preconditions and postconditions are collectively called contract conditions. These attributes may be applied to the function type in a function declaration:
int f(int i) [[expects: i > 0]] [[ensures audit x: x < 1]]; int (*fp)(int i) [[expects: i > 0]]; // error: not a function declaration
The first declaration of a function must specify all contract conditions (if any) of the function. Subsequent redeclarations must either specify no contract conditions or the same list of contract conditions; no diagnostic is required if corresponding conditions will always evaluate to the same value. If the same function is declared in two different translation units, the list of contract conditions shall be the same; no diagnostic is required.
Two lists of contract conditions are the same if they contain the same contract conditions in the same order. Two contract conditions are the same if they are the same kind of contract condition and have the same contract-level and the same predicate. Two predicates are the same if they would satisfy the one-definition rule were they to appear in function definitions, except for the renaming of function and template parameters and return value identifiers (if any).
int f(int i) [[expects: i > 0]]; int f(int); // OK, redeclaration int f(int j) [[expects: j > 0]]; // OK, redeclaration int f(int k) [[expects: k > 1]]; // ill-formed int f(int l) [[expects: 0 < l]]; // ill-formed, no diagnostic required
If a friend declaration is the first declaration of the function in a translation unit and has a contract condition, that declaration must be a definition and must be the only declaration of the function in the translation unit:
struct C { bool ok() const; friend void f(const C& c) [[ensures: c.ok()]]; // error, not a definition friend void g(C c) [[expects: c.ok()]] { } // OK }; void g(C c); // error
The predicate of a contract condition has the same semantic restrictions as if it appeared as the first expression statement in the body of the function it applies to.
If a postcondition odr-uses a parameter in its predicate and the function body modifies the value of that parameter directly or indirectly, the behavior is undefined.
int f(int x) [[ensures r: r == x]] { return ++x; // undefined behavior } int g(int* p) [[ensures: p != nullptr]] { *p = 42; // OK, p is not modified } bool meow(const int&) { return true; } void h(int x) [[ensures: meow(x)]] { ++x; // undefined behavior } void i(int& x) [[ensures: meow(x)]] { ++x; // OK; the "value" of a reference is its referent and cannot be modified }
For templated functions with deduced return types, the return value may be named in a postcondition without additional restrictions (except that the name of the return value is treated as having a dependent type). For the non-templated functions with deduced return types, naming the return value is prohibited in declarations (but allowed in the definitions):
auto h(int x) [[ensures res: true]]; // error: return value with deduced type // on a non-template function declaration
A program may be translated with one of three build levels:
default.
defaultor
audit.
The mechanism for selecting the build level is implementation-defined. Combining translation units that were translated at different build levels is conditionally-supported.
The violation handler for a program is a function of type
void (const std::contract_violation &) (optionally
noexcept), specified in an implementation-defined manner. It is invoked when the predicate of a checked contract evaluates to
false.
std::contract_violationargument is implementation-defined.
std::contract_violationargument is the source location of the function definition.
std::contract_violationargument is the source location of the statement to which the assertion is applied.
The value of the
std::contract_violation argument passed to the violation handler is otherwise implementation-defined.
If a violation handler exits by throwing an exception and a contract is violated on a call to a function with a non-throwing exception specification,
std::terminate is called:
void f(int x) noexcept [[expects: x > 0]]; void g() { f(0); // terminate if the violation handler throws }
A program may be translated with one of two violation continuation modes:
std::terminateis called;
Implementations are encouraged to not provide any programmatic way to query, set, or modify the build level or to set or modify the violation handler.
© cppreference.com
Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0. | https://docs.w3cub.com/cpp/language/attributes/contract/ | 2020-09-18T17:31:41 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.w3cub.com |
Release Notes - 2.4.2
Breaking Changes 💥Breaking Changes 💥
N/A
New Features 🚀New Features 🚀
- Added
HelmetProviderto
FalconClientMock(Falcon Client)
- Added support for passing a function to the
globalCssprop (Falcon UI)
- Added keyboard support to
<Dropdown>component (Falcon UI)
- Added integration test for checkout flow (Demo V2)
Bug Fixes 🐛Bug Fixes 🐛
- Fixed Jest configuration for custom app-level configurations (Falcon Client)
- Ignore newlines when validating character limits (Falcon Front Kit)
- Add check for Intersection Observer support (Falcon Front Kit)
- Fixed stale content on orders page (Demo V2)
- Fixed product loading (BigCommerce API)
Polish 💅Polish 💅
- Removed react-powerplug dependency (Falcon UI + Falcon UI Kit)
- Removed react-adopt dependency (Demo V2)
- Removed react-lazyload dependency (Demo V2)
- Removed react-powerplug dependency (Demo V2) | https://docs.deity.io/docs/platform/release/2-4-2/ | 2020-09-18T16:54:58 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.deity.io |
SqlTableName
Usage
To override the default name of the SQL table to which this class is projected, use the following syntax:
Class MyApp.Person Extends %Persistent [ SqlTableName = DBTable ] { //class members }
Where DBTable is a valid SQL identifier.
Details
This keyword specifies the name of the SQL table to which this class is projected. By default, the SQL table name is the same as the class name.
Typically you use this keyword when the class name is a SQL reserved word (not uncommon) or if you want the SQL table to contain characters not supported by class names (such as the “_” character).
Effect on Subclasses
This keyword is not inherited.
Default
If you omit this keyword, the class name is used as the SQL table name.
See Also
“Class Definitions” in this book
“Defining and Compiling Classes” in Defining and Using Classes
“Introduction to Compiler Keywords” in Defining and Using Classes | https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ROBJ_CLASS_SQLTABLENAME | 2020-09-18T16:22:43 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.intersystems.com |
Creating the Programmable Interface
An object's programmable interface comprises the properties, methods, and events that it defines. Organizing the objects, properties, and methods that an application exposes is like creating an object-oriented framework for an application. Standard Objects and Naming Guidelines discusses some of the concepts behind naming and organizing the programmable elements that an application can expose. | https://docs.microsoft.com/en-us/previous-versions/windows/desktop/automat/creating-the-programmable-interface | 2020-09-18T18:31:14 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.microsoft.com |
How to handle a query message has been covered in more detail in the Query Handling section. Queries have to be dispatched, just like any type of message, before they can be handled. To that end Axon provides two interfaces:
The Query Bus, and
The Query Gateway
This page will show how and when to use the query gateway and bus. How to configure and specifics on the the query gateway and bus implementations are discussed here
The
QueryBus is the mechanism that dispatches queries to query handlers. Queries are registered using the combination of the query request name and query response type. It is possible to register multiple handlers for the same request-response combination, which can be used to implement the scatter-gather pattern. When dispatching queries, the client must indicate whether it wants a response from a single handler or from all handlers.
The
QueryGateway is a convenient interface towards the query dispatching mechanism. While you are not required to use a gateway to dispatch queries, it is generally the easiest option to do so. It abstracts certain aspects for you, like the necessity to wrap a Query payload in a Query Message.
Regardless whether you choose to use the
QueryBus or the
QueryGateway, both provide several types of queries. Axon Framework makes a distinction between three types, being:
Scatter-Gather queries, and
The direct query represents a query request to a single query handler. If no handler is found for a given query, a
NoHandlerForQueryException is thrown. In case multiple handlers are registered, it is up to the implementation of the Query Bus to decide which handler is actually invoked. In the listing below we have a simple query handler:
@QueryHandler // 1.public List<String> query(String criteria) {// return the query result based on given criteria}
By default the name of the query is fully qualified class name of query payload (
java.lang.String in our case).
However, this behavior can be overridden by stating the
queryName attribute of the
@QueryHandler annotation.
If we want to query our view model, the
List<String>, we would do something like this:
// 1.GenericQueryMessage<String, List<String>> query =new GenericQueryMessage<>("criteria", ResponseTypes.multipleInstancesOf(String.class));// 2. send a query message and print query responsequeryBus.query(query).thenAccept(System.out::println);
It is also possible to state the query name when we are building the query message,
by default this is the fully qualified class name of the query payload.
The response of sending a query is a Java
CompletableFuture,
which depending on the type of the query bus may be resolved immediately.
However, if a
@QueryHandler annotated function's return type is
CompletableFuture,
the result will be returned asynchronously regardless of the type of the query bus.
When you want responses from all of the query handlers matching your query message, the scatter-gather query is the type to use. As a response to that query a stream of results is returned. This stream contains a result from each handler that successfully handled the query, in unspecified order. In case there are no handlers for the query, or all handlers threw an exception while handling the request, the stream is empty.
In the listing below we have two query handlers:
@QueryHandler(queryName = "query")public List<String> query1(String criteria) {// return the query result based on given criteria}
@QueryHandler(queryName = "query")public List<String> query2(String criteria) {// return the query result based on given criteria}
These query handlers could possibly be in different components and we would like to get results from both of them. So, we will use a scatter-gather query, like so:
// create a query messageGenericQueryMessage<String, List<String>> query =new GenericQueryMessage<>("criteria", "query", ResponseTypes.multipleInstancesOf(String.class));// send a query message and print query responsequeryBus.scatterGather(query, 10, TimeUnit.SECONDS).map(Message::getPayload).flatMap(Collection::stream).forEach(System.out::println);
The subscription query allows a client to get the initial state of the model it wants to query, and to stay up-to-date as the queried view model changes. In short it is an invocation of the Direct Query with the possibility to be updated when the initial state changes. To update a subscription with changes to the model, we will use the
QueryUpdateEmitter component provided by Axon.
Let's take a look at a snippet from the
CardSummaryProjection:
@QueryHandlerpublic List<CardSummary> handle(FetchCardSummariesQuery query) {log.trace("handling {}", query);TypedQuery<CardSummary> jpaQuery = entityManager.createNamedQuery("CardSummary.fetch", CardSummary.class);jpaQuery.setParameter("idStartsWith", query.getFilter().getIdStartsWith());jpaQuery.setFirstResult(query.getOffset());jpaQuery.setMaxResults(query.getLimit());return log.exit(jpaQuery.getResultList());}
This query handler will provide us with the list of GiftCard states. Once our GiftCard gets redeemed we would like to update any component which is interested in the updated state of that GiftCard. We'll achieve this by emitting an update using the
QueryUpdateEmitter component within the event handler function of the
RedeemedEvt event:
@EventHandlerpublic void on(RedeemedEvt evt) {// 1.CardSummary summary = entityManager.find(CardSummary.class, event.getId());summary.setRemainingValue(summary.getRemainingValue() - event.getAmount());// 2.queryUpdateEmitter.emit(FetchCardSummariesQuery.class,query -> event.getId().startsWith(query.getFilter().getIdStartsWith()),summary);}
First, we update our view model by updating the existing card.
If there is a subscription query interested in updates about this specific GiftCard we emit an update.
The first parameter of the emission is the type of the query (
FetchCardSummariesQuery in our case)
which corresponds to the query type in a previously defined query handler.
The second parameter is a predicate which will select the subscription query to be updated.
In our case we will only update subscription queries interested in the GiftCard which has been updated.
The third parameter is the actual update, which in our case is the card summary.
There are several overloads of the emit method present, feel free to take a look at JavaDoc for more specifics on that.
The important thing to underline here is that an update is a message and that some overloads take
the update message as a parameter (in our case we just sent the payload which was wrapped in the message)
which enables us to attach meta-data for example.
Once we have the query handling and the emitting side implemented, we can issue a subscription query to get the initial state of the GiftCard and be updated once this GiftCard is redeemed:
// 1.commandGateway.sendAndWait(new IssueCmd("gc1", amount));// 2.FetchCardSummariesQuery fetchCardSummariesQuery =new FetchCardSummariesQuery(offset, limit, filter);// 3.SubscriptionQueryResult<List<CardSummary>, CardSummary> fetchQueryResult = queryGateway.subscriptionQuery(fetchCardSummariesQuery,ResponseTypes.multipleInstancesOf(CardSummary.class),ResponseTypes.instanceOf(CardSummary.class));fetchQueryResult//4..handle(cs -> cs.forEach(System.out::println), System.out::println)//5..doFinally(it -> fetchQueryResult.close());// 6.commandGateway.sendAndWait(new RedeemCmd("gc1", amount));
Issuing a GiftCard with
gc1 id and initial value of
amount.
Creating a subscription query message to get the list of GiftCards
(this initial state is multiple instances of
CardSummary)
and to be updated once the state of GiftCard with id
gc1 is changed (in our case an update means the card is redeemed).
The type of the update is a single instance of
CardSummary.
Do note that the type of the update must match the type of the emission side.
Once the message is created, we are sending it via the
QueryGateway.
We receive a query result which contains two components: one is
initialResult and the other is
updates.
In order to achieve 'reactiveness' we use Project Reactor's
Mono for
initialResult
and
Flux for
updates.
Note
Once the subscription query is issued, all updates are queued until the subscription to the
Fluxof
updatesis done. This behavior prevents the losing of updates.
Note
The Framework prevents issuing more than one query message with the same id. If it is necessary to be updated in several different places, create a new query message.
Note
The
reactor-coredependency is mandatory for usage of subscription queries. However, it is a compile time dependency and it is not required for other Axon features.
The
SubscriptionQueryResult#handle(Consumer<? super I>, Consumer<? super U>)
method gives us the possibility to subscribe to the
initialResult and the
updates in one go.
If we want more granular control over the results, we can use the
initialResult() and
updates() methods on the query result.
As the
queryUpdateEmitter will continue to emit updates even when there are no subscribers, we need to notify the emitting side once we are no longer interested in receiving updates.
Failing to do so can result in hanging infinitive streams and eventually a memory leak.
Once we are done with using subscription query, we need to close the used resource. We can do that in
doFinally hook.
As an alternative to the
doFinally hook, there is the
Flux#using API. This is synonymous
to the try-with-resource Java API:
Flux.using( () -> fetchQueryResult,queryResult -> queryResult.handle(..., ...),SubscriptionQueryResult::close);
When we issue a
RedeemCmd, our event handler in the projection will eventually be triggered,
which will result in the emission of an update.
Since we subscribed to updates with the
println() method, the update will be printed out once it is received.
Axon Coding Tutorial #5: - Connecting the UI | https://docs.axoniq.io/reference-guide/axon-framework/queries/query-dispatchers | 2020-09-18T18:04:38 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.axoniq.io |
API Gateway 7.5.3 Policy Developer Filter Reference Save PDF Selected topic Selected topic and subtopics All content XML signature verification Overview. See also XML signature generation. Signature verification settings The following sections are available on the Signature Verification tab: Signature Location:Because there may be multiple signatures in the message, you must specify which signature API Gateway uses to verify the integrity of the message. The signature can be extracted from one of the following: From the SOAP header Using WS-Security actors Using XPath Select the appropriate option from the list. For more details on signature location options, see the API Gateway Policy Developer Guide., Find Certificate) may have already located a certificate and populated the certificate message attribute. To use this certificate to verify the signature, specify the selector expression in the field provided (for example, ${certificate}). Using a selector enables settings to be evaluated and expanded at runtime based on metadata (for example, in a message attribute, KPS, or environment variable). For more details, see Select configuration values at runtime in the API Gateway Policy Developer Guide. Via Certificate in LDAP:Clients may not always want to include their public keys in their signatures. In such cases, the public key can be retrieved from a specified LDAP directory. This setting enables you to select a previously configured LDAP directory from a list. You can add LDAP connections under the Environment Configuration > box next to the certificate that contains the public key to use to verify the signature, and click OK. What must be signed settings The What Must Be Signed tab the contents of a message attribute. For more details, see What to sign settings. Note If all attachments are required to be signed, select All attachments to enforce this. Advanced settings The following advanced configuration options are available on the Advanced tab:, use the default value to recreate the derived key. Algorithm Suite:Select the WS-Security Policy Algorithm Suite that must have been used when signing the message. This check ensures that the appropriate algorithms were used to sign the message. Fail if No Signatures to Verify:Select this, or environment variable). For more details, see Select configuration values at runtime in the API Gateway Policy Developer Guide. Remove enclosing WS-Security element on successful verification:Select this check box if you wish to remove the enclosing WS-Security block when the signature has been successfully verified. This setting is not selected by default. Related Links | https://docs.axway.com/bundle/APIGateway_753_PolicyDevFilterReference_allOS_en_HTML5/page/Content/PolicyDevTopics/content_integrity.htm | 2020-09-18T17:27:18 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.axway.com |
Clothesline Installation Park Ridge South 4125 QLD
Park Ridge South 4125 Logan City QLD
A fold down washing line unit – whether dual framed or single framed – is a great alternative within your home in Park Ridge South 4125 Logan City QLD when there is no room for a traditional rotary clothes hoist.
Park Ridge South Clothesline Installation and Installers highly recommends the following products that can function as a main drying solution or can provide additional hanging space depending on your available drilling area inside or outside your home.
Austral Compact 39 Fold Down Clothesline
- Provides 39 metres of total hanging space
- Can accommodate the high drying requirements of 3 to 6 persons
- Has 12 high quality lines with 6.5cm line spacing
- Fits king sized sheets
- Can be fitted with a waterproof cover
- Specifically designed to fit narrow, tight, or awkward spaces in small homes and rental units
- Can be used to as additional hanging space inside the bathroom, kitchen, or laundry area
Daytek Classic Twin Fold Down Clothesline
- Offers 22 metres of total hanging space
- Can accommodate the low to regular washing demand of 3 to 4 persons
- Has 10 high quality lines with 72mm up to 90mm line spacing
- Fits double sized sheets
- Wall projection of only 83mm when both frames are folded
- For wall or post mounting installation
- For apartments, townhouses, and semi-detached housing units, and condos
Want fast and honest Park Ridge South Clothesline Installation and Installers?
Lifestyle Clotheslines is the name you can trust when it comes to Clothesline Installation and Installers that is swift and trustworthy in Park Ridge South 4125 Logan City QLD. Speak with a clothesline employee at 1300 798 779 today.
Lifestyle Clotheslines also provides high quality washing line and laundry line brands and units as well as fast clothesline installation services to its neighbouring Logan City suburbs of Cedar Grove, Cedar Vale, Chambers Flat, Cornubia, and Crestmead.
For more information and the best clotheslines in the Logan City area click here.
Clothesline Services in the Park Ridge South area include:
- Rotary clothesline installation
- Hills Hoist and clothes hoist installs
- Removal of old clothesline
- Core Hole Drilling service
- Insurance & Storm damaged quotes
- Rewiring service | https://docs.lifestyleclotheslines.com.au/article/3578-clothesline-installation-park-ridge-south-4125-qld | 2020-09-18T17:00:03 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.lifestyleclotheslines.com.au |
disk images used, as long as the description is provided. Any hardware described in a proper format (i.e. CIM - Common Information Model) is accepted, although there is no guarantee that every virtualization software will support all types of hardware.
OVF package should contain exactly).
In order to provide secure means of distribution for OVF packages, the manifest and certificate are provided. Manifest (.mf file) contains checksums for all the files in OVF package, whereas certificate (.cert file) contains X.509 certificate and a checksum of manifest file. Both files are not compulsory, but certificate requires manifest to be present.
Although OVF is claimed to support ‘any disk format’, what we are interested in is which formats are supported by VM managers that currently use OVF.
In our implementation of the OVF we allow a choice between raw, cow and vmdk disk formats for both import and export. Other formats covertable using qemu-img are allowed in import mode, but not tested. The justification is the following::DiskTemplate</gnt:DiskTemplate> <gnt:OperatingSystem> <gnt:Name/> <gnt:Parameters></gnt:Parameters> </gnt:OperatingSystem> <gnt:Hypervisor> <gnt:Name/> <gnt:Parameters></gnt:Parameters> </gnt:Hypervisor> <gnt:Network> <gnt:Mode/> <gnt:MACAddress/> <gnt:Link/> <gnt:IPAddress/> </gnt:Network> < => File in References disk0_ivname = name => generated automatically disk0_size = size_in_mb => calculated after disk conversion disk_count = number => generated automatically disk_template = disk_type => gnt:DiskTemplate hypervisor = hyp-name => gnt:Name in gnt:Hypervisor name = inst-name => Name in VirtualSystem nic0_ip = ip => gnt:IPAddress in gnt:Network nic0_link = link => gnt:Link in gnt:Network nic0_mac = mac => gnt:MACAddress in gnt:Network or Item in VirtualHardwareSection nic0_mode = mode => gnt:Mode in gnt:Network nic_count = number => generated automatically tags => gnt:Tags [backend] auto_balanced => gnt:AutoBalance memory = mem_in_mb => Item in VirtualHardwareSection vcpus = number => Item in VirtualHardwareSection [export] compression => ignored os => gnt:Name in gnt:OperatingSystem source => ignored timestamp => ignored version => gnt:VersionId or constants.EXPORT_VERSION [os] => gnt:Parameters in gnt:OperatingSystem [hypervisor] => gnt:Parameters in gnt:Hypervisor you can specify all the missing parameters in the command line. Please refer to Command Line section.
In the OVF converter we provide examples of options when converting from VirtualBox, VMWare and OpenSuseStudio.) of the .ovf file export in raw, cow and vmdk formats. This means i.e. that the appropriate ovf:format will be provided. As for import, we will support all formats that qemu-img can convert to raw. At this point this means raw, cow, qcow, qcow2, vmdk and cloop. We do not plan for now to support vdi or vhd unless they become part of qemu-img supported formats.
We plan to support compression both for import and export - in gzip format. There is also a possibility to provide virtual disk in chunks of equal size. The latter will not be implemented in the first version, but we do plan to support it eventually.
The ovf:format tag is not used in our case when importing. Instead we use qemu-img info, which provides enough information for our purposes and is better standardized.
Please note, that due to security reasons we require the disk image to be in the same directory as the .ovf description file for both import and export.
In order to completely ignore disk-related information in resulting config file, please use --disk-template=diskless option.
Ganeti provides support for routed and bridged mode for the networks. Since the standard OVF format does not contain any information regarding used network type, we add our own source of such information in gnt:GanetiSection. In case this additional information is not present, we perform a simple check - if network name specified in NetworkSection contains words bridged or routed, we consider this to be the network type. Otherwise option auto is chosen, in which case the cluster’s default value for that field will be used when importing. This provides a safe fallback in case of NAT networks usage, which are commonly used e.g. in VirtualBox.
The supported hardware is limited to virtual CPUs, RAM memory, disks and networks. In particular, no USB support is currently provided, as Ganeti does not support them.
Support for different operating systems depends solely on their accessibility for Ganeti instances. List of installed OSes can be checked using gnt-os list command.
The basic usage of the ovf tool is one of the following:
ovfconverter import filename ovfconverter export --format=<format> filename
This will result in a conversion based solely on the content of provided file. In case some information required to make the conversion is missing, an error will occur.
If output directory should be different than the standard Ganeti export directory (usually /srv/ganeti/export), option --output-dir can be used.
If name of resulting entity should be different than the one read from the file, use --name option.
Import options that ovfconverter supports include options for backend, disks, hypervisor, networks and operating system. If an option is given, it overrides the values provided in the OVF file.
--backend=option=value can be used to set auto balance, number of vcpus and amount of RAM memory.
Please note that when you do not provide full set of options, the omitted ones will be set to cluster defaults (auto).
--disk-template=diskless causes the converter to ignore all other disk option - both from .ovf file and the command line. Other disk template options include plain, drdb, file, sharedfile and blockdev.
--disk=number:size=value causes to create disks instead of converting them from OVF package; numbers should start with 0 and be consecutive.
--no-nics option causes converter to ignore any network information provided.
--network=number:option=value sets network information according to provided data, ignoring the OVF package configuration.
Export options include choice of disk formats to convert the disk image (--format) and compression of the disk into gzip format (--compress). User has also the choice of allowing to skip the Ganeti-specific part of the OVF document (--external).
By default, exported OVF package will not be contained in the OVA package, but this may be changed by adding --ova option.
Please note that in order to create an OVF package, it is first required that you export your VM using gnt-backup export.
Example:
gnt-backup export -n node1.xen xen.i1 [...] ovfconverter export --format=vmdk --ova --external \ --output-dir=~/xen.i1 \ /srv/ganeti/export/xen.i1.node1.xen/config.ini
Disk conversion for both import and export is done using external tool called qemu-img. The same tool is used to determine the type of disk, as well as its virtual size.
Import functionality is implemented using two classes - OVFReader and OVFImporter.
OVFReader class is used to read the contents of the .ovf file. Every action that requires .ovf file access is done through that class. It also performs validation of manifest, if one is present.
The result of reading some part of file is typically a dictionary or a string, containing options which correspond to the ones in config.ini file. Only in case of disks, the resulting value is different - it is then a list of disk names. The reason for that is the need for conversion.
OVFImporter class performs all the command-line-like tasks, such as unpacking OVA package, removing temporary directory, converting disk file to raw format or saving the configuration file on disk. It also contains a set of functions that read the options provided in the command line.
Typical workflow for the import is very simple:
read the .ovf file into memory
verify manifest
parse each element of the configuration file: name, disk template, hypervisor, operating system, backend parameters, network and disks
-
check if option for the element can be read from command line options
- if yes: parse options from command line
- otherwise: read the appropriate portion of .ovf file
save gathered information in config.ini file
Similar to import, export functionality also uses two classes - OVFWriter and OVFExporter.
OVFWriter class produces XML output based on the information given. Its sole role is to separate the creation of .ovf file content.
OVFExporter class gathers information from config.ini file or command line and performs necessary operations like disk conversion, disk compression, manifest creation and OVA package creation.
Typical workflow for the export is even simpler, than for the import: | http://docs.ganeti.org/ganeti/2.15/html/design-ovf-support.html?highlight=qcow | 2020-09-18T16:35:27 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.ganeti.org |
Adding your NPM auth token
In order to be able to get access to the Falcon Platform packages, you will need to connect to our private npm registry.
You can do this using an authorisation token. You can get this from your admin panel or by contacting us.
To set up your token please follow these steps:
- Log in into (you can use any email) using credentials provided by DEITY:
npm login --registry= --scope=@deity
- Your
~/.npmrcfile should contain your auth token.
Example:
@deity:registry=
//npm.deity.io/:_authToken=<YOUR_TOKEN>
Using Deity Cloud?
If you're using Deity Cloud and are logged into
dcloud you can run
dcloud project:npm-token to get your NPM token.
- To let Falcon Cloud use your NPM token for the deployments and keep your token outside of GIT, run the following command in your terminal.
dcloud build:var NPM_TOKEN "<YOUR_TOKEN>"
- To use the dcloud build variable, add a
.npmrcfile to the root folder of your project application (e.g.
client/.npmrcand
server/.npmrc) with the following content:
//npm.deity.io/:_authToken=${NPM_TOKEN}
@deity:registry= | https://docs.deity.io/docs/platform/getting-started/npm/ | 2020-09-18T16:22:32 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.deity.io |
Crate wait_timeout [−] [src]
A crate to wait on a child process with a particular timeout.
This crate is an implementation for Unix and Windows of the ability to wait
on a child process with a timeout specified. On Windows the implementation
is fairly trivial as it's just a call to
WaitForSingleObject with a
timeout argument, but on Unix the implementation is much more involved. The
current implementation registeres a
SIGCHLD handler and initializes some
global state. If your application is otherwise handling
SIGCHLD then bugs
may arise.
Example
use std::process::Command; use wait_timeout::ChildExt; use std::time::Duration; let mut child = Command::new("foo").spawn().unwrap(); let one_sec = Duration::from_secs(1); let status_code = match child.wait_timeout(one_sec).unwrap() { Some(status) => status.code(), None => { // child hasn't exited yet child.kill().unwrap(); child.wait().unwrap().code() } }; | https://docs.rs/wait-timeout/0.1.5/wait_timeout/ | 2020-09-18T17:50:22 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.rs |
Helpers
Some tasks and features will be common to many if not all applications. For those, Expressive provides helpers. These are typically utility classes that may integrate features or simply provide standalone benefits.
Currently, these include:
Installation
If you started your project using the Expressive skeleton package, the helpers are already installed.
If not, you can install them as follows:
$ composer require zendframework/zend-expressive-helpers | https://zend-expressive.readthedocs.io/en/latest/v1/features/helpers/intro/ | 2018-11-13T02:26:52 | CC-MAIN-2018-47 | 1542039741192.34 | [] | zend-expressive.readthedocs.io |
6.3.5. Transactions
A Virtuoso cluster is fully transactional and supports the 4 isolation levels identically with a single server Virtuoso. Transactions are committed using single to two phase commit as may be appropriate and this is transparent to the application program.
Distributed deadlocks are detected and one of the deadlocking transactions is killed, just as with a single process.
Transactions are logged on the cluster nodes which perform updates pertaining to the transaction.
A transaction has a single owner connection. Each client connection has a distinct transaction. From the application program's viewpoint there is a single thread per transaction. Any parallelization of queries is transparent.
For roll forward recovery, each node is independent. If a transaction is found in the log for which a prepare was received but no final commit or rollback, the recovering node will ask the owner of the transaction whether the transaction did commit. Virtuoso server processes can provide this information during roll forward, hence a simultaneous restart of cluster nodes will not deadlock.
Performance Considerations
A lock wait in a clustered database requires an asynchronous notification to a monitor node. This is done so that a distributed deadlock can be detected. Thus the overhead of waiting is slightly larger than with a single process.
We recommend that read committed be set as the default isolation since this avoids most waiting. A read committed transaction will show the last committed state of rows that have exclusive locks and uncommitted state. This is set as DefaultIsolation = 2.
In the parameters section of each virtuoso.ini file.
Row Autocommit Mode
Virtuoso has a mode where insert/update/delete statements commit after each row. This is called row autocommit mode and is useful for bulk operations that need no transactional semantic.
The row autocommit mode is set by executing log_enable (2) or log_enable (3), for no logging and logging respectively. The setting stays in effect until set again or for the duration of the connection. Do not confuse this with the autocommit mode of SQL client connection.
In a clustered database the row autocommit mode is supported but it will commit at longer intervals in order to save on message latency. Statements are guaranteed to commit at least once, at the end of the statement.
A searched update or delete statement in row autocommit mode processes a few thousand keys between commits, all in a distributed transaction with 2PC. These are liable to deadlock. Since the transaction boundary is not precisely defined for the application, a row autocommit batch update must be such that one can distinguish between updated and non-updated if one must restart after a deadlock. This is of course not an issue if updating several times makes no difference to the application.
Naturally, since a row can be deleted only once, the problem does not occur with deletes. Both updates and deletes in row autocommit mode are guaranteed to keep row integrity, i.e. all index entries of one row will be in the same transaction.
Naturally, since a row can be deleted only once, the problem does not occur with deletes. Both updates and deletes in row autocommit mode are guaranteed to keep row integrity, i.e. all index entries of one row will be in the same transaction.
A row autocommit insert sends all keys of the row at once and each commit independently. Hence, a checkpoint may for example cause a situation where one index of a row is in the checkpoint state and the other is not.
Thus, a row autocommit insert on a non-empty application table with transactional semantic is not recommended. This will be useful for bulk loads into empty tables and the like, though. | http://docs.openlinksw.com/virtuoso/clusteroperationtransc/ | 2018-11-13T02:25:33 | CC-MAIN-2018-47 | 1542039741192.34 | [] | docs.openlinksw.com |
Client/Server Example Configurations
For easy configuration, you can start with these example client/server configurations and modify for your systems.
Examples of Standard Client/Server Configuration
Generally, locators and servers use the same distributed system properties file, which lists locators as the discovery mechanism for peer members and for connecting clients. For example:
mcast-port=0 locators=localhost[41111]
On the machine where you wish to run the locator (in this example, ‘localhost’), you can start the locator from a gfsh prompt:
gfsh>start locator --name=locator_name --port=41111
Or directly from a command line:
prompt# gfsh start locator --name=locator_name --port=41111
Specify a name for the locator that you wish to start on the localhost. If you do not specify the member name, gfsh will automatically pick a random name. This is useful for automation.
The server’s
cache.xml declares a
cache-server element, which identifies the JVM as a server in the distributed system.
<cache> <cache-server port="40404" ... /> <region . . .
Once the locator and server are started, the locator tracks the server as a peer in its distributed system and as a server listening for client connections at port 40404.
You can also configure a cache server using the
gfsh command-line utility. For example:
gfsh>start server --name=server1 --server-port=40404
See
start server.
The client’s
cache.xml
<client-cache> declaration automatically configures it as a standalone GemFire application.
The client’s
cache.xml:
- Declares a single connection pool with the locator as the reference for obtaining server connection information.
- Creates
cs_regionwith the client region shortcut configuration,
CACHING_PROXY. This configures it as a client region that stores data in the client cache.
There is only one pool defined for the client, so the pool is automatically assigned to all client regions.
<client-cache> <pool name="publisher" subscription- <locator host="localhost" port="41111"/> </pool> <region name="cs_region" refid="CACHING_PROXY"> </region> </client-cache>
With this, the client is configured to go to the locator for the server connection location. Then any cache miss or put in the client region is automatically forwarded to the server.
Example—Standalone Publisher Client, Client Pool, and Region
The following API example walks through the creation of a standalone publisher client and the client pool and region.
public static ClientCacheFactory connectStandalone(String name) { return new ClientCacheFactory() .set("log-file", name + ".log") .set("statistic-archive-file", name + ".gfs") .set("statistic-sampling-enabled", "true") .set("cache-xml-file", "") .addPoolLocator("localhost", LOCATOR_PORT); } private static void runPublisher() { ClientCacheFactory ccf = connectStandalone("publisher"); ClientCache cache = ccf.create(); ClientRegionFactory<String,String> regionFactory = cache.createClientRegionFactory(PROXY); Region<String, Strini> region = regionFactory.create("DATA"); //... do work ... cache.close(); }
Example—Standalone Subscriber Client
This API example creates a standalone subscriber client using the same
connectStandalone method as the previous example.
private static void runSubscriber() throws InterruptedException { ClientCacheFactory ccf = connectStandalone("subscriber"); ccf.setPoolSubscriptionEnabled(true); ClientCache cache = ccf.create(); ClientRegionFactory<String,String> regionFactory = cache.createClientRegionFactory(PROXY); Region<String, String> region = regionFactory .addCacheListener(new SubscriberListener()) .create("DATA"); region.registerInterestRegex(".*", // everything InterestResultPolicy.NONE, false/*isDurable*/); SubscriberListener myListener = (SubscriberListener)region.getAttributes().getCacheListeners()[0]; System.out.println("waiting for publisher to do " + NUM_PUTS + " puts..."); myListener.waitForPuts(NUM_PUTS); System.out.println("done waiting for publisher."); cache.close(); }
Example of a Static Server List in Client/Server Configuration
You can specify a static server list instead of a locator list in the client configuration. With this configuration, the client’s server information does not change for the life of the client member. You do not get dynamic server discovery, server load conditioning, or the option of logical server grouping. This model is useful for very small deployments, such as test systems, where your server pool is stable. It avoids the administrative overhead of running locators.
This model is also suitable if you must use hardware load balancers. You can put the addresses of the load balancers in your server list and allow the balancers to redirect your client connections.
The client’s server specification must match the addresses where the servers are listening. In the server cache configuration file, here are the pertinent settings.
<cache> <cache-server port="40404" ... /> <region . . .
The client’s
cache.xml file declares a connection pool with the server explicitly listed and names the pool in the attributes for the client region. This XML file uses a region attributes template to initialize the region attributes configuration.
<client-cache> <pool name="publisher" subscription- <server host="localhost" port="40404"/> </pool> <region name="cs_region" refid="CACHING_PROXY"> </region> </client-cache> | http://gemfire.docs.pivotal.io/91/geode/topologies_and_comm/cs_configuration/client_server_example_configurations.html | 2018-11-13T03:11:49 | CC-MAIN-2018-47 | 1542039741192.34 | [] | gemfire.docs.pivotal.io |
Transfer CFT 3.2.2 Local Administration Guide Transfer CFT 3.2.2 Local Administration Central Governance. Central Governance simplifies Transfer CFT usage, and provides services such as identity and access management, certificate management, monitoring, alerting, and a web dashboard. For more information, visit. The Local Administration version of the Transfer CFT User Guide is designed for users who have not yet activated Central Governance. Axway encourages Transfer CFT Local Administrator users to discover the benefits of centralized management. Tip Existing users can view the Changelog for details on new features in this version as well as previous versions. Installation and Operation Guides Transfer CFT Installation Guide UNIX PDF Transfer CFT Installation Guide z/OS PDF Transfer CFT Installation Guide IBM i PDF Transfer CFT Installation Guide Windows PDF Transfer CFT Installation Guide OpenVMS PDF For new users About Transfer CFT About governance services My first file transfer! Installing and starting Transfer CFT Basic administrative tasks Using command line For existing users Platform specifics Using multi-node architecture Setting UCONF parameters Using APIs Get more help Additional documentation For the latest Transfer CFT documentation and downloads, go to Axway Support at. Transfer CFT 3.2.2 Release Notes Transfer CFT 3.2.2 Installation Guides Transfer CFT 3.2.2 Notes de Diffusion Related documentation includes: Axway Supported Platforms Document version: 5 October 2017 Related Links | https://docs.axway.com/bundle/Transfer_CFT_322_UsersGuide_LocalAdministration_allOS_en_HTML5/page/Content/AxwayStartPageLA.htm | 2019-05-19T08:49:54 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.axway.com |
All content with label examples+gridfs+hot_rod+infinispan+installation+jcache+jsr-107+repeatable_read.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, archetype, jbossas, lock_striping, nexus, guide, schema, listener,
cache, amazon, grid, test, api, xsd, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, getting, aws, interface, custom_interceptor, setup, clustering, eviction, concurrency, jboss_cache, import, index, events, hash_function, configuration, batch, buddy_replication, loader, xa, write_through, cloud, mvcc, tutorial, notification,, standalone, hotrod, webdav, snapshot, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, jgroups, locking, rest
more »
( - examples, - gridfs, - hot_rod, - infinispan, - installation, - jcache, - jsr-107, - repeatable_read )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/examples+gridfs+hot_rod+infinispan+installation+jcache+jsr-107+repeatable_read | 2019-05-19T08:53:38 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.jboss.org |
Note: This section is under development and may change.
This section contains exercises that demonstrate how to deploy a WAM application into Azure. If all you need to do is provision a demonstration of a LANSA Stack, then essentially it's just a few clicks and then wait in the region of 45 minutes for all the parts of the stack to be provisioned.
If you wish to deploy your own application (there is a sample application provided for use with these exercises), then the application must be uploaded first and more care will need to be taken over the database server type, the size of the virtual machine instances, the number of instances and security settings.
The exercises to be performed to complete this objective are:
ATE015 – Subscribe to the LANSA Scalable License Image
ATE030 – Upload your LANSA WAM Application
ATE040 – Deploy the LANSA MSI using an Azure Resource Manager Template | https://docs.lansa.com/14/en/lansa022/content/lansa/vldtoolct_0255.htm | 2019-05-19T09:25:46 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.lansa.com |
Consume a .NET Standard library in Visual Studio 2017
Once you've created a .NET Standard class library by following the steps in Building a C# class library with .NET Core in Visual Studio 2017 or Building a Visual Basic class library with .NET Core in Visual Studio 2017, tested it in Testing a class library with .NET Core in Visual Studio 2017, and built a Release version of the library, the next step is to make it available to callers. You can do this in two ways:
If the library will be used by a single solution (for example, if it's a component in a single large application), you can include it as a project in your solution.
If the library will be generally accessible, you can distribute it as a NuGet package.
Including a library as a project in a solution
Just as you included unit tests in the same solution as your class library, you can include your application as part of that solution. For example, you can use your class library in a console application that prompts the user to enter a string and reports whether its first character is uppercase:
Open the
ClassLibraryProjectssolution you created in the Building a C# Class Library with .NET Core in Visual Studio 2017 topic. In Solution Explorer, right-click the ClassLibraryProjects solution and select Add > New Project from the context menu.
In the Add New Project dialog, expand the Visual C# node and select the .NET Core node followed by the Console App (.NET Core) project template. In the Name text box, type "ShowCase", and select the OK button.
In Solution Explorer, right-click the ShowCase project and select Set as StartUp Project in the context menu.
Initially, your project doesn't have access to your class library. To allow it to call methods in your class library, you create a reference to the class library. In Solution Explorer, right-click the
ShowCaseproject's Dependencies node and select Add Reference.
In the Reference Manager dialog, select StringLibrary, your class library project, and select the OK button.
In the code window for the Program.cs file, replace all of the code with the following code:
using System; using UtilityLibraries; class Program { static void Main(string[] args) { int row = 0; do { if (row == 0 || row >= 25) ResetConsole(); string input = Console.ReadLine(); if (String.IsNullOrEmpty(input)) break; Console.WriteLine($"Input: {input} {"Begins with uppercase? ",30}: " + $"{(input.StartsWithUpper() ? "Yes" : "No")}\n"); row += 3; } while (true); return; // Declare a ResetConsole local method void ResetConsole() { if (row > 0) { Console.WriteLine("Press any key to continue..."); Console.ReadKey(); } Console.Clear(); Console.WriteLine("\nPress <Enter> only to exit; otherwise, enter a string and press <Enter>:\n"); row = 3; } } }
The code uses the
rowvariable to maintain a count of the number of rows of data written to the console window. Whenever it is greater than or equal to 25, the code clears the console window and displays a message to the user.
The program prompts the user to enter a string. It indicates whether the string starts with an uppercase character. If the user presses the Enter key without entering a string, the application terminates, and the console window closes.
If necessary, change the toolbar to compile the Debug release of the
ShowCaseproject. Compile and run the program by selecting the green arrow on the ShowCase button.
You can debug and publish the application that uses this library by following the steps in Debugging your Hello World application with Visual Studio 2017 and Publishing your Hello World Application with Visual Studio 2017.
Distributing the library in a NuGet package
You can make your class library widely available by publishing it as a NuGet package. Visual Studio does not support the creation of NuGet packages. To create one, you use the
dotnet command line utility:
Open a console window. For example in the Ask me anything text box in the Windows taskbar, enter
Command Prompt(or
cmdfor short), and open a console window by either selecting the Command Prompt desktop app or pressing Enter if it's selected in the search results.
Navigate to your library's project directory. Unless you've reconfigured the typical file location, it's in the Documents\Visual Studio 2017\Projects\ClassLibraryProjects\StringLibrary directory. The directory contains your source code and a project file, StringLibrary.csproj.
Issue the command
dotnet pack --no-build. The
dotnetutility generates a package with a .nupkg extension.
Tip
If the directory that contains dotnet.exe is not in your PATH, you can find its location by entering
where dotnet.exein the console window.
For more information on creating NuGet packages, see How to Create a NuGet Package with Cross Platform Tools.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/dotnet/core/tutorials/consuming-library-with-visual-studio | 2019-05-19T08:59:42 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.microsoft.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.