content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Samples Below are a few common query needs and how the Kusto query language can be used to meet them. Display a column chart Project two or more columns and use them as the x and y axis of a chart: StormEvents | where isnotempty(EndLocation) | summarize event_count=count() by EndLocation | top 10 by event_count | render columnchart - The first column forms the x-axis. It can be numeric, datetime, or string. - Use where, summarizeand topto limit the volume of data that you display. - Sort the results so as to define the order of the x-axis. Get sessions from start and stop events Let's suppose we have a log of events, in which some events mark the start or end of an extended activity or session. Every event has an SessionId, so the problem is to match up the start and stop events with the same id. let Events = MyLogTable | where ... ; Events | where Name == "Start" | project Name, City, SessionId, StartTime=timestamp | join (Events | where Name="Stop" | project StopTime=timestamp, SessionId) on SessionId | project City, SessionId, StartTime, StopTime, Duration = StopTime - StartTime Use let to name a projection of the table that is pared down as far as possible before going into the join. Project is used to change the names of the timestamps so that both the start and stop times can appear in the result. It also selects the other columns we want to see in the result. join matches up the start and stop entries for the same activity, creating a row for each activity. Finally, project again adds a column to show the duration of the activity. Get sessions, without session id Now let's suppose that the start and stop events don't conveniently have a session id that we can match on. But we do have an IP address of the client where the session took place. Assuming each client address only conducts one session at a time, we can match each start event to the next stop event from the same IP address. Events | where Name == "Start" | project City, ClientIp, StartTime = timestamp | join kind=inner (Events | where Name == "Stop" | project StopTime = timestamp, ClientIp) on ClientIp | extend duration = StopTime - StartTime // Remove matches with earlier stops: | where duration > 0 // Pick out the earliest stop for each start and client: | summarize arg_min(duration, *) by bin(StartTime,1s), ClientIp The join will match every start time with all the stop times from the same client IP address. So we first remove matches with earlier stop times. Then we group by start time and ip to get a group for each session. We must supply a bin function for the StartTime parameter: if we don't, Kusto will automatically use 1-hour bins, which will match some start times with the wrong stop times. arg_min picks out the row with the smallest duration in each group, and the * parameter passes through all the other columns, though it prefixes "min_" to each column name. Then we can add some code to count the durations in conveniently-sized bins. We've a slight preference for a bar chart, so we divide by 1s to convert the timespans to numbers. // Count the frequency of each duration: | summarize count() by duration=bin(min_duration/1s, 10) // Cut off the long tail: | where duration < 300 // Display in a bar chart: | sort by duration asc | render barchart Real example Logs | filter ActivityId == "ActivityId with Blablabla" | summarize max(Timestamp), min(Timestamp) | extend Duration = max_Timestamp - min_TimestampMaps = extract("totalLaunchedMaps=([^,]+),", 1, EventText, typeof(real)) | extend MapsSeconds = extract("mapsMilliseconds=([^,]+),", 1, EventText, typeof(real)) / 1000 | extend TotalMapsSeconds = MapsSeconds / TotalLaunchedMaps | filter Tenant == 'DevDiv' and Environment == 'RollupDev2' | filter TotalLaunchedMaps > 0 | summarize sum(TotalMapsSeconds) by UnitOfWorkId | extend JobMapsSeconds = sum_TotalMapsSeconds * 1 | project UnitOfWorkId, JobMapsSeconds |Reducers = extract("totalLaunchedReducers=([^,]+),", 1, EventText, typeof(real)) | extend ReducesSeconds = extract("reducesMilliseconds=([^,]+)", 1, EventText, typeof(real)) / 1000 | extend TotalReducesSeconds = ReducesSeconds / TotalLaunchedReducers | filter Tenant == 'DevDiv' and Environment == 'RollupDev2' | filter TotalLaunchedReducers > 0 | summarize sum(TotalReducesSeconds) by UnitOfWorkId | extend JobReducesSeconds = sum_TotalReducesSeconds * 1 | project UnitOfWorkId, JobReducesSeconds ) on UnitOfWorkId | JobName = extract("jobName=([^,]+),", 1, EventText) | extend StepName = extract("stepName=([^,]+),", 1, EventText) | extend UnitOfWorkId = extract("unitOfWorkId=([^,]+),", 1, EventText) | extend LaunchTime = extract("launchTime=([^,]+),", 1, EventText, typeof(datetime)) | extend FinishTime = extract("finishTime=([^,]+),", 1, EventText, typeof(datetime)) | extend TotalLaunchedMaps = extract("totalLaunchedMaps=([^,]+),", 1, EventText, typeof(real)) | extend TotalLaunchedReducers = extract("totalLaunchedReducers=([^,]+),", 1, EventText, typeof(real)) | extend MapsSeconds = extract("mapsMilliseconds=([^,]+),", 1, EventText, typeof(real)) / 1000 | extend ReducesSeconds = extract("reducesMilliseconds=([^,]+)", 1, EventText, typeof(real)) / 1000 | extend TotalMapsSeconds = MapsSeconds / TotalLaunchedMaps | extend TotalReducesSeconds = (ReducesSeconds / TotalLaunchedReducers / ReducesSeconds) * ReducesSeconds | extend CalculatedDuration = (TotalMapsSeconds + TotalReducesSeconds) * time(1s) | filter Tenant == 'DevDiv' and Environment == 'RollupDev2') on UnitOfWorkId | extend MapsFactor = TotalMapsSeconds / JobMapsSeconds | extend ReducesFactor = TotalReducesSeconds / JobReducesSeconds | extend CurrentLoad = 1536 + (768 * TotalLaunchedMaps) + (1536 * TotalLaunchedMaps) | extend NormalizedLoad = 1536 + (768 * TotalLaunchedMaps * MapsFactor) + (1536 * TotalLaunchedMaps * ReducesFactor) | summarize sum(CurrentLoad), sum(NormalizedLoad) by JobName | extend SaveFactor = sum_NormalizedLoad / sum_CurrentLoad Chart concurrent sessions over time Suppose we have a table of activities with their start and end times. We'd like to see a chart over time that shows how many are running concurrently at any time. Here's an example input, which we'll call X: We want a chart in 1-minute bins, so we want to create something that, at each 1m interval, we can count for each running activity. Here's an intermediate result: X | extend samples = range(bin(StartTime, 1m), Stop, 1m) range generates an array of values at the specified intervals: But instead of keeping those arrays, we'll expand them using mv-expand: X | mv-expand samples = range(bin(StartTime, 1m), StopTime , 1m) We can now group these by sample time, counting the occurrences of each activity: X | mv-expand samples = range(bin(StartTime, 1m), StopTime , 1m) | summarize count(SessionId) by bin(todatetime(samples),1m) - We need todatetime() because mv-expand yields a column of dynamic type. - We need bin() because, for numeric values and dates, summarize always applies a bin function with a default interval if you don't supply one. This can be rendered as a bar chart or time chart. Introduce null bins into summarize When the summarize operator is applied over a group key that consists of a datetime column, one normally "bins" those values to fixed-width bins. For example: let StartTime=ago(12h); let StopTime=now() T | where Timestamp > StartTime and Timestamp <= StopTime | where ... | summarize Count=count() by bin(Timestamp, 5m) This operation produces a table with a single row per group of rows in T that fall into each bin of five minutes. What it doesn't do is add "null bins" -- rows for time bin values between StartTime and StopTime for which there's no corresponding row in T. Often, it is desired to "pad" the table with those bins. Here's one way to do it: let StartTime=ago(12h); let StopTime=now() T | where Timestamp > StartTime and Timestamp <= StopTime | summarize Count=count() by bin(Timestamp, 5m) | where ... | union ( // 1 range x from 1 to 1 step 1 // 2 | mv-expand Timestamp=range(StartTime, StopTime, 5m) to typeof(datetime) // 3 | extend Count=0 // 4 ) | summarize Count=sum(Count) by bin(Timestamp, 5m) // 5 Here's a step-by-step explanation of the query above: - Using the unionoperator allows us to add additional rows to a table. Those rows are produced by the expression to union. - Using the rangeoperator to produce a table having a single row/column. The table is not used for anything other than for mv-expandto work on. - Using the mv-expandoperator over the rangefunction to create as many rows as there are 5-minute bins between StartTimeand EndTime. - All with a Countof 0. - Last, we use the summarizeoperator to group-together bins from the original (left, or outer) argument to unionand bins from the inner argument to it (namely, the null bin rows). This ensure that the output has one row per bin, whose value is either zero or the original count. Get more out of your data in Kusto using Machine Learning There are many interesting use cases for leveraging machine learning algorithms and derive interesting insights out of telemetry data. While often these algorithms require a very structured dataset as their input, the raw log data will usually not match the required structure and size. Our journey starts with looking for anomalies in the error rate of a specific Bing Inferences service. The Logs table has 65B records, and the simple query below filters 250K errors, and creates a time series data of errors count that utilizes anomaly detection functionseries_decompose_anomalies. The anomalies detected by the Kusto service, and are highlighted as red dots on the time series chart. Logs | where Timestamp >= datetime(2015-08-22) and Timestamp < datetime(2015-08-23) | where Level == "e" and Service == "Inferences.UnusualEvents_Main" | summarize count() by bin(Timestamp, 5min) | render anomalychart The service identified few time buckets with suspicious error rate. I'm using Kusto to zoom into this time frame, running a query that aggregates on the ‘Message' column trying to look for the top errors. I've trimmed the relevant parts out of the entire stack trace of the message to better fit into the page. You can see that I had nice success with the top eight errors, but then reached a long tail of errors since the error message was created by a format string that contained changing data. Logs | where Timestamp >= datetime(2015-08-22 05:00) and Timestamp < datetime(2015-08-22 06:00) | where Level == "e" and Service == "Inferences.UnusualEvents_Main" | summarize count() by Message | top 10 by count_ | project count_, Message This is where the new reduce operator comes to help. The reduce operator identified 63 different errors as originated by the same trace instrumentation point in the code, and helped me focus on additional meaningful error trace in that time window. Logs | where Timestamp >= datetime(2015-08-22 05:00) and Timestamp < datetime(2015-08-22 06:00) | where Level == "e" and Service == "Inferences.UnusualEvents_Main" | reduce by Message with threshold=0.35 | project Count, Pattern Now that I have a good view into the top errors that contributed to the detected anomalies, I want to understand the impact of these errors across my system. The 'Logs' table contains additional dimensional data such as 'Component', 'Cluster', etc... The new 'autocluster' plugin can help me derive that insight with a simple query. In this example below, I can clearly see that each of the top four errors are specific to a component, and while the top three errors are specific to DB4 cluster, the fourth one happens across all clusters. Logs | where Timestamp >= datetime(2015-08-22 05:00) and Timestamp < datetime(2015-08-22 06:00) | where Level == "e" and Service == "Inferences.UnusualEvents_Main" | evaluate autocluster() Mapping values from one set to another A common use-case is using static mapping of values that can help in adopting results into more presentable way. For example, consider having next table. DeviceModel specifies a model of the device, which is not a very convenient form of referencing to the device name. A better representation may be: The two approaches below demonstrate how this can be achieved. Mapping using dynamic dictionary The approach below shows how the mapping can be achieved using a dynamic dictionary and dynamic accessors. // Data set definition let Source = datatable(DeviceModel:string, Count:long) [ 'iPhone5,1', 32, 'iPhone3,2', 432, 'iPhone7,2', 55, 'iPhone5,2', 66, ]; // Query start here let phone_mapping = dynamic( { "iPhone5,1" : "iPhone 5", "iPhone3,2" : "iPhone 4", "iPhone7,2" : "iPhone 6", "iPhone5,2" : "iPhone5" }); Source | project FriendlyName = phone_mapping[DeviceModel], Count Mapping using static table The approach below shows how the mapping can be achieved using a persistent table and join operator. Create the mapping table (just once): .create table Devices (DeviceModel: string, FriendlyName: string) .ingest inline into table Devices ["iPhone5,1","iPhone 5"]["iPhone3,2","iPhone 4"]["iPhone7,2","iPhone 6"]["iPhone5,2","iPhone5"] Content of Devices now: Same trick for creating test table Source: .create table Source (DeviceModel: string, Count: int) .ingest inline into table Source ["iPhone5,1",32]["iPhone3,2",432]["iPhone7,2",55]["iPhone5,2",66] Join and project: Devices | join (Source) on DeviceModel | project FriendlyName, Count Result: Creating and using query-time dimension tables In many cases one wants to join the results of a query with some ad-hoc dimension table that is not stored in the database. It is possible to define an expression whose result is a table scoped to a single query by doing something like this: // Create a query-time dimension table using datatable let DimTable = datatable(EventType:string, Code:string) [ "Heavy Rain", "HR", "Tornado", "T" ] ; DimTable | join StormEvents on EventType | summarize count() by Code Here's a slightly more complex example: // Create a query-time dimension table using datatable let TeamFoundationJobResult = datatable(Result:int, ResultString:string) [ -1, 'None', 0, 'Succeeded', 1, 'PartiallySucceeded', 2, 'Failed', 3, 'Stopped', 4, 'Killed', 5, 'Blocked', 6, 'ExtensionNotFound', 7, 'Inactive', 8, 'Disabled', 9, 'JobInitializationError' ] ; JobHistory | where PreciseTimeStamp > ago(1h) | where Service != "AX" | where Plugin has "Analytics" | sort by PreciseTimeStamp desc | join kind=leftouter TeamFoundationJobResult on Result | extend ExecutionTimeSpan = totimespan(ExecutionTime) | project JobName, StartTime, ExecutionTimeSpan, ResultString, ResultMessage Retrieving the latest (by timestamp) records per identity Suppose you have a table that includes an id column (identifying the entity with which each row is associated, such as a User Id or a Node Id) and a timestamp column (providing the time reference for the row), as well as other columns. Your goal is to write a query that returns the latest 2 records for each value of the id column, where "latest" is defined as "having the highest value of timestamp". This can be done using the top-nested operator. First we provide the query, and then we'll explain it: datatable(id:string, timestamp:datetime, bla:string) // (1) [ "Barak", datetime(2015-01-01), "1", "Barak", datetime(2016-01-01), "2", "Barak", datetime(2017-01-20), "3", "Donald", datetime(2017-01-20), "4", "Donald", datetime(2017-01-18), "5", "Donald", datetime(2017-01-19), "6" ] | top-nested of id by dummy0=max(1), // (2) top-nested 2 of timestamp by dummy1=max(timestamp), // (3) top-nested of bla by dummy2=max(1) // (4) | project-away dummy0, dummy1, dummy2 // (5) Notes - The datatableis just a way to produce some test data for demonstration purposes. In reality, of course, you'd have the data here. - This line essentially means "return all distinct values of id". - This line then returns, for the top 2 records that maximize the timestampcolumn, the columns of the previous level (here, just id) and the column specified at this level (here, timestamp). - This line adds the values of the blacolumn for each of the records returned by the previous level. If the table has other columns of interest, one would repeat this line for every such column. - Finally, we use the project-away operator to remove the "extra" columns introduced by top-nested. Extending a table with some percent-of-total calculation Often, when one has a tabular expression that includes a numeric column, it is desireable to present that column to the user alongside its value as a percentage of the total. For example, assume that there is some query whose value is the following table: And you want to display this table as: To do so, one needs to calculate the total (sum) of the SomeInt column, and then divide each value of this column by the total. It is possible to do so for arbitrary results by giving these results a name using the as operator: // The following table literal represents a long calculation // that ends up with an anonymous tabular value: datatable (SomeInt:int, SomeSeries:string) [ 100, "Foo", 200, "Bar", ] // We now give this calculation a name ("X"): | as X // Having this name we can refer to it in the sub-expression // "X | summarize sum(SomeInt)": | extend Pct = 100 * bin(todouble(SomeInt) / toscalar(X | summarize sum(SomeInt)), 0.001) Performing aggregations over a sliding window The following example shows how to summarize columns using a sliding window. Lets take, for example, the table below, which contains prices of fruits by timestamps. Suppose we would like to calculate the min, max and sum cost of each fruit per day, using a sliding window of 7 days. In other words, each record in the result set aggregates the past 7 days, and the result contains a record per day in the analysis period. Fruits table: Sliding window aggreation query (explanation is provided below query results): let _start = datetime(2018-09-24); let _end = _start + 13d; Fruits | extend _bin = bin_at(Timestamp, 1d, _start) // #1 | extend _endRange = iif(_bin + 7d > _end, _end, iff( _bin + 7d - 1d < _start, _start, iff( _bin + 7d - 1d < _bin, _bin, _bin + 7d - 1d))) // #2 | extend _range = range(_bin, _endRange, 1d) // #3 | mv-expand _range to typeof(datetime) limit 1000000 // #4 | summarize min(Price), max(Price), sum(Price) by Timestamp=bin_at(_range, 1d, _start) , Fruit // #5 | where Timestamp >= _start + 7d; // #6 Query details: The query "stretches" (duplicates) each record in the input table throughout 7 days post its actual appearance, such that each record actually appears 7 times. As a result, when performing the aggregation per each day, the aggregation includes all records of previous 7 days. Step-by-step explanation (numbers refer to the numbers in query inline comments): - Bin each record to 1d (relative to _start). - Determine the end of the range per record - _bin + 7d, unless this is out of the (start, end) range, in which case it is adjusted. - For each record, create an array of 7 days (timestamps), starting at current record's day. - mv-expand the array, thus duplicating each record to 7 records, 1 day apart from each other. - Perform the aggregation function for each day. Due to #4, this actually summarizes the past 7 days. - Finally, since the data for the first 7d is incomplete (there's no 7d lookback period for the first 7 days), we exclude the first 7 days from the final result (they only participate in the aggregation for the 2018-10-01).
https://docs.microsoft.com/nb-no/azure/kusto/query/samples
2019-08-18T00:10:24
CC-MAIN-2019-35
1566027313501.0
[array(['images/samples/060.png', '060 alt text'], dtype=object) array(['images/samples/040.png', '040 alt text'], dtype=object) array(['images/samples/050.png', '050 alt text'], dtype=object)]
docs.microsoft.com
Getting started with Splunk Cloud To start using your new Splunk Cloud deployment, you or your Splunk administrator need to: - Get data in - Create reports based on your data Log into Splunk Cloud If you bought self-service Splunk Cloud directly on the Splunk web site, you received a cloud.splunk.com URL and credentials. If you worked with Splunk Sales to purchase managed Splunk Cloud, you received an email with a dedicated URL and login credentials as part of the purchase process. - Open your web browser. - Navigate to your Splunk Cloud URL. - Log in using the credentials supplied by Splunk Sales or Support. Get data into Splunk Cloud To get data into Splunk Cloud, the most common approach is to install the Splunk Universal Forwarder and required credentials on the computers where your source data resides and configure them to send data to Splunk Cloud. For details about the options for getting data into Splunk Cloud, see Overview of getting data into Splunk Cloud. Search and render your data After you get your data into Splunk Cloud, you can search the data to create reports and display the results using dashboards and visualizations. For detailed information, see the following manuals. For detailed information about the Splunk platform, see the following resources. - Splunk Docs is Splunk user documentation. - Splunk Answers is our thriving user community. Join us! - Splunk Education offers courses on-site, off-site, and on the Web. - Splunk Videos offer training and demos on a variety of topics.™: 6.6.3, 7.0.0, 7.0.2, 7.0.3, 7.0.5, 7.0.8, 7.1.3, 7.1.6, 7.2.3, 7.2.4, 7.2.6 May I know the Specifications or Technical Requirements for this product?
https://docs.splunk.com/Documentation/SplunkCloud/7.2.6/User/GetstartedwithSplunkCloud
2019-08-17T23:25:11
CC-MAIN-2019-35
1566027313501.0
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Message properties and search operators for In-Place eDiscovery This topic describes the properties of Exchange email messages that you can search by using In-Place eDiscovery & Hold in Exchange Server and Exchange Online. The topic also describes Boolean search operators and other search query techniques that you can use to refine eDiscovery search results. In-Place eDiscovery uses Keyword Query Language (KQL). For more details, see Keyword Query Language syntax reference. Searchable properties in Exchange The following table lists email message properties that can be searched using an In-Place eDiscovery search or by using the New-MailboxSearch or the Set-MailboxSearch cmdlet. The table includes an example of the property:value syntax for each property and a description of the search results returned by the examples. Note 1 For the value of a recipient property, you can use the SMTP address, display name, or alias to specify a user. For example, you can use [email protected], annb, or "Ann Beebe" to specify the user Ann Beebe. Supported search operators Boolean search operators, such as AND, OR, help you define more-precise mailbox searches by including or excluding specific words in the search query. Other techniques, such as using property operators (such as >= or ..), quotation marks, parentheses, and wildcards, help you refine eDiscovery search queries. The following table lists the operators that you can use to narrow or broaden search results. Important You must use uppercase Boolean operators in a search query. For example, use AND; don't use and. Using lowercase operators in search queries will return an error. Note 1 Use this operator for properties that have date or numeric values. Unsupported characters in search queries Unsupported characters in a search query typically cause a search error or return unintended results. Unsupported characters are often hidden and they're typically added to a query when you copy the query or parts of the query from other applications (such as Microsoft Word or Microsoft Excel) and copy them to the keyword box on the query page of In-Place eDiscovery search. Here's a list of the unsupported characters for an In-Place eDiscovery search query.: As previous explained, you have to use uppercase Boolean operators, such as AND and OR, in a search query. Note that the query syntax will often indicate that a Boolean operator is being used even though lowercase operators might be used; for example, (WordA or WordB) and (WordC or WordD). **How to prevent unsupported characters in your search queries?**The best way to prevent unsupported characters is to just type the query in the keyword box. Alternatively, you can copy a query from Word or Excel and then paste it to file in a plain text editor, such as Microsoft Notepad. Then save the text file and select ANSI in the Encoding drop-down list. This will remove any formatting and unsupported characters. Then you can copy and paste the query from the text file to the keyword query box. Search tips and tricks Keyword searches are not case sensitive. For example, cat and CAT return the same results. A space between two keywords or two property:valueexpressions is the same as using AND. For example, from:"Sara Davis" subject:reorganizationreturns all messages sent by Sara Davis that contain the word reorganization in the subject line. Use syntax that matches the property:valueformat. Values are not case-sensitive, and they can't have a space after the operator. If there is a space, your intended value will just be full-text searched.. Feedback
https://docs.microsoft.com/en-gb/exchange/security-and-compliance/in-place-ediscovery/message-properties-and-search-operators
2019-08-17T22:44:06
CC-MAIN-2019-35
1566027313501.0
[]
docs.microsoft.com
Overview InvoicePluginApi The InvoicePluginApi exists to develop invoice plugins. Those plugins are called from the core invoice system each time a new invoice is being generated by the system: The invoice system will first compute all the items based on the existing subscriptions or usage data associated to the account and then invoke the registered plugins to allow to add extra items on that invoice. Plugins need to implement one api getAdditionalInvoiceItems, whose purpose is to to allow plugins to add additional items and which takes the following parameters: invoice: A copy of the current invoice being generated by the system (including all items generated by the system, and only items generated by the system) properties: An empty list of PluginProperty, which means this field exists only for symmetry with other plugin apis, but is not used. context: The call context associated to that invoice (mostly interesting for plugins to identify the tenant) If multiple plugins have been registered (we don’t advise for that, unless there is a very good reason), the system will invoke all of them in no particular order. Each plugin will only see the original items generated by the system, but the union of all items generated by all plugins would end up on the invoice. Plugins are restricted in the type of items they can add because some items (e.g RECURRING) need to match existing subscription or usage data associated to the account. The following types are allowed: EXTERNAL_CHARGE ITEM_ADJ TAX Plugin Activation Invoice plugins are registered through the per-tenant org.killbill.invoice.plugin property (CSV list of plugin names) and invoked in order. Retries Invoice generation can be aborted and retried at a later time if the plugin is unable to compute the additional items (e.g. third-party tax service unavailable). The mechanism to do so is very similar to the retry mechanism for notification plugins: the plugin simple needs to throw an InvoicePluginApiRetryException with a specified retry schedule. Take a look at the Notification Plugins documentation for more details. A note about invoice item ids When returning InvoiceItem objects back to Kill Bill, it is recommended to set the id value to null. However: For new invoices or if an invoice is still in DRAFTmode, invoice plugins are allowed to update existing items, instead of adding new ones, by respecting the invoice item id returned by Kill Bill. This can be useful for instance to update tax amounts. If the invoice plugin needs to keep track of the items it added, it can set the idwhich will be respected by the core system. Such values are expected to be globally unique. Use Cases Tax Plugin One of the main use case of this api is to allow plugins to add TAX items. Kill Bill by default knows nothing about tax, and that logic is deferred to plugins implementing this api. Example of such existing plugins are: Avalara tax plugin: This is a plugin that interracts with the third party Avalara. Simple tax plugin: This is a simple tax plugin that was developed mostly as a use case for the api or to address use cases where tax calculation is simple. Invoice Customization There are many use cases where one would want to modify existing invoices. For instance one could implement a way to generate discounts based on certain heuristics (for a given tenant/account/subscription some discount could apply). In those scenarios, the plugin would add ITEM_ADJ items to reflect the discount. A reverse use case is one where a plugin needs to add extra charges that are unknown of the system (Kill Bill) and also based on some heuristics that are only known from the plugin.
http://docs.killbill.io/0.20/invoice_plugin.html
2019-08-17T23:20:41
CC-MAIN-2019-35
1566027313501.0
[]
docs.killbill.io
This is documentation for Orange 2.7. For the latest documentation, see Orange 3. Curve (owcurve)¶ - class Orange.OrangeWidgets.plot.owcurve.OWPlotItem¶ This class represents a base for any item than can be added to a plot. - data_rect()¶ Returns the bounding rectangle of this item in data coordinates. This method is used in autoscale calculations. - set_graph_transform(transform)¶ Sets the graph transform (the transformation that maps from data to plot coordinates) for this item. - set_zoom_transform(transform)¶ Sets the zoom transform (the transformation that maps from plot to scene coordinates) for this item. - update_properties()¶ Called by the plot, this function is supposed to updates the item’s internal state to match its settings. The default implementation does nothing and shold be reimplemented by subclasses. - register_points()¶ If this item constains any points (of type OWPoint), add them to the plot in this function. The default implementation does nothing. - set_in_background(background)¶ If background is True, the item is moved to be background of this plot, behind other items and axes. Otherwise, it’s brought to the front, in front of axes. The default in False, so that items apper in front of axes. - is_in_background()¶ Returns if item is in the background, set with set_in_background(). Subclassing Often you will want to create a custom curve class that inherits from OWCurve. For this purpose, OWPlotItem provides two virtual methods: paint() and update_properties(). - update_properties() is called whenever a curve or the plot is changed and needs to be updated. In this method, child items and other members should be recalculated and updated. The OWCurve even provides a number of methods for asynchronous (threaded) updating. - paint() is called whenever the item needs to be painted on the scene. This method is called more often, so it’s advisable to avoid long operation in the method. Most provided plot items, including OWCurve, OWMultiCurve and utility curves in owtools only reimplement the first method, because they are optimized for performance with large data sets. - class Orange.OrangeWidgets.plot.owcurve.OWCurve(xData=[], yData=[], x_axis_key=2, y_axis_key=0, tooltip=None)¶ This class represents a curve on a plot. It is essentially a plot item with a series of data points or a continuous line. Note All the points or line segments in an OWCurve have the same properties. Different points in one curve are supported by the OWMultiCurve class. - point_item(x, y, size=0, parent=None)¶ Returns a single point with this curve’s properties. It is useful for representing the curve, for example in the legend. - set_data(x_data, y_data)¶ Sets the curve’s data to a list of coordinates specified by x_data and y_data. - set_style(style)¶ Sets the curve’s style to style. The following values are recognized by OWCurve: Curve subclasses can use this value for different drawing modes. Values up to OWCurve.UserCurve are reserved, so use only higher numbers, like the following example: class MyCurve(OWCurve): PonyStyle = OWCurve.UserCurve + 42 def draw_ponies() # Draw type-specific things here def update_properties(self): if self.style() == PonyStyle: self.draw_ponies() else: OWCurve.update_properties(self) - cancel_all_updates()¶ Cancel all pending threaded updates and block until they are finished. This is usually called before starting a new round of updates. - update_number_of_items()¶ Resizes the point list so that it matches the number of data points in data() - class Orange.OrangeWidgets.plot.owcurve.OWMultiCurve¶ A multi-curve is a curve in which each point can have its own properties. The point coordinates can be set by calling OWCurve.set_data(), just like in a normal curve. In addition, OWMultiCurve provides methods for setting properties for individual points. Each of this methods take a list as a parameter. If the list has less elements that the curve’s data, the first element is used for all points.
https://docs.biolab.si/2/extend-widgets/rst/OrangeWidgets.plot.owcurve.html
2019-08-17T22:30:20
CC-MAIN-2019-35
1566027313501.0
[]
docs.biolab.si
Introduction Use this integration to exchange files with your GitLab repository. As soon as the integration is set up, you can pull the files by using the Lokalise web dashboard. There will be also an incoming webhook set up, which you may use to automatically pull the files as they are pushed to GitLab by your developer team. Once the translations are done, you can trigger the creation of a pull (merge) request when exporting. Setup 1. Connect your repository Navigate to project settings > Integrations and click Connect under the GitLab badge. You will need to generate and copy/paste a personal access token from GitLab. The token must have api scope enabled. Next, you can choose the Repository and Branch to pull from. You must specify the platform of the files stored in this repo. As you export files from Lokalise, you can trigger the GitLab_0<< 2. Select files to pull Browse the selected project and select the files you want to pull and import. In most cases, at this step you would only need to select the base language files (the files that are being modified locally and then pushed to GitLab). After selecting a file, you must set the language of the file in the drop-down. 3. Add more repositories As we recommend keeping all platform files within the same project, you may want to set up another repositories that apply to the same project. Click Add another repo to add more repositories. Pulling files Manual pull Use the Pull now button at the integration page (in project settings > Integrations). Hitting the button will add the pull to the system queue to be executed in the background. Auto-pull When you are satisfied with the initial pull results, it is a good idea to set up a webhook at GitLab which automates pulling the changes to Lokalise as you push to GitLab. At GitLab, navigate to your repository settings > Integrations and copy/paste the Auto-pull URL provided in the Lokalise integration config. You will need to provide the Auto-pull secret generated at the integration page as well. Pull requests As the translations are being completed, Lokalise can create the pull requests with the exported files, which you can then merge to a selected branch. In order to create a pull request, you need to perform a project export with the GitLab trigger enabled. It is a good idea to use the Preview button first, so you can see the resulting file/folder structure before triggering the creation of a pull request. We would trigger pull requests only to the repos of the platform that match the file type you are exporting, i.e. if you are exporting JSON format, Lokalise will only create pull requests in repositories with the Web platform. Here is what happens as you trigger it: A new branch is created from the last revision of the branch which you have chosen GitLab. To initiate a pull request from the API, use triggers=['gitlab'] parameter with the /export endpoint. If you are using the CLI tool, use --triggers=gitlab as a parameter when performing the export. In case there is a firewall in place, make sure to whitelist these IP addresses: 159.69.72.82 94.130.129.237
https://docs.lokalise.co/en/articles/1789855-gitlab
2019-08-17T22:44:05
CC-MAIN-2019-35
1566027313501.0
[array(['https://downloads.intercomcdn.com/i/o/55410412/af2f02f519d932bce850fe24/x___Lokalise.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/55410902/c59dedb72351d364478d44ae/Screen+Shot+2018-04-12+at+10.51.50.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/55411081/42b2e3fc6eea1bd6054c9fe7/x___Lokalise.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/55411728/8340a4ae4d990d48855aa1fd/x___Lokalise.png', None], dtype=object) ]
docs.lokalise.co
Improved OData 2.0 Adapter for Salesforce Connect If you’re using the OData 2.0 adapter with Salesforce Connect, we’re pushing improvements your way. To prepare, make sure that your external data sources that connect using the OData 2.0 adapter can decode encoded URLs. But if you’re following HTTP standards, you are already compliant. Where: This change applies to Lightning Experience and Salesforce Classic as part of Salesforce Connect. Salesforce Connect is free for Developer Edition and available for an extra cost in Enterprise, Performance, and Unlimited editions.
https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_forcecom_external_data_improved_odata2_adapter.htm
2019-08-17T22:58:39
CC-MAIN-2019-35
1566027313501.0
[]
docs.releasenotes.salesforce.com
Step 3: Send Email Using Amazon SES Event Publishing After you create a configuration set and add an event destination, the last step to event publishing is to send your emails. To publish events associated with an email, you must provide the name of the configuration set to associate with the email. Optionally, you can provide message tags to categorize the email. You provide this information to Amazon SES as either parameters to the email sending API, Amazon SES-specific email headers, or custom headers in your MIME message. The method you choose depends on which email sending interface you use, as shown in the following table. The following sections describe how to specify the configuration set and message tags using headers and using API parameters. Additionally, this guide contains several code examples that demonstrate how to send email programmatically using Amazon SES. Each of these code examples includes a method of passing a configuration set when sending an email. For more information, see Amazon SES Code Examples. Using Amazon SES API Parameters To use SendEmail or SendRawEmail with event publishing, you specify the configuration set and the message tags by passing data structures called ConfigurationSet and MessageTag to the API call. For more information about using the Amazon SES API, see the Amazon Simple Email Service API Reference. Using Amazon SES-Specific Email Headers When you use SendRawEmail or the SMTP interface, you can specify the configuration set and the message tags by adding Amazon SES-specific headers to the email. Amazon SES removes the headers before sending the email. The following table shows the names of the headers to use. The following example shows how the headers might look in a raw email that you submit to Amazon SES. Copy-- Using Custom Email Headers Although you must specify the configuration set name using the Amazon SES-specific header X-SES-CONFIGURATION-SET, you can specify the message tags by using your own MIME headers. Note Header names and values that you use for Amazon SES event publishing must be in ASCII. If you specify a non-ASCII header name or value for Amazon SES event publishing, the email sending call will still succeed, but the event metrics will not be emitted to Amazon CloudWatch.
http://docs.aws.amazon.com/ses/latest/DeveloperGuide/event-publishing-send-email.html
2017-11-17T19:40:39
CC-MAIN-2017-47
1510934803906.12
[]
docs.aws.amazon.com
You can use View Administrator to add a pod to an existing site. Procedure - Log in to the View Administrator user interface for any View Connection Server instance in the pod federation. - In View Administrator, select . -.
https://docs.vmware.com/en/VMware-Horizon-6/6.2/com.vmware.horizon-view.cloudpodarchitecture.doc/GUID-0F4FA353-67EB-4191-9822-4580B52479F8.html
2017-11-17T19:41:08
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com
You can configure when a serial port is connected to a virtual machine. You can also configure whether to send output to a physical port or to a file on the host system, set up a direct connection between two virtual machines, and specify whether the guest operating system uses the port in polled mode. To configure serial port settings for a selected virtual machine, select the virtual machine, select Hardware tab, and select the serial port., click the
https://docs.vmware.com/en/VMware-Workstation-Player-for-Linux/14.0/com.vmware.player.linux.using.doc/GUID-A6D5BFF7-13AB-4141-B433-93D200D0F907.html
2017-11-17T19:41:02
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com
. Use the Access Point widget to see the status of multiple Access Points, including blast and PCoIP session counts, usage percentage (i.e., the percentage of the maximum number of sessions actually used), IP addresses, and collection status. Select an Access Point from the list and click on the Object Details icon to details about each Access Point (e.g., health and active alerts); you can also configure alert thresholds for session usage. You can configure the session level usage for an Access Point that triggers an alert. For example, you may want to trigger an alert if session usage reaches 90% of an AD's capacity. To do so: Click Home. Click Content in the vRealize Operations Manager for Horizon's sidebar. Click Symptom Definitions and Metric/Property Symptom Definitions. Filter for "Access Point" (or simply "access"). Double-click on Session Usage and configure the alert to the desired value (e.g., 90%).
https://docs.vmware.com/en/VMware-vRealize-Operations-for-Horizon/6.5/com.vmware.vrealize.horizon.admin/GUID-86E1B6BC-B282-4A3B-B360-163130EC9111.html
2017-11-17T19:35:47
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com
Configure and publish a machine component as a standalone blueprint that other architects can reuse as a component in application blueprints, and catalog administrators can include in catalog services. Prerequisites Log in to the vRealize Automation console as an infrastructure architect. Complete external preparations for provisioning, such as creating templates, WinPE's, and ISO's, or gather the information about external preparations from your administrators. Configure your tenant. Configuring Tenant Settings. Configure your IaaS resources. Checklist for Configuring IaaS Resources. See Preparing Your Environment for vRealize Automation Management. Procedure - Click the New icon ( ). - Follow the prompts on the New Blueprint dialog box to configure general settings. - Click OK. - Click Machine Types in the Categories area to display a list of available machine types. - Drag the type of machine you want to provision onto the design canvas. - Follow the prompts on each of the tabs to configure machine provisioning details. - Click Finish. - Select your blueprint and click Publish. Results You configured and published a machine component as a standalone blueprint. Catalog administrators can include this machine blueprint in catalog services and entitle users to request this blueprint. Other architects can reuse this machine blueprint to create more elaborate application blueprints that include Software components, XaaS blueprints, or additional machine blueprints. What to do next You can combine a machine blueprint with Software components, XaaS blueprints, or additional machine blueprints to create more elaborate application blueprints. See Assembling Composite Blueprints.
https://docs.vmware.com/en/vRealize-Automation/7.1/com.vmware.vrealize.automation.doc/GUID-C135286B-175F-4ADA-A9A8-4F16EEB7E672.html
2017-11-17T19:32:59
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com
When you install from the Installation Details page, you are informed of any issues that are preventing the installation from finishing. About this task When problems are found, the component is flagged and you are presented with detailed information about the failure along with steps to investigate solutions. After you have addressed the issue, you retry the installation step. Depending on the type of failure, you follow different remediation steps. Procedure - If the Retry Failed button is enabled, use the following steps. - Review the failure. - Assess what needs to be changed and make required changes. - Return to the Installation screen and click Retry Failed. The installer attempts to install all failed components. - If the Retry All IaaS button is enabled, use the following steps. - Review the failure. - Assess what needs to be changed. - Revert all IaaS servers to the snapshots you created earlier. - Delete the MS SQL database, if you are using an external database. - Make required changes. - Click Retry All IaaS. - If the failure is in the virtual appliance components use the following steps. - Review the failure. - Assess what needs to be changed. - Revert all servers to snapshots, including the one from which you are running the wizard, - Make required changes. - Refresh the wizard page. - Logon and rerun the wizard again. The wizard opens at the pre-installation step.
https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vrealize.automation.doc/GUID-0CEAA004-1897-49C8-9FC0-D43A570F3F70.html
2017-11-17T19:32:57
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com
Core Dispatcher Core Dispatcher Core Dispatcher Core Dispatcher Class Definition Provides the Windows Runtime core event message dispatcher. Instances of this type are responsible for processing the window messages and dispatching the events to the client. public : sealed class CoreDispatcher : ICoreAcceleratorKeys, ICoreDispatcher, ICoreDispatcher2, ICoreDispatcherWithTaskPriority public sealed class CoreDispatcher : ICoreAcceleratorKeys, ICoreDispatcher, ICoreDispatcher2, ICoreDispatcherWithTaskPriority Public NotInheritable Class CoreDispatcher Implements ICoreAcceleratorKeys, ICoreDispatcher, ICoreDispatcher2, ICoreDispatcherWithTaskPriority // This class does not provide a public constructor. - Attributes - Remarks Instances of this type can be obtained from the CoreWindow.Dispatcher property. The current CoreWindow instance can be obtained by calling CoreWindow.GetForCurrentThread. void MyCoreWindowEvents::Run() // this is an implementation of IFrameworkView::Run() used to show context. It is called by CoreApplication::Run(). { CoreWindow::GetForCurrentThread()->Activate(); /... CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessUntilQuit); } Properties Gets and sets the priority of the current task. public : CoreDispatcherPriority CurrentPriority { get; set; } public CoreDispatcherPriority CurrentPriority { get; set; } Public ReadWrite Property CurrentPriority As CoreDispatcherPriority var coreDispatcherPriority = coreDispatcher.currentPriority; coreDispatcher.currentPriority = coreDispatcherPriority; A CoreDispatcherPriority enumeration value that specifies the priority of the current task. - See Also - Gets a value that specifies whether the event dispatcher provided by this instance of CoreWindow has access to the current thread or not. public : Platform::Boolean HasThreadAccess { get; } public bool HasThreadAccess { get; } Public ReadOnly Property HasThreadAccess As bool var bool = coreDispatcher.hasThreadAccess; - Value - Platform::Boolean bool bool bool True if the event dispatcher has thread access; false it does not. Methods ProcessEvents(CoreProcessEventsOption) ProcessEvents(CoreProcessEventsOption) ProcessEvents(CoreProcessEventsOption) ProcessEvents(CoreProcessEventsOption) Starts the dispatcher processing the input event queue for this instance of CoreWindow. public : void ProcessEvents(CoreProcessEventsOption options) public void ProcessEvents(CoreProcessEventsOption options) Public Function ProcessEvents(options As CoreProcessEventsOption) As void coreDispatcher.processEvents(options); - options - CoreProcessEventsOption CoreProcessEventsOption CoreProcessEventsOption CoreProcessEventsOption Determines how many events to process, and if this method should block. RunAsync(CoreDispatcherPriority, DispatchedHandler) RunAsync(CoreDispatcherPriority, DispatchedHandler) RunAsync(CoreDispatcherPriority, DispatchedHandler) RunAsync(CoreDispatcherPriority, DispatchedHandler) Schedules the provided callback on the UI thread from a worker thread, and returns the results asynchronously. public : IAsyncAction RunAsync(CoreDispatcherPriority priority, DispatchedHandler agileCallback) public IAsyncAction RunAsync(CoreDispatcherPriority priority, DispatchedHandler agileCallback) Public Function RunAsync(priority As CoreDispatcherPriority, agileCallback As DispatchedHandler) As IAsyncAction var iAsyncAction = coreDispatcher.run object that provides handlers for the completed async event dispatch. Examples The following examples demonstrate the use of CoreDispatcher::RunAsync to schedule work on the main UI thread using the CoreWindow 's event dispatcher. // _dispatcher->RunAsync(Windows::UI::Core::CoreDispatcherPriority::Normal, ref new Windows::UI::Core::DispatchedHandler([this]() { _count++; TimerTextBlock->Text = "Total Running Time: " + _count.ToString() + " Seconds"; })); // using CallbackContext::Any void Playback::DisplayStatus(Platform::String^ text) { _dispatcher->RunAsync(Windows::UI::Core::CoreDispatcherPriority::Normal, ref new Windows::UI::Core::DispatchedHandler([=]() { _OutputStatus->Text += text + " "; }, CallbackContext::Any)); } await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { rootPage.NotifyUser("The toast encountered an error", NotifyType.ErrorMessage); }); var ignored = dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { Scenario3OutputText.Text += outputText; }); Remarks If you are on a worker thread and want to schedule work on the UI thread, use CoreDispatcher::RunAsync. Always set the priority to CoreDispatcherPriority::Normal or CoreDispatcherPriority::Low, and ensure that any chained callbacks also use CoreDispatcherPriority::Normal or CoreDispatcherPriority::Low. Note Callbacks scheduled with CoreDispatcherPriority::Low priority are called when there are no pending input events. Use the CoreDispatcherPriority::Low priority to make your app UI more responsive. To schedule background tasks, use CoreDispatcher::RunIdleAsync. To spin off a worker thread from the UI thread, do not use this method (CoreDispatcher::RunAsync ). Instead, use one of the Windows::System::Threading::ThreadPool::RunAsync method overloads. Await a UI task sent from a background thread When you update your UI from a background thread by calling RunAsync, it schedules the work on the UI thread and returns control to the caller immediately. If you need to wait for async work to complete before returning, for example, waiting for user input in a dialog box, do not use RunAsync alone. RunAsync also does not provide a way for the task to return a result to the caller. In this example, RunAsync returns without waiting for the user input from the dialog box. (RunAsync returns as soon as the code in the lambda expression begins executing.) //DO NOT USE THIS CODE. await dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () => { await signInDialog.ShowAsync(); }); // Execution continues here before the call to ShowAsync completes. In this case, you need to use a TaskCompletionSource in combination with RunAsync to return a Task that you can await from your background thread, thereby pausing execution until the UI task completes. We recommend that you use the RunTaskAsync extension method from our task snippet library for this. It provides a robust solution that enables code running on a background thread to await a task that must run on the UI thread. See the Await a UI task sent from a background thread page for the code and example usage. Porting from .NET If you are porting from .NET code and using Dispatcher.BeginInvoke and Dispatcher.Invoke methods, note that CoreDispatcher::RunAsync is asynchronous. There is no synchronous version. After you change Dispatcher.Invoke to CoreDispatcher::RunAsync, your code must support the Windows Runtimeasync pattern and use the specific lambda syntax for your chosen language. - See Also - RunIdleAsync(IdleDispatchedHandler) RunIdleAsync(IdleDispatchedHandler) RunIdleAsync(IdleDispatchedHandler) RunIdleAsync(IdleDispatchedHandler) Schedules a callback on the UI thread from a worker thread at idle priority, and returns the results asynchronously. public : IAsyncAction RunIdleAsync(IdleDispatchedHandler agileCallback) public IAsyncAction RunIdleAsync(IdleDispatchedHandler agileCallback) Public Function RunIdleAsync(agileCallback As IdleDispatchedHandler) As IAsyncAction var iAsyncAction = coreDispatcher.runIdleAsync(agileCallback); - agileCallback - IdleDispatchedHandler IdleDispatchedHandler IdleDispatchedHandler IdleDispatchedHandler The callback on which the idle priority dispatcher returns when the event is dispatched.. - See Also - Queries whether the caller should yield if there are items in the task queue of higher priority than the current task. public : Platform::Boolean ShouldYield() public bool ShouldYield() Public Function ShouldYield() As bool var bool = coreDispatcher.shouldYield(); true if the current work item should yield to higher priority work; false if it should not. - See Also - ShouldYield(CoreDispatcherPriority) ShouldYield(CoreDispatcherPriority) ShouldYield(CoreDispatcherPriority) ShouldYield(CoreDispatcherPriority) Queries whether the caller should yield if there are items in the task queue of the specified priority or higher. public : Platform::Boolean ShouldYield(CoreDispatcherPriority priority) public bool ShouldYield(CoreDispatcherPriority priority) Public Function ShouldYield(priority As CoreDispatcherPriority) As bool var bool = coreDispatcher.shouldYield(priority); - priority - CoreDispatcherPriority CoreDispatcherPriority CoreDispatcherPriority CoreDispatcherPriority The minimum priority level for which the current work item should yield. true if the current work item should yield to higher priority work; false if it should not. - See Also - Stops the dispatcher from processing any queued events. public : void StopProcessEvents() public void StopProcessEvents() Public Function StopProcessEvents() As void coreDispatcher.stopProcessEvents(); TryRunAsync(CoreDispatcherPriority, DispatchedHandler) TryRunAsync(CoreDispatcherPriority, DispatchedHandler) TryRunAsync(CoreDispatcherPriority, DispatchedHandler) TryRunAsync(CoreDispatcherPriority, DispatchedHandler) Attempts to schedule the provided callback on the UI thread from a worker thread, and returns the results asynchronously. public : IAsyncOperation<Platform::Boolean> TryRunAsync(CoreDispatcherPriority priority, DispatchedHandler agileCallback) public IAsyncOperation<bool> TryRunAsync(CoreDispatcherPriority priority, DispatchedHandler agileCallback) Public Function TryRunAsync(priority As CoreDispatcherPriority, agileCallback As DispatchedHandler) As IAsyncOperation( Of bool ) var iAsyncOperation = coreDispatcher.tryRun asynchronous operation. - See Also - TryRunIdleAsync(IdleDispatchedHandler) TryRunIdleAsync(IdleDispatchedHandler) TryRunIdleAsync(IdleDispatchedHandler) TryRunIdleAsync(IdleDispatchedHandler) Attempts to schedule a callback on the UI thread from a worker thread at idle priority, and returns the results asynchronously. public : IAsyncOperation<Platform::Boolean> TryRunIdleAsync(IdleDispatchedHandler agileCallback) public IAsyncOperation<bool> TryRunIdleAsync(IdleDispatchedHandler agileCallback) Public Function TryRunIdleAsync(agileCallback As IdleDispatchedHandler) As IAsyncOperation( Of bool ) var iAsyncOperation = coreDispatcher.tryRunIdleAsync(agileCallback); - agileCallback - IdleDispatchedHandler IdleDispatchedHandler IdleDispatchedHandler IdleDispatchedHandler The callback on which the idle priority dispatcher returns when the event is dispatched. The asynchronous operation. - See Also - Events AcceleratorKeyActivated AcceleratorKeyActivated AcceleratorKeyActivated AcceleratorKeyActivated Fired when an accelerator key is activated (pressed or held down). public : event TypedEventHandler AcceleratorKeyActivated<CoreDispatcher, AcceleratorKeyEventArgs> public event TypedEventHandler AcceleratorKeyActivated<CoreDispatcher, AcceleratorKeyEventArgs> Public Event TypedEventHandler AcceleratorKeyActivated( Of ( Of CoreDispatcher ), ( Of AcceleratorKeyEventArgs )) function onAcceleratorKeyActivated(eventArgs){/* Your code */} coreDispatcher.addEventListener("acceleratorKeyActivated", onAcceleratorKeyActivated); coreDispatcher.removeEventListener("acceleratorKeyActivated", onAcceleratorKeyActivated); Remarks **.
https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Core.CoreDispatcher
2017-11-17T19:52:06
CC-MAIN-2017-47
1510934803906.12
[]
docs.microsoft.com
win8_share_image(filename, title, description, immediate); Returns: N/A With this function you can share an image from your game. The image should be an included file and part of the app "bundle", and you should.
http://docs.yoyogames.com/source/dadiospice/002_reference/windows8/win8_share_image.html
2017-11-17T19:36:07
CC-MAIN-2017-47
1510934803906.12
[]
docs.yoyogames.com
of VM fails when backed up VM was connected to DVS and restored to other ESX (222475) When powering on a VM after you restore it, vCenter Server., or 6.0.2 to 6.0.3 requires upgrading the kernel (244133) Before you upgrade the vSphere Data Protection appliance from 5.5.10, 5.8.2, or 6.0.2 to 6.0.3, upgrade the kernel of the vSphere Data Protection appliance that you want to upgrade. Otherwise, the upgrade fails. Workaround Perform the following steps to upgrade the vSphere Data Protection appliance kernel: - From the vSphere Data Protection 6.0 emwebapp.sh -restart command. Backup Issues - Loading clients takes a significant amount of time when editing or cloning a backup job (60249). - Backup of a VM fails when ESX is moved from one datacenter to other within the same vCenter Server inventory (207375): -) Workaround As a best practice, avoid deleting virtual machines during restore operations. If this error does occur, create the virtual machine with a new name and add it to a new backup job. - File Level Restore (FLR): "Error 10007: Miscellaneous error" is displayed for most of the FLR failures . Microsoft Application (MS App). Automatic Backup Verification (ABV) Issues - ABV: Verification job fails after renaming the datastore (55790) This error can occur if you rename or move the destination datastore outside of vSphere Data Protection. Workaround Edit the verification job and select the renamed or moved destination datastore as the new destination. For instructions, refer to "Editing a Backup Verification Job" in the vSphere Data Protection Administration Guide. - ABV: Verification job fails to initiate when destination path for host is changed :
https://docs.vmware.com/en/VMware-vSphere/6.0/rn/data-protection-603-release-notes.html
2017-11-17T20:03:50
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com
When you create an approval policy, you can add levels for both pre- and post-provisioning phases within the provisioning workflow. Procedure - On the Pre Approval or Post Approval tab, click the Add Levels icon ( ). - Enter a name and, optionally, a description. - Select a manual approval requirement. - Select the approvers. - Indicate who must approve the request or action. - Click the Approval Form tab. - Double-click the fields you want to make editable. An approver can modify the editable fields when completing an approval for this level and policy. Editing fields is only in the pre-provisioning phase. In the post-provisioning phase, the fields are displayed as read-only and cannot be edited. - Click Add. - Click Add. Results You can apply your approval policy to services, catalog items, and actions when you create an entitlement.
https://docs.vmware.com/en/vRealize-Automation/6.2/com.vmware.vra.asd.doc/GUID-5246602D-F086-48BF-95D0-A73CAFA5D8F2.html
2017-11-17T20:03:41
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com
To register your data collector with vRealize Business for Cloud, you must generate a one-time key in the vRealize Business for Cloud server. Prerequisites Verify that you have deployed and configured the data collector and a vRealize Business for Cloud server. See Deploying a Remote Data Collector Procedure - Log in to the vRealize Automation interface at by using credentials of a tenant administrator. - Click the Administration tab. - Click Business Management. - Click Manage Data Collection, and select Remote Data Collection. - Click the Generate a new one time use key link. You see a key on the Success dialog box.Note: The key is active for 20 minutes only. - Copy or note down the key. - Click OK. What to do next Log into the data collection manager of the remote data collector on the 9443 port and use the one-time key to register your collector. See Register a Remote Data Collector with vRealize Business for Cloud Server.
https://docs.vmware.com/en/vRealize-Business/7.0.1/com.vmware.vRBforCloud.install.doc/GUID-17EDCB92-FDA9-418C-8D34-0DCCDC2F81D9.html
2017-11-17T20:03:21
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com
Writes tagged data to the writer object's stream. procedure WriteString(const Value: UnicodeString); __fastcall WriteString(const UnicodeString Value); WriteString is used internally by the component streaming system to write component properties to a stream. WriteString is otherwise only used by component writers in the DefineProperties and WriteData procedures. WriteString writes the string passed in Value to the writer object's stream. WriteString checks to ensure that Value is the correct type before calling Write to write the string and its value type to the stream.
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/Classes_TWriter_WriteString.html
2017-11-17T19:16:04
CC-MAIN-2017-47
1510934803906.12
[]
docs.embarcadero.com
6. Writing Renjin Extensions¶ This chapter can be considered as Renjin’s equivalent of the Writing R Extensions manual for GNU R. Here we discuss how you create extensions, or packages as they are referred to in R, for Renjin. Packages allow you to group logically related functions and data together in a single archive which can be reused and shared with others. Renjin packages do not differ much from packages for GNU R. One notable difference is that Renjin treats unit tests as first class citizens, i.e. Renjin includes a package called hamcrest that provides functionality for writing unit tests right out of the box. We encourage you to include as many unit tests with your package as possible. One feature currently missing for Renjin packages is the ability to document your R code. You can use Javadoc to document your Java classes and methods. 6.1. Package directory layout¶ The files in a Renjin package must be organized in a directory structure that adheres to the Maven standard directory layout. A directory layout that will cover most Renjin packages is as follows: projectdir/ src/ main/ java/ ... R/ resources/ test/ java/ ... R/ resources/ NAMESPACE pom.xml The table Directories in a Renjin package gives a short description of the directories and files in this layout. The functionality of the DESCRIPTION file used by GNU R packages is replaced by a Maven pom.xml (POM) file. In this file you define the name of your package and any dependencies, if applicable. The POM file is used by Renjin’s Maven plugin to create the package. This is the subject of the next section. 6.2. Renjin Maven plugin¶ Whereas you would use the commands R CMD check, R CMD build, and R CMD INSTALL to check, build (i.e. package), and install packages for GNU R, packages for Renjin are tested, packaged, and installed using a Maven plugin. The following XML file can be used as a pom.xml template for all Renjin packages: <project xmlns="" xmlns: <modelVersion>4.0.0</modelVersion> <groupId>com.acme</groupId> <artifactId>foobar</artifactId> <version>1.0-SNAPSHOT</version> <packaging>jar</packaging> <!-- general information about your package --> <name>Package name or title</name> <description>A short description of your package.</description> <url></url> <licenses> <!-- add one or more licenses under which the package is released --> <license> <name>Apache License version 2.0</name> <url></url> </license> </licenses> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <renjin.version>0.8.2527</renjin.version> </properties> <dependencies> <!-- the script engine is convenient even if you do not use it explicitly --> <dependency> <groupId>org.renjin</groupId> <artifactId>renjin-script-engine</artifactId> <version>${renjin.version}</version> </dependency> <!-- the hamcrest package is only required if you use it for unit tests --> <dependency> <groupId>org.renjin</groupId> <artifactId>hamcrest</artifactId> <version>${renjin.version}</version> <scope>test</scope> </dependency> </dependencies> <repositories> <repository> <id>bedatadriven</id> <name>bedatadriven public repo</name> <url></url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>bedatadriven</id> <name>bedatadriven public repo</name> <url></url> </pluginRepository> </pluginRepositories> <build> <plugins> <plugin> <groupId>org.renjin</groupId> <artifactId>renjin-maven-plugin</artifactId> <version>${renjin.version}</version> <executions> <execution> <id>build</id> <goals> <goal>namespace-compile</goal> </goals> <phase>process-classes</phase> </execution> <execution> <id>test</id> <goals> <goal>test</goal> </goals> <phase>test</phase> </execution> </executions> </plugin> </plugins> </build> </project> This POM file provides a lot of information: - fully qualified name of the package, namely com.acme.foobar; - package version, namely 1.0-SNAPSHOT; - package dependencies and their versions, namely the Renjin Script Engine and the hamcrest package (see the next section); - BeDataDriven’s public repository to look for the dependencies if it can’t find them locally or in Maven Central; Important Package names is one area where Renjin takes a different approach to GNU R and adheres to the Java standard of using fully qualified names. The package in the example above must be loaded using its fully qualified name, that is with library(com.acme.foobar) or require(com.acme.foobar). The group ID (com.acme in this example) is traditionally a domain over which only you have control. The artifact ID should have only lower case letters and no strange symbols. The term artifact is used by Maven to refer to the result of a build which, in the context of this chapter, is always a package. Now you can use Maven to test, package, and install your package using the following commands: - mvn test - run the package tests (both the Java and R code tests) - mvn package - create a JAR file of the package (named foobar-1.0-SNAPSHOT.jar in the example above) in the targetfolder of the package’s root directory - mvn install - install the artifact (i.e. package) into the local repository - mvn deploy - upload the artifact to a remote repository (requires additional configuration) - mvn clean - clean the project’s working directory after a build (can also be combined with one of the previous commands, for example: mvn clean install) 6.3. Package NAMESPACE file¶ Since R version 2.14, packages are required to have a NAMESPACE file and the same holds for Renjin. Because of dynamic searching for objects in R, the use of a NAMESPACE file is good practice anyway. The NAMESPACE file is used to explicitly define which functions should be imported into the package’s namespace and which functions the package exposes (i.e. exports) to other packages. Using this file, the package developer controls how his or her package finds functions. Usage of the NAMESPACE in Renjin is almost exactly the same as in GNU R save for two differences: - the directives related to S4 classes are not yet supported by Renjin and - Renjin accepts the directive importClass()for importing Java classes into the package namespace. Here is an overview of the namespace directives that Renjin supports: export(f)or export(f, g) - Export an object f(singular form) or multiple objects fand g(plural form). You can add as many objects to this directive as you like. exportPattern("^[^\\.]") - Export all objects whose name does not start with a period (‘.’). Although any regular expression can be used in this directive, this is by far the most common one. It is considered to be good practice not to use this directive and to explicitly export objects using the export()directive. import(foo)or import(foo, bar) - Import all exported objects from the package named foo(and barin the plural form). Like the export()directive, you can add as many objects as you like to this directive. importFrom(foo, f)or importFrom(foo, f, g) - Import only object f(and gin the plural form) from the package named foo. S3method(print, foo) - Register a print (S3) method for the fooclass. This ensures that other packages understand that you provide a function print.foo()that is a print method for class foo. The print.foo()does not need to be exported. importClass(com.acme.myclass) - A namespace directive which is unique to Renjin and which allows Java classes to be imported into the package namespace. This directive is actually a function which does the same as Renjin’s import()function that was introduced in the chapter Importing Java classes into R code. To summarize: the R functions in your package have access to all R functions defined within your package (also those that are not explicitely exported) as well as the Java classes imported into the package names using the importClass directive. Other packages only have access to the R objects that your package exports as well as to the public Java classes. Since Java has its own mechanism to control the visibility of classes, there is no exportClass directive in the NAMESPACE file.s 6.4. Using the hamcrest package to write unit tests¶ Renjin includes a built-in package called hamcrest for writing unit tests using the R language. The package and its test functions are inspired by the Hamcrest framework. From hamcrest.org: Hamcrest is a framework for writing matcher objects allowing ‘match’ rules to be defined declaratively. The Wikipedia article on Hamcrest gives a good and short explanation of the rationale behind the framework. If you are familiar with the ‘expectation’ functions used in the testthat package for GNU R, then you will find many similarities with the assertion and matcher functions in Renjin’s hamcrest package. A test is a single R function with no arguments and a name that starts with test.. Each test function can contain one or more assertions and the test fails if at least one of the assertions throws an error. For example, using the package defined in the previous section: library(hamcrest) library(com.acme.foobar) test.df <- function() { df <- data.frame(x = seq(10), y = runif(10)) assertThat(df, instanceOf("data.frame")) assertThat(dim(df), equalTo(c(10,2))) } Test functions are stored in R script files (i.e. files with extension .R or .S) in the src/test/R folder of your package. Each file should start with the statement library(hamcrest) in order to attach the hamcrest package to the search path as well as a library() statement to load your own package. You can put test functions in different files to group them according to your liking. The central function is the assertThat(actual, expected) function which takes two arguments: actual is the object about which you want to make an assertion and expected is the matcher function that defines the rule of the assertion. In the example above, we make two assertions about the data frame df, namely that it should have class data.frame and that its dimension is equal to the vector c(10, 2) (i.e. ten rows and two columns). The following sections describe the available matcher functions in more detail. 6.4.1. Testing for (near) equality¶ Use equalTo() to test if actual is equal to expected: assertThat(actual, equalTo(expected)) Two objects are considered to be equal if they have the same length and if actual == expected is TRUE. Use identicalTo() to test if actual is identical to expected: assertThat(actual, identicalTo(expected)) Two objects are considered to be identical if identical(actual, expected) is TRUE. This test is much stricter than equalTo() as it also checks that the type of the objects and their attributes are the same. Use closeTo() to test for near equality (i.e. with some margin of error as defined by the delta argument): assertThat(actual, closeTo(expected, delta)) This assertion only accepts numeric vectors as arguments and delta must have length 1. The assertion also throws an error if actual and expected do not have the same length. If their lengths are greater than 1, the largest (absolute) difference between their elements may not exceed delta. 6.4.2. Testing for TRUE or FALSE¶ Use isTrue() and isFalse() to check that an object is identical to TRUE or FALSE respectively: assertThat(actual, isTrue()) assertTrue(actual) # same, but shorter assertThat(actual, identicalTo(TRUE)) # same, but longer 6.4.3. Testing for class inheritance¶ Use instanceOf() to check if an object inherits from a class: assertThat(actual, instanceOf(expected)) An object is assumed to inherit from a class if inherits(actual, expected) is TRUE. Tip Renjin’s hamcrest package also exists as a GNU R package with the same name available at. If you are writing a package for both Renjin and GNU R, you can use the hamcrest package to check the compatibility of your code by running the test files in both Renjin and GNU R. 6.4.4. Understanding test results¶ When you run mvn test within the directory that holds the POM file (i.e. the root directory of your package), Maven will execute both the Java and R unit tests and output various bits of information including the test results. The results for the Java tests are summarized in a section marked with: ------------------------------------------------------- T E S T S ------------------------------------------------------- and which will summarize the test results like: Results : Tests run: 5, Failures: 1, Errors: 0, Skipped: 0 The results of the R tests are summarized in a section marked with: ------------------------------------------------------- R E N J I N T E S T S ------------------------------------------------------- The R tests are summarized per R source file which will look similar to the following example: Running tests in /home/foobar/mypkg/src/test/R Running function_test.R No default packages specified Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.898 Note that the number of tests run is equal to the number of test.* functions in the R source file + 1 as running the test file is also counted as a test.
http://renjin.readthedocs.io/en/latest/writing-renjin-extensions.html
2017-11-17T19:02:24
CC-MAIN-2017-47
1510934803906.12
[]
renjin.readthedocs.io
The Delphix Engine 4.1 provides a wealth of features, bug fixes, and performance improvements. SAP ASE Support Support for virtualizing the SAP ASE database platform is new in Delphix Engine 4.1. Existing Delphix workflows are supported, including Linking, VDB Provisioning, VDB Refresh, VDB Rewind, and V2P. Amazon Web Services / EC2 Support Support for running in the Amazon cloud is new in Delphix Engine versions 4.0.4.0 and 4.1. All existing Delphix workflows are supported. Oracle 12c Pluggable Database Support In the previous release, the Delphix Engine’s support of Oracle 12c was limited to virtualizing databases which did not use the new pluggable database feature. Oracle 12c pluggable databases are now fully supported in Delphix Engine 4.1. Existing Delphix workflows are supported, including pluggable database Linking, Provisioning, VDB Refresh, and VDB Rewind. Application Data Virtualization on Windows In the previous release, the Delphix Engine expanded its data management functionality beyond the database by supporting unstructured file virtualization for UNIX environments. In Delphix Engine 4.1 the ability to virtualize unstructured files on Windows environments was added, and the existing Linking and Provisioning workflows are supported. SQL Server AlwaysOn Availability Groups Support for linking to SQL Server AlwaysOn Availability Groups is new in Delphix Engine 4.1. All existing SQL Server Delphix workflows are supported. Red Gate SQL Backup Support In the previous release, the Delphix Engine supported native SQL Server backups and Quest/NetVault LiteSpeed backups. New in the Delphix Engine 4.1, backups taken with the Red Gate SQL Backup Pro product are also supported. Removal of Reservations Feature In previous Delphix Engine releases, the Delphix Engine provided the infrequently used Reservations feature that allowed for setting the minimum amount of storage space that is allocated to an object, such as a dSource or VDB, or to an entire group. The feature is managed on the capacity screen accessed through the Resources menu in the Delphix Admin application. Ostensibly this feature was to provide a capability for guaranteeing the availability of storage for an object or group, but it instead created confusion around capacity management and usage thresholds in the Delphix Engine. Consequently, the feature was removed in this release of the Delphix Engine. Capacity management remains an important function in the Delphix Engine, and the removal of Reservations will enable frequently requested enhancements to be included in future releases, including the ability to move objects between groups and enhanced management capabilities at the user or project level. Additional Security Settings A security banner can be configured to display customized text whenever a user logs in via the CLI or web UI.
https://docs.delphix.com/display/DOCS41/What's+New+for+Delphix+Engine+4.1
2015-06-29T23:09:43
CC-MAIN-2015-27
1435375090887.26
[]
docs.delphix.com
RibbonTab Class An individual tab within the ASPxRibbon control. Namespace: DevExpress.Web Assembly: DevExpress.Web.v20.2.dll Declaration Related API Members The following members accept/return RibbonTab objects: Remarks The RibbonTab class contains the settings which define an individual tab within the ASPxRibbon control. A tab text can be specified by the RibbonTab.Text property. You can hide the tab headers by setting the ASPxRibbon.ShowTabs property to false. All the tabs are stored in the ribbon control’s ASPxRibbon.Tabs collection. Individual tabs can be accessed using index notation. The appearance of all the tabs is specified by the RibbonStyles.TabContent and ASPxRibbon.StylesTabControl properties. Note that the RibbonTab class has a client-side equivalent - an object of the ASPxClientRibbonTab type.
https://docs.devexpress.com/AspNet/DevExpress.Web.RibbonTab?v=20.2
2022-06-25T01:02:43
CC-MAIN-2022-27
1656103033925.2
[]
docs.devexpress.com
fn:error( [error as xs:QName?], [description as xs:string], [data as item()*] ) as empty-sequence() [1.0 and 1.0-ml only, 0.9-ml has a different signature] Throw the given error. When an error is thrown, the XQuery program execution is stopped. For detailed semantics, see. xquery version "1.0-ml"; let $x := xdmp:random(100) return ( if ( $x gt 50 ) then ( fn:error(xs:QName("ERROR"), "greater than 50") ) else ( "Less than or equal to 50" ) , ": no error was thrown" ) => The error when the random number is greater than 50. Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question.
https://docs.marklogic.com/fn:error
2022-06-25T01:15:11
CC-MAIN-2022-27
1656103033925.2
[]
docs.marklogic.com
New Relic makes it easy for you to ensure data privacy and follow your organization's logs security guidelines with new obfuscation options now available for our log management feature. Log obfuscation requires that your organization be on our newer pricing model and have the Data Plus option. Define regular expressions to mask or hash data you want to protect and easily create custom rules before sensitive information is being stored, all directly available in the Logs user interface without the need for lengthy manual configurations. Try it out: - Go to Logs in New Relic and from the left nav, select Obfuscation. - Select Create expression or Create obfuscation rule to get started. New Relic's log management already helps you troubleshoot issues faster with simplified navigation, dashboard visualizations, the best logs in context, and now, a simple way to control what log data you may be required to protect. To know more, see our docs on log management security and privacy and our docs about the Data Plus option. Suggest a change and learn how to contribute
https://docs.newrelic.com/whats-new/2022/06/whats-new-log-obfuscation
2022-06-25T01:15:11
CC-MAIN-2022-27
1656103033925.2
[]
docs.newrelic.com
IService Remoting Callback Client Interface Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Defines the interface that must be implemented for providing callback mechanism from the remoting listener to the client. public interface IServiceRemotingCallbackClient type IServiceRemotingCallbackClient = interface Public Interface IServiceRemotingCallbackClient
https://docs.azure.cn/zh-cn/dotnet/api/microsoft.servicefabric.services.remoting.v1.iserviceremotingcallbackclient?view=azure-dotnet
2022-06-25T01:19:14
CC-MAIN-2022-27
1656103033925.2
[]
docs.azure.cn
To use a DNS/DHCP Server with Address Manager, you must add the server to the Address Manager configuration. This involves providing information in the Add Server page, and then connecting to the server, or using the addServer method. A successful connection places the DNS/DHCP Server under the control of Address Manager and disables the native DNS/DHCP Server command server agent. Then, the server is no longer managed through the DNS/DHCP Server Management Console and it responds only to commands from Address Manager, where you can deploy services to the servers using deployment roles and server options. If you wish to create and configure server objects before making the actual servers available, you can add the servers without connecting them, through either the Address Manager user interface or the API. The method described here adds the server to an Address Manager configuration only, and does not connect to the server. Before deploying a configuration, you must connect to servers using the addServer method or using the Address Manager web interface. For more information about connecting to existing servers using the Address Manager web interface, refer to Adding DNS/DHCP Servers to Address Manager in the Address Manager Adminstration Guide.
https://docs.bluecatnetworks.com/r/Address-Manager-API-Guide/Servers/9.1.0
2022-06-25T02:25:01
CC-MAIN-2022-27
1656103033925.2
[]
docs.bluecatnetworks.com
pssolar¶ Plot day-light terminators and other sunlight parameters Synopsis¶ gmt pssolar [ -B[p|s]parameters ] [ -C ] [ -G[fill] ] [ -I[lon/lat][+ddate][+zTZ] ] [ -Jparameters ] [ -K ] [ -M ] [ -N ] [ -O ] [-P ] [ -Rregion ] [ -Tdcna[+ddate][+zTZ]] [ -U[stamp] ] [ -V[level] ] [ -Wpen ] [ -X[a|c|f|r][xshift] ] [ -Y[a|c|f|r][yshift] ] [ -bobinary ] [ -oflags ] [ -pflags ] [ -ttransp ] [ --PAR=value ] Description¶ solar calculates closed polygons for the day-night terminator and the civil, nautical and astronomical twilights and either writes them to standard output or uses them for clipping or filling on maps. Required Arguments¶ None. Optional Arguments¶ - -B[p|s]parameters Set map boundary frame and axes attributes. (See full description) (See cookbook information). - -C Formats the report selected by -I using tab-separated fields on a single line. The output is Sun Lon Lat Azimuth Elevation in degrees, Sunrise Sunset Noon in decimal days, day length in minutes, SolarElevationCorrected corrected for the effect of refraction index and Equation of time in minutes. Note that if no position is provided in -Ilon/lat the data after Elevation refers to the point (0,0). - -G[fill] (more …) Select color or pattern for filling of terminators, or give no argument for clipping [Default is no fill or clipping]. Deactivate clipping by appending the output of gmt clip -C. - -I[lon/lat][+ddate][+zTZ] Print current sun position as well as Azimuth and Elevation. Append lon/lat to print also the times of Sunrise, Sunset, Noon and length of the day. Add +ddate in ISO 8601 format, e.g, +d2000-04-25T04:52, to compute sun parameters for this date and time [Default is now]. If necessary, append a time zone via +zTZ. The time zone is given as an offset from UTC. Negative offsets look like −03:00 or −03. Positive offsets look like 02:00 or 02. - -Jparameters Specify the projection. (See full description) (See cookbook summary) (See projections table). - -M Write terminator(s) as a multisegment ASCII (or binary, see -bo) polygons to standard output. No plotting occurs. - -N Invert the sense of what is inside and outside the terminator. Only used with clipping (-G) and cannot be used together with -B. - -Rxmin/xmax/ymin/ymax[+r][+uunit] Specify the region of interest. (See full description) (See cookbook information). - -Tdcna[+ddate][+zTZ] Plot (or dump; see -M) one or more terminators defined via the dcna flags. Where: d means day/night terminator; c means civil twilight; n means nautical twilight; a means astronomical twilight. Add +ddate in ISO format, e.g, +d2000-04-25T12:15:00 to know where the day-night was at that date [Default is now]. If necessary, append time zone via +zTZ. Refer to for definitions of different twilights. - -U[label|+c][+jjust][+odx[/dy]] Draw GMT time stamp logo on plot. (See full description) (See cookbook information). - -V[level] Select verbosity level [w]. (See full description) (See cookbook information). - -W[pen] (more …) Set pen attributes for lines or the outline of symbols [Defaults: width = default, color = black, style = solid]. - -X[a|c|f|r][xshift] Shift plot origin. (See full description) (See cookbook information). - -Y[a|c|f|r][yshift] Shift plot origin. . Print current Sun position and Sunrise, Sunset times at given date, time and time zone: gmt pssolar -I-7.93/37.079+d2016-02-04T10:01:00+z02:00 Plot the day-night and civil twilight: gmt pscoast -Rd -W0.1p -JQ0/14c -Ba -BWSen -Dl -A1000 -P -K > terminator.ps gmt pssolar -R -J -W1p -Tdc -O >> terminator.ps Set up a clip path overlay based on the day/night terminator: gmt pssolar -R -J -G -Tc -O -K >> someplot.ps References¶ Code from the Excel Spreadsheets in Notes¶ Taken from the NOAA site Data for Litigation note. See Also¶ gmt, psclip, pscoast, psxy
https://docs.generic-mapping-tools.org/6.3/pssolar.html
2022-06-25T01:16:54
CC-MAIN-2022-27
1656103033925.2
[]
docs.generic-mapping-tools.org
Tachograph Auto API Level 13 Last Updated Level 12 Added In Level 7 Identifier 0x00 0x64 This page specifies the Auto API protocol for Vehicle Status. Head over to the REST API, iOS SDK, Android SDK or Node.js SDK code reference pages for platform specifics. Drivers working states id: 0x01 name: drivers_working_states name_cased: driversWorkingStates name_pretty: Drivers working states type: types.driver_working_state multiple: true name_singular: driver_working_state Example data_component: "0102" values: driver_number: 1 working_state: working description: Driver nr 1 is working data_component: "0200" values: driver_number: 2 working_state: resting description: Driver nr 2 is resting Drivers time states id: 0x02 name: drivers_time_states name_cased: driversTimeStates name_pretty: Drivers time states type: types.driver_time_state multiple: true name_singular: drivers_time_state Example data_component: "0302" values: driver_number: 3 time_state: four_reached description: Driver nr 3 has reached 4 hours data_component: "0405" values: driver_number: 4 time_state: fifteen_min_before_sixteen description: Driver nr 4 has reached 15 hours and 45 minutes Drivers cards present id: 0x03 name: drivers_cards_present name_cased: driversCardsPresent name_pretty: Drivers cards present type: types.driver_card_present multiple: true name_singular: drivers_card_present Example data_component: "0601" values: driver_number: 6 card_present: present description: Driver nr 6 has a card present data_component: "0700" values: driver_number: 7 card_present: not_present description: Driver nr 7 does not have a card present Vehicle motion id: 0x04 name: vehicle_motion name_cased: vehicleMotion name_pretty: Vehicle motion type: types.detected Example data_component: "01" value: detected description: Detected vehicle in motion Vehicle overspeed id: 0x05 name: vehicle_overspeed name_cased: vehicleOverspeed name_pretty: Vehicle overspeed type: enum size: 1 controls: switch enum_values: - id: 0x00 name: no_overspeed - id: 0x01 name: overspeed Example data_component: "00" value: no_overspeed description: Vehicle is not overspeeding Vehicle direction id: 0x06 name: vehicle_direction name_cased: vehicleDirection name_pretty: Vehicle direction type: enum size: 1 enum_values: - id: 0x00 name: forward - id: 0x01 name: reverse Example data_component: "00" value: forward description: Vehicle is moving forward Vehicle speed id: 0x07 name: vehicle_speed name_cased: vehicleSpeed name_pretty: Vehicle speed type: unit.speed size: 10 description: The tachograph vehicle speed Example data_component: "16014054000000000000" value: kilometers_per_hour: 80 description: Vehicle speed is 80.0km/h
https://docs.high-mobility.com/api-references/auto-api/heavy_duty/tachograph/
2022-06-25T01:46:36
CC-MAIN-2022-27
1656103033925.2
[]
docs.high-mobility.com
./Example.framework/run ${BUDDYBUILD_SECURE_FILES}/file.txt Secure files Secure files allow you to specify API keys, access tokens, or other secrets that your build requires, without having them checked into your repository. Secure files are made available during a build to any process that can use them, including custom build steps. Step 1: Upload a Secure File to buddybuild Launch the buddybuild dashboard and select App Settings. In the left navigation, select Build settings, then Secure files. Select the file you would like to upload and select Upload file. Note that files are limited to a maximum of 50 MiB. Your file is now ready to be consumed by your app. Step 2: Consume the secure file in your build Your secure files: That’s it! For more details, refer to our SDK API guide.
https://docs.buddybuild.com/builds/secrets/secure_files.html
2017-11-18T00:36:59
CC-MAIN-2017-47
1510934804125.49
[array(['../img/Builds---Settings.png', 'The buddybuild dashboard'], dtype=object) array(['../img/Settings---Secure-files---1.png', 'The Secure files button'], dtype=object) array(['../img/Settings---Secure-files---2.png', 'The Secure files screen'], dtype=object)]
docs.buddybuild.com
Accessing the Console You can reach the Omega’s Console in your browser by navigating to, where ABCD is your Omega’s unique identifier. If you don’t know how to find your Omega’s unique identifier you can take a look at our brief guide to finding your Omega’s name Upon loading, you should see the console login page. The default login info is: username: root password: onioneer Your computer and Omega have to be on the same network in order for you to get to the Console! If the above method doesn’t work, try connecting your computer to your Omega’s WiFi Access Point (named Omega-ABCD by default), and then trying to load again. If this doesn’t work, you can navigate to the Omega’s IP Address,.
https://docs.onion.io/omega2-docs/accessing-the-console.html
2017-11-18T00:32:36
CC-MAIN-2017-47
1510934804125.49
[array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Get-Started/img/installing-console-login.png', 'login-page'], dtype=object) ]
docs.onion.io
This page contains information on how JWST and APT obtain and use moving target ephemerides, how the proposer can obtain ephemerides using JPL Horizons, and how JWST's orbit affects the accuracy of the ephemerides. Moving target ephemerides and APT The Visit Planner in the Astronomer's Proposal Tool (APT) uses orbital elements directly from JPL Horizons to construct ephemerides and determine observing windows. See Tutorial: Creating Solar System Targets in APT for step-by-step instructions on obtaining orbital parameters for any moving target. Moving target ephemerides and JWST Using the orbital elements specified in APT and the relative positions of the guide star and the moving target, a 5th-order polynomial is constructed that describes the path of the guide star across the field of view of the Fine Guidance Sensor (FGS). The FGS then tracks the guide star along this path; this keeps the moving target fixed in the science instrument reference frame (see the JWST Moving Target Observing Procedures page for more details). Ephemeris data will not be included in the FITS headers, but pointing information obtained every 64 ms will be included, enabling a reconstruction of the target's motion across the sky (in RA/DEC coordinates). The heliocentric and observer distances will also be included in the FITS headers. JWST in JPL Horizons A nominal JWST orbit (computed in April 2014 for planning purposes) is available in the JPL Horizons ephemeris generation system. There are two ways to specify JWST as the Observer Location in Horizons: "@jwst" (without quotes) or the observatory code "500@-170" (without quotes). The images below provide a quick walkthrough for specifying the Observer Location in Horizons. - Step 1. JPL Horizons home page. On the main page of the JPL Horizons web-interface, locate the "Observer Location" row, as highlighted in the figure below. Click on "change." - Step 2. Observer Location page. After clicking on "change," the user will be redirected to the Observer Location page. Locate the "Lookup Named Location" section and the search box. Input either "@jwst" (without quotes) or "500@-170" (without quotes) and click the "Search" button. The user will be redirected back to the main page. - Step 3. Check that Observer Location updated. After being redirected to the main page, the Observer Location should have updated to read "James Webb Space Telescope (JWST) Spacecraft [500@-170]," as highlighted in the image below. JWST orbit in JPL horizons The observatory orbit will be updated after launch to reflect the actual launch date and the L2 halo orbit that is chosen. The halo orbit can differ substantially depending on launch date and course-corrections, so ephemeris predictions even for main-belt asteroids can be highly uncertain (up to 20 arcminute) until the final JWST orbit has been determined. For trans-Neptunian objects, the initial ephemeris uncertainty will be around 1 arcsecond, but will become much smaller once the orbit of JWST is known. Station-keeping to maintain the observatory's orbit about the L2 point will be performed every 21 days. Therefore, it is expected that the updated JWST orbit from each station-keeping procedure will be sent to JPL and incorporated in Horizons roughly every 21 days. A description of the orbit and the station-keeping procedures can be found on the JWST Orbit page. Related links JWST User Documentation Home JWST Moving Target Observing Procedures JWST Orbit Tutorial: Creating Solar System Targets in APT Moving target articles
https://jwst-docs.stsci.edu/display/JPP/JWST+Moving+Target+Ephemerides
2017-11-18T01:08:29
CC-MAIN-2017-47
1510934804125.49
[]
jwst-docs.stsci.edu
JWST NIRCam mosaics are used primarily to cover areas larger than the 5.1' × 2.2' NIRCam field of view. JWST mosaics may be used in NIRCam imaging and wide field slitless spectroscopy observations. The primary purpose is to cover areas larger than the 5.1' × 2.2' NIRCam Field of View. The spatial extent of each tile with zero overlap is 5.115033' × 2.221150' (the horizontal and vertical extent of the NIRCam aperture "ALL"). This spacing may be decreased (to make the pointings overlap, as is the default) or increased (to widen the gaps between pointings). By default, spacing is defined along NIRCam's "ALL" aperture axis, which is roughly aligned with the JWST (V2, V3) coordinate system and the NIRCam detector rows and columns. Optionally, shifts may be added between rows and/or columns to skew the pattern; these are input as rotation angles in the APT mosaic planning. All dithers and filter changes are performed at each pointing (or tile) in the mosaic before proceeding to the next. To obtain roughly even observing depth over large areas, consider using the NIRCam "FULL" primary dither patterns in conjunction with mosaics with tile spacings of 5.8' × 2.25'. Related links JWST User Documentation Home NIRCam Apertures NIRCam Detectors NIRCam Dithers and Mosaics NIRCam Field of View NIRCam Filters NIRCam Imaging NIRCam Overview NIRCam Modules NIRCam Primary Dithers NIRCam Wide Field Slitless Spectroscopy
https://jwst-docs.stsci.edu/display/JTI/NIRCam+Mosaics
2017-11-18T01:06:54
CC-MAIN-2017-47
1510934804125.49
[]
jwst-docs.stsci.edu
To discover the Weblogic administration server, the vRealize Hyperic agent must run under an account with permissions that allow it to obtain the administration server's process arguments, and its current working directory. Procedure - On Unix-based platforms, ensure that the vRealize Hyperic agent has the permissions it needs by running one of the following: Run the vRealize Hyperic agent as the same user id that runs the Administration Server. Run the vRealize Hyperic agent as root. After the agent has discovered the Administration Server, it no longer needs the permissions required for process discovery. (The agent uses JMX to discover the Weblogic managed servers.) - After discovery of the admin server, update the permissions to the vRealize Hyperic agent directory. As superuser, run the following command in the agent directory: chown user -R where useris the username you will use to run the agent, for example, hqadmin.Note: If you cannot run the agent as the Administration Server user or as root, follow the directions in Add Administration Server Manually .
https://docs.vmware.com/en/vRealize-Hyperic/5.8.4/com.vmware.hyperic.resource.configuration.metrics.doc/GUID-E2F4CF29-B184-4977-8B29-DA0F6960AE7E.html
2017-11-18T01:14:31
CC-MAIN-2017-47
1510934804125.49
[]
docs.vmware.com
You use the Notifications page to manage your individual alert notification rules. The rules determine which vRealize Operations Manager alerts are sent to the supported target systems. How Notifications Work You add, manage, and edit your notification rules from this page. To send notifications to a supported system, you must configure and enable the settings for outbound alerts. The supported outbound notification plug-ins include the Standard Email plug-in, REST plug-in, SNMP Trap plug-in, and the Log File plug-in. Before you can create and manage your notification rules, you must configure the outbound alert plug-in instances. Where You Find Notifications To manage your notifications, select Content in the left pane, and click Notifications.
https://docs.vmware.com/en/vRealize-Operations-Manager/6.3/com.vmware.vcom.core.doc/GUID-DDBA0534-CD40-4AF0-9C82-E2FE4C8744FB.html
2017-11-18T01:13:31
CC-MAIN-2017-47
1510934804125.49
[]
docs.vmware.com
Conf. Table of Contents - Highlights - Previous releases -. Previous releases¶ Confluent Platform 3.1.1 as it includes both new major functionality as well as important bug fixes. The technical details of this release are summarized below. Highlights¶ Confluent Platform. Highlights¶: Kafka Streams¶start. Confluent Control Center¶ Installation. A term license for Confluent Control Center is available for Confluent Platform Enterprise Subscribers, but any user may download and try Confluent Control Center for free for 30 days. Apache Kafka 0.10.0.0¶AKFA. Deprecating Camus¶. Other Notable Changes¶ Confluent Platform Kafka Enforcing Client.
https://docs.confluent.io/current/release-notes.html
2017-11-18T01:04:39
CC-MAIN-2017-47
1510934804125.49
[]
docs.confluent.io
Assembly: AWSSDK.dll Version: (assembly version) Container for the necessary parameters to execute the RegisterDevice service method. .NET Framework: Supported in: 4.5, 4.0, 3.5
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MCognitoSyncICognitoSyncRegisterDeviceRegisterDeviceRequestNET45.html
2017-11-18T00:34:45
CC-MAIN-2017-47
1510934804125.49
[]
docs.aws.amazon.com
Blueprint architects build Software components, machine blueprints, and custom XaaS blueprints and assemble those components into the blueprints that define the items users request from the catalog. You can create and publish blueprints for a single machine, or a single custom XaaS blueprint, but you can also combine machine components and XaaS blueprints with other building blocks to design elaborate catalog item blueprints that include multiple machines, networking and security, software with full life cycle support, and custom XaaS functionality. Depending on the catalog item you want to define, the process can be as simple as a single infrastructure architect publishing one machine component as a blueprint, or the process can include multiple architects creating many different types of components to design a complete application stack for users to request. Software Components You can create and publish software components to install software during the machine provisioning process and support the software life cycle. For example, you can create a blueprint for developers to request a machine with their development environment already installed and configured. Software components are not catalog items by themselves, and you must combine them with a machine component to create a catalog item blueprint. Machine Blueprints You can create and publish simple blueprints to provision single machines or you can create more complex blueprints that contain additional machine components and optionally any combination of the following component types: Software components Existing blueprints NSX network and security components XaaS components Containers components Custom or other components XaaS Blueprints You can publish your vRealize Orchestrator workflows as XaaS blueprints. For example, you can create a custom resource for Active Directory users, and design an XaaS blueprint to allow managers to provision new users in their Active Directory group. You create and manage XaaS components outside of the design tab. You can reuse published XaaS blueprints to create application blueprints, but only in combination with at least one machine component. Application Blueprints with Multi-Machine, XaaS, and Software Components You can add any number of machine components, Software components, and XaaS blueprints to a machine blueprint to deliver elaborate functionality to your users. For example, you can create a blueprint for managers to provision a new hire setup. You can combine multiple machine components, software components, and a XaaS blueprint for provisioning new Active Directory users. The QE Manager can request your New Hire catalog item, and their new quality engineering employee is provisioned in Active Directory and given two working virtual machines, one Windows and one Linux, each with all the required software for running test cases in these environments.
https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-29B7A5D0-AC2F-41BE-8788-F8004077278D.html
2017-11-18T01:02:39
CC-MAIN-2017-47
1510934804125.49
[array(['images/GUID-FE80F3D4-9CD0-4111-BF63-5E545D2A8730-high.png', 'Diagram of Creating Blueprints'], dtype=object) ]
docs.vmware.com
1 Introduction The Mendix Cloud supports adding custom domains such as to your environments. As we only allow HTTPS connections, you have to provide a custom domain certificate (an SSL/TLS certificate). This how-to walks you through the process. This option is available for free for licensed apps. You cannot add custom domains to free apps. This how-to will teach you how to do the following: - Generate a certificate request for your custom domain - Upload a custom domain certificate to the Mendix Cloud - Renew a custom domain certificate - Configure a custom domain for your environment 2 Prerequisites 2.1 General Prerequisites Before starting this how-to, make sure you have completed the following prerequisites: - Have a basic knowledge of DNS - Have a basic knowledge of SSL/TLS certificates: - What an SSL/TLS certificate is and what it is used for - What an intermediate certificate chain is and what it is used for - What an SSL/TLS private key is and what it is used for - What a certificate request is and what it is used for - Have a basic knowledge of certificate authorities (like GeoTrust, Thawte, Verisign, RapidSSL, GoDaddy, Comodo) - Have the correct permissions (for more information, see Security-Node Permissions) 2.2 Domain Registrar/DNS Provider Before configuring your custom domain in the Mendix Cloud, you will need to configure a DNS record for your custom domain at your domain registrar/DNS provider. Please create a CNAME record and point it to [YOUR-CUSTOM-DOMAIN].cname.mendix.net.. For example, when your custom domain is myapp.mycompany.com, create a CNAME record to myapp.mycompany.com.cname.mendix.net. so that Mendix can point your custom domain to your Mendix app. It’s not possible to create a CNAME record for an apex/naked domain (meaning, a domain without a subdomain, like mycompany.com), as custom apex/naked domains are currently not supported. 3 Managing Custom Domains in the Mendix Cloud Custom domain certificates (or just “certificates”) and custom domains are managed in separate locations in the Mendix Cloud. Certificates are currently managed on the application level. You can have a collection of certificates. For example when your certificate expires, you can upload a new certificate next to your old certificate. Those can be chosen when you configure a custom domain. This is done on the environment level (test, acceptance, production). To manage custom domains, follow these steps: - Go to the Developer Portal and click Apps in the top navigation panel. - Click My Apps and select Nodes. - Select the node you want to manage. - Click Environments under the Deploy category. - Go to the Custom Domains tab. 4 Generating a Certificate Request for your Custom Domain When you do not have an SSL/TLS certificate you have to order one at a certificate authority (like GeoTrust, Thawte, Verisign, RapidSSL, GoDaddy, or Comodo). In order to get a signed SSL/TLS certificate from a certificate authority, you need to provide a certificate signing request (CSR). A private A SSL/TLS key and a CSR tied to that key can be created in the Mendix Cloud for you. To create a certificate signing request and an RSA key, follow these steps: Click New and then click Create a Certificate Request: Fill in and submit the provided fields: Click Generate. An SSL/TLS private key and a certificate request is generated. The certificate request will be shown in the PEM format. The SSL/TLS private key will be stored in our secure keystore. It will not be available for download in order to keep it secure. 4.1 Upload Signed Certificate Before uploading a certificate, go to your certificate authority to get a signed SSL/TLS certificate. After you have received the signed SSL/TLS certificate, you can upload it by following these steps: - Select the custom domain certificate. - Click Upload Signed Certificate. Here you can change the description of your certificate and upload the signed SSL/TLS certificate. You can also upload an intermediate certificate chain. The intermediate certificate chain is often provided by your certificate authority. 5 Uploading Your Own Custom Domain Certificate To upload a custom domain certificate, you need to have the following things prepared: - An SSL/TLS certificate that is signed by your certificate authority - An intermediate certificate chain provided by your certificate authority - An SSL/TLS private key To upload the custom domain certificate, follow these steps: Click New and then click Upload Certificate, Chain and Key: Enter the SSL/TLS certificate, intermediate certificate chain, and SSL/TLS private key in the provided fields. Optionally, you can give your custom domain certificate a description. The description is used when selecting the custom domain certificate when configuring a custom domain later on. Click Save to save your new custom domain certificate. It will be uploaded to the Mendix Cloud automatically. The SSL/TLS private key will be hidden after uploading it. It will be stored in our secure keystore and will not be available for download in order to keep it secure. 6 Renewing a Custom Domain Certificate There are two methods for renewing a custom domain certificate: - Create a new custom domain certificate (recommended). - Update an existing custom domain certificate. 6.1 Method 1: Creating a New Custom Domain Certificate (Recommended) When a custom domain certificate is about to expire, you can renew it by generating a new certificate request (for more information, see 4 Generating a Certificate Request for Your Custom Domain) or by uploading a new custom domain certificate (for more information, see 5 Uploading Your Own Custom Domain Certificate). Now select the new certificate for your custom domain (for more information, see 7 Configuring a Custom Domain). 6.2 Method 2: Renewing by Updating an Existing Custom Domain Certificate You can also renew your custom domain certificate by editing an existing custom domain certificate. Please be aware that the certificate request that you created in the past is required for that. 7 Configuring a Custom Domain After a custom domain certificate has been uploaded, you can start configuring a custom domain for one of your application environments. To configure a custom domain on your application environment, follow these steps: - Click Environments under the Deploy category. Click Details for the environment you want to configure: Go to the Network tab: In Custom Domains, you can manage your custom domains. You can configure a custom domain by doing the following: - Providing a Domain name (like myapp.mycompany.com) - Selecting a custom domain Certificate you have uploaded above Click Save to save your custom domain. It will be configured for your application environment automatically. Please make sure you’ve configured a CNAME record for your custom domain at your domain registrar/DNS provider (for details, see 2.2 Domain Registrar/DNS Provider). 8 Frequently Asked Questions 8.1 Can I Create a *.mycompany.com Wildcard Certificate? Yes. However, when you create the certificate request via the Mendix Cloud, you will only be able to use the wildcard certificate for all the environments of only one application. When you have your own custom domain certificate, you can upload it to all of your apps and use it for all the environments of all of your apps. You can select the same wildcard certificate per environment by specifying different subdomains. For example, test.mycompany.com, accp.mycompany.com, and app.mycompany.com. 8.2 How Do I Properly Construct an Intermediate Certificate Chain? Your certificate is signed by the certificate authority. They sign your certificate with their intermediate certificate. They also sign their intermediate certificate with their own root certificate. Almost always, the intermediate certficate chain that you will need is just one intermediate certificate. Sometimes there is more then one intermediate certificate; this depends on the CA you use. You do not need to provide the root certificate, as every web browser has it in its trusted keystore. An intermediate certificate chain chain could look like this from top to bottom: - Intermediate certificate 2 - Intermediate certificate 1 - Root certificate (optional)
https://docs.mendix.com/developerportal/howto/custom-domains
2017-11-18T00:50:31
CC-MAIN-2017-47
1510934804125.49
[array(['attachments/deploy/21168230.png', None], dtype=object) array(['attachments/deploy/21168233.png', None], dtype=object) array(['attachments/deploy/21168226.png', None], dtype=object) array(['attachments/deploy/certificate.jpg', None], dtype=object) array(['attachments/deploy/21168227.png', None], dtype=object)]
docs.mendix.com
Once you have set up a cluster with ceph-deploy, you may provide the client admin key and the Ceph configuration file to another host so that a user on the host may use the ceph command line as an administrative user. To enable a host to execute ceph commands with administrator privileges, use the admin command. ceph-deploy admin {host-name [host-name]...} To send an updated copy of the Ceph configuration file to hosts in your cluster, use the config push command. ceph-deploy config push {host-name [host-name]...} Tip With a base name and increment host-naming convention, it is easy to deploy configuration files via simple scripts (e.g., ceph-deploy config hostname{1,2,3,4,5}).
http://docs.ceph.com/docs/jewel/rados/deployment/ceph-deploy-admin/
2017-06-22T22:07:46
CC-MAIN-2017-26
1498128319912.4
[]
docs.ceph.com
Casts¶ Mypy supports type casts that are usually used to coerce a statically typed value to a subtype. Unlike languages such as Java or C#, however, mypy casts are only used as hints for the type checker, and they don’t perform a runtime type check. Use the function cast to perform a cast: from typing import cast, List o = [1] # type: object x = cast(List[int], o) # OK y = cast(List[str], o) # OK (cast performs no actual runtime check) To support runtime checking of casts such as the above, we’d have to check the types of all list items, which would be very inefficient for large lists. Use assertions if you want to perform an actual runtime check. Casts are used to silence spurious type checker warnings and give the type checker a little help when it can’t quite understand what is going on. You don’t need a cast for expressions with type Any, or when assigning to a variable with type Any, as was explained earlier. You can also use Any as the cast target type – this lets you perform any operations on the result. For example: from typing import cast, Any x = 1 x + 'x' # Type check error y = cast(Any, x) y + 'x' # Type check OK (runtime error)
http://mypy.readthedocs.io/en/latest/casts.html
2017-06-22T22:05:21
CC-MAIN-2017-26
1498128319912.4
[]
mypy.readthedocs.io
Certain features of BMC Discovery need command line access to configure. For example, the appliance obtains its IP address using DHCP by default, although you can configure it at the command line to use a static IP address. Default command line passwords BMC. For more information, see the upload user. The first time you log in to the appliance UI, you are forced to change the UI system user password and the passwords for the tideway and root command line users. To change the command line user passwords subsequently, log in using the VMware Server Console, or ssh into the appliance using a terminal. (You cannot ssh into the appliance as the root user. You must first log in as the tideway user and su to root.)) Password quality The default requirements for command line passwords are that they must have at least one lowercase letter, one uppercase letter, one numeric character, and one special character. They must also contain a minimum of six characters, at least four. - Still logged in as the root user, change the tideway user's password using the passwdcommand and be used only for the following tasks: - Changing the default Linux passwords - Upgrading the BMC Discovery application software - Adding new disks to the appliance Any other task on the appliance that typically requires a UNIX/Linux root user can be undertaken using sudo to temporarily grant additional privileges to the tideway user. The tideway user The tideway user is a powerful administrative account that can perform virtually all of the tasks that would need to undertake using the command line. Examples of such tasks are: - All command line utilities - Customizations (taxonomy, reports, and visualizations) - Managing log files (although this can be done through the UI) - Starting and stopping services (with additional privileges accessed through sudo) Additional users The following additional users are configured on the appliance: 1 Comment Volker Scheithauer
https://docs.bmc.com/docs/display/DISCO110/Logging+in+to+the+appliance+command+line
2017-06-22T22:16:32
CC-MAIN-2017-26
1498128319912.4
[]
docs.bmc.com
- Security > - Authentication > - Enterprise Authentication Mechanisms > - LDAP Proxy Authentication > - Authenticate and Authorize Users Using Active Directory via Native LDAP Authenticate and Authorize Users Using Active Directory via Native LDAP¶ On this page New in version 3.4: MongoDB Enterprise. Considerations¶ This tutorial explains configuring MongoDB for AD authentication and authorization. To perform this procedure on your own MongoDB server, you must modify the given procedures with respect to your own specific infrastructure, especially Active Directory¶. Connect to the MongoDB server.¶ Connect to the MongoDB server using the mongo shell using the --host and --port options. mongo --host <hostname> --port <port>. You must also enable LDAP authentication by setting security.authorization to enabled and setParameter authenticationMechanisms to PLAIN Example To connect to an AD server located at activedirectory.example.net, include the following in the configuration file: security: authorization: "enabled" ldap: servers: "activedirectory.example.net" setParameter: authenticationMechanisms: 'PLAIN'. Example. security: ldap: userToDNMapping: '[ { match : "(.+)", ldapQuery: "DC=example,DC=com??sub?(userPrincipalName={0})" } ]': DC=example,DC=com??sub?([email protected]). Configure query credentials.¶ MongoDB requires credentials for performing queries on the AD server. Configure the following settings in the configuration file: security.ldap.bind.queryUser, specifying the Active Directory user the mongodor mongosbinds as for performing queries on the AD server. security.ldap.bind.queryPassword, specifying the password for the specified queryUser. security: ldap: bind: queryUser: "[email protected]" queryPassword: "secret123" Add any additional configuration options required for your deployment. For example, you can specify your desired storage.dbPath or change the default net.port number.. mongod --config <path-to-config-file> Windows MongoDB deployments must use mongod.exe instead of mongod.. mongo --username [email protected] --password 'secret123' --authenticationMechanism 'PLAIN' --authenticationDatabase '$external' --host <hostname> --port <port>-SHA-1 in the setParameter authenticationMechanisms configuration option. setParameter: authenticationMechanisms: "PLAIN,SCRAM-SHA-1": "PLAIN" The given sample configuration requires modification to match your Active Directory schema, directory structure, and configuration. You may also require additional configuration file options for your deployment. For more information on configuring roles and privileges, see: © MongoDB, Inc 2008-2017. MongoDB, Mongo, and the leaf logo are registered trademarks of MongoDB, Inc.
https://docs.mongodb.com/v3.4/tutorial/authenticate-nativeldap-activedirectory/
2017-06-22T22:14:41
CC-MAIN-2017-26
1498128319912.4
[]
docs.mongodb.com
Example Repositories¶ Each of the below repositories provides an example of Solano CI passing configurations and tests. The solano.yml (or tddium.yml) file in each one, along with any setup hook or test-initiating scripts, demonstrate various aspects of Solano CI capabilities. Android¶ GitHub Android app - Demonstrates downloading and cacheing Gradle and the Android SDK. Gradle tasks from decoupled projects can usually be built as separate Solano CI tests without any additional configuration. Go¶ Geographical calculations in Go - Demonstrates using Solano CI supplied environment variables to connect to PostgreSQL and MySQL and installing and cacheing dependencies. Python¶ Sphinx Python document generator - The tddium-parallel branch demonstrates Solano CI automatically running nosetests across parallel workers, and is used to generate this very doc site. PHP¶ Guzzle PHP HTTP client - Uses the Solano PHPUnit wrapper to implement Solano CI’s parallel PHPUnit testing functionality. Demonstrates the use of Composer to install dependencies and have them cached by Solano. Laravel 5 Example - Demonstrates a Laravel web application running PHPUnit tests in parallel on Solano CI. NodeJS and io.js¶ Karma Sauce Labs - Demonstrates using Sauce Connect to run Selenium tests on Sauce Labs. Express - Demonstrates using Yarn instead of npm as a NodeJS package manager, including specifying appropriate cache paths. Intern JS code testing stack - Uses Intern to test itself. Demonstrates using npm to install dependencies that are cached by Solano and executing tests with a standalone Selenium server. Docker¶ CI memes docker web app - Demonstrates pulling a Docker image from Docker Hub, building it, and running a simple testsuite. CI memes using Docker Compose - Demonstrates launching an application stack with docker-compose to test against. CI memes Amazon ECS/ECR - Demonstrates pulling Docker images from AWS EC2 Container Registry (ECR), testing, and pushing to ECR and deploying to AWS EC2 Container Service (ECS). Universal Worker Docker - Demonstrates running Docker builds on Solano CI Universal Workers. Ruby¶ Solano - The solano gem implements a command line interface for you to start a parallel run from your local git workspace, along with ways to edit configuration and retrieve your account status all from within your terminal. Erlang¶ Hello Phoenix - Demonstrates using Erlang, Elixir, and the Phoenix web framework on Solano CI AWS CodePipeline¶ AWS CodePipeline sample_application - Demonstration repo for a complex AWS CodePipeline Continuous Deployment setup using Solano CI as the test runner. Watch the webinar to learn more Java¶ Spark Examples - Demonstrates installing and using stand-alone Apache Spark on Solano CI workers.
http://docs.solanolabs.com/Setup/example-repos/
2017-06-22T22:11:20
CC-MAIN-2017-26
1498128319912.4
[]
docs.solanolabs.com
Supported Python features and modules¶ A list of unsupported Python features is maintained in the mypy wiki: Runtime definition of methods and functions¶ By default, mypy will complain if you add a function to a class or module outside its definition – but only if this is visible to the type checker. This only affects static checking, as mypy performs no additional type checking at runtime. You can easily work around this. For example, you can use dynamically typed code or values with Any types, or you can use setattr or other introspection features. However, you need to be careful if you decide to do this. If used indiscriminately, you may have difficulty using static typing effectively, since the type checker cannot see functions defined at runtime.
http://mypy.readthedocs.io/en/latest/supported_python_features.html
2017-06-22T22:05:09
CC-MAIN-2017-26
1498128319912.4
[]
mypy.readthedocs.io
User Guide Local Navigation Pan a map To pan a map, your BlackBerry smartphone must be in pan mode and not zoom mode. The mode that your smartphone is in is displayed at the top of the map. Next topic: Zoom in to or out from a map Previous topic: Copy a location Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/37644/Pan_a_map_61_1486170_11.jsp
2014-10-20T08:38:07
CC-MAIN-2014-42
1413507442288.9
[]
docs.blackberry.com
There First, tell Sonar which code coverage engine has been used to generate the reports: jacoco or cobertura or emma or clover. Then, tell Sonar where to get the code coverage reports:.
http://docs.codehaus.org/pages/viewpage.action?pageId=230397973
2014-10-20T08:07:53
CC-MAIN-2014-42
1413507442288.9
[]
docs.codehaus.org
Many payment providers use notifications, generally described as "IPNs", "endpoints", or "webhooks", to submit information asynchronously to the payment gateways that support them. Payment providers may inform a Drupal Commerce site that a new pending/complete payment should be created (if the payment happened off-site), or they may provide information about an existing payment (refunds, disputes, etc). The Drupal Commerce Payment module handles these notifications by: PaymentNotificationController, that will pass the received information on to a payment gateway that can process it. If your payment gateway module needs to handle IPNs, it can do so by implementing the SupportsNotificationsInterface. This interface defines the onNotify() method, which is the method called by the PaymentNotificationController: /** * Processes the notification request. * * @param \Symfony\Component\HttpFoundation\Request $request * The request. * * @return \Symfony\Component\HttpFoundation\Response|null * The response, or NULL to return an empty HTTP 200 response. */ public function onNotify(Request $request); onNotify()method do? The onNotify() method processes the notification request. It can create new payments or update existing payments. Typically, it will update the state of a payment based on the information in the request. If the state is set to completed, the amount of the payment will be included in the "total paid amount" for the order. The onNotify() method does not need to (and should not) touch the parent order. When the payment is saved in the onNotify() method, the total paid amount for the order will be automatically updated, based on all payments associated with the order.. You may also want to log the request or other message, especially if the request was invalid. The onNotify() method should return a Symfony Response or NULL to return an empty HTTP 200 response. All off-site payment gateways implement the SupportsNotificationsInterface interface. Generally, off-site payment gateways will create payments in the onReturn() method. However, if the payment provider supports IPNs, then creating the payment in onNotify() rather than in onReturn() is preferred, since it is guaranteed to be called even if the customer does not return to the site. The PayPlug payment gateway module has a straightforward implementation of the onNotify() method, which is used to create the payment. First, the Request is validated, using a library provided by PayPlug: $notification = $request->getContent(); Payplug::setSecretKey($this->api_key); $resource = \Payplug\Notification::treat($notification, $authentication = null); If validation fails, it returns a JsonResponse with the exception thrown by the PayPlug Notification::treat() method: return new JsonResponse($exception->getMessage(), $exception->getCode()); Otherwise, it uses the returned PayPlug resource value to create a new payment for the order and returns an empty (success) Response: $metadata = $resource->metadata; $payment_storage = $this->entityTypeManager->getStorage('commerce_payment'); $payment = $payment_storage->create([ 'state' => 'authorization', 'amount' => new Price($resource->amount / 100, $resource->currency), 'payment_gateway' => $this->entityId, 'order_id' => $metadata['order_id'], 'test' => $this->getMode() == 'test', 'remote_id' => $resource->id, 'remote_state' => empty($resource->failure) ? 'paid' : $resource->failure->code, 'authorized' => $this->time->getRequestTime(), ]); $payment->save(); return new JsonResponse(); The Ingenico payment gateway is an example of an off-site payment gateway that creates the payment before the plugin form performs the redirect. So the payment is created in neither onReturn() nor onNotify(). The Drupal Commerce payment ID is provided to the payment provider so that the existing payment can be loaded in onReturn() and onNotify(): $payment = $this->entityTypeManager->getStorage('commerce_payment')->load($request->query->get('PAYMENT_ID')); In both methods, every request is logged before any processing happens: // Log the response message if request logging is enabled. if (!empty($this->configuration['api_logging']['response'])) { \Drupal::logger('commerce_ingenico') ->debug('e-Commerce notification: <pre>@body</pre>', [ '@body' => var_export($request->query->all(), TRUE), ]); } Next, the response is verified using SHA signature/passphrase validation, as described in the Security considerations documentation. If the response received from the payment provider is invalid or unsuccessful, the payment state is set to failed and an exception is thrown. $payment->set('state', 'failed'); $payment->save(); throw new InvalidResponseException($this->t('The gateway response looks suspicious.')); Finally, if the request is valid, the onNotify() method updates the payment state: // Let's also update payment state here - it's safer doing it from received // asynchronous notification rather than from the redirect back from the // off-site redirect. $state = $request->query->get('STATUS') == PaymentResponse::STATUS_AUTHORISED ? 'authorization' : 'completed'; $payment->set('state', $state); $payment->save(); The Ingenico payment gateway uses IPNs from the payment provider to more reliably capture the correct payment state. The payment state is only set to authorized or completed in the onNotify() method; the onReturn() method does not change a payment's state. The PayPal: Express payment gateway creates payments in its onReturn() method with a remote_id value that can be used by onNotify() (and other methods) to load the payment, using the loadByRemoteId() Payment storage method. Its onNotify() method handles updates to the payment amount and state as well as refunds. Here is the portion of its onNotify() method that handles refunds using the IPN data: elseif ($ipn_data['payment_status'] == 'Refunded') { // Get the corresponding parent transaction and refund it. $payment = $payment_storage->loadByRemoteId($ipn_data['txn_id']); if (!$payment) { $this->logger->warning('IPN for Order @order_number ignored: the transaction to be refunded does not exist.', ['@order_number' => $ipn_data['invoice']]); return FALSE; } elseif ($payment->getState() == 'refunded') { $this->logger->warning('IPN for Order @order_number ignored: the transaction is already refunded.', ['@order_number' => $ipn_data['invoice']]); return FALSE; } $amount = new Price((string) $ipn_data['mc_gross'], $ipn_data['mc_currency']); // Check if the Refund is partial or full. $old_refunded_amount = $payment->getRefundedAmount(); $new_refunded_amount = $old_refunded_amount->add($amount); if ($new_refunded_amount->lessThan($payment->getAmount())) { $payment->setState('partially_refunded'); } else { $payment->setState('refunded'); } $payment->setRefundedAmount($new_refunded_amount); } Handling IPNs from PayPal involves an extra validation step: PayPal includes a validation URL in its IPN data. So the first step for the onNotify() method is to extract that URL from the request and use it to send a request back to PayPal to validate the IPN. In the Commerce PayPal module, this functionality is included in an IPNHandler service: // Make PayPal request for IPN validation. $url = $this->getIpnValidationUrl($ipn_data); $validate_ipn = 'cmd=_notify-validate&' . $request->getContent(); $request = $this->httpClient->post($url, [ 'body' => $validate_ipn, ])->getBody(); $paypal_response = $this->getRequestDataArray($request->getContents()); // If the IPN was invalid, log a message and exit. if (isset($paypal_response['INVALID'])) { $this->logger->alert('Invalid IPN received and ignored.'); throw new BadRequestHttpException('Invalid IPN received and ignored.'); } return $ipn_data; See the Security considerations documentation for additional information on how PayPal uses token-based validation for requests sent to the payment gateway. By default, your Drupal Commerce site can accept payment gateway requests at /payment/notify/PAYMENT_GATEWAY_ID, where PAYMENT_GATEWAY_ID is the id defined by the payment gateway's annotation. For example, PayPal: Express checkout accepts notifications at /payment/notify/paypal_express_checkout. You will need to read the documentation for your specific payment gateway to figure how to enable IPN/notification messages and how to configure the URL. If you would like to alter the URL for notifications, you can implement a Route Subscriber for the commerce_payment.notify route. Found errors? Think you can improve this documentation? edit this page
https://docs.drupalcommerce.org/commerce2/developer-guide/payments/create-payment-gateway/off-site-gateways/handling-ipn
2020-07-02T13:31:27
CC-MAIN-2020-29
1593655878753.12
[]
docs.drupalcommerce.org
Back to Enjin Back to Enjin Java SDK Java SDK Getting Started Getting started with the Enjin SDK for Java. Authentication Examples for authenticating a server client and player. Player Management How to get player identities, link players, and more. Creating Requests Creating ENJ approval, advanced send, and other requests with the Java SDK..
https://docs.enjin.io/tag/java-sdk/
2020-07-02T11:17:41
CC-MAIN-2020-29
1593655878753.12
[]
docs.enjin.io
What is a GIF. File Format Conceptually, GIF files have a fixed-sized graphical area filled by zero or more images. Some GIF files divide the fixed-sized graphical area or blocks into sub-images capable of functioning as animated frames in case of animated GIF. The GIF format uses the pixel depths of 1 to 8 bits to store the bitmap data. RGB colour model and palette data are always used to store the images. Depending upon the version, a fixed-length header (“GIF87a” or “GIF89a”) defines the start of a typical GIF file. Currently, two versions of GIF: 87a and 89a are available. The former is the original GIF format while the latter is the new GIF format. In this file format, the characteristics of the blocks and pixel dimensions are mentioned in a fixed-length Logical Screen Descriptor. The existence and size of a Global Colour Table may be specified by the screen descriptor, which tracks further details if present. The trailer is the last byte of the file that holds a single byte of an ASCII semicolon. A typical GIF87a file layout is as follow: Header The Header holds six bytes and is used to specify the type of file as GIF. Though the Logical Screen Descriptor is separated from actual header yet sometimes it is considered as the second header. Same structure which is used to store the header may store the Logical Screen Descriptor. All GIF files start with the 3-byte signature and uses the characters” GIF” as an identifier. The version is also three bytes in size and declares the version of the GIF file. Logical Screen Descriptor A fixed-length Image Descriptor specifies the screen and colour information necessary to create the GIF image. The Height and Width fields enclose the smallest value of screen resolution, obligatory to show the image data. If the display device is incapable of displaying the specified resolution, scaling will be needed to suitably display the image. Screen and Colour Map Information is displayed by the four subfields of the table below (whereas bit 0 is the least significant bit): Global Colour Table An optional Global Colour Table is placed right after the Logical Screen Descriptor. This table mapped to index the pixel colour data inside the image data. In the absence of a Global Colour Table, each image in the GIF file uses its Local Colour. It’s better to supply a default colour table if both Global and Local Colour Table is missing. A series of three-byte triples composes the elements of the colour table. Each byte characterizes an RGB colour value. The red, green, and blue colours are used as values of each colour table element .The entries in the Global Colour Table hits a maximum of 256 entries and always represent in a power of two. Image Data The image data stores a byte of unencoded symbols followed by linked-list of sub- along with the LZW-encoded data. Trailer Trailer represents single byte of data that is the last character in the file. The value of this byte is permanently 3Bh and specifies the end of the data stream. Every GIF file must have the trailer in the last of each file.
https://docs.fileformat.com/image/gif/
2020-07-02T11:26:45
CC-MAIN-2020-29
1593655878753.12
[]
docs.fileformat.com
Next Game Frontier 2014: videos & content Microsoft and Samsung have co-organized the first edition of the Next Game Frontier conference, a technical web event purely dedicated to games made with web standards. The initial idea came after a discussion we had with Daniel Glazman from Samsung, David Catuhe and I during Paris Web 2013. Find more photos on Flickr : Next Game Frontier album (credits to Prélude) We’ve decided to call it “Next Game Frontier” because we think that the web as a platform could be the next frontier for the gaming industry after the PC, the console and the mobile. I’m quite happy of the agenda we’ve managed to build. You’ll find it there: . Microsoft, Mozilla, Samsung, Ubisoft & well known speakers such as Jean-Marc Le Roux from Aerys, Jerome Etienne from Beloola & and Dominique from the W3C. Quick introduction Create a 3D game with WebGL and Babylon.js You can watch the replay on Youtube: You can find my slides on Slideshare: NGF2014 - Create a 3d game with webgl and babylon.js If you’d like to test my demos, click on the following links: 1 - JS Software Engine Demo , associated tutorials: Tutorial series: learning how to write a 3D soft engine from scratch in C#, TypeScript or JavaScript 2 - Babylon.js Offline demos (with IndexedDB support) 3 - Omega Crusher - Simple shoot'em up 4 - Simple demo to let you drive a car 5 - Light Speed - a simple space shooter 6 - myGeoLive 3D Bing Maps 7 - BabylonHX - a HAXE port of Babylon.js 8 - Coding4Fun tutorial: creating a 3D WebGL procedural QRCode maze with Babylon.js And of course, you can navigate into the Hill Valley demo if you’d like to: The Web as a platform for games - from WebGL to asm.js by Nicolas Silva from Mozilla. Replay on Youtube: Create 3D assets for the mobile world & the web, the point of view of a 3D designer by Michel Rousseau from Microsoft. Replay on Youtube: Slideshare: Create 3D assets for the mobile world & the web, the point of view of a 3D designer Enhancing HTML5 gaming using WebCL by Swaroop Kalasapur and Satheesh Sudarsan from Samsung. Replay on Youtube: How To Make Games in Three.js by Jerome Etienne from Beloola Replay on Youtube: Slides: How To Make Games in Three.js WebGL games with Minko by Jean-Marc Le Roux from Aerys Replay on Youtube: Slides: WebGL games with Minko Rountable: open discussions around web gaming by Stephen Shankland with Nicolas Silva (Mozilla), Satheesh Sudarsan (Samsung), Dom Hazael-Massieux (W3C), Christian Nasr (Ubisoft) and David Rousset (Microsoft) Replay on Youtube: That’s all folks! I hope you’ll enjoy it. Let’s build some great games for the web now! Follow the author @davrous
https://docs.microsoft.com/en-us/archive/blogs/davrous/next-game-frontier-2014-videos-content
2020-07-02T13:54:15
CC-MAIN-2020-29
1593655878753.12
[array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/01/10/46/metablogapi/8507.wlEmoticon-winkingsmile_3634AE85.png', "Clignement d'œil"], dtype=object) ]
docs.microsoft.com
How to enable verbose logging for Windows Server 2012/2012 R2 Essentials [This post comes to us courtesy of Swapnil Rane and Rituraj Choudhary from Global Business Support] This post explains how to increase the logging level for the individual components of Server Essentials role for troubleshooting purposes. In order to accomplish this, we need to modify the Logging.config file. This file can be located at C:\Program Files\Windows Server\Bin on a Windows Server 2012 Essentials machine. On a Windows Server 2012 R2 Essentials this file is present at C:\Windows\System32\Essentials. Make sure to save a backup copy of the file before modifying it. You need to change the ownership of Logging.config file and give the user adequate permissions to save any modifications to it. You may use the following commands on an elevated Command Prompt to make modifications to the file: For Windows Server 2012 R2 Essentials: takeown /f C:\Windows\System32\Essentials\Logging.config icacls C:\Windows\System32\Essentials\Logging.config /grant administrators:F icacls C:\Windows\System32\Essentials\Logging.config /setowner "NT Service\TrustedInstaller" notepad C:\Windows\System32\Essentials\Logging.config For Windows Server 2012 Essentials: takeown /f "C:\Program Files\Windows Server\Bin\Logging.config" icacls "C:\Program Files\Windows Server\Bin\Logging.config" /grant administrators:F icacls "C:\Program Files\Windows Server\Bin\Logging.config" /setowner "NT Service\TrustedInstaller" notepad "C:\Program Files\Windows Server\Bin\Logging.config" The file Logging.config is now ready for editing. Search for the string level= and replace the string next to level= to All if it is set otherwise. For example: <add level="Warning" name="ProviderFramework"> <listeners> <add name="DefaultTraceListener" /> </listeners> </add> Change it as: <add level="All" name="ProviderFramework"> <listeners> <add name="DefaultTraceListener" /> </listeners> </add> Changing the level to All enables verbose logging. There are other values that the level can be set to, but mostly verbose logging is preferred, and can be achieved as mentioned above. When the issue is reproduced subsequently, the logs at C:\ProgramData\Microsoft\Windows Server\Logs folder should now contain verbose information. Note: You may use the same procedure to enable verbose logging on the Essentials clients.
https://docs.microsoft.com/en-us/archive/blogs/sbs/how-to-enable-verbose-logging-for-windows-server-20122012-r2-essentials
2020-07-02T13:48:39
CC-MAIN-2020-29
1593655878753.12
[]
docs.microsoft.com
TOPICS× Opening visualizations Information about opening visualizations. Because your implementation of Data Workbench can be fully customized, it may differ from what is documented in this guide. Exact paths to each visualization are not provided in this guide. All visualizations can be opened by right-clicking within a workspace and selecting the desired menu option. After opening up a new workspace, you may need to click Add > Temporarily Unlock. Visualizations cannot be imported like workspaces. When you right-click in the worktop and select Import , you can import an existing workspace, but not a visualization residing outside of the workspace.
https://docs.adobe.com/content/help/en/data-workbench/using/client/visualizations/c-open-vis.html
2020-07-02T13:42:16
CC-MAIN-2020-29
1593655878753.12
[array(['/content/dam/help/data-workbench.en/help/home/c-get-started/c-vis/assets/import_workspace.png', None], dtype=object) ]
docs.adobe.com
Vertex Customer. Customer Exceptions Before you begin Vertex Cloud adds the Vertex Customer Code field to the customer information. Customer exceptions can be based on either the Vertex Customer Code field in the customer account, or on the tax class that is associated with the customer group to which the customer belongs. To assign the Vertex Customer Code to a customer, open the customer account in edit mode. See Managing Customer Accounts for more information. Under Account Information, scroll down to the Vertex Customer Code field at the bottom of the section. Enter a unique code to identify the customer. The code can be an abbreviation of the customer name, number, or alphanumeric string. If you need to add new customer tax classes to Magento, see Adding New Tax Classes. If you need to add new customer groups to Magento, see Customer Groups. To assign a new customer tax class to a customer group, do the following: Open the customer group in edit mode. Set Tax Class as needed. When complete, click Save Customer Group. Add a single exception Log in to your Vertex Cloud account. In the sidebar, choose Configure. On the Configure Overview page, in the Customer Exceptions column of the company section, do one of the following: - Click Add. - Click Customers. Then above the grid, click Add Customer Exception. On the Add Single Exception tab under Customer Information, do the following: Add Single Exception In the Customer Name field, enter the full name of the customer as it appears in the Magento Customers grid. Set Customer Type to Codeor Class, which determines the value that is entered in the next field. In the Customer Code / Class field, enter the corresponding Magento value, according to the Customer Type setting. For Start Date, use the calendar to choose the date that the customer becomes available to Vertex Cloud. The customer Start Date must be on or after the company Start Date. If applicable, use the calendar to choose the End Date when the customer is no longer available to Vertex Cloud. Customer Information To set Tax Result, do one of the following: - Choose Taxableif the customer normally pays sales or use tax. - Choose Exemptif the customer is exempt from paying sales or use tax. Tax Result Complete the Certificate Information as follows: Set Exception Type to one of the following: For the Start Date field, use the calendar to choose the date the customer exception becomes available to Vertex Cloud for tax calculations and reporting purposes. If applicable, use the calendar to choose the End Date when the customer exception stops being available to Vertex Cloud. Set Exception Jurisdiction to the state or territory where the exception applies. Set Exception Reason to the code that identifies the reason for the exception, according to the jurisdiction. For the Direct Pay Permit exception type, the field defaults to Direct Pay Permit. If the exception applies to a specific product, choose the Product Name from the list of products that have been mapped to Vertex Cloud. Enter the Exception Number from the certificate. When complete, click Validate to verify that the information is correct. Certificate Information In the Certificate Reporting Criteria section, do the following: In the Issue Date field, enter the date the exception certificate was issued to the customer by the jurisdiction. In the Review Date field, enter the date when the validity of the exception certificate is scheduled to be reexamined. In the Expiration Date field, enter the date that is printed on the exception certificate to indicate when it stops being valid. Certificate Reporting Criteria Under Certificate Image, click Browse to upload a PDF of the certificate. Then, navigate to the image file on your computer. The PDF, JPG and BMP image formats are supported when exceptions are uploaded from a CSV file. Certificate Image Upload multiple exceptions Log in to your Vertex Cloud account. Then in the sidebar, choose Configure. On the Configure Overview page, in the Customer Exceptions column of the company section, click Add. Choose the Upload Multiple Exceptions tab and click Download File Format Template. Locate the ExceptionFileFormat.csvfile in your downloads location and open the file in your spreadsheet application. For information about each item in the template, see the following column descriptions: Complete the upload data according to the following field descriptions and save it as a comma-separated value (CSV) file. Column Descriptions To upload the completed template, click Browse. Then, navigate to the file on your computer and click Upload File. Upload Multiple Exceptions Customer Information Tax Result Certificate Information Certificate reporting criteria Certificate Image
https://docs.magento.com/user-guide/tax/vertex-setup-customer-exceptions.html
2020-07-02T13:36:47
CC-MAIN-2020-29
1593655878753.12
[]
docs.magento.com
Regex tagger The Regex Tagger can turn parts of the imported text into inline tags. You may want to do this to preserve parts that look like code, placeholders, or XML tags - so that they are not altered during translation. Practically, you can do this to parts of text that belong to the structure rather than the contents. There are three clear advantages of turning some parts of the text into tags: - They can't be changed during translation - you can't break program code by changing placeholders or tags accidentally. - They are easy to copy: During translation, in the translation editor, press F9 or Ctrl to copy tags from the source cell. - They give better matches from the translation memory: If tags are different, you can still get a close-to-exact match. For example, your text may contain a placeholder that looks like '{{number}}'. If it isn't tagged in the text, and there is a TM match where the placeholder is different, the match rate will be below 90 percent. But if these placeholders are tagged both in the text and the TM, the match rate will be higher than 95%. Save more: Tagging structural parts of the text may allow you to save time and money. The Regex Tagger uses regular expressions to find the parts that need to be tagged (as the name suggests). You can't import documents into memoQ with the Regex Tagger alone. But it can be the second or third filter in a cascading filter. Use Regex Tagger or Regex taggers after another filter: For example, a cells in an Excel workbook may contain tags that must not be altered. You can set up a cascading filter where the second - or last - filter is the Regex Tagger. At the end of the chain, you can add a sequence of Regex taggers, to tag the document several different ways. Tag text directly in the translation editor: During translation, if you discover that something needs to be tagged, you don't have to import the document again. You can run the Regex Tagger directly from the translation editor. It's on the Preparation ribbon. How to get here - Start importing a document (any monolingual format). - In the Document import options window, select the documents, and click Change filter and configuration. - The Document import settings window appears. - Below Filter configuration, click the Add cascading filter link. - The Select filter to chain window appears. From the Filter drop-down box, choose Regex tagger. Click OK. - In the Document import settings window, the filter chain appears below the cascading filter controls. Click Regex tagger. - The Regex tagger controls appear. - Open a project, and a document for translation. - In the Preparation ribbon, click Regex Tagger. - The Tag current document window appears. It's the same as the Regex tagger settings in the Document import settings window. What can you do? Most of this window is for writing and testing regular expression rules that memoQ uses to find parts of text that are replaced with inline tags. After you write up rules like this, you can save them as a filter configuration. You can also load a set of rules that was saved earlier. To load an existing set of patterns: Choose one from the Filter configuration drop-down box. To save the rules you just created: In the Filter configuration drop-down box, choose <new configuration> , and click the Save icon next to it. The Create new filter configuration window appears, where you can give a name to the new set of rules. You can set up several rules in a single regular expression filter configuration. These are listed in the top box of the Rules section. To add a pattern, first type a regular expression in the Regular expression text box. This can be a simple expression: for example, if you want to replace the word 'memoQ' with an inline tag, simply type 'memoQ' in the Regular expression text box. You can also enter more complex expressions where a simple pattern can represent several different character sequences. If you click the Pattern link next to the text box, you get a menu of elements: For example, the regular expression '<[^/]*?>' matches text that starts with the '<' character, followed by the shortest possible sequence of characters that does not contain the '/' character, and ends in a '>' character. In short, text that matches this pattern looks like an XML <tag>. To learn more about regular expressions: See the topic about Regular expressions. After you type the regular expression, choose what type of tag you want to see in the place of the text. You can choose to use an opening tag , a closing tag , or an empty tag . These correspond to the types of tags commonly used in XML markup. If you check the Required check box, memoQ will add a tagging error to a segment if you don't copy the corresponding tag to the target text in the translation editor. In the Display text text box, you can specify what memoQ should write inside the tag. This is called a replacement rule, and you also use these in auto-translation rules. You can write any text here, but you can also use the pre-defined $0 expression: if the replacement rule is $0, the tag will contain the text that memoQ found when matching the pattern. Note: If the regular expression contains several non-fixed parts, you can use $1, $2 etc. to refer to the first, second etc. non-fixed part in the replacement rule. You can choose from available options if you click the Patternlink next to the Display text box. After you fill in the Display text box, click Add to add the rule to the list. To modify an existing rule, click the rule in the list, and click Change. To remove a rule from the list, click the rule, and click Delete. If you want the Regex Tagger to work on tabs and newlines, too, check the Rules handle tabs and newlines check box. Dealing with tabs and newlines: A segment in memoQ never contains tabs or line breaks. If they appear, they are represented by a type of a tag. But when you need to import a text-based document (TXT, HTML, XML etc.), you may want to tag newlines and tabs yourself. Normally, the filters before the Regex Tagger will have already converted these characters into a tag. But then you do not have the chance to tag them yourself. To work with tabs and newlines in the Regex tagger, check the Rules handle tabs and newlines check box. Then the filters before the Regex tagger will not touch the tabs and newlines (they won't convert them into tags). But then need to make sure that you tag the tabs and newlines with the Regex Tagger. If you do not tag them, memoQ won't import the document. The lower part of the window shows how the rules work. After you fill in or edit the Rules list, the Input text box shows what parts of the original text will match your patterns. The Result box shows how memoQ tags them. Matches and replacements are highlighted in red. Normally, the Input text and Result boxes highlight the matches from all patterns. If you want to see highlights from one rule only, click the Apply only selected rule radio button, then click a rule in the Rules list. The order of the rules matters: Click the Up and Down buttons to move rules up and down. This can be useful if two patterns match the same paragraph, and the parts they match are overlapping. When you finish . To confirm the settings, and return to the translation editor: Click OK. memoQ starts to tag the document. To return the translation editor, and not tag the document: Click Cancel.
https://docs.memoq.com/current/en/Places/regex-tagger.html
2020-07-02T11:25:49
CC-MAIN-2020-29
1593655878753.12
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Images/r-s/regextagger.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Images/t-z/tag_this_document_pattern_elements.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.memoq.com
The Servers settings page lets you configure which servers Adviser should scan and allows you to initiate a scan. This is also the page where you can manage your license. This section shows the list of servers that Adviser has detected in your environment. For each server, you can specify whether it should be scanned. Note that you cannot include more servers than the maximum number of.
https://docs.teamstudio.com/plugins/viewsource/viewpagesrc.action?pageId=36406278
2020-07-02T12:15:49
CC-MAIN-2020-29
1593655878753.12
[]
docs.teamstudio.com
A non-mirrored volume resource is a DataKeeper resource where no data is replicated to any node in the cluster. This resource type should only be used where the data is temporary and/or non-critical such as MS SQL Server tempdb space. In this case, when MS SQL restarts on another node after a failover or switchover, the tempdb space is automatically recreated so replication of the data is not necessary. This non-mirrored volume resource will be able to come Online and go Offline on all cluster node without ever affecting the configured volume. Additionally, the volume will remain unlocked and writable at all times on all nodes in the cluster. To configure a non-mirrored volume resource: - Configure a volume on all cluster nodes using the same drive letter (all nodes must use the same drive letter). - Create any directories that are required for the volume on all cluster nodes. - Create a DataKeeper Volume resource using the Failover Clustering UI. Provide a name that best describes its intended use – (Example: “DataKeeper Volume F (Non-Mirrored)”). The following steps will set the Properties needed for the non-mirrored resource: • In the Failover Cluster Manager, right-click the Cluster Group or the Role that will contain the non-mirrored DataKeeper Volume Resource. • Select Add a Resource, More Resources, then select DataKeeper Volume. • Right-click on the new DataKeeper volume resource and select Properties. • Enter the Resource Name you selected earlier (Example: “DataKeeper Volume F (Non-Mirrored)”) then click OK. You do not need to change any other properties at this time. The following steps will set the Properties needed for the non-mirrored resource: - Assign the following properties using Powershell: VolumeLetter =F (if the drive letter is F, otherwise whatever the drive letter is) NonMirrored =1 (there is no space between Non and Mirrored) - Add the properties using Powershell: Get-ClusterResource “DataKeeper Volume F (Non-Mirrored)” | Set-ClusterParameter –Name VolumeLetter –Value “F” Get-ClusterResource “DataKeeper Volume F (Non-Mirrored)” | Set-ClusterParameter –Name NonMirrored –Value 1 If this non-mirrored volume resource is to be used with MS SQL Server for tempdb space the following configurations steps are needed: - Ensure that the volume security settings for the user account that is chosen to run SQL Server services has full access to the volume on all nodes in the cluster. - Ensure that the “SQL Server” resource in the Failover Clustering group has a dependency on the new DataKeeper Volume resource. Post your comment on this topic.
http://docs.us.sios.com/dkse/8.6.2/en/topic/non-mirrored-volume-resource?q=setting+browser+security+parameters
2020-07-02T13:30:29
CC-MAIN-2020-29
1593655878753.12
[]
docs.us.sios.com
: { "Sid": "AWSDatadogPermissionsForCloudtrail", "Effect": "Allow", "Principal": { "AWS": "<ARN_FROM_MAIN_AWS_INTEGRATION_SETUP>" }, "Action": ["s3:ListBucket", "s3:GetBucketLocation", "s3:GetObject", "s3:ListObjects"], "Resource": [ "arn:aws:s3:::<YOUR_S3_CLOUDTRAIL_BUCKET_NAME>", "arn:aws:s3:::<YOUR_S3_CLOUDTRAIL_BUCKET_NAME>/*" ] } Note: The principal ARN is the one listed during the installation process for the main AWS integration. If you are updating your policy (as opposed to adding a new one), you don’t need the SID or the Principal. Install the Datadog - AWS Cloudtrail integration: On the integration tile, choose the types of events to show as normal priority (the default filter) in the Datadog events stream. The accounts you configured in the Amazon Web Services tile are also shown here. If you would like to see other events that are not mentioned here, please reach out to Datadog support.. On this Page
https://docs.datadoghq.com/integrations/amazon_cloudtrail/
2020-07-02T13:28:00
CC-MAIN-2020-29
1593655878753.12
[]
docs.datadoghq.com
A process check monitor watches the status produced by the Agent check process.up. At the Agent level you can configure your check thresholds based on the number of matching processes. To create a process check monitor in Datadog, use the main navigation: Monitors –> New Monitor –> Process Check. From the drop-down list, select a process to monitor. Filter the list by entering your search criteria. Select the hosts to monitor by choosing host names, tags, or choose All Monitored Hosts. Only hosts or tags reporting a status for the selected process are displayed. If you need to exclude certain hosts, use the second field to list names or tags. ANDlogic. All listed host names and tags must be present on a host for it to be included. ORlogic. Any host with a listed name or tag is excluded. A check alert tracks consecutive statuses submitted per check grouping and compares it to your thresholds. For process check monitors, the groups are static: host and process. Set up the check alert: Trigger the alert after selected consecutive failures: <NUMBER> Each check run submits a single status of OK, WARN, or CRITICAL. Choose how many consecutive runs with the WARN and CRITICAL status trigger a notification. For example, your process resolves the alert. A cluster alert calculates the percent of process checks in a given status and compares it to your thresholds. Set up a cluster alert: Decide whether or not to group your process:
https://docs.datadoghq.com/monitors/monitor_types/process_check/?tab=checkalert
2020-07-02T13:15:38
CC-MAIN-2020-29
1593655878753.12
[]
docs.datadoghq.com
Slick(Edit) Blog I’m a SlickEdit Fan …… The team over at SlickEdit have started a blog and begun posting some interesting stuff. The blog is located here: Here is a sampling of some of the initial posts. * VSIP: Detecting code window switches in VS 2005 * do { ... } while (false); * How to Write an Effective Design Document * What is a Power Programmer? * Is Your Editor Working Hard Enough? * Key Binding, Command, Menu: The Golden Triangle
https://docs.microsoft.com/en-us/archive/blogs/joestagner/slickedit-blog
2020-07-02T12:57:44
CC-MAIN-2020-29
1593655878753.12
[]
docs.microsoft.com
Style on Form, & Components and Integration on Landing Page New update June 28, 2019 Crisp Integration on Landing Page This latest feature is able to add Crisp on your landing page. Crisp is one of the live chat tools to make you faster in responding to anyone who wants to ask something on your landing page. Crisp can be a new alternative to the contact person for those of you who want to create an event using a landing page, or can be a customer service for your customers who want to ask or discuss issues related to your brand or company. Disqus Component on Landing Page Disqus or the comment system service in cyberspace can now be integrated on your landing page. Disqus can be a means of discussion for visitors with you as an admin, to create good interaction and feedback from visitors. Layout Form The Form Layout feature in the Form Settings has now added its latest improvement in the form of more varied layouts. Layout form can be made horizontally and vertically, which certainly can make the appearance of your landing page more varied and interesting.
https://docs.mtarget.co/en/product-update/7nov19/
2020-07-02T13:23:36
CC-MAIN-2020-29
1593655878753.12
[array(['https://res.cloudinary.com/mailtarget/image/upload/v1561625072/update%20fitur%20di%20landing%20page/crisp_1.gif', None], dtype=object) array(['https://res.cloudinary.com/mailtarget/image/upload/v1561625073/update%20fitur%20di%20landing%20page/disqus_1.gif', None], dtype=object) array(['https://res.cloudinary.com/mailtarget/image/upload/v1561625073/update%20fitur%20di%20landing%20page/form_layout.gif', None], dtype=object) ]
docs.mtarget.co
Content Management System The Content Management System (CMS) is a ServiceNow application that enables users to create a custom interface for the ServiceNow platform and ServiceNow applications. The CMS application is powerful and flexible. Customers use it for a wide variety of projects, from creating entire websites to integrating with other products. The Content Management application is active by default. This video provides an overview of the CMS application. CMS application A CMS typically requires a systems administrator or a web developer to set up and add features. Non-technical users can use the CMS application as a tool for website maintenance. You also want to consider the timing of the addition of content management, and the maturity level of ServiceNow data. For more information, see CMS Planning. Following are several CMS project ideas: Design a company-wide service catalog that offers a collection of services. Present a customized UI for a knowledge base. Create customized login pages, search pages, views of lists, tables, charts, or graphs. Design a complete website. Integrate ServiceNow with other company applications. Build a tailored self-service portal for end users that is in compliance with a corporate style guide. Example CMS sites There are two common interface approaches within the ServiceNow community: An image and text-based interface similar to Amazon.com A search-based interface similar to Google Both approaches have been used successfully. The approach you select depends on the needs of the people using the data and how easy it is to train them. While the two design philosophies are different, both approaches share the common goal of UI simplicity. Activating CMS designBefore building a website in the CMS, it is important to have a good understanding of what to build and who the audience is. Domain separation in the Content Management SystemThis is an overview of domain separation and the Content Management System. Domain separation enables you to separate data, processes, and administrative tasks into logical groupings called domains. You can then control several aspects of this separation, including which users can see and access data.Configure Content Management sitesPlanning a CMS site involves obtaining resources, communicating with others about design, and gathering content.Content Management integration pointsIntegration points use content blocks in CMS to link different applications together using static and dynamic methods.Content Management testingTest your site to ensure that all pages display correctly, links go to the specified address, and images are not broken. It is important to test the site as you build it. Do not wait until just before launch to begin testing.Global search in Content ManagementWhen you add global search to a CMS site, two different search result blocks can display, depending on the user role: global or no global.CMS translationYou can translate CMS sites by activating internationalization plugins and manually translating custom interface strings.
https://docs.servicenow.com/bundle/newyork-servicenow-platform/page/administer/content-management/concept/c_ContentManagementSystem.html
2020-07-02T13:35:41
CC-MAIN-2020-29
1593655878753.12
[]
docs.servicenow.com
Coding Applications When coding your application, you must consider the following issues: Initialize the application resources. Typically, at least take care of creating the UI and restoring the latest application state. Write code for application-specific features and functionalities, and handle events. Define how the application behaves during the application’s state transitions, such as switching between foreground and background. You must also define event handlers corresponding to system events, if necessary. Destroy allocated resources and save the current application state. Using Hover Help Tizen Studio supports hover help for Web API and W3C Widget APIs. The hover help provides input from the API Reference. Figure: Hover help Adding External Source Code or Library External source files are located in the project directory, and its /js and /css sub-directories. You can add a new folder or source file (such as CSS, HTML, JSON, XML, and JavaScript) to your existing project. You can add files in the following ways (as an example, the instructions illustrate the adding of a CSS file): Adding a new file: In the Project Explorer view, right-click an existing project and select New > CSS File. In the New CSS File view, select the location of the new CSS file and enter the file name. Optionally, you can select a template as initial content in the CSS file. Click Finish. Adding an existing file: - In the Project Explorer view, right-click the /csssub-directory and select Import > General > File System. - In the Import view, click Browse and select the import location. - Click Finish. Note You can also drag and drop external source files or libraries. If you drop a file to the Project Explorer view, the File Operation dialog appears, and allows you to either copy the file or create a link to it.
https://docs.tizen.org/application/web/tutorials/process/coding-app/
2020-07-02T13:43:43
CC-MAIN-2020-29
1593655878753.12
[array(['../media/hover_help.png', 'Hover help'], dtype=object)]
docs.tizen.org
A Mainnet in the context of the Baseline Protocol is an always-on public utility, a state machine that resists tampering and censorship but sacrifices speed, scalability and fast finality in order to allow anyone to help maintain the integrity of the database. Because the Mainnet is a permanent public record, encrypted information placed there can be observed by anyone at any time, forever. This includes parties with the means and know-how to perform advanced analytics and ascertain patterns that may reveal strategic intelligence even without having to defeat the encryption of the data itself. Used correctly, the Mainnet can be employed by business to solve long-standing problems: Automate B2B agreements without creating new silos (like private blockchains do) Integrate new relationships with existing ones flexibly without losing system integrity or adding integrations on top of integrations Enforce consistency between different parties’ records without having to move the data or business logic from legacy systems Enforce continuity in a multiparty workflow (e.g., an invoice always agrees with the purchase order) while compartmentalizing which parties know the details in each step The purpose of this section is to define the long-run specifications for a Mainnet required by organizations conducting baselining. These specifications, both functional and non-functional, will be used to articulate requirements to bodies such as the Ethereum core development community. And should the Ethereum mainnet fail to deliver on the essential requirements in a timeframe that commercial outlook can contemplate, then this specification will be used to seek out an alternative. In the end, from the perspective of the Baseline Protocol, the important thing about the Mainnet is that it can become, like the Internet, a widely-used public good. This section is a work in progress. Stay tuned as the community bootstraps.
https://docs.baseline-protocol.org/baseline-protocol/standards/mainnet
2020-07-02T13:21:15
CC-MAIN-2020-29
1593655878753.12
[]
docs.baseline-protocol.org
When you create a RUM application dashboards are created within Datadog to analyze all the data collected. RUM dashboards can be found in the dashboards list and have the Datadog logo: You can also access these dashboards through your RUM application page. Click on the Dashboard links associated with your application: The following dashboards are available: You can customize your dashboards as you would with any other one, or directly explore the underlying data in your RUM explorer. RUM dashboards are generated for all your applications with a set of default template variables automatically created. Use the template variables on top of it to start filtering RUM dashboards. For instance, use the applicationId template variable to filter down to a specific application. To explore all the individual events, click on any graph and select View related views to be redirected to the RUM Explorer with the currently selected filters. Clone your RUM dashboards and customize them to fit your needs. You can add widgets and modify the template variables: Additional helpful documentation, links, and articles:
https://docs.datadoghq.com/real_user_monitoring/dashboards/
2020-07-02T12:55:32
CC-MAIN-2020-29
1593655878753.12
[]
docs.datadoghq.com
How to get glass to show up on Windows Vista Beta 1 One question I get asked frequently is "hey will this piece of hardware run Glass on Vista?". Working day-to-day at Micrsoft, the answer is always kind of a puzzle, depending on the hardware that is in question, the build of Windows you installed, and what drivers you may have gotten your hands on. But for folks out in the real world, who are using Windows Vista Beta 1, I can provide a little more guidance. In Beta 1, the first thing you need is an LDDM display driver. (You can learn more about LDDM by typing "LDDM WinHEC" into your favorite search-box, and find the DirectX team's slides on the new driver model.) There are other requirements as well - your graphics part needs to support the entire DX9 API, and you'll also want plenty of memory allocated for graphics. The reall challenge people run into is picking the right graphics card. Luckily, ATI and nVidia have each posted pages on their web-sites that list the supported cards, and available drivers. If you install Windows Vista Beta 1 on a computer with one of the listed parts, and the listed driver, you will see Glass!
https://docs.microsoft.com/en-us/archive/blogs/kamvedbrat/how-to-get-glass-to-show-up-on-windows-vista-beta-1
2020-07-02T13:38:57
CC-MAIN-2020-29
1593655878753.12
[]
docs.microsoft.com
In the tool bar you can save the current search by pressing the Save button of the toolbar. Than insert the name of the saved search and an optional description. In the Saved Searches section you can find the list of your saved searches. Double click on one of the items to launch the search. Each user has his own set of saved seaches.
https://docs.logicaldoc.com/en/search/save-a-search
2020-07-02T12:58:22
CC-MAIN-2020-29
1593655878753.12
[array(['/images/stories/en/savedsearches.gif', None], dtype=object)]
docs.logicaldoc.com
Creating an Entity Framework Data Model for an ASP.NET MVC Application (1 of 10) by Tom Dykstra Download Completed Project Note. Code First. MVC The sample application is built on ASP.NET MVC. If you prefer to work with the ASP.NET Web Forms model, see the Model Binding and Web Forms tutorial series and ASP.NET Data Access Content Map. Software versions Questions If you have questions that are not directly related to the tutorial, you can post them to the ASP.NET Entity Framework forum, the Entity Framework and LINQ to Entities forum, or StackOverflow.com. Acknowledgments See the last tutorial in the series for acknowledgments and a note about VB. Original version of the tutorial The original version of the tutorial is available in the EF 4.1 / MVC 3 e-book.. Prerequisites The directions and screen shots in this tutorial assume that you're using Visual Studio 2012 or Visual Studio 2012 Express for Web, with the latest update and Azure SDK for .NET installed as of July, 2013. You can get all of this with the following link: Azure SDK for .NET . Create an MVC Web Application Open Visual Studio and create a new C# project named "ContosoUniversity" using the ASP.NET MVC 4 Web Application template. Make sure you target .NET Framework 4.5 (you'll be using enum properties, and that requires .NET 4.5). In the New ASP.NET MVC 4 Project dialog box select the Internet Application template. Leave the Razor view engine selected, and leave the Create a unit test project check box cleared. Click OK. Set Up the Site Style: - Replaces the template instances of "My ASP.NET MVC Application" and "your logo here" with "Contoso University". - Adds several action links that will be used later in the tutorial. In Views\Home\Index.cshtml, replace the contents of the file with the following code to eliminate the template paragraphs about ASP.NET and MVC: @{ ViewBag. <div class="content-wrapper"> <hgroup class="title"> <h1>@ViewBag.Title.</h1> <h2>@ViewBag.Message</h2> </hgroup> </div> </section> } In Controllers\HomeController.cs, change the value for ViewBag.Message in the Index Action method to "Welcome to Contoso University!", as shown in the following example: public ActionResult Index() { ViewBag.Message = "Welcome to Contoso University"; return View(); } Press CTRL+F5 to run the site. You see the home page with the main menu. Create the Data Model Next you'll create entity classes for the Contoso University application. You'll start with the following three entities: ID_7<< virtual Course Course { get; set; } public virtual Student Student { get; set; } } }. The Course Entity In the Models folder, create Course.cs, replacing; } } } The Enrollments property is a navigation property. A Course entity can be related to any number of Enrollment entities. We'll say more about the [DatabaseGenerated(DatabaseGeneratedOption.None)] attribute in the next tutorial.. would be named Students, Courses, and Enrollments. Instead, the table names will be Student, Course, and Enrollment. Developers disagree about whether table names should be pluralized or not. This tutorial uses the singular form, but the important point is that you can select whichever form you prefer by including or omitting this line of code. SQL Server Express LocalDB collection, as shown in the following example. (Make sure you update the Web.config file in the root project folder. There's also a Web.config file is in the Views subfolder that you don't need to update.) <add name="SchoolContext" connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=ContosoUniversity. Set up and Execute a Code First Migration. Enable Code First Migrations From the Tools menu, click NuGet Package Manager and then Package Manager Console. At the PM>prompt enter the following command: enable-migrations -contexttypename SchoolContext This command creates a Migrations folder in the ContosoUniversity project, and it puts in that folder a Configuration.cs file that you can edit to configure Migrations. The Configurationclass includes a Seedmethod that is called when the database is created and every time it is updated after a data model change. internal sealed class Configuration : DbMigrationsConfiguration<ContosoUniversity.Models.SchoolContext> { public Configuration() { AutomaticMigrationsEnabled = false; } protected override void Seed(ContosoUniversity.Models.School" } // ); // } } The purpose of this Seedmethod is to enable you to insert test data into the database after Code First creates it or updates it. Set up the Seed Method want to be inserted System; using System.Collections.Generic; using System.Data.Entity.Migrations; using System.Linq; using ContosoUniversity.Models; internal sealed class Configuration : DbMigrationsConfiguration<ContosoUniversity.DAL.SchoolContext> { public Configuration() { AutomaticMigrationsEnabled = false; } protected override void Seed(ContosoUniversity.DAL.SchoolContext context) { var students = new List<Student> { new Student { FirstMidName = "Carson", LastName = "Alexander", EnrollmentDate = DateTime.Parse("2010-09-01") }, new Student { FirstMidName = "Meredith", LastName = "Alonso", EnrollmentDate = DateTime.Parse("2012-09-01") }, new Student { FirstMidName = "Arturo", LastName = "Anand", EnrollmentDate = DateTime.Parse("2013-09-01") }, new Student { FirstMidName = "Gytis", LastName = "Barzdukas", EnrollmentDate = DateTime.Parse("2012-09-01") }, new Student { FirstMidName = "Yan", LastName = "Li", EnrollmentDate = DateTime.Parse("2012-09-01") }, new Student { FirstMidName = "Peggy", LastName = "Justice", EnrollmentDate = DateTime.Parse("2011-09-01") }, new Student { FirstMidName = "Laura", LastName = "Norman", EnrollmentDate = DateTime.Parse("2013-09-01") }, new Student { FirstMidName = "Nino", LastName = "Olivetto", EnrollmentDate = DateTime.Parse("2005-08-11") } }; students.ForEach(s => context.Students.AddOrUpdate(p => p.LastName, s)); context.SaveChanges(); var courses = new List, } }; courses.ForEach(s => context.Courses.AddOrUpdate(p => p.Title, s)); context.SaveChanges(); var enrollments = new List<Enrollment> { new Enrollment { StudentID = students.Single(s => s.LastName == "Alexander") == e.StudentID && s.Course.CourseID == e.CourseID).SingleOrDefault(); if (enrollmentInDataBase == null) { context.Enrollments.Add. Some of the statements that insert data use the AddOrUpdate method to perform an "upsert" operation. Because the Seedmethod Seed method uses both approaches. The first parameter passed to the AddOrUpdate method specifies the property to use to check if a row already exists. For the test student data that you are providing, the LastNameproperty can be used for this purpose since each last name in the list is unique: context.Students.AddOrUpdate(p => p.LastName, s) This code assumes that last names are unique. If you manually add a student with a duplicate last name, you'll get the following exception the next time you perform a migration. Sequence contains more than one element For more information about the AddOrUpdatemethod, see Take care with EF 4.3 AddOrUpdate Method on Julie Lerman's blog. The code that adds Enrollmententities doesn't use the AddOrUpdatemethod. It checks if an entity already exists and inserts the entity if it doesn't exist. This approach will preserve changes you make to an enrollment grade when migrations run.method and how to handle redundant data such as two students named "Alexander Carson", see Seeding and Debugging Entity Framework (EF) DBs on Rick Anderson's blog. Build the project. Create and Execute the First Migration In the Package Manager Console window, enter the following commands: add-migration InitialCreate update-database The add-migrationcommand". The Upmethod of the InitialCreateclass creates the database tables that correspond to the data model entity sets, and the Downmethod deletes them. Migrations calls the Upmethod to implement the data model changes for a migration. When you enter a command to roll back the update, Migrations calls the Downmethod. The following code shows the contents of the InitialCreatefile:ID); CreateTable( "dbo.Enrollment", c => new { EnrollmentID = c.Int(nullable: false, identity: true), CourseID = c.Int(nullable: false), StudentID = c.Int(nullable: false), Grade = c.Int(), }) .PrimaryKey(t => t.EnrollmentID) .ForeignKey("dbo.Course", t => t.CourseID, cascadeDelete: true) .ForeignKey("dbo.Student", t => t.StudentID, cascadeDelete: true) .Index(t => t.CourseID) .Index(t => t.StudentID); CreateTable( "dbocommand runs the Upmethod to create the database and then it runs the Seedmethod to populate the. Click OK. Expand SchoolContext and then expand Tables. Right-click the Student table and click Show Table Data to see the columns that were created and the rows that were inserted into the table. Creating a Student Controller and Views The next step is to create an ASP.NET MVC controller and views in your application that can work with one of these tables. To create a Studentcontroller, right-click the Controllers folder in Solution Explorer, select Add, and then click Controller. In the Add Controller dialog box, make the following selections and then click Add: Controller name: StudentController. Template: MVC controller with read/write actions and views, using Entity Framework. Model class: Student (ContosoUniversity.Models). (If you don't see this option in the drop-down list, build the project and try again.) Data context class: SchoolContext (ContosoUniversity.Models). Views: Razor (CSHTML). (The default.).StudentID }) | @Html.ActionLink("Details", "Details", new { id=item.StudentID }) | @Html.ActionLink("Delete", "Delete", new { id=item.StudentID }) </td> </tr> } Press CTRL+F5 to run the project. Click the Students tab to see the test data that the Seedmethod inserted. Conventions The amount of code you had to write in order for the Entity Framework to be able to create a complete database for you is minimal because of the use of conventions, or assumptions that the Entity Framework makes. Some of them have already been noted: - The pluralized forms of entity class names are used as table names. - Entity property names are used for column names. - Entity properties that are named IDor classname IDare recognized as primary key properties.. Summary.
https://docs.microsoft.com/en-us/aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/creating-an-entity-framework-data-model-for-an-asp-net-mvc-application
2020-07-02T13:29:25
CC-MAIN-2020-29
1593655878753.12
[array(['creating-an-entity-framework-data-model-for-an-asp-net-mvc-application/_static/image1.png', 'Students_Index_page'], dtype=object) array(['creating-an-entity-framework-data-model-for-an-asp-net-mvc-application/_static/image2.png', None], dtype=object) array(['creating-an-entity-framework-data-model-for-an-asp-net-mvc-application/_static/image3.png', 'New_project_dialog_box'], dtype=object) array(['creating-an-entity-framework-data-model-for-an-asp-net-mvc-application/_static/image4.png', 'Project_template_options'], dtype=object) array(['creating-an-entity-framework-data-model-for-an-asp-net-mvc-application/_static/image5.png', 'Contoso_University_home_page'], dtype=object) array(['creating-an-entity-framework-data-model-for-an-asp-net-mvc-application/_static/image6.png', 'Class_diagram'], dtype=object) array(['creating-an-entity-framework-data-model-for-an-asp-net-mvc-application/_static/image7.png', 'Student_entity'], dtype=object) array(['creating-an-entity-framework-data-model-for-an-asp-net-mvc-application/_static/image8.png', 'Enrollment_entity'], dtype=object) array(['creating-an-entity-framework-data-model-for-an-asp-net-mvc-application/_static/image9.png', 'Course_entity'], dtype=object) ]
docs.microsoft.com
AZ-500: Microsoft Azure Security Technologies Languages: English, Japanese, Chinese (Simplified), Korean Retirement date: none This exam measures your ability to accomplish the following technical tasks: manage identity and access; implement platform protection; manage security operations; and secure data and applications. Price based on the country in which the exam is proctored.
https://docs.microsoft.com/en-us/learn/certifications/exams/az-500
2020-07-02T14:04:06
CC-MAIN-2020-29
1593655878753.12
[]
docs.microsoft.com
FTX historical data for all it's instruments is available since 2019-08-01. Historical CSV datasets for the first day of each month are available to download without API key. See downloadable CSV files documentation. Derivative ticker datasets are available since 2020-05-13 - date since we've started collecting that data via FTX REST API (instrument channel). Historical data format is the same as provided by real-time FTX",from_date="2020-01-01",to_date="2020-01-02",filters=[Channel(name="orderbook", symbols=["BTC-PERP"])])# messages as provided by FTX real-time streamasync for local_timestamp, message in messages:print(message)asyncio.run(replay()) See Python client docs. // npm install tardis-devconst { replay } = require('tardis-dev');async function run() {try {const messages = replay({exchange: 'ftx',from: '2020-01-01',to: '2020-01-02',filters: [{ channel: 'orderbook', symbols: ['BTC-PERP'] }],apiKey: 'YOUR_API_KEY'});// messages as provided by FTX real-time streamfor await (const { localTimestamp, message } of messages) {console.log(localTimestamp, message);}} catch (e) {console.error(e);}}run(); See Node.js client docs. curl -g '[{"channel":"orderbook","symbols":["BTC-PERP"]}]&offset=0' See HTTP API docs. curl -g 'localhost:8000/replay?options={"exchange":"ftx","filters":[{"channel":"orderbook","symbols":["BTC-PER. instrument - generated channel, available since 2020-05-13 Since FTX does not offer currently real-time WebSocket instrument info channel with next funding rate, open interest or mark price data, we simulate it by fetching that info from FTX REST API ( and) every 3-5 seconds for each derivative instrument. Such messages are marked with "channel":"instrument" and "generated":true fields and data field has the same format as REST API responses. Market data collection infrastructure for FTX since 2020-05-14 is located in GCP asia-northeast1 (Tokyo, Japan), before that it was located in GCP europe-west2 region (London, UK). Real-time market data is captured via multiple WebSocket connections. FTX servers are located in AWS ap-northeast-1 region (Tokyo, Japan).
https://docs.tardis.dev/historical-data-details/ftx
2020-07-02T12:29:25
CC-MAIN-2020-29
1593655878753.12
[]
docs.tardis.dev
Removing Paper Chute Cog Wheels.
https://docs.toonboom.com/help/harmony-17/scan/scan-module/scanner-configuration/remove-paper-chute-cog-wheel.html
2020-07-02T12:41:28
CC-MAIN-2020-29
1593655878753.12
[array(['../../Resources/Images/HAR/CCenter_Server/Scanners/HAR10_CogWheel.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Language Integration Issues This document discusses some of the benefits and potential pitfalls of installing multiple languages into the XMB1 system. Contents English and its Close Cousins One of XMB's greatest features is its support for several languages that use the same electronic character set. Translations at various times have included Albanian, Croatian, Dutch, Estonian, French, German, Italian, Portuguese, Spanish, and Swedish. These languages are all compatible with one another, and with several others, due to the fact that XMB is written in English using the same character encoding called "ISO 8859-1". All Other Languages XMB has at various times included translations in Chinese, Finnish, Hungarian, Polish, Russian, and Ukranian. These languages can be installed and expected to work independently of each other very smoothly. However, these languages are not natively compatible with English (or with each other for that matter) because they do not use the ISO 8859-1 character encoding. Database Character Encoding The XMB1 database is required to use a single-byte character encoding such as ISO 8859-1. One benefit of this design is that it is binary safe, meaning there is a one-to-one relationship between each byte being stored and retrieved. This makes the storage engine itself relatively blind to the type of data or language being used on the message board. One potential pitfall of this design is that the original authors of the XMB1 software forgot to specify a character encoding in the database schema. As a result, all language and related localization issues have to be treated as though the database is using an unknown default character encoding, which might not be ISO 8859-1. So long as the default encoding is one that uses single-byte characters, then this is a trivial concern. Database Connection Character Encoding As of versions 5 of ~MySQL and PHP, it is possible to change the default connection encoding to something other than what the database itself uses. This can be a major problem, as was discovered by Paulo from postcrossing.com. His database was using the ISO 8859-1 character set, but the default connection encoding on the server was UTF-8. As as a result, depending on which language the XMB end user was transmitting, some of the binary input data were being treated as multi-byte characters and narrowed to single-byte characters before storage. As you can see, it is imperative that the XMB database connection encoding exactly match the database (table) character encoding. Integrating All Languages Some webmasters have a great desire to allow their users to interact in any language at any time. This configuration is possible, however it is not officially supported by XMB1. The following pitfalls need to be considered before attempting this configuration: Language File Character Encoding - All of the XMB language files would have to be completely re-encoded into a single common encoding such as UTF-8. Without doing this, the XMB output would be meaningless binary garbage for any non-ISO 8859-1 language. Existing Data Encoding - The biggest pitfall is for boards that have been in operation with members posting messages containing non-English characters. This is not a limitation of any particular encoding, but a major problem when changing between them. UTF-8 is only backward-compatible with the US ASCII character set, which in turn only includes half of the characters in ISO-8859-1. Any non-English data that were stored by users will be represented by two or more bytes in UTF-8, and are therefore incompatible unless they are completely re-encoded. This is a difficult task considering users may change their language setting at any time. Existing Encrypted Data - Another major pitfall in language integration has to do with passwords. Any change to the website's character encoding will cause password inputs to change the same way as all other character data. Only US ASCII passwords will remain valid. Passwords are stored in a non-reversible cryptographic hash, making it impossible to re-encode the saved data. Users will have to request a password reset if the encoding of their password has changed. Database Character Encoding - Again, the database must use a single-byte encoding such as ISO 8859-1. Database Connection Character Encoding - Again, the connection encoding must match the database encoding.
https://docs.xmbforum2.com/index.php?title=Language_Integration_Issues&oldid=164
2020-07-02T12:41:19
CC-MAIN-2020-29
1593655878753.12
[]
docs.xmbforum2.com
Sometimes customers are not familiar with how to add a digital loyalty card to their wallet. We've created these simple to follow instructions for your customers to follow. Simply refer them to loyalty.is/123 Or print these pamphlets out and give to them when they first sign up for your loyalty programme. Remember to share the correct QR code (or URL). The URL and the card shown in the pamphlet is for illustrative purposes only.
http://docs.loopyloyalty.com/en/articles/2355226-help-customers-save-your-card-to-their-wallet
2020-07-02T12:13:36
CC-MAIN-2020-29
1593655878753.12
[]
docs.loopyloyalty.com
Use Metamask For Binance Smart Chain { width=50%} Warning Note: Make sure it’s offered by metamask.io - Click on “Add to Brave” That’s it! You have successfully installed MetaMask extension in Brave! Tip The workflow is the same for all browsers - Click on the “Create a wallet” button Create Password of at least 8 characters Click on “Create” and then write down your backup phrase. Select each phrase in order to make sure it is correct then click “Confirm”. Congratulations! you have create your MetaMask account! Connect Your MetakMast With Binance Smart Chain testnet Go to setting page Add a new network - Network name: Binance Smart Chain Testnet - RPC URL: - ChainID: 96 - Symbol: BNB - Block Explorer:2E!
https://docs.binance.org/smart-chain/wallet/metamask.html
2020-07-02T11:37:06
CC-MAIN-2020-29
1593655878753.12
[]
docs.binance.org
As we know - every API requires composable routing. Lets assume that we have a separate User feature where its API endpoints respond to GET and POST methods on /user path. Deprecation warning With an introduction of Marble.js 3.0, old EffectFactory HTTP route builder is deprecated. Please use r.pipe builder instead.$,]); Route effects can be grouped together using combineRoutes function, which combines routing for a prefixed path passed as a first argument. Exported group of Effects can be combined with other Effects like in the example below. import { combineRoutes, r } from '@marblejs/core';import { user$ } from './user.effects';const root$ = r.pipe(r.matchPath('/'),r.matchType('GET'),r.useEffect(req$ => req$.pipe(// ...)));const foo$ = r.pipe(r.matchPath('/foo'),r.matchType('GET'),r.useEffect(req$ => req$.pipe(// ...)));export const api$ = combineRoutes('/api/v1', [root$,foo$,user$, // 👈]); As you can see, the previously defined routes can be combined together, so as a result the routing is built in, e.g. to authenticate requests only for a selected group of endpoints. Instead of composing middlewares using use operator for each route separately, you can compose them via the extended second parameter in combineRoutes() function. import { combineRoutes } from '@marblejs/core';const user$ = combineRoutes('/user', {middlewares: [authorize$],effects: [getUsers$, postUser$],}); Marble.js doesn't come with a built-in mechanism for parsing POST, PUT and PATCH request bodies. In order to get the parsed request body you can use dedicated @marblejs/middleware-body package. A new req.body object containing the parsed data will usage. By design, the req.body, req.params, req.queryare of type unknown. In order to work with decoded values you should validate them before (e.g. using dedicated validator middleware) or explicitly. import { r } from '@marblejs/core properties: {foo: 'bob',bar: '12',} For parsing and decoding URL parameters, Marble.js makes use of path-to-regexp library. All properties and values in req.params object are untrusted and should be validated before usage. You should validate incoming URL params using dedicated requestValidator$ middleware. Path parameters can be suffixed with an asterisk ( *) to denote a zero or more parameter matches. The code snippet below shows an example use case of a "zero-or-more" parameter. For example, it can be useful for defining routing for static assets. import { r } from '@marblejs/core;import { map, mergeMap } from 'rxjs/operators' parsing and decoding query parameters, Marble.js makes use of qs library. usage. You should validate incoming req.query parameters using dedicated requestValidator$ middleware.
https://docs.marblejs.com/http/routing
2020-07-02T12:07:38
CC-MAIN-2020-29
1593655878753.12
[]
docs.marblejs.com
Location and Sensors .NET applications: You can create geofences, which are virtual perimeters for a real-world geographic area. When a geofence is active, you can monitor the user location and receive alerts when the user enters or leaves the geofence area.. You can use map features, such as geocoding, place searching, and routing. The maps service requires a map provider, from which you can retrieve the required map details. You can read and manage data from various sensors on the device. You can access information from various environmental sensors, such as the light and magnetic sensors, and from user-related sensors, such as the heart rate monitor. Related Information - Dependencies - Tizen 4.0 and Higher
https://docs.tizen.org/application/dotnet/guides/location-sensors/overview/
2020-07-02T11:31:04
CC-MAIN-2020-29
1593655878753.12
[]
docs.tizen.org
Edition 1.0 No devices found to install FedoraError Message httpdservice/Sendmail Hangs During Startup /etc/crypttab /etc/fstab /sbin/initProgram")); } } boot 11, Installing Without Media. Table of Contents fedora/linux/releases/14/. This directory contains a folder for each architecture supported by that release of Fedora. CD and DVD media files appear inside that folder, in a folder called iso/. For example, you can find the file for the DVD distribution of Fedora 14 for x86_64 at fedora/linux/releases/14/Fedora/x86_64/iso/Fedora-14-x86_64-DVD.iso. i386Works for Most Windows Compatible Computers i386. i386architecture. The 230 and 330 Series Atom processors are based on the x86_64architecture. Refer to for more details. Fedora-14. Applicationsfolder.-14-i686-Live.iso, to a folder named Downloadsin your home directory. You have a USB flash drive plugged into your computer, named /dev/sdc1 su - mkdir /mnt/livecd mount -o loop /home/ Username/Downloads/Fedora-14-i686-Live.iso /mnt/livecd LiveOSdirectory of the live CD image: cd /mnt/livecd/LiveOS ./livecd-iso-to-disk /home/ Username/Downloads/Fedora-14-i686-Live.iso /dev/sdc1 boot. linux askmethodoption to boot the installer from DVD and continue installation from a different installation source — refer to Section 4.5, “Selecting an Installation Method”. efidisk.imgfile in the images/directory on the Fedora 14 F14-Server Table of Contents No devices found to install FedoraError Message httpdservice/Sendmail Hangs During Startup /etc/fstab, /etc/crypttabor other configuration files which refer to devices by their device node names will not work in Fedora 14. Before migrating these files, you must therefore edit them to replace device node paths with device UUIDs instead. You can find the UUIDs of devices with the blkidcommand. /and swap) must be dedicated to Fedora. linux askmethodor linux repo=cdrom:boot option, or by selecting on the menu (refer to Section 8.3, “Installation Method”). device:/ device linux askmethodor linux repo=cdrom:boot option, or by selecting on the menu (refer to Section 8.3, “Installation Method”). device:/ device linux askmethodor linux repo=hd:boot option), or by selecting on the menu (refer to Section 8.3, “Installation Method”). Refer to Section 8.3.2, “Installing from a Hard Drive”, for hard drive installation instructions. device:/ path linux askmethodor linux repo=nfs:boot option, or the option on the menu described in Section 8.3, “Installation Method”). Refer to Section 8.3.4, “Installing via NFS” for network installation instructions. Note that NFS installations may also be performed in GUI mode. server :options:/ path linux askmethod, linux repo=ftp://, or user: host/ path linux repo= option, or the option on the menu described in Section 8.3, “Installation Method”). Refer to Section 8.3.5, “Installing via FTP or HTTP”, for FTP and HTTP installation instructions. host/ path askmethod, the next stage loads automatically from the DVD. Proceed to Section 8.6, “Language Selection”. boot:prompt: linux mediacheck /var/www/inst/f14/f14, for an HTTP install. dd if=/dev/ dvdof= /location/of/disk/space/F14.iso dvdrefers to your DVD drive device. README-enfile on CD-ROM#1. install.imgfile, and optionally the product.imgfile available on the network server via NFS. mv /location/of/disk/space/F14.iso /publicly/available/directory/ $”). directory is exported via NFS via an entry indirectory is exported via NFS via an entry in /publicly/available/directory /etc/exportson the network server. /publicly/available/directory client.ip.address(ro) /publicly/available/directory* (ro) /sbin/service nfs start). If NFS is already running, reload the configuration file (on a Fedora system use /sbin/service nfs reload). boot:prompt: linux mediacheck install.imgfile extracted from the ISO image. product.imgfile extracted from the ISO image. dd if=/dev/ dvdof= /location/of/disk/space/F”). boot:prompt: linux mediacheck /booton sda1, /on sda2, and /homeon sdb1. This will allow you to identify specific partitions during the partitioning process.. boot:prompt should appear. The screen contains information on a variety of boot options. Each boot option also has one or more help screens associated with it. To access a help screen, press the appropriate function key as listed in the line at the bottom of the screen. boot:prompt appears, the installation program automatically begins if you take no action within the first minute. To disable this feature, press one of the help screen function keys. linux text linux repo=cdrom: device linux repo=ftp:// username: URL linux repo=http:// URL linux repo=hd: device linux repo=nfs :options: server:/ path linux repo=nfsiso :options: server:/ path cdromrefers to a CD or DVD drive, ftprefers to a location accessible by FTP, httprefers to a location accessible by HTTP, hdrefers to an ISO image file accessible on a hard drive partition, nfsrefers to an expanded tree of installation files accessible by NFS, and nfsisorefers to an ISO image file accessible by NFS. linux mediacheck xdriver=vesaoption – refer to Chapter 10, Boot Options Xkey command combination as a way of clicking on buttons or making other screen selections, where Xis replaced with any underlined letter appearing within that screen. boot:prompt: linux text /root/anaconda-screenshots. autostep --autoscreenshotoption to generate a screenshot of each step of the installation automatically. Refer to Section 14.3, “Creating the Kickstart File” for details of configuring a Kickstart file. askmethodboot option, use the arrow keys on your keyboard to select an installation method (refer to Figure 8.3, “Installation Method”). With your selected method highlighted, press the Tab key to move to the button and press the Enter key to confirm your choice. repo=hdboot option, you already specified a partition. /dev/sd. Each individual drive has its own letter, for example /dev/sda. Each partition on a drive is numbered, for example /dev/sda1. /./. askmethodor repo=options, you can install Fedora from a network server using FTP, HTTP, or NFS protocols. You can also instruct the installation program to consult additional software repositories later in the process. repo=nfsboot option, you already specified a server and path. eastcoastin the domain example.com, enter eastcoast.example.comin the NFS Server field. .. /export/directory/ mountand nfsfor a comprehensive list of options. the protocol. repo=ftpor repo=httpboot option, you already specified a server and path. /imagesdirectory for your architecture. For example: /pub/fedora/linux/releases/14/Fedora/i386/os/ {ftp|http}://<user>:<password>@<hostname>[:<port>]/<directory>/ latin. /etc/fstabfile. address/ netmask, along with the gateway address and nameserver address for your network. hostname. domainnameor as a short host name in the format hostname. Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this machine, specify the short host name only. system-config-networkcommand in a shell prompt to launch the Network Administration Tool. If you are not root, it prompts you for the root password to continue. system-config-datecommand in a shell prompt to launch the Time and Date Properties Tool. If you are not root, it prompts you for the root password to continue. timeconfig. su. clearpart --initlabel(refer to Chapter 14, Kickstart Installations) /homepartition and perform a fresh installation. For more information on partitions and how to set them up, refer to Section 8.15, “Disk Partitioning Setup”. rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE} %{ARCH}\n' > ~/old-pkglist.txt su -c 'tar czf /tmp/etc-`date +%F`.tar.gz /etc' su -c 'mv /tmp/etc-*.tar.gz /home' /homedirectory as well as content from services such as an Apache, FTP, or SQL server, or a source code management system. Although upgrades are not destructive, if you perform one improperly there is a small possibility of data loss. /homedirectory. If your /homedirectory is not a separate partition, you should not follow these examples verbatim! Store your backups on another device such as CD or DVD discs or an external hard disk. /boot/partition must be created on a partition outside of the RAID array, such as on a separate hard drive. An internal hard drive is necessary to use for partition creation with problematic RAID cards. /boot/partition is also necessary for software RAID setups. /boot/partition. /bootpartition. Refer to Appendix C, Disk Encryption for information on encryption. /dev/sdaor LogVol00), its size (in MB), and its model as detected by the installation program. physical volume (LVM), or part of a software RAID /; enter . software RAID md0to md15. lvmcommand. To return to the text-mode installation, press Alt+F1. physical volume (LVM) swappartition /bootpartition /partition on systems that store user data. Refer to Section 8.17.5.1.1, “Advice on Partitions” for more information. /partition, upgrades become easier. Refer to the description of the Edit option in Section 8.17, “ Creating a Custom Layout or Modifying the Default Layout ” for more information. /foomust be at least 500 MB, and you do not make a separate /foopartition, then the /(root) partition must be at least 500 MB. /homedirectory within a volume group. With a separate /homepartition, you may upgrade or reinstall Fedora without erasing user data files. /homepartition. /bootpartition. Unless you plan to install a great many kernels, the default partition size of 250 MB for /bootshould suffice. /boot, such as Btrfs, XFS, or VFAT. /vardirectory holds content for a number of applications, including the Apache web server. It also is used to store downloaded update packages on a temporary basis. Ensure that the partition containing the /vardirectory has enough space to download pending updates and hold your other content. /var/cache/yum/by default. If you partition the system manually, and create a separate /var/partition, be sure to create the partition large enough (3.0 GB or more) to download package updates. /usrdirectory holds the majority of software content on a Fedora system. For an installation of the default set of software, allocate at least 4 GB of space. If you are a software developer or plan to use your Fedora system to learn software development skills, you may want to at least double this allocation. /usron a separate partition /usris on a separate partition from /, the boot process becomes much more complex, and in some situations (like installations on iSCSI drives), might not work at all. /var/lib/mysql, make a separate partition for that directory in case you need to reinstall later. /boot/grub/grub.conffile. If you cannot boot, you may be able to use the "rescue" mode on the first Fedora installation disc to reset the GRUB password. grub-md5-cryptutility. For information on using this utility, use the command man grub-md5-cryptin a terminal window to read the manual pages. /boot/partition was created. /bootLinux partition on the first 1024 cylinders of your hard drive to boot Linux. The other Linux partitions can be after cylinder 1024. parted, 1024 cylinders equals 528MB. For more information, refer to: linux rescueat the installation boot prompt. Refer to Chapter 18, Basic System Recovery for a more complete description of rescue mode. repodata. For instance, the "Everything" repository for Fedora is typically located in a directory tree releases/14/Everything/, where arch/os archis a system architecture name. Languagescategory. /root/install.logonce you reboot your system. login:prompt or a GUI login screen (if you installed the X Window System and chose to start X automatically) appears. [2] A root password is the administrative password for your Fedora system. You should only log in as root when needed for system maintenance. The root account does not operate within the restrictions placed on normal user accounts, so changes made as root can have implications for your entire system. [3] The fsck application is used to check the file system for metadata consistency and optionally repair one or more Linux file systems. No devices found to install FedoraError Message httpdservice/Sendmail Hangs During Startup /tmpdirectory. These files include: /tmp/anaconda.log /tmp/program.log /tmp/storage.log /tmp/yum.log /tmp/syslog /tmp/anacdump.txt. mediacheck xdriver=vesaboot option at the boot prompt. Alternatively, you can force the installer to use a specific screen resolution with the resolution=boot option. This option may be most helpful for laptop users. Another solution to try is the driver=option to specify the driver that should be loaded for your video card. If this works, you should report it as a bug, because the installer failed to detect your video card automatically. Refer to Chapter 10,. Local diskselected, displaying user's home directory Bugzillaselected, displaying fields for username, password, and description Remote serverselected, displaying fields for username, password, host, and destination file . /boot/grub/grub.conffile. grub.conffile, comment out the line which begins with splashimageby inserting the #character at the beginning of the line. bto boot the system. grub.conffile is reread and any changes you have made take effect. grub.conffile. start: #: id:3:initdefault:from a 3to a 5. 3to 5. id:5:initdefault: df . linux. cat . system-config-printercommand at a shell prompt to launch the Printer Configuration Tool. If you are not root, it prompts you for the root password to continue. Table of Contents linuxat the option boot:prompt.option to display additional menus that enable you to specify the installation method and network settings. You may also configure the installation method and network settings at the boot:prompt itself. boot:prompt, use the repooption. Refer to Table 10.1, “Installation methods” for the supported installation methods. boot:prompt. You may specify the ipaddress, netmask, gateway, and dnsserver settings for the installation system at the prompt. If you specify the network configuration at the boot:prompt, these settings are used for the installation process, and the Configure TCP/IP screen does not appear. 192.168.1.10: linux ip= 192.168.1.10netmask= 255.255.255.0gateway= 192.168.1.1dns= 192.168.1.2,192.168.1.3. mediacheckoption. /mnt/sysimage/. upgrade, has been superceded by a stage in the installation process where the installation program prompts you to upgrade or reinstall earlier versions of Fedora that it detects on your system. /etc/redhat-releasefile have changed. The boot option upgradeanyrelaxes the test that the installation program performs and allows you to upgrade a Fedora installation that the installation program has not correctly identified. vmlinuzand initrd.imgfiles from a Fedora DVD (or DVD image) to the /boot/directory, renaming them to vmlinuz-installand initrd.img-installYou must have rootprivileges to write files into the /boot/directory. /boot/grub/grub.conf. To configure GRUB to boot from the new files, add a boot stanza to /boot/grub/grub.confthat refers to them. title Installation root (hd0,0) kernel /vmlinuz-install initrd /initrd.img-install kernelline of the boot stanza. These options set preliminary options in Anaconda which the user normally sets interactively. For a list of available installer boot options, refer to Chapter 10, Boot Options. ip= repo= lang= keymap= ksdevice=(if installation requires an interface other than eth0) vncand vncpassword=for a remote installation defaultoption in /boot/grub/grub.confto point to the new first stanza you added: default 0 askmethodboot option with the Fedora DVD. Alternatively, if the system to be installed contains a network interface card (NIC) with Pre-Execution Environment (PXE) support, it can be configured to boot from files on another networked system rather than local media such as a DVD. 12.. This file can be created with the Kickstart Configurator. Refer to Chapter 15, Kickstart Configurator for details. tftpServer tftpand xinetdservices to immediately turn on and also configure them to start at boot time in runlevels 3, 4, and 5. allow booting; allow bootp; class "pxeclients" { match if substring(option vendor-class-identifier, 0, 9) = "PXEClient"; next-server <server-ip>; filename "linux-install/pxelinux.0"; } <server-ip>should be replaced with the IP address of the tftpserver. ncconnectboot parameter: boot: linux vncconnect= HOSTncconnect=to the boot arguments for the target system. HOST is the IP address or DNS host name of the VNC viewer system. Enter the following at the prompt: HOST boot: linux vncconnect= HOSTncconnectmethod may work better for you. Rather than adding the vncboot parameter to the kickstart file, add the vncconnect=parameter to the list of boot arguments for the target system. For HOST, put the IP address or DNS host name of the VNC viewer system. See the next section for more details on using the vncconnect mode. HOST vncboot parameter, you may also want to pass the vncpasswordparameter in these scenarios. While the password is sent in plain text over the network, it does provide an extra step before a viewer can connect to a system. Once the viewer connects to the target system over VNC, no other connections are permitted. These limitations are usually sufficient for installation purposes. vncpasswordoption. It should not be a password you use on any systems, especially a real root password. vncconnectparameter. In this mode of operation, you start the viewer on your system first telling it to listen for an incoming connection. Pass vncconnect=at the boot prompt and the installer will attempt to connect to the specified HOST (either a hostname or IP address). HOST /tmpdirectory to assist with debugging installation failures. /root/anaconda-ks.cfg. You should be able to edit it with any text editor or word processor that can save files as ASCII text. %packagessection — Refer to Section 14.5, “Package Selection” for details. %preand %postsections — These two sections can be in any order and are not required. Refer to Section 14.6, “Pre-installation Script” and Section 14.7, “Post-installation Script” for details. upgradekeyword. -. ignoredisk(optional) ignoredisk, attempting to deploy on a SAN-cluster the kickstart would fail, as the installer detects passive paths to the SAN that return no partition table. ignorediskoption is also useful if you have multiple paths to your disks. ignoredisk --drives= drive1,drive2,... driveNis one of sda, sdb,..., hda,... etc. --only-use— specifies a list of disks for the installer to use. All other disks are ignored. For example, to use disk sdaduring installation and ignore all other disks: ignoredisk --only-use=sda autostep(optional) --autoscreenshot— Take a screenshot at every step during installation and copy the images over to /root/anaconda-screenshotsafter installation is complete. This is most useful for documentation. author authconfig(required) authconfigcommand, which can be run after the install. By default, passwords are normally encrypted and are not shadowed. --enablemd5— Use md5 encryption for user passwords. - (UIDs, home directories, shells, etc.) from an LDAP directory. To use this option, you must install the nss_ldappackage._ldappackage installed. You must also specify a server and a base DN with --ldapserver=and --ldapbasedn=. -. --enablehesiod—package. Hesiod is an extension of DNS that uses DNS records to store information about users, groups, and various other items. --hesiodlhs— The Hesiod LHS ("left-hand side") option, set in /etc/hesiod.conf. This option is used by the Hesiod library to determine the name to search DNS for when looking up information, similar to LDAP's use of a base DN. --hesiodrhs— The Hesiod RHS ("right-hand side") option, set in /etc/hesiod.conf. This option is used by the Hesiod library to determine the name to search DNS for when looking up information, similar to LDAP's use of a base DN. jim:*:501:501:Jungle Jim:/home/jim:/bin/bash). For groups, the situation is identical, except jim.group<LHS><RHS> would be used. --enablesmbauth—command to make their accounts known to the workstation. To use this option, you must have the pam_smbpackage installed. --smbservers=— The name of the server(s). bootloader(required) --append=— Specifies kernel parameters. To specify multiple parameters, separate them with spaces.). --password=— If using GRUB, sets the GRUB boot loader password to the one specified with this option..) one. firstboot(optional) -) halt(optional) haltoption is roughly equivalent to the shutdown -hcommand. poweroff, reboot, and shutdownkickstart options. install(optional) cdrom, harddrive, nfs, or url(for FTP or HTTP installations). The installcommand and the installation method command must be on separate lines. cdrom— Install from the first optical drive on the system. harddrive— Install from a Red Hat installation tree on a local drive, which must be either vfat or ext2. --biospart= --partition= --dir= directory of the installation tree.directory of the installation tree. variant harddrive --partition=hdb2 --dir=/tmp/install-tree> interactive(optional) autostepcommand. iscsi(optional) iscsiparameter, you must also assign a name to the iSCSI node, using the iscsinameparameter. The iscsinameparameter must appear before the iscsiparameter) key(optional) --skip— Skip entering a key. Usually if the key command is not given, anaconda will pause at this step to prompt for a key. This option allows automated installation to continue if you do not have a key or do not want to provide one. keyboard(required) /usr/lib/python2.2/site-packages/rhpl/keyboard_models.pyalso contains this list and is part of the rhplpackage. lang(required) lang en_US /usr/share/system-config-language/locale-listprovides a list of the valid language codes in the first column of each line and is part of the system-config-languagepackage. langsupport(deprecated) %packagessection of your kickstart file. For instance, adding support for French means you should add the following to %packages: @french-support logvol(optional). --bytes-per-inode=— Specifies the size of inodes on the filesystem to be made on the logical volume. Not all filesystems support this option, so it is silently ignored for those cases. -. --encrypted— Specifies that this logical volume should be encrypted. --passphrase=— Specifies the passphrase to use when encrypting this logical volume. Without the above --encryptedoption, this option does nothing. If no passphrase is specified, the default system-wide one is used, or the installer will stop and prompt if there is no default. -. part pv.01 --size 3000 volgroup myvg pv.01 logvol / --vgname=myvg --size=2000 --name=rootvol logging(optional) -. mediacheck(optional) monitor(optional) - try to probe the monitor. --vsync=— Specifies the vertical sync frequency of the monitor. mouse(deprecated) network(optional) networkoption configures networking information for kickstart installations via a network as well as for the installed system. --bootproto=— One of dhcp, bootp, or static. dhcp. bootpand dhcpare treated the same. network --bootproto=dhcp network --bootproto=bootp=— Used to select a specific Ethernet device for installation. Note that using --device=is not effective unless the kickstart file is a local file (such as ks=hd), since the installation program configures the network to find the kickstart file. For example: network --bootproto=dhcp --device=eth0 --ip=— IP address for the machine to be installed. - at boot time. --dhcpclass=— The DHCP class. --mtu=— The MTU of the device. --noipv4— Disable IPv4 on this device. --noipv6— Disable IPv6 on this device. multipath(optional) partor partition(required for installs, ignored for upgrades) --noformatand --onpartare used. partin action, refer to Section 14.4.1, “Advanced Partitioning Example”. <mntpoint>— The <mntpoint>is where the partition is mounted and must be of one of the following forms: / <path> /, /usr, swap --recommendedoption: swap --recommended raid. <id> raid). pv. <id> logvol). --size=— The minimum partition size in megabytes. Specify an integer value here such as 500. Do not append the number with MB. --grow— Tells the partition to grow to fill available space (if any), or up to the maximum size setting. --grow=without setting --maxsize=on a swap partition, Anaconda will limit the maximum size of the swap partition. For systems that have less than 2GB of physical memory, the imposed limit is twice the amount of physical memory. For systems with more than 2GB, the imposed limit is the size of physical memory plus 2GB. - the already existing device. For example: partition /home --onpart=hda xfs, ext2, ext3, ext4, swap, vfat, and hfs. --bytes-per-inode=— Specifies the size of inodes on the filesystem to be made on the partition. Not all filesystems support this option, so it is silently ignored for those cases. --recommended— Determine the size of the partition automatically. --onbiosdisk— Forces the partition to be created on a particular disk as discovered by the BIOS. --fsoptions— Specifies a free form string of options to be used when mounting the filesystem. This string will be copied into the /etc/fstabfile of the installed system and should be enclosed in quotes. --encrypted— Specifies that this partition should be encrypted. --passphrase=— Specifies the passphrase to use when encrypting this partition. Without the above --encryptedoption, this option does nothing. If no passphrase is specified, the default system-wide one is used, or the installer will stop and prompt if there is no default. -. poweroff(optional) haltoption is used as default. poweroffoption is roughly15, and each may only be used once. --bytes-per-inode=— Specifies the size of inodes on the filesystem to be made on the RAID device. Not all filesystems support this option, so it is silently ignored for those cases. --spares=— Specifies the number of spare drives allocated for the RAID array. Spare drives are used to rebuild the array in case of drive failure. --fstype=— Sets the file system type for the RAID array. Valid values are xfs, ext2, ext3, ext4, swap, vfat, and hfs. -— Specifies that this RAID device should be encrypted. --passphrase=— Specifies the passphrase to use when encrypting this RAID device. Without the above --encryptedoption, this option does nothing. If no passphrase is specified, the default system-wide one is used, or the installer will stop and prompt if there is no default. - 14.4.1, “Advanced Partitioning Example”. reboot(optional) rebootoption is roughly equivalent to the shutdown -rcommand. rebootto automate installation fully when installing in cmdline mode on System z. halt, poweroff, and shutdownkickstart options. haltoption is the default completion method if no other methods are explicitly specified in the kickstart file. rebootoption may result in an endless installation loop, depending on the installation media and method. repo(optional)) rootpw [--iscrypted] --iscrypted— If this is present, the password argument is assumed to already be encrypted. roughly equivalent to the shutdowncommand. halt, poweroff, and rebootkickstart options. skipx(optional) sshpw(optional) sshpw --username= <name> <password>[--iscrypted|--plaintext] [--lock] --username— Provides the name of the user. This option is required. --iscrypted— If this is present, the password argument is assumed to already be encrypted. --plaintexthas the opposite effect — the password argument is assumed to not be encrypted. --lock— If this is present, the new user account is locked by default. That is, the user will not be able to login from the console. text(optional) timezone(required) timezone [--utc] <timezone> --utc— If present, the system assumes the hardware clock is set to UTC (Greenwich Mean) time. upgrade(optional) user(optional). 14.4.1, “Advanced Partitioning Example”. xconfig(optional) -. --depth=— Specify the default color depth for the X Window System on the installed system. Valid values are 8, 16, 24, and 32. Be sure to specify a color depth that is compatible with the video card and monitor. zerombr(optional) zerombris specified any invalid partition tables found on disks are initialized. This destroys all of the contents of disks with invalid partition tables. zerombris specified, any DASD visible to the installer which is not already low-level formatted gets automatically low-level formatted with dasdfmt. The command also prevents user choice during interactive installations. If zerombris not specified and there is at least one unformatted DASD visible to the installer, a non-interactive kickstart installation will exit unsuccessfully. If zerombris not specified and there is at least one unformatted DASD visible to the installer, an interactive installation exits if the user does not agree to format all visible and unformatted DASDs. To circumvent this, only activate those DASDs that you will use during installation. You can always add more DASDs after installation is complete. zerombr yes. This form is now deprecated; you should now simply specify zerombrin your kickstart file instead. zfcp(optional) zfcp [--devnum= <devnum>] [--wwpn= <wwpn>] [--fcplun= <fcplun>] %include(optional) clearpart, raid, part, volgroup, and logvolkick @Everythingis not supported @Everythingor simply *in the %packagessection. Red Hat does not support this type of installation. @Conflicts (group, where variant) variantis Serveror Client. If you specify @Everythingin a kickstart file, be sure to exclude @Conflicts (or the installation will fail: variant) @Everything (Server) -@Conflicts @Everythingin a kickstart file, even if you exclude @Conflicts (. variant) %packagescommand to begin a kickstart file section that lists the packages you would like to install (this is for installations only, as package selection during upgrades is not supported).. variant/repodata/comps-*.xml Coreand Basegroups are always selected by default, so it is not necessary to specify them in the %packagessection. %packagesselection: %packages @ X Window System @ GNOME Desktop Environment @ Graphical Internet @ Sound and Video dhcp ). -autofs %packagesoption: --nobase --resolvedeps --ignoredeps --ignoremissing %packages --ignoremissing. --interpreter /usr/bin/python /usr/bin/pythonwith the scripting language of your choice. %presection: %pre #!/bin/sh hds="" mymedia="" for file in /proc/ide/h* do mymedia=`cat $file/media` if [ $mymedia == "disk" ] ; then hds="$hds `basename $file`" fi done set $hds numhd=`echo $#` drive1=`echo $hds | cut -d' ' -f1` drive2=`echo $hds | cut -d' ' -f %include /tmp/part-include %postcommand. This section is useful for functions such as installing additional software and configuring an additional nameserver. . - %post --log=/root/ks-post.log wget -O- | /bin/bash /usr/sbin/rhnreg_ks --activationkey= <activationkey> Section 3.3, “Making Minimal Boot Media”. boot.isoimage file that you can download from the Software & Download Center of the Red Hat customer portal. useroption in the Kickstart file before installing additional systems from it (refer to Section 14.4, “Kickstart Options” for details) or log into the installed system with a virtual console as root and add users with the addusercommand. kscommand line argument is passed to the kernel. ddoption as well. For example, to boot off a boot diskette and use a driver disk, enter the following command at the boot:prompt: linux ks=hd: partition:/ path/ks.cfg dd boot:prompt (where ks.cfgis the name of the kickstart file): linux ks=cdrom:/ks.cfg askmethod autostep debug dd dhcpclass= <class> dns= <dns> driverdisk expert gateway= <gw> graphical isa ip= <ip> keymap= <keymap> ks=nfs: <server>:/ <path> <server>, as file <path>. The installation program uses> <server>, as file <path>. The installation program uses DHCP to configure the Ethernet card. For example, if your HTTP server is server.example.com and the kickstart file is in the HTTP directory /mydir/ks.cfg, the correct boot command would be ks=. ks nostorage nousb nousbstorage vnc vncconnect= <host>[: <port>] <host>, and optionally use port <port>. vncpassword= /usr/sbin/system-config-kickstart. If Kickstart Configurator does not appear on the menu or you cannot start it from the command line, run su - yum install system-config-kickstartto make sure that the package is installed, or search for the package in your graphical package manager. mediacheckboot option as discussed in Section 10, telnet, root. yumutility. Type this command to begin a full update of your system with yum: su -c 'yum update' rootpassword when prompted. yum. yumutility. The update process downloads information and packages from a network of servers. package`' rootaccount: su - yum groupinstall "X Window System" "GNOME Desktop Environment" yum groupinstall "X Window System" KDE yum groupinstall "X Window System" XFCE /etc/inittabfile: vi /etc/inittab insertmode. initdefault. Change the numeral 3to 5. :wqand press the Enter key to save the file and exit the vi text editor. CD-14 mount -r -o loop /mnt/temp/Downloads/Fedora-14/fedora.repoand /etc/yum.repos.d/fedora-updates.repofiles to use the new repository. In each case: vi /etc/yum.repos.d/fedora.repo insertmode. #character at the start of any line in the file that starts with baseurlor mirrorlist. The #character comments out the line so that the package management software ignores it. [fedora]section of the fedora.repofile or the [updates]section of the fedora-updates.repofile. Note that this section includes a line that now starts # baseurlthat you previously commented out. # baseurl: baseurl=file:// /path/to/repo baseurl= insertmode. :wqand press the Enter key to save the file and exit the vi text editor. /etc/yum.repos.d/fedora.repoand /etc/yum.repos.d/fedora-updates.repofiles again to undo the changes that you made. /partition changes, the boot loader might not be able to find it to mount the partition. To fix this problem, boot in rescue mode and modify the /boot/grub/grub.conffile. joefor editing configuration files emacs, pico, or vi, the joeeditor is started.). /boot/grub/grub.conffile, as additional entries may be needed for. .rpmsaveextension (for example, sendmail.cf.rpmsave). The upgrade process also creates a log of its actions in /root/upgrade.log. yum install preupgradeat the command line and press Enter. preupgradeat the command line and press Enter. /etc/fedora-releasefile have been changed from the default, your Fedora installation may not be found when attempting an upgrade to Fedora 14.and press Enter. Diskpart displays a list of the partitions on your system with a volume number, its drive letter, volume label, filesystem type, and size. Identify the Windows partition that you would like to use to occupy the space vacated on your hard drive by Fedora and take note of its volume number (for example, your Windows C:drive might be "Volume 0").. gpartedat the command line and pressing Enter. su -and press Enter. When the system prompts you for the root password, type the password and press Enter. gedit /boot/grub/grub.confand press Enter. This opens the grub.conffile in the gedit text editor. grub.conffile consists of four lines: grub.conf grub.conf, each corresponding to a different version of the Linux kernel. Delete each of the Fedora entries from the file. Grub.conf. default=line contains the number one below the number of your chosen default operating system in the list. grub.conffile and close gedit gpartedat the command line and pressing Enter. /dev/sda3. e2fsckat a command line and press Enter, where partition partitionis the partition that you just resized. For example, if you just resized /dev/sda3, you would type e2fsck /dev/sda3. resize2fsat a command line and press Enter, where partition partitionis the partition that you just resized. For example, if you just resized /dev/sda3, you would type resize2fs /dev/sda3. gpartedat the command line and pressing Enter. unallocated. Right-click on the unallocated space and select New. Accept the defaults and GParted will create a new partition that fills the space available on the drive. /dev/sda3on device /dev/sda. fdiskand press Enter, where device deviceis the name of the device on which you just created a partition. For example, fdisk /dev/sda. Command (m for help):, press T and Enter to use fdisk to):, type the code 8eand press Enter. This is the code for a Linux LVM partition. Command (m for help):, press W and Enter. Fdisk writes the new type code to the partition and exits. /sbin/initProgram partedutility. This is a freely available program that can resize partitions..). /so that the system will automatically log in to them when it starts. If /is placed on an iSCSI target, initrd will log into this target and anaconda does not include this target in start up scripts to avoid multiple attempts to log into the same target. /is placed on an iSCSI target, anaconda sets NetworkManager to ignore any network interfaces that were active during the installation process. These interfaces will also be configured by initrd when the system starts. If NetworkManager were to reconfigure these interfaces, the system would lose its connection to /. /. /bootPartition and LVM /bootpartition. /booton LVM logical volumes with linear mapping. /and swap partitions within LVM volumes, with a separate /bootpartition. /boot, such as Btrfs, XFS, or VFAT. /boot/partition is above the 1024 cylinder head of the hard drive or when using LBA mode. The Stage 1.5 boot loader is found either on the /boot/partition or on a small part of the MBR and the /boot/partition. /boot/sysroot/into memory. Once GRUB determines which operating system or kernel to start, it loads it into memory and transfers control of the machine to that operating system. E.3, “Installing GRUB”. /sbin/grub-install, where <location> <location>is the location that the GRUB Stage 1 boot loader should be installed. For example, the following command installs GRUB to the MBR of the master IDE device on the primary IDE bus: /sbin/grub-install /dev/hda /bootdirectory must reside on a single, specific disk partition. The /bootdirectory cannot be striped across multiple disks, as in a level 0 RAID. To use a level 0 RAID on your system, place /boot on a separate partition outside the RAID. /bootdirectory must reside on a single, specific disk partition, GRUB cannot boot the system if the disk holding that partition fails or is removed from the system. This is true even if the disk is mirrored in a level 1 RAID. ( <type-of-device><bios-device-number>, <partition-number>) . <partition-number>specifies the number of a partition on a device. Like the <bios-device-number>, most types of partitions are numbered starting at 0. However, BSD partitions are specified using letters, with acorresponding to 0, bcorresponding to 1, and so on. 0, not 1. Failing to make this distinction is one of the most common mistakes made by new users. (hd0)and the second as (hd1). Likewise, GRUB refers to the first partition on the first drive as (hd0,0)and the third partition on the second hard drive as (hd1,2).,0)+1 chainloadercommand with a similar blocklist designation at the GRUB command line after setting the correct device and partition as root: chainloader +1 (hd.0,0)0,0)/grub/stage2. p — This option tells the— This option tells the <config-file> installcommand to look for the menu configuration file specified by , such as, such as <config-file> (hd0,0)/grub/grub.conf. installcommand overwrites any information already located on the MBR. kernel... — Specifies the kernel file to load when booting the operating system. Replace </path/to/kernel> <option-1> <option-N> </path/to/kernel>with an absolute path from the partition specified by the root command. Replace <option-1>with options for the Linux kernel, such as root=/dev/VolGroup00/LogVol00to specify the device on which the root partition for the system is located. Multiple options can be passed to the kernel in a space separated list. kernelcommand: kernel /vmlinuz-2.6.8-1.523 ro root=/dev/VolGroup00/LogVol00 hda5partition. root (— Configures the root partition for GRUB, such as <device-type> <device-number>, <partition>) (hd0,0), and mounts the partition. rootcommand: root (hd0,0) rootnoverify (— Configures the root partition for GRUB, just like the <device-type> <device-number>, <partition>) rootcommand, but does not mount the partition. help --allfor a full list of commands. For a description of all GRUB commands, refer to the documentation available online at. /boot/grub/grub.conf),. /boot/grub/grub.conf. The commands to set the global preferences for the menu interface are placed at the top of the file, followed by stanzas for each operating kernel or operating system listed in the menu. default=0 timeout=10 splashimage=(hd0,0)/grub/splash.xpm.gz hiddenmenu rhgb quiet initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img # section to load Windows title Windows rootnoverify (hd0,0) chainloader +1 titleline in the GRUB configuration file. For the Windowssection to be set as the default in the previous example, change the default=0to default=1.. kernel— Specifies the kernel file to load when booting the operating system. Replace </path/to/kernel> <option-1> <option-N> </path/to/kernel Red Hat root (— Configures the root partition for GRUB, such as <device-type> <device-number>, <partition>) (hd0,0),> title— Specifies a title to be used with a particular group of commands used to load a kernel or operating system. group-title #). kernel. [7] For more on the system BIOS and the MBR, refer to Section F.2.1, “The BIOS”. /boot/partition. /sbin/initprogram. /sbin/initprogram loads all services and user-space tools, and mounts all partitions listed in /etc/fstab. /boot/grub/grub.conf—> /sysroot/, a RAM-based virtual file system, via cpio. The initramfsis used by the kernel to load drivers and modules necessary to boot the system. This is particularly important if SCSI hard drives are present or if the systems use the ext3 or ext4 file system. initramfsimage(s) are loaded into memory, the boot loader hands control of the boot process to the kernel. initcommand, the same sequence of events occurs on every architecture. So the main difference between each architecture's boot process is in the application used to find and load the kernel.
https://docs.fedoraproject.org/en-US/Fedora/14/html-single/Installation_Guide/index.html
2019-01-16T05:34:58
CC-MAIN-2019-04
1547583656897.10
[]
docs.fedoraproject.org
iTDS Integration One of the Pros of using a iTDS is that it allows for the distribution of configurations which include entities as well as the icons associated with them. You can read more about it on the paired configuration page. Alternatively if you are not using a server or wish to simply share your entities with another Maltego client you can following the exporting guide below. Exporting Configuration First setup your Maltego client with the various entities you would like to export. Once you have completed this you can export your custom entities by following the entity export guide found in our Maltego user guide. This will export your custom entities to a .mtz file that can be shared with other Maltego users.
https://docs.maltego.com/support/solutions/articles/15000019247-entity-distribution
2019-01-16T06:12:25
CC-MAIN-2019-04
1547583656897.10
[]
docs.maltego.com
Querying your data / Working in the query window / Generate charts / Flat world map by coordinatesDownload as PDF Flat world map by coordinates Description This chart displays information on a world map using latitude and longitude coordinates. This chart requires you to select three fields: Creating a flat world map by coordinates Go to Data Search and open the required table. Perform the required operations to get the data you want to use in the chart. - Select Additional tools → Charts → Maps → Flat world map by coordinates from the query toolbar. - Click and drag the column headers to the corresponding fields. - The flat world map by coordinates is displayed.
https://docs.devo.com/confluence/ndt/querying-your-data/working-in-the-query-window/generate-charts/flat-world-map-by-coordinates
2019-01-16T05:45:34
CC-MAIN-2019-04
1547583656897.10
[]
docs.devo.com
Querying your data / Working in the query window / Generate charts / Graph diagram / Graph diagram menuDownload as PDF Graph diagram menu The panel to the left of the graph diagram contains a menu of options that you can use to customize the appearance of the graph. The following numbered descriptions correspond to the numbered items in the image above. (1) Search Search for a specific node by its name. The search is not case-sensitive and you may type all or part of the name. Press ENTER to highlight all matching nodes in the diagram. (2) Filters Apply filters to determine which nodes to display in the graph. You can filter by: - Link width - Number of links. - Node size - Number of occurrences. - Node degree - This is the number of connections that a node has with other nodes. Note that the filters should be applied in the order they are displayed on the menu: first by width, secondly by size and finally by degree. Each time you apply a filter, all the weights are recalculated and new values are assigned to links width and nodes size.. (3) Selection Select a node or nodes in the diagram, then apply filters to exclusively show or hide them in the diagram. - Select a data node by clicking it. - Double-click it to select all the other nodes connected to it. - By further clicking on the same initial node, the selection will propagate to other connected nodes. You can also select a group of nodes by drawing the area that contains the desired nodes: Once you have selected the nodes, you can apply the filters in the Selection area: - Filter into the selection. This keeps the selected nodes and hides the rest of them. - Filter out selection. This hides the selected nodes and keeps the rest of the them. - Filter out selected links. This excludes the links to the selected nodes. - Re-apply the previous filter. - Apply the next filter. - Clear the filter. (4) Map mode When activated, all nodes with geo-coordinates are plotted on a world map. This is useful for analyzing the geographical distribution of the nodes. - Nodes that do not have geo-coordinates are grouped on an area of the map (usually the Atlantic ocean). - If needed, you can set a different area to display the unpositioned nodes by selecting Show unpositioned nodes. (5) Graph construction There are several options available that you can use when building the graph: - Ignore nulls - if activated, it ignores the records with null values. - Convert null synonyms - if activated, it converts null synonyms ("unknown", "none") into null. - Define the link widths and the node sizes. To customize the display of the graph diagram, you can choose to: - Ignore nulls - This will hide all records that contain null values. - Convert null synonyms - This will convert any null synonyms (for example, "unknown" or "none") into null. - Specify different methods for calculating the link widths and node sizes. (6) View options You can configure the following view options: - Selection mode - This determines which related entities. - Hops - When you double-click a level 0 node multiple times successively, the nodes that are within n hops from it will be selected. - Same row - This will select the nodes in the same row as the level 0 node. Double-click the node selected to keep on showing levels. - Thread - This will select the nodes from the same type as the level 0 node. Double-click the node selected to keep on showing levels. Tree - This will select the nodes the parent and child nodes of the level 0 node. Double-click the node selected to keep on showing levels.When you select a node, links going from that node to other ones are green. Links arriving to that node are purple and discontinuous. - Timebar - Activate this option to animate the node distribution over the time. There are two different play options in the timebar: - If you click the common play button, both limits of the time range move to the right. - The other play button makes the left limit stay fixed, only the right one moves through time. This allows you to check how data "accumulates" through time. This is useful for checking, for example, how a virus expands geographically. (7) Nodes Set options related to the graph nodes: - Show info - Activate this option to display the size of the nodes on the diagram. - Hide singletons - Activate this option to hide the nodes with no links. - Truncate labels - Activate this option to truncate the display of node labels. (8) Links Configure the aspect of the graph links: - Show info - Activate this option to display the width of the links on the diagram. - Curvature - This defines the degree of link curvature. The default is 90. 0 is no curve. - Transparency - This defines the degree of transparency of normal and selected links. (9) Screen scale Set the minimum and maximum values for the Node size and the Link width. (10) Layout You can configure the graph arrangement by selecting one layout (Standard, Structural, Sequential, Hierarchical, Radial or Lenticular) and defining its Orientation and Tightness. (11) Number format Select the format of the numbers displayed in the graph and number of decimals places.
https://docs.devo.com/confluence/ndt/querying-your-data/working-in-the-query-window/generate-charts/graph-diagram/graph-diagram-menu
2019-01-16T05:28:23
CC-MAIN-2019-04
1547583656897.10
[]
docs.devo.com
About This Documentation¶ This is the documentation for the mangos-zero server for World of Warcraft version 1.12 also known as vanilla WoW. Wait what? I thought World of Warcraft was always being updated and patched? True, but we at mangos-zero believe it is simply awesome to have the original game preserved as it was. If you are big on nostalgia or just missed out on the early days of World of Warcraft, stay with us as vanilla WoW is right here. The chapters in this documentation will teach you all there is to know about running your own vanilla WoW server, even describing how you can build upon it and develop your own customizations. The MaNGOS Project¶ Ever since 2005 you have the option to play old game content from the World of Warcraft because the MaNGOS project (and others) have released their server as Open Source to the public. As of now, the MaNGOS project has been split up. If you look for other expansions such as The Burning Crusade or Wrath of the Lich King, you should pay a visit to the CMaNGOS project which provides maintained server projects for both World of Warcraft expansions. The mangos-zero Project¶ As of December 2013 this project has been my personal - as in me, Daniel S. Reichenbach - pet project. I created forks of the MaNGOS and ScriptDev2 repositories and published my bitbucket project where you can grab all code, and - of course - this documentation, too. The Team¶ Currently (July 2014) there is no real team behind this project, it is just me. Of course, I do accept contributions in the form of pull requests or issue reports but I am not actively seeking developers. I have had my fair share of trouble with MaNGOS and the numerous forks, due to most of them focussing on short term results instead of working on things that will result in long term benefits for users and developers. The Goals¶ Coming from a professional development background, I intend to turn mangos-zero and thus what is left of vanilla WoW into a project of which I can be proud of and which you - the user - can fully enjoy. My guiding principles are: make it easy to use. Getting the software, building and installing, and using it should be easy as having a piece of cake. have awesome documentation and preserve knowledge. The WoW emulation scene often tends to keep secrets and avoids documentation. This results in forums seeing the same questions over and over ever since 2005. Here you will not see this. If a question is asked, it will be solved and documented for all users. Also I intend to publish every background information I have. This includes things such as protocol information for client / server communication, a full run-down on database structures and usage, or even practical guides on how to use scripting in mangos-zero. produce good code and stick with it. Good code does follow best practices, is readable, documented and thus easy to use for the new developer in town. I do not want to see you looking at the code and having to ask 20 questions, one should be sufficient to let you use and benefit from the projects’ code. If all this is still not enough, I recommend taking a look at the minifesto which I intend to follow with every change and update I make for this project.
https://mangoszero-docs.readthedocs.io/en/stable/intro.html
2019-01-16T06:27:44
CC-MAIN-2019-04
1547583656897.10
[]
mangoszero-docs.readthedocs.io
Notifications Devo generates notifications that communicate system events. The Notifications area lists the notifications that have been triggered when: - Someone uploads data or inject content to a table. - An aggregation task is created, stopped or deleted. - A new lookup table is ready for use. - A new relay is added. You can remove notifications from this list using the X buttons if you prefer to view only the most recent notifications each time you open this area. When a new unread notification is detected, an indicator appears next to the area name in the navigation pane.
https://docs.devo.com/confluence/ndt/alerts-and-notifications/notifications
2019-01-16T05:46:03
CC-MAIN-2019-04
1547583656897.10
[]
docs.devo.com
/src/github.com/performancecopilot - ActiveState ActiveGo 1.8 ... ActiveGo 1.8 ActiveGo ▽ Get ActiveGo Get started Packages Directory /src/github.com/performancecopilot Name Synopsis .. speed Package speed implements a golang client for the Performance Co-Pilot instrumentation API. bytewriter Package bytewriter implements writers that support concurrent writing within a fixed length block initially tried to use bytes.Buffer but the main restriction with that is that it does not allow the freedom to move around in the buffer. examples acme A golang implementation of the acme factory examples from python and Java APIs The python implementation is in mmv.py in PCP core () The Java implementation is in examples in parfait core () To run the python version of the example that exits do go run examples/acme/main.go To run the java version of the example that runs forever, simply add a --forever flag go run examples/acme/main.go --forever basic_histogram http_counter instance_string runtime simple simple_string_metric singleton_counter singleton_string this example showcases speeds metric inference from strings property mmvdump Package mmvdump implements a go port of the C mmvdump utility included in PCP Core It has been written for maximum portability with the C equivalent, without having to use cgo or any other ninja stuff the main difference is that the reader is separate from the cli with the reading primarily implemented in mmvdump.go while the cli is implemented in cmd/mmvdump the cli application is completely go gettable and outputs the same things, in mostly the same way as the C cli app, to try it out, ``` go get github.com/performancecopilot/speed/mmvdump/cmd/mmvdump ``` cmd mmvdump Build version go1.8.3. Portions of this page are modifications based on work created and shared by Google and used according to terms described in the Creative Commons 3.0 Attribution License. © 2018 ActiveState Software Inc. All rights reserved. Trademarks.
http://docs.activestate.com/activego/1.8/pkg/github.com/performancecopilot/
2019-01-16T05:27:22
CC-MAIN-2019-04
1547583656897.10
[]
docs.activestate.com
Speaker¶ Speaker objects. The speaker object is used to give sound in the 3D View. After adding the object, the various settings can be changed in the Properties editor. Options¶ Sound¶ - Open The Data-Block Menu for loading audio files. There are two properties you can check when loading a sound: - Cache - This means the whole sound will be decoded. Enable this if you want to use those effects for a file with multiple channels. -¶ There is no setting to choose the start time when the speaker should start playing, because you might want a single speaker to play multiple times. Therefore you have to open the NLA Editor where you can add sound strips that defines when the sound should start (nothing else, so any other properties of the strips, like length don’t matter). When you add a speaker object such a strip will be added at the current frame. The Shortcut to add a strip in the NLA Editor is Shift-K. Distance¶ Speaker properties. - If the object is farther away than this distance, this distance is used to calculate the distance-based volume. Influence of this value also depends on the distance model. - Reference - The distance at which the volume is 100% (1.0). Set this value to the distance used for recording the sound. Usually sound effects recordings should be made exactly 1 m away from sound to get an accurate volume. Cone¶ Directionality relevant settings. Imagine a cone with the top at the original of the speaker object and the main axis of it facing in the same direction as the speaker. There are two cones an inner and an outer cone. The angles represent their opening angles, so 360°. - Angle - Outer - Angle of the outer cone in degrees. Outside this cone the volume is equal to the Outer volume. - Inner - Angle of the inner cone in degrees. Inside the cone the volume is 100%. - Volume - Outer - Volume outside the outer cone.
https://docs.blender.org/manual/en/latest/render/audio/speaker.html
2019-01-16T06:07:27
CC-MAIN-2019-04
1547583656897.10
[array(['../../_images/render_audio_speaker_objects.png', '../../_images/render_audio_speaker_objects.png'], dtype=object) array(['../../_images/render_audio_speaker_properties.png', '../../_images/render_audio_speaker_properties.png'], dtype=object)]
docs.blender.org
Interface patterns give you an opportunity to explore different interface designs. Be sure to check out How to Adapt a Pattern for Your Application. This pattern shows the best practice for combining tags with standard-sized rich text, or plain text, using a side by side layout. This page explains how you can use this pattern in your interface, and walks through the design structure in detail. Tags are useful for calling out important attributes on an interface. This pattern is made up of read-only text, rich text icons and tags in a side by side layout. One local variable is set up at the beginning of the expression. This variable stores a list of shops and their name, rating, and tags we are going to display. In this section, we setup the first loop with a!forEach(). On line 14, we store the current value of fv!item into its own local variable for use within a nested a!forEach(). Here, we create a side by side layout and set the first side by side item to be a rich text display field. Note that we set the labelPosition to "COLLAPSED" in order to remove unnecessary spacing above the text item. We will do this on all three display items in this interface. Also note that we set the width of the side by side item to "MINIMIZE". This allows the text to only take up as much space as necessary, moving the star icons closer to the text. Here, we follow the pattern to Show a Numeric Rating as Rich Text Icons. Similar to above, we also set the labelPosition of the rich text display field as "COLLAPSED" and the width of the side by side item to "MINIMIZE". This side by side item displays our tags. Since we are interested in showing the tags inline next to standard-sized text, we set the tag size to "SMALL". This size is exactly the same height as standard-sized text. Again, setting the labelPosition on the tag field to "COLLAPSED" and the marginBelow to "MINIMIZE" removes unnecessary vertical space. We did not set the side by side item width to "MINIMIZE" in this case. If we had more tags, keeping the width as "AUTO" (default) would allow the tags to wrap if needed. In this example, each shop only shows one or zero tags. Notice that we do not need to add special handling for the shop with no tags– a tag item with null text does not render anything or reserve space. If you would like to show more than one tag, simply iterate over each tag in a list with a!forEach() as we did for the rich text items. On This Page
https://docs.appian.com/suite/help/21.1/inline-tags-for-side-by-side-pattern.html
2022-05-16T22:28:04
CC-MAIN-2022-21
1652662512249.16
[]
docs.appian.com
debops.rsyslog¶ The rsyslog package is used to read, process, store and forward system logs in different ways, on local or remote systems. The debops.rsyslog role can be used to easily configure log forwarding to a central log server, as well as store logs on the filesystem or other storage backends. - Getting started - Unprivileged syslog and encrypted connections - debops.rsyslog default variables - Default variable details debops.rsyslog - Manage syslog daemon using Ansible
https://docs.debops.org/en/stable-3.0/ansible/roles/rsyslog/index.html
2022-05-16T21:49:21
CC-MAIN-2022-21
1652662512249.16
[]
docs.debops.org
Output format Output Types XML Data Compare has two different types of outputs: DeltaV2 and "Side by Side Folding DiffReport". By default the full deltaV2 result is generated. Examples of the different output types: DeltaV2 The DeltaV2 output type is an XML format that represents changes to the original XML. Changes are represented in the DeltaV2 output using annotations in the form of specific XML elements and attributes within the DeltaXML namespace. The DeltaV2 XML is designed to be transformed into a view that shows the differences to data in a form that is easier to understand. This could be, for example, a rendering of a set of data tables with changes highlighted in red or greed for deletions and additions respectively. The full specification for the DeltaV2 format can be found at: Two and Three Document DeltaV2 Format DeltaV2 Format Options With the DeltaV2 output type you can choose to have just the changes included so that any unchanged elements or attributes are excluded from the result. This is controlled via the changes-only attribute. Configuration File Setting for a 'Changes Only' DeltaV2 <dcf:output The Side by Side Folding DiffReport The Side by Side DiffReport is an HTML rendering of the two input XML documents in their raw XML form. Each XML document is shown with differences highlighted and aligned with the corresponding part of the other XML document. Differences can be selected using up/down arrows in the menu bar. An example of a Side by Side DiffReport can be seen here. The DiffReport can be downloaded and viewed in a browser. For full details see the Configuration File Schema Guide. Configuration File Setting for a Side by Side Folding DiffReport <dcf:output See the sample on Bitbucket for examples of this.
https://docs.deltaxml.com/xml-data-compare/2.1/Output-format.2960818213.html
2022-05-16T22:18:38
CC-MAIN-2022-21
1652662512249.16
[]
docs.deltaxml.com
Gearset's metadata filters control what Salesforce metadata and Vlocity data packs are included in your comparison. Click Manage custom filters to pick individual types and items to include. Gearset shows all of the Salesforce metadata types and Vlocity data pack types available to compare. Select a type to include all data packs of that type in the comparison. This filter will compare all of our Data Raptors. If you've got a lot of data packs of the same type, clicking the All items toggle will let you select individual data packs to include in the comparison. This can make the comparison significantly faster if you know that only a few data packs have changed. This filter will compare our Gearset_Docs_English OmniScript. You can compare Salesforce metadata and Vlocity data packs at the same time, by selecting the types to compare in the Metadata and Vlocity tabs. Gearset will show all of the changes in the same comparison grid, even highlighting dependencies between metadata and data packs. Gearset will compare the dependencies of any data pack you include in the metadata filter. For example, if our Gearset_Docs_English OmniScript depends on a DataRaptor named gearsetGetMasterAccountDetails, Gearset will compare the gearsetGetMasterAccountDetails DataRaptor even if DataRaptor isn't selected in the metadata filter. This makes it easy to make sure dependencies are included during the deployment, so that the OmniScript works as expected in the target org. You can save your metadata filter selections and share them with your team, to make it easy for your whole team to run the same comparison with the same settings every time.
https://docs.gearset.com/en/articles/5821948-customising-the-vlocity-metadata-filter
2022-05-16T22:11:40
CC-MAIN-2022-21
1652662512249.16
[]
docs.gearset.com
Links may not function; however, this content may be relevant to outdated versions of the product. Creating and running an Extract: Filter Criteria Use the Business Intelligence Exchange (BIX) Filter Criteria tab on the BIX ruleform to select filter conditions for properties of the primary class and on the clipboard at runtime. In Criteria, specify the following filter information: In Filter conditions to apply, define the filter criteria by referencing the labels of the rows below and applying the SQL operators AND, OR, and parentheses. For example, if there are four filters labeled A, B, C, and D, you can enter the following logical expression: "(A OR B OR C) AND D". In the filter table, click +Add filter add a row and enter the appropriate filter information: Click Save to apply this filter criteria. Return to Creating and running an Extract rule to continue creating an Extract rule. Previous topic Creating and running an Extract rule Next topic More about Extract rules
https://docs.pega.com/data-management-and-integration/84/creating-and-running-extract-filter-criteria
2022-05-16T21:51:03
CC-MAIN-2022-21
1652662512249.16
[]
docs.pega.com
Exit codes¶ Running pytest can result in six different exit codes: - Exit code 0 All tests were collected and passed successfully - Exit code 1 Tests were collected and run but some of the tests failed - Exit code 2 Test execution was interrupted by the user - Exit code 3 Internal error happened while executing tests - Exit code 4 pytest command line usage error - Exit code 5 No tests were collected They are represented by the pytest.ExitCode enum. The exit codes being a part of the public API can be imported and accessed directly using: from pytest import ExitCode Note If you would like to customize the exit code in some scenarios, specially when no tests are collected, consider using the pytest-custom_exit_code plugin.
https://docs.pytest.org/en/latest/reference/exit-codes.html
2022-05-16T22:23:57
CC-MAIN-2022-21
1652662512249.16
[]
docs.pytest.org
# Kube-hunter Kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments. We run kube-hunter in passive mode. This means kube-hunter will not attempt to exploit any of the vulnerabilities it finds in order to find additional vulnerabilities. We also run kube-hunter in pod mode (opens new window). This effectively discovers what a malicious pod (or someone who gained access to a vulnerable pod) would be able to do inside the cluster. # Remediation Refer to the kube-hunter documentation (opens new window) for details and remediation steps for each particular kube-hunter finding. # Sample Report kube-hunter reports contain a list of Nodes, Services, and detected vulnerabilities. { "_fairwindsReportVersion": "501", "hunter_statistics": [ { "description": "Checks if Node is running a Kubernetes version vulnerable to known CVEs", "name": "K8s CVE Hunter", "vulnerabilities": 0 } ], "nodes": [ { "location": "10.244.0.1", "type": "Node/Master" } ], "services": [ { "description": "The Kubelet is the main component in every Node, all pod operations goes through the kubelet", "location": "10.244.0.1:10250", "service": "Kubelet API" } ] }
https://insights.docs.fairwinds.com/technical-details/reports/kube-hunter/
2022-05-16T21:01:43
CC-MAIN-2022-21
1652662512249.16
[]
insights.docs.fairwinds.com
. Before you Upgrade If you are using IMDG Enterprise version 4.x, you can migrate your data to a Hazelcast Platform cluster. See Migrating Data from IMDG 3.12.x. Merge of Hazelcast and Jet The open source and Enterprise code and documentation of former Hazelcast IMDG and Jet have been merged as Hazelcast, as version 5.0. The following sections elaborate the changes performed for this merge. Version Compatibility You can find the rules for compatibility in this section. Semantic Versioning Hazelcast uses semantic versioning: MAJOR version when you make incompatible API changes MINOR version when you add functionality in a backwards-compatible manner PATCH version when you make backwards-compatible issue fixes. Compatibility with Former IMDG Versions Hazelcast Platform is fully API-compatible with former Hazelcast IMDG 4.x versions with the exception of a few breaking changes..2-SNAPSHOT.. Merge of SQL Modules The former Hazelcast product had hazelcast-sql, hazelcast-sql-core and hazelcast-jet-sql Maven modules in its distribution. These have been merged into a single hazelcast-sql module as a part of the Hazelcast Platform distribution. Default configuration files (not example ones) Scripts checks and validates your YAML configurations during a cluster startup. According to this validation: the top-level hazelcastobject must exist client and member YAML configurations must be separate, not in the same file there must be no case insensitive enum values. While upgrading to Hazelcast Platform, if a YAML configuration violates any of the above, the cluster will not start. You need to either edit and update your YAML configuration files accordingly or disable the validation by setting the hazelcast.config.schema.validation.enabled property to false.
https://docs.hazelcast.com/hazelcast/5.2-snapshot/migrate/upgrading-from-imdg-4
2022-05-16T22:24:52
CC-MAIN-2022-21
1652662512249.16
[]
docs.hazelcast.com
Hi, my name is Hugo Apéro. Nice to meet you. I’m a Hugo theme you’ll want to hang out with. 🇫🇷 The page you are reading is based on a markdown file- look in content/about/ to edit. There, look inside the header, main, and sidebar folders to get started building your own “about” page. Why apéro? Apéro is a unique kind of casual get-together in French culture, when you gather with friends and get to know each other better over some apéritifs, snacks, and anything in between. A good apéro is one where you’d happily spend a few hours just hanging out. I hope this theme helps you create your own virtual apéro. A place where you and your site’s visitors enjoy spending time, and one that helps folks get to know you better. Customize your homepage The first page your visitors will see is the homepage. This should also be the first page you touch when you make your site. Add an image, social links, and an action link to help users hang out and explore your site longer.Read more Who is using Hugo Apéro? Zoë Turner Talks that Last A campfire Here is a talk I gave on making awesome personal websites using Hugo, blogdown, GitHub, and Netlify.Read more Featured categoriesexample (17) Theme Features (6) evergreen (2) Hugo Apéro A Hugo theme How to say my name
https://hugo-apero-docs.netlify.app/about/
2022-05-16T22:11:21
CC-MAIN-2022-21
1652662512249.16
[]
hugo-apero-docs.netlify.app
duplicate manager View and create duplicates of the current image. The duplicate manager lists all versions of the current darkroom image with their preview thumbnails. Hold down the left mouse button on a thumbnail to temporarily show that version in the center view. Double click to switch to this version and edit it. The buttons on the top-right allow you to create new duplicates of the current image. You can either create a ‘virgin’ version (with an empty history stack) or an exact duplicate of the current image. For your convenience, you can also assign a name to each version. The displayed name is stored in the version name metadata tag, which can also be edited within the metadata editor module.
https://docs.darktable.org/usermanual/3.6/en/module-reference/utility-modules/darkroom/duplicate-manager/
2022-05-16T22:21:17
CC-MAIN-2022-21
1652662512249.16
[]
docs.darktable.org
ASPxClientCalendar.SetMaxDate(date) Method Sets the maximum date of the calendar. Declaration SetMaxDate( date: Date ): void Parameters Remarks Use the ASPxClientCalendar.SetMinDate method to specify the minimum date on client side. To specify the minimum and maximum dates on the server, use the ASPxCalendar.MinDate and the ASPxCalendar.MaxDate properties. The ASPxClientCalendar.GetMaxDate and ASPxClientCalendar.GetMinDate methods are useful, to obtain the minimum and maximum dates on the calendar. Note Note that a calendar maximum date value specified by the client SetMaxDate method is not synchronized with the ASPxCalendar.MaxDate property value. When the ASPxCalendar.MaxDate property is assigned on the server, its value is saved in the Control.ViewState. The Control.ViewState is saved on the page in a hidden field and cannot be modified on the client. When a postback is sent, the previous value of the ASPxCalendar.MaxDate property (saved in ViewState) is applied and a new editor value becomes invalid and is changed to the previous one.
https://docs.devexpress.com/AspNet/js-ASPxClientCalendar.SetMaxDate(date)?v=20.2
2022-05-16T23:13:47
CC-MAIN-2022-21
1652662512249.16
[]
docs.devexpress.com
Introduction Grenadine can assist your team in collecting information from participants or attendees through the use of surveys. Surveys can be sent through the Event Manager to any person on your list. You can email surveys on mass to your lists of people to collect information on a variety of topics including: For participants - Whether or not they will attend - What they would like to present on/talk about - What areas are of interest to them - Their biography/background - Picture - Their schedule availability - Any other information your planning team may require For attendees - Whether they want to attend - If they have meal preferences - Whether they need information on parking or lodging - Whether they will opt-in to special functions that you’ll be holding - Information on details such as if they are left-handed or right-handed, in case a sporting activity such as golf is on the schedule. - Any other information that you need to collect By allowing you to collect this information from participants and attendees, Grenadine makes it easy for you to set things up without missing any vital information. The surveys also provide peace of mind to both participants and attendees because they know their needs will be met once at your event. Before you begin creating and sending out surveys, first navigate to . You will see a section called “Email addresses”. Make sure that all of your return email addresses are set-up correctly in that section. The various email addresses in here will be used to send your surveys.
https://docs.grenadine.co/email-surveys.html
2022-05-16T20:54:53
CC-MAIN-2022-21
1652662512249.16
[]
docs.grenadine.co
System Thread SINGLE_THREADED_BLOCK() SINGLE_THREADED_BLOCK() declares that the next code block is executed in single threaded mode. Task switching is disabled until the end of the block and automatically re-enabled when the block exits. Interrupts remain enabled, so the thread may be interrupted for small periods of time, such as by interrupts from peripherals. // SYNTAX SINGLE_THREADED_BLOCK() { // code here is executed atomically, without task switching // or interrupts } Here's an example: void so_timing_sensitive() { if (ready_to_send) { SINGLE_THREADED_BLOCK() { // single threaded execution starts now // timing critical GPIO digitalWrite(D0, LOW); delayMicroseconds(250); digitalWrite(D0, HIGH); } // thread swapping can occur again now } } You must avoid within a SINGLE_THREADED_BLOCK: - Lengthy operations - Calls to delay() (delayMicroseconds() is OK) - Any call that can block (Particle.publish, Cellular.RSSI, and others) - Any function that uses a mutex to guard a resource (Log.info, SPI transactions, etc.) - Nesting. You cannot have a SINGLE_THREADED_BLOCK within another SINGLE_THREADED_BLOCK. The problem with mutex guarded resources is a bit tricky. For example: Log.info uses a mutex to prevent multiple threads from trying to log at the same time, causing the messages to be mixed together. However the code runs with interrupts and thread swapping enabled. Say the system thread is logging and your user thread code swaps in. The system thread still holds the logging mutex. Your code enters a SINGLE_THREADED_BLOCK, then does Log.info. The system will deadlock at this point. Your Log.info in the user thread blocks on the logging mutex. However it will never become available because thread swapping has been disabled, so the system thread can never release it. All threads will stop running at this point. Because it's hard to know exactly what resources will be guarded by a mutex its best to minimize the use of SINGLE_THREADED_BLOCK.
https://docs.particle.io/cards/firmware/system-thread/single_threaded_block/
2022-05-16T21:50:36
CC-MAIN-2022-21
1652662512249.16
[]
docs.particle.io
Limiting offline packaging to data from the user's worklist Decrease the duration of data synchronization and limit the amount of data that the offline-enabled mobile app downloads, by implementing optimistic packaging for offline-enabled case types during data synchronization. With optimistic packaging, the app only packages and downloads the current flow that is active on the user's worklist.For example, in an offline-enabled app for expense reporting, users might only need to access a flow in which they attach a receipt scan to a group expense report. Therefore, you can implement optimistic packaging for an expense submission case type to improve the app performance in offline mode. - adjust the packaging, click the Edit configuration icon. - In the case type configuration dialog box, expand the Advanced section. - Select the Available offline only when in user's worklist check box. - In the dialog box, click OK. - On the mobile channel rule form, click Save. - In the navigation pane of Dev Studio, click Records. - Expand the Decision category, and then click When. - In the list of when rule instances, search for the pyPackageAssignmentFlows when rule, and then set it to true=true.This rule is available in the @baseclass class and is set to false by default. For more information, see When Condition rules. - On the when rule form, click Save. Previous topic Disabling processing of attachments and signatures in offline mode Next topic Disabling server-side postprocessing in offline cases
https://docs.pega.com/mobile/85/limiting-offline-packaging-data-users-worklist
2022-05-16T21:20:34
CC-MAIN-2022-21
1652662512249.16
[]
docs.pega.com
Detailed Example of Traffic Routing in Replug Link Following is a step-by-step guide for using Traffic Routing Rules. Step 1 Login or Sign In to Step 2 Click on the Replug Links from the drop-down menu of Manage in the header. Step 3 Click on New Link Button at the top left of the screen. Step 4 Create Your Replug Link will appear asking for several inputs according to the brand/campaign you selected or created. Step 5 Select your branded campaign from the drop-down or you can also create one. - To Create A New Branded CTA, visit Step 6 Enter Your URL you want to shorten or optimize and will act as destination URL. - For Information Regarding UTM, visit - For a step by step guide of the below-mentioned fields, visit Step 7 Enable the toggle for Traffic Routing Rules and click on Add New Rule. Note: For using Traffic Routing Rule, the Campaign type is very important. Whenever a CTA type campaign is selected, destination URLs must be.) Step 8 A pop-up will appear to Create a Routing Rule. Step 9 Click on Add a Filter and Select an option from the drop-down list. - For Instance, you want to target the audience of a specific country or various countries. Click on a country and then Select Country and Check the boxes against countries as per requirements. - You can add more Filters as shown below. - You can also add multiple blocks for conditional routing by using Add another Block or Add another Rule. - Add another Block means that we can add another destination URL (Then) which can be visited if all the filters (inside the block) are matched. Step 10 Once done with Rules click on the Save button at the left bottom of the popup. Note: You can add up to 10 rules. Each Rule may consist of multiple Blocks and each block can have multiple filters. If all Filters will be matched then a block will work and for a Rule to work at least one block must be matched, otherwise, the default destination URL will be loaded. Step 11 Traffic Routing Rules have been added. You can also edit the rules by clicking on the pen icon against the routing rules. Note: Traffic Routing Rules and A/B testing cannot be used simultaneously. Step 12 Click on the Save Link button. Congratulations! You have successfully created a link. Note: New features are enabled or disabled on the basis of the user plans. If you are unable to use any feature that means you need to update your plan. Traffic Routing Rules and A/B testing are allowed for Professionals while if you are an Agency user you can easily use all the new features. Example for TR. Routing Rules act like conditional statements. Each routing rule consists of one or multiple blocks. Each block consists of one or multiple filters. For a block to be considered as 'Matched', ALL filters should be matched within the block. For a routing rule to be considered as 'Matched', ANY block amongst a rule should be matched. Destination URL = Rule#1 or Rule#2 or Rule#3 or ... Rule = Block#1 or Block#2 or Block#3 or ... Block = Filter#1 and Filter#2 and Filter#3 and ... Each Rule contains a destination URL known as 'Then', when any block of the rule is matched, we follow along with the destination URL mentioned in that Rule. For a block to be considered as Matched, ALL filters in that block should be matched.
https://docs.replug.io/article/892-traffic-routing-rules
2022-05-16T20:47:13
CC-MAIN-2022-21
1652662512249.16
[]
docs.replug.io
Title Culture of Pinfish at Different Stocking Densities and Salinities in Recirculating Aquaculture Systems Document Type Article Publication Date 2010 Abstract There is great demand for marine baitfish in U.S. coastal states. The supply of marine baitfish in the United States is almost completely wild caught, and this fishery is seasonal and inconsistent. Aquaculture may be able to consistently supply marine baitfish for anglers. Two experiments were conducted to evaluate the effects of stocking density and salinity on the growth and survival of pinfish Lagodon rhomboides cultured in recirculating aquaculture systems. For the stocking density experiment, juvenile pinfish were stocked (50, 200, 400, and 600 fish/m3) into 1,600-L circular tanks in three identical recirculating systems with a salinity of 27 g/L and were cultured for 82 d. Mean survival was not statistically different among densities and ranged from 94.3% to 99.18%. Daily growth of pinfish ranged from 0.35 to 0.39 g·fish−1·d−1. Mean percent weight gain ranged from 624% to 690% and followed a density-dependent trend. Final total length followed a density-dependent pattern, with each increasing density exhibiting statistically significant decreases in length. Mean feed conversion ratio (FCR) ranged from 1.70 to 1.89. In the salinity experiment, juvenile pinfish were stocked at a density of 120 fish/m3 into 1,600-L tanks within four identical recirculating systems and were cultured for 65 d. Treatment salinities were 9 or 27 g/L; each salinity level was maintained in two systems. Two size-classes were stocked separately into two tanks within all four systems, resulting in four replicates per treatment. Mean survival was not significantly different among treatments and ranged from 98.2% to 99.9%. Mean percent weight gain ranged from 234% to 284%, with no significant differences between salinities. Mean FCR ranged from 2.5 to 3.1 and did not significantly differ between salinities, although fish in the small size-class converted feed more efficiently than those in the large size-class. Pinfish show great potential as a new aquaculture species and can be successfully cultured in recirculating systems at stocking densities of 600 fish/m3 and at a salinity as low as 9g/L. Recommended Citation Ohs, C., A.L. Rhyne, S. Grabe, M. Dimaggio, and E. Stenn. 2010. "Culture Ofpinfish (Lagodon Rhomboides) at Different Stocking Densities and Salinities in Recirculating Aquaculture Systems." North American Journal of Aquaculture 72: 132-140. Published in: North American Journal of Aquaculture, Volume 72, Issue 2, 2010
https://docs.rwu.edu/fcas_fp/156/
2022-05-16T22:09:34
CC-MAIN-2022-21
1652662512249.16
[]
docs.rwu.edu
- Elasticsearch clustering - Usage as a role dependency - debops.elasticsearch default variables - APT packages, version - UNIX user and group - Ansible inventory layout - Firewall configuration - Connection encryption, TLS - Elastic X-Pack options - Elasticsearch users and roles management - Elasticsearch network options - Elasticsearch cluster options - Node functions - Memory options - Paths - Elasticsearch configuration file - Plugin configuration - Java Policy configuration - Configuration for other Ansible roles - Default variable details debops.elasticsearch - Install and manage Elasticsearch database clusters Copyright (C) 2014-2016 Nick Janetakis <[email protected]> Copyright (C) 2014-2017 Maciej Delmanowski <[email protected]>
https://docs.debops.org/en/stable-3.0/ansible/roles/elasticsearch/index.html
2022-05-16T21:25:34
CC-MAIN-2022-21
1652662512249.16
[]
docs.debops.org