Create simple dashboards with the visual dashboard editor

This documentation does not apply to the most recent version of Splunk. Click here for the latest version.

Contents
- Define inline search strings for dashboard panels

Splunk's visual dashboard editor enables you to create simple dashboard views quickly without touching a line of code. If you'd like to increase the complexity of a dashboard that you've created with the visual dashboard editor, and perhaps do things with it that you can't achieve through the visual editor, you can edit its XML directly. For more information, see the "Build dashboards" section of the Developer manual, specifically the section on simple dashboard development.

Starting out
- Open the Actions drop-down list and click Create new dashboard....
- Name your new dashboard. Designate a unique ID for it and give it the Name that it will be identified by in the top-level navigation menu.
- Click Create to create your new dashboard.
- When your new dashboard appears, it is empty. To start defining panels for it, click Edit the dashboard to open the visual dashboard editor.

Create your first panel

In the visual dashboard editor, start by choosing a Panel type. There are four varieties of dashboard panels:
- The Data table panel type presents report results in tabular format.
- The Chart panel type displays report results as a chart. The Chart panel type takes its chart formatting parameters from the saved report that feeds into it. So if a chart panel is displaying a column chart and you'd rather see a stacked area chart, you have to change the formatting parameters of the saved report that you've associated with the panel.

Note: If you want your charts to display in the dashboard with custom formatting (a pie chart instead of the default bar chart that the search might otherwise produce; specific labels for the chart, x-axis, and y-axis; and so on), you have two options:
- Make sure that the panel is associated with a saved report containing the required chart formatting parameters (as opposed to a saved search, which does not include chart formatting information).
- Override the default chart formatting by modifying the simple syntax dashboard XML for the panel. For more information, see "Editing dashboards created with the visual dashboard editor" below and "Simple dashboards" in the Developer manual.

Change the dashboard name or XML coding

Select Edit name/XML to edit the dashboard name and the simple syntax dashboard XML behind the dashboard. For more information about editing XML for dashboards created with the visual dashboard editor, see "Simple dashboards" in the Developer manual.

Change dashboard permissions

Select Edit permissions to expand or restrict the role-based read and write permissions for the dashboard. When you set dashboard permissions you can also define the dashboard's availability within the app.
- Add inline search strings to dashboard panels.

Editing dashboards created with the visual dashboard editor

You can edit any dashboard that was created with the visual dashboard editor (or which uses the simple dashboard syntax), including the navigation menu code that controls where your dashboard appears in the top-level navigation menu. If you have write permissions for your app, you can access its navigation menu code by opening Manager, clicking Navigation Menus, and then clicking the name of the navigation menu for your app.
See the "Customize navigation menus" topic in the Developer manual for details about working with the navigation menu.
http://docs.splunk.com/Documentation/Splunk/4.0.10/User/CreateSimpleDashboards
Sometimes, you may not want to change code, but want to find references to a particular identifier. The refactoring engine provides Find References, Find Local References, and Find Declaration Symbol commands. Both the Find References and Find Local References commands provide you with a hierarchical list in a separate Find References window, showing you all occurrences of a selected reference. If you choose the Find References command, you are presented with a treeview of all references to your selection in the entire project. If you want to see local references only, meaning those in the active code file, you can select the Find Local References command from the Search menu. If you want to find the original declaration within the active Delphi code file, you can use the Find Declaration Symbol command. The Find Declaration Symbol command is only valid in Delphi and does not apply to C#. The following sample illustrates how the Find References refactoring will proceed:

1  TFoo = class
2    loc_a: Integer;  // Find references on loc_a finds only
3    procedure Foo1;  // this line (Line 2) and the usage
4  end;               // in TFoo.Foo1 (Line 15)
5  var
6    loc_a: string;   // Find references on loc_a here
                      // finds only this line (Line 6) and
                      // the usage in procedure Foo (Line 11)
7  implementation
8  {$R *.nfm}
9  procedure Foo;
10 begin
11   loc_a := 'test';
12 end;
13 procedure TFoo.Foo1;
14 begin
15   loc_a := 1;
16 end;
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/findrefov_xml.html
You can select multiple namespaces to add to the using clause. The feature works identically in Delphi, although in a Delphi project, the operation attempts to find the appropriate unit containing the definition of the selected object, then adds the selected unit to the uses clause.
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/findingunits_xml.html
Header File: conio.h

Category: Console I/O Routines

Prototype:
int puttext(int left, int top, int right, int bottom, void *source);

Description:
Copies text from memory to the text mode screen. puttext writes the contents of the memory area pointed to by source out to the onscreen rectangle defined by left, top, right, and bottom. All coordinates are absolute screen coordinates, not window-relative. The upper left corner is (1,1). puttext places the contents of a memory area into the defined rectangle. puttext is a text mode function performing direct video output.

Note: This function should not be used in Win32 GUI applications.

Return Value:
puttext returns a nonzero value if the operation succeeds; it returns 0 if it fails (for example, if you gave coordinates outside the range of the current screen mode).

Example:

#include <conio.h>

int main(void)
{
    char buffer[512];

    /* put some text to the console */
    clrscr();
    gotoxy(20, 12);
    cprintf("This is a test. Press any key to continue ...");
    getch();

    /* grab screen contents */
    gettext(20, 12, 36, 21, buffer);
    clrscr();

    /* put selected characters back to the screen */
    gotoxy(20, 12);
    puttext(20, 12, 36, 21, buffer);
    getch();

    return 0;
}

Portability
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/puttext_xml.html
Break: During a debugging session, any line of code that is eligible for a breakpoint is marked with a blue dot in the left gutter of the Code Editor. You can also set breakpoints on frames displayed in the Call Stack window. The breakpoint icons in the Call Stack window are similar to those in the Code Editor, except that the blue dot indicates only that debug information is available for the frame, not whether a breakpoint can be set on that frame. Breakpoints are displayed in the Breakpoint List window, available by selecting View > Debug Windows > Breakpoints. The following icons are used to represent breakpoints in the Code Editor gutter. If the conditional expression evaluates to true (or not zero), the debugger pauses the program at the breakpoint location. If the expression evaluates to false (or zero), the debugger does not stop at the breakpoint location.
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/breakpoints_xml.html
View > Debug Windows > Breakpoints

Displays, enables, or disables breakpoints currently set in the loaded project. Also changes the condition, pass count, or groups associated with a breakpoint. If no project is loaded, it shows all breakpoints set in the active Code Editor or in the CPU window. The following icons are used to represent breakpoints in the Breakpoint List window. Right-click the Breakpoint List window (not on an actual breakpoint) to display the following commands: Right-click a breakpoint to display the following commands:
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/breakpointlist_xml.html
No Shares After Switching from http to https

First, I am sorry to say that all shares from old posts are lost, except the ones which are stored in the MashShare cache. When you switch your domain from non-SSL http to SSL https, the shares for the old page are lost because the links are different for Google. When the cache gets recreated, Facebook returns the shares of the https link and not the ones of the http link. You can check this yourself by visiting the following link and changing the link to your site:

As a solution, you could manually put the old share count into the custom fields of the MashShare plugin, but keep in mind that it will not increase until the new share count for that link equals the share count you paste into the custom field. Read here how to do so:

If the shares still exist in the MashShare cache (they do if you never used the MashShare purge cache function), it would be possible to write an extension for MashShare which collects these old shares and uses them as the basis for the new share count. But this would be additional work of a few hours. It all depends on how important old share count data is for you.
https://docs.mashshare.net/article/125-no-shares-after-switching-from-http-to-https
Difference between revisions of "iPi Recorder"
Revision as of 14:12, 19 November 2015
- Microsoft Kinect 2 for Windows
- Microsoft Kinect for Windows / Kinect XBOX 360
http://docs.ipisoft.com/index.php?title=iPi_Recorder&diff=prev&oldid=1018&printable=yes
Scheduled Hardware Upgrade Posted by docbyron on June 18th, 2015 at 03:58 Hey all, just an early warning that our host will be performing some hardware upgrades on June 24th. Not sure about what time, but the upgrade should last about 10 minutes. The site will likely be unavailable during this time, and there is a small chance it could take longer if there are any complications.
http://www.docs-lab.com/blogs/9/scheduled-hardware-upgrade
Measuring scan performance and time

The following formula can be used to estimate scan time (note that one retry means each port may be probed twice, hence the "+ 1"):

(number of live assets) x (number of ports to be scanned) x (maximum retries + 1) / (minimum packets per second) / 60 seconds = minutes to scan

Manipulating Scan Performance and Time

You can edit the scan template to change scan performance and time. For example, for the following parameters:
- 105 = live assets discovered (you can get that from a scan log)
- 65535 = number of ports to be scanned (you can get that from the nmap params line in the scan log)
- 1 = maximum retries (can be found in the nmap params line in the scan log)
- 200 = minimum packets per second (can be found in the nmap params line in the scan log; look for the value after --min-rate)

105 (live assets) x 65535 (ports to be scanned) x 2 (maximum retries + 1) / 200 (minimum packets per second) / 60 seconds = 1146.86 minutes to scan

In the example above, that scan template will require about 19 hours to complete. So if you need a scan to complete in a 4-hour window, for example, you have to change the template to make that happen.
- You can reduce retries to 0, which gives zero margin for error but will double scan performance.
- You can increase minimum packets per second. Each doubling doubles scan performance; for example, increasing from 200 to 400 minimum packets per second gives you 573 minutes instead of 1146 minutes.
- You can decrease the TCP ports being scanned. Whether you need to scan all 65,535 ports depends on why you're running the scan. For example, PCI requires scanning all ports.
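As a quick sanity check on the arithmetic, here is a minimal Python sketch of the same formula. The function and parameter names are illustrative, not part of InsightVM:

def scan_minutes(live_assets, ports, max_retries, min_pps):
    """Estimate scan time in minutes using the formula above."""
    attempts = max_retries + 1  # each retry adds one more probe per port
    return live_assets * ports * attempts / min_pps / 60

# The worked example from the documentation:
print(scan_minutes(105, 65535, 1, 200))  # ~1146.86 minutes (~19 hours)
print(scan_minutes(105, 65535, 1, 400))  # doubling --min-rate halves it: ~573 minutes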
https://docs.rapid7.com/insightvm/measuring-scan-performance-and-time/
Menus are used for navigating the content on your site. You can create menus and add items for existing content such as pages, posts, categories, tags, formats, or custom links.

Menu Locations

Mai Theme supports 5 menu locations:
- Primary
- Header Left
- Header Right
- Footer
- Mobile

You can create as many menus as you want, but only one menu can be used in each location. A menu location does NOT need to use a menu. If there's no menu, the menu location will not be created. Consult "A Beginners Guide to the Genesis Framework" for complete instructions on adding and editing menus.

Menu Classes

Before adding any custom menu classes, you need to make sure the CSS Classes field is enabled for menu items. This can be enabled via the Screen Options setting on the admin Menus screen.

Search Icon

To add a search icon with a popup search box, just add a Custom Link menu item and add search to the CSS Classes field.

Highlight Buttons

Mai Theme allows you to easily add a highlight button around a menu item with the highlight class. Just build your menu item as normal and add highlight to the CSS Classes field.

Mobile Menu

Here are some of the ways you can customize Mai Theme's extremely flexible mobile menu: a Standard Menu (slide down) or a Side Menu (slide in from the right).

Mobile Menu Items

By default, Mai Theme will automatically display any menus that are shown on desktop (larger browser sizes) on mobile (smaller browser sizes). Therefore, you technically don't need to do anything; the mobile menu "just works".

Mobile Menu Content

If you'd like to add custom content like text, images, social icons, etc., you can use the Mobile Menu widget area built in to Mai Theme. Simply add any widgets you'd like into that widget area. The Mobile Menu widget area overrides the default mobile menu, so make sure you add at least one Navigation Menu widget if you want your mobile menu to actually display a menu. Other widgets that work well in the Mobile Menu widget area are Text, Custom HTML, Image, Search, Simple Social Icons, Genesis – User Profile, and pretty much anything else!

Related YouTube videos:
- How To Add A Button (Highlight) To A Menu Item In Genesis Mai Theme
- How To Add A Menu Search Icon In Mai Theme
- Add Text And Content To The Mobile Menu With Customizations In Mai Theme
- Use A Slide Out Side Menu On Mobile In Genesis Mai Theme
https://docs.bizbudding.com/classic-docs/mai-theme-classic/menus/
Remove Bot Traffic in Usage Analytics Reports

When your goal is to focus solely on the users visiting and interacting with your search interfaces, you have to identify and remove bot traffic from your usage analytics reports. If the META tag of your web pages allows robots to index public web pages or follow the links they contain (that is, it does not specify <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">), you can consider that web robot actions are included in your reports. Moreover, when you index your sites using Coveo, the Coveo bot also produces human-like events in your analytics data (e.g., visiting pages and clicking links). There's an automatic way to remove most bot traffic from your reports using a filter based on the user agent. Other robots can visit your websites for testing, diagnostic, and monitoring purposes.

To remove bot traffic in usage analytics reports:
1. On the Named Filters page, add the following filter (see Create Named Filters): Device Category is not Bot
2. In the report from which you want to remove bot traffic, apply the named filter you just created.
https://docs.coveo.com/en/2732/
Wrapping a general loss function inside of BaseLoss provides extra functionality for your loss functions:
- flattens the tensors before trying to take the losses, since it's more convenient (with a potential transpose to put axis at the end)
- a potential activation method that tells the library if there is an activation fused in the loss (useful for inference and methods such as Learner.get_preds or Learner.predict)
- a potential decodes method that is used on predictions in inference (for instance, an argmax in classification)

The args and kwargs will be passed to loss_cls during the initialization to instantiate a loss function. axis is put at the end for losses like softmax that are often performed on the last axis. If floatify=True, the targs will be converted to floats (useful for losses that only accept float targets, like BCEWithLogitsLoss), and is_2d determines if we flatten while keeping the first dimension (batch size) or completely flatten the input. We want the first for losses like cross entropy, and the second for pretty much anything else.

tst = CrossEntropyLossFlat()
output = torch.randn(32, 5, 10)
target = torch.randint(0, 10, (32, 5))
# nn.CrossEntropyLoss would fail with those two tensors, but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.CrossEntropyLoss()(output, target))
# Associated activation is softmax
test_eq(tst.activation(output), F.softmax(output, dim=-1))
# This loss function has a decodes which is argmax
test_eq(tst.decodes(output), output.argmax(dim=-1))

tst = CrossEntropyLossFlat(axis=1)
output = torch.randn(32, 5, 128, 128)
target = torch.randint(0, 5, (32, 128, 128))
_ = tst(output, target)
test_eq(tst.activation(output), F.softmax(output, dim=1))
test_eq(tst.decodes(output), output.argmax(dim=1))

tst = BCEWithLogitsLossFlat()
output = torch.randn(32, 5, 10)
target = torch.randn(32, 5, 10)
# nn.BCEWithLogitsLoss would fail with those two tensors, but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output, target))
output = torch.randn(32, 5)
target = torch.randint(0, 2, (32, 5))
# nn.BCEWithLogitsLoss would fail with int targets but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output, target))
tst = BCEWithLogitsLossFlat(pos_weight=torch.ones(10))
output = torch.randn(32, 5, 10)
target = torch.randn(32, 5, 10)
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output, target))
# Associated activation is sigmoid
test_eq(tst.activation(output), torch.sigmoid(output))

tst = BCELossFlat()
output = torch.sigmoid(torch.randn(32, 5, 10))
target = torch.randint(0, 2, (32, 5, 10))
_ = tst(output, target)
test_fail(lambda x: nn.BCELoss()(output, target))

tst = MSELossFlat()
output = torch.sigmoid(torch.randn(32, 5, 10))
target = torch.randint(0, 2, (32, 5, 10))
_ = tst(output, target)
test_fail(lambda x: nn.MSELoss()(output, target))

On top of the formula we define:
- a reduction attribute, that will be used when we call Learner.get_preds
- an activation function that represents the activation fused in the loss (since we use cross entropy behind the scenes). It will be applied to the output of the model when calling Learner.get_preds or Learner.predict
- a decodes function that converts the output of the model to a format similar to the target (here indices). This is used in Learner.predict and Learner.show_results to decode the predictions.

class LabelSmoothingCrossEntropyFlat [source]

LabelSmoothingCrossEntropyFlat(*args, axis=-1, eps=0.1, reduction='mean', flatten=True, floatify=False, is_2d=True) :: BaseLoss

Same as LabelSmoothingCrossEntropy, but flattens input and target.
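To illustrate wrapping your own PyTorch loss with BaseLoss, here is a minimal sketch, assuming BaseLoss accepts the loss class plus the keyword arguments described above. The L1 variant is invented for demonstration and is not one of the losses on this page:

import torch
import torch.nn as nn
from fastai.losses import BaseLoss

# Wrap nn.L1Loss so inputs/targets are flattened and int targets are floatified.
l1_flat = BaseLoss(nn.L1Loss, floatify=True, is_2d=False)

output = torch.randn(32, 5, 10)
target = torch.randint(0, 2, (32, 5, 10))  # ints: floatify=True converts them to floats
_ = l1_flat(output, target)                # plain nn.L1Loss would reject int targets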
https://docs.fast.ai/losses.html
The Gateway Control Utility (GCU) was a lightweight standalone application which provides information about the Gateway. It also provides some basic functionality such as starting and stopping the Gateway, opening the Gateway webpage, performing a Gateway Backup and Restore, resetting the Gateway password, and more. Older versions of Ignition featured a visual Gateway Control Utility or GCU that could start and stop the Ignition service. This visual element of the GCU, as well as the ability to start and stop the service, have since been removed in Ignition 8.0. For more information on starting or stopping the service, please see the Gateway Settings page. More information on the older version of the GCU can be found in Deprecated Features of the user manual.

Graphical User Interface (GUI) (pronounced 'gooey') enables a user to interact with a software application through graphics instead of text.

See Gateway Control Utility.
See GUI.
See Gateway Command-line Utility.

The Gateway Command-line Utility (gwcmd) provides a list of commands that perform specific functions in the Gateway. The Gateway Command-line Utility or 'gwcmd' provides basic commands, such as triggering System Commissioning (resetting the main password), changing the Gateway's port, and restarting the Gateway.
https://docs.inductiveautomation.com/exportword?pageId=26021695
R/drake_plan_helpers.R — trigger.Rd

Use this function inside a target's command in your drake_plan() or the trigger argument to make() or drake_config(). For details, see the chapter on triggers in the user manual.

trigger(
  command = TRUE,
  depend = TRUE,
  file = TRUE,
  seed = TRUE,
  format = TRUE,
  condition = FALSE,
  change = NULL,
  mode = c("whitelist", "blacklist", "condition")
)

Returns a list of trigger specification details that drake processes internally when it comes time to decide whether to build the target. A target always builds if it has not been built before. Triggers allow you to customize the conditions under which a pre-existing target rebuilds. By default, the target will rebuild if and only if:
- any of command, depend, or file is TRUE, or
- condition evaluates to TRUE, or
- change evaluates to a value different from last time.

The above steps correspond to the "whitelist" decision rule. You can select other decision rules with the mode argument described in this help file. On another note, there may be a slight efficiency loss if you set complex triggers for change and/or condition, because drake needs to load any required dependencies into memory before evaluating these triggers.

# A trigger is just a set of decision rules
# to decide whether to build a target.
trigger()
#> drake_triggers
#> $ command  : logi TRUE
#> $ depend   : logi TRUE
#> $ file     : logi TRUE
#> $ seed     : logi TRUE
#> $ format   : logi TRUE
#> $ condition: logi FALSE
#> $ change   : NULL
#> $ mode     : chr "whitelist"

# This trigger will build a target on Tuesdays
# and when the value of an online dataset changes.
trigger(condition = today() == "Tuesday", change = get_online_dataset())
#> drake_triggers
#> $ command  : logi TRUE
#> $ depend   : logi TRUE
#> $ file     : logi TRUE
#> $ seed     : logi TRUE
#> $ format   : logi TRUE
#> $ condition: language today() == "Tuesday"
#> $ change   : language get_online_dataset()
#> $ mode     : chr "whitelist"

if (FALSE) {
isolate_example("Quarantine side effects.", {
if (suppressWarnings(require("knitr"))) {
load_mtcars_example() # Get the code with drake_example("mtcars").
# You can use a global trigger argument:
# for example, to always run everything.
make(my_plan, trigger = trigger(condition = TRUE))
make(my_plan, trigger = trigger(condition = TRUE))
# You can also define specific triggers for each target.
plan <- drake_plan(
  x = sample.int(15),
  y = target(
    command = x + 1,
    trigger = trigger(depend = FALSE)
  )
)
# Now, when x changes, y will not.
make(plan)
make(plan)
plan$command[1] <- "sample.int(16)" # change x
make(plan)
}
})
}
https://docs.ropensci.org/drake/reference/trigger.html
Event types

Contents

Feature coming soon! Learn how to configure how event-related information appears to agents.

Prerequisites
- Configure the following permissions in Genesys Cloud:
- Journey > Session Type > View
- Journey > Session Type > Add
- Journey > Session Type > Edit
- Journey > Session Type >
- Journey > Event Type > View
- Journey > Event Type > Add
- Journey > Event Type > Edit
- Journey > Event Type >

Overview

An event type represents a type of organization-specific activity that you track as part of a custom session or a web session. For each event type, there are multiple event instances as customers interact with your website or third-party products. Each event type has distinct attributes. For example, the Bike delivery scenario includes an order placed event type with an attribute of order ID. Each time a delivery session occurs, there are multiple order placed events. For each order placed event, there is a unique order ID value.

Where agents see event-specific information

Agents see event-related data in the journey map when they view session details. The following image shows a web session. The pages that the customer visited appear along the left, and the specific events that occurred during the session appear in the customer journey map. When the agent rests their mouse on an icon in the map, a tooltip provides more details. As the following section explains, you can change the names of the events and the information in the tooltip.

Make event information meaningful to agents

Following are the ways that you can help agents understand this event-related data:

Configure the list of events
- Use friendly names: Events originate as API calls for custom events or as the result of SDK methods for a web session. When Genesys Predictive Engagement receives events, it uses the original naming conventions. Typically, these event names are technical names that may be unfamiliar to agents. You can use the Session Library to configure friendlier names that are appropriate for agents.
- Hide unnecessary events: A session can provide any number of events to Genesys Predictive Engagement, but only some of those events may be helpful or useful to agents. For example, for the delivery session, you might have a custom event called Generate BOL with attributes that list all the purchased items on a Bill of Lading. A Bill of Lading is significant to personnel directly involved in the movement of freight. Customers and agents would not have questions or concerns about it. Therefore, you might opt to hide all instances of this event type from agents.

Configure the tooltip
- Use friendly names: Agents can see an event's attributes when they rest their mouse on the event in the customer journey map. Ensure that the information appears as you want it to.
- Hide unnecessary events: To keep tooltips easy to use, showcase only a few events there.

View event types for a session

Configure the list of events
- page_viewed
- widget_web_chat_submitted
- Next to the event, under Actions, click Edit.
- In the Event Information tab, in the Event name box, type a meaningful name.
- To specify whether agents see this event in the list, set the Display to agents toggle.

Configure the tooltip

When agents rest their mouse on an event in the customer journey map, they see a tooltip that contains event details. The event attribute appears, followed by its value for the specific event. Ensure that the event attribute name is meaningful to agents. For more information and examples, see Make event information meaningful to agents.
- On the Attributes tab, give each event a meaningful name.
- To display the event attribute on the tooltip, set the toggle to Yes.
- To specify whether agents see this event in the tooltip, set the Display to agents toggle.
- To delete a tooltip, click the Delete icon.

Delete an event

Deleting an event means that the event no longer appears in the Session Library and agents can no longer see it.
https://all.docs.genesys.com/ATC/Current/AdminGuide/Event_types
Will Full Page Zoom slow down my page loading times?

No, Full Page Zoom does not have an impact on the page loading time. The zoomed images are loaded in the background once the page has been loaded. However, the size of the zoomed images may affect the app's responsiveness if they are too big. You can limit the size of the zoomed images on the app preferences page (Zoom preferences); please check the related articles for more information.
https://docs.codeblackbelt.com/article/70-will-full-page-zoom-slow-down-my-page-loading-times
Customizing Theme

Articles
- Theme Customizer
- Customizing Theme Options
- How to Set up Menu?
- How to manage Site Identity, Header Logo and Colors?
- How to add Header Image?
- How to manage Widget Area?
- Resortica Custom Post Types
- How to manage Additional CSS?
https://docs.codethemes.co/docs/resortica/customizing-theme/
Python is very easy to learn, and with some understanding of its basic syntax, you can get started making your own scripts. Let's get right to everyone's favorite example, "Hello World." The following script will print out "Hello World" to the output console.

print "Hello World"

Variables are created by simply assigning a value to them. Variables do not need to be declared, because Python has a dynamic type system. That means Python figures out the type of the variable on the fly when the script is executed. The following script would print out: 15

x = 5
y = 3
print x * y

Python makes limited distinction between single and double quotes. As long as they are used consistently, there are few times when the type of quotation mark you use matters. Some of the rules are shown here:

print "This is my text"   # Using double quotation marks
print 'This is my text'   # Using single quotation marks
print "This is my text'   # This will not work because Python does not allow mixing the single and double quotation marks
print "My name is 'David'"   # This will print: My name is 'David'
print 'My name is "David"'   # This will print: My name is "David"
print 'My name is Seamus O\'Malley'   # This will print: My name is Seamus O'Malley

Perhaps Python's most unique feature is logical blocks, which are defined by indentation in Python. A colon (:) starts a new block, and the next line must be indented (typically using a tab or 4 spaces). The block ends when the indentation level returns to the previous level. For example, the following will print out "5 4 3 2 1 Blast-off!" with each value on a new line. The final print is not part of the while loop because it isn't indented.

countdown = 5
while countdown > 0:
    print countdown
    countdown = countdown - 1
print "Blast-off!"

You can start and end a section of comments with a triple quote (''' or """).

'''
This is a lot of text that you want
to show as multiple lines of comments.
Script written by Professor X.
Jan 5, 1990
'''
print 'Hello world'

While Python doesn't explicitly have a way to block comment (comment out multiple lines), multi-line strings are functionally similar. In Ignition, you can use the Ctrl-/ keyboard shortcut to comment several lines of code at once. Just highlight one or more lines of code, hold the Ctrl key, and press "/". This will prepend all of the selected lines of code with the pound/hash (#) sign. Press Ctrl-/ again to remove the pound/hash sign.

Control flow statements, that is the ifs and loops, make the language do things differently based on various conditions. Python has all of the basic control flow statements that you'd expect from a programming language. An if statement allows you to check if a condition is true or not true. Depending on the condition, you can either execute a block of code or do something else. Many of these can be chained together to determine under what conditions certain code should execute; see the sketch at the end of this section.

if condition == True:
    print value1

Looping can be done with a for, which executes a block of code a set number of times, or a while, which executes a block of code until a certain condition is met. Both can be incredibly useful.

for item in myList:
    print item
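As mentioned above, here is a minimal sketch of chained conditions using the same Jython 2 print syntax Ignition uses; the temp variable and thresholds are invented for the example:

temp = 72

if temp > 90:
    print "Too hot"
elif temp > 60:
    print "Comfortable"   # this branch runs, since 72 > 60
else:
    print "Too cold"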
https://docs.inductiveautomation.com/display/DOC79/Python+Scripting
Marketo Programs can be synced with Salesforce Campaigns. Here is an overview of how this works.

Why should I sync Marketo programs with Salesforce campaigns?
- Use the powerful features of a Marketo Program.
- Keep members and their status in sync between a Marketo program and a Salesforce Campaign.
- Tap into the reporting features in Marketo and Salesforce.

How are a Marketo program and a Salesforce campaign synced?

In Marketo, you have the option to create a one-to-one mapping between a program and a Salesforce campaign. The channel and period cost in Marketo sync to Salesforce as the campaign type and actual cost. This sync is one way, from Marketo to Salesforce. Marketo program members and their progression statuses are kept in sync with the Salesforce campaign members and campaign member statuses. This is a bidirectional sync, so any changes made in either Marketo or Salesforce are reflected in both systems.

Note: If there are members in the Marketo program that don't exist in Salesforce, Marketo creates those people as leads in Salesforce.

What are the triggers/filters related to campaigns?

Triggers:
- Added to SFDC Campaign
- Removed from SFDC Campaign
- Status is Changed in SFDC Campaign

Filters:
- Member of SFDC Campaign

Can I add Marketo people to my SFDC campaign?
Yes, use the Add to SFDC Campaign flow action. If a person doesn't exist in Salesforce, Marketo will create them in Salesforce and then add them to the campaign.

Can I remove members from my SFDC campaign using Marketo?
Yes, use the Remove from SFDC Campaign flow action.

Can I change campaign member status using Marketo?
Yes, use the Change Status in SFDC Campaign flow action.

Why can't I see any of my Salesforce campaigns?
Here are things you can check:
- Make sure the campaign sync is enabled.
- Confirm that your Marketo Sync User is a Marketing User in Salesforce.

Note: If your Salesforce campaign and the mapped Marketo program have incompatible program statuses, you may receive an error message. We recommend that you match the program statuses prior to the sync.

Related Articles
- Sync an SFDC Campaign with a Program
- Understanding Program Membership
- Enable/Disable Campaign Sync
- Make Marketo Sync User a Marketing User
https://docs.marketo.com/display/public/DOCS/SFDC+Sync%3A+Campaign+Sync
Removal of NodeSecurity, GoLint, and SCSSLint

On the week of March 9th 2020, we'll be removing some tools from Codacy: NodeSecurity, GoLint, and SCSSLint. These tools have become deprecated or stopped being updated by their maintainers and started providing a bad experience for Codacy users, either by reporting false positives or causing other unexpected issues. We've been working on alternatives for each tool. Continue reading to find out how you can replace each of the removed tools:

- NodeSecurity: This tool detects outdated NPM dependencies. It stopped being maintained two years ago when it was acquired and replaced by NPM itself. Since it stopped being updated, it will no longer find new vulnerabilities, thus becoming useless and only providing a false sense of security. Recommendation: If you have an NPM repository, you now have this functionality out of the box.
- GoLint: Last month we launched a new Go tool, revive, which is a drop-in replacement for GoLint supporting all of its rules and even more. Recommendation: Revive was enabled by default for your Go repositories with GoLint's patterns. Check that the default patterns make sense for your team.
- SCSSLint: This tool has now been unmaintained for four years, with the old maintainers suggesting that users migrate to Stylelint, which has already been supported by Codacy for quite some time. Recommendation: Check that Stylelint is enabled and its default patterns make sense for your team.

Last update: November 13, 2020
https://docs.codacy.com/release-notes/cloud/removal-of-nodesecurity-golint-and-scsslint/
Difference between revisions of "FIFO DAT"
Revision as of 16:56, 24 January 2019
- FIFO Page
- Callbacks DAT callbacks - The Callbacks DAT will execute once for each row added to the FIFO DAT.
- Execute from executeloc - Determines the location the script is run from.
- Keep First Row firstrow - Keeps the first row.
https://docs.derivative.ca/index.php?title=FIFO_DAT&diff=next&oldid=11677
Monitor a webhook's performance
From Genesys Documentation
This topic is part of the manual Genesys Predictive Engagement Administrator's Guide for version Current of Genesys Predictive Engagement.

Contents

Learn how to monitor a webhook's performance.

Prerequisites
- Configure the following permissions in Genesys Cloud:
- Journey > Report > View
- Journey > Action Map > View (to see action maps in the report)

Monitor a webhook's performance

Use the Action Map Performance report to monitor your webhook performance. When you create an action map that uses a webhook, the metric names have the following specific meanings.
https://all.docs.genesys.com/ATC/Current/AdminGuide/Monitor_webhooks
cfme.utils.soft_get module

cfme.utils.soft_get.soft_get(obj, field_base_name, dict_=False, case_sensitive=False, best_match=True, dont_include=None)[source]

This function is used for cases where we want to get some attribute when we either know only a few parts of its name or want to prevent case issues.

Example: Imagine you have a relationships table and you want to get the 'image' field. Since the exact name of the field sometimes changes among versions, pages, etc., it could appear as 'Images', 'Image', 'Container Images', 'Containers Images', etc. Since we don't care about the exact name and know that 'image' is unique in the table, we can use this function to avoid this complexity.

Parameters:
- obj – The object from which we want to get the attribute
- field_base_name – The base name, a string that we know for sure is a substring of the target field
- dict_ – Whether this is a dict AND we want to perform the same functionality on its keys
- case_sensitive – Whether the search is case sensitive
- best_match –
  - If True: in case it found more than 1 matching field, it will take the closest one
  - If False: in case it found more than 1 matching field, it will raise an error
- dont_include – Strings that should not be part of the field. Used to prevent cases like: soft_get(obj, 'image') -> obj.image_registry

Returns: The value of the target attribute
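A minimal usage sketch based on the signature above; the Relationships class is a stand-in invented for illustration, and dont_include is assumed to take a list of strings:

from cfme.utils.soft_get import soft_get

class Relationships:
    """Stand-in for a relationships table whose field names drift across versions."""
    def __init__(self):
        self.container_images = 42
        self.image_registry = 7

rels = Relationships()
# Finds container_images despite not knowing the exact field name,
# while dont_include steers the search away from image_registry.
value = soft_get(rels, 'image', dont_include=['registry'])
print(value)  # 42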
https://cfme-tests.readthedocs.io/en/stable/modules/cfme/cfme.utils.soft_get.html
Webhooks

To get the best performance, you can use webhooks to receive a push update when an API call is processed asynchronously:
- Calculate Rate
- Create Label
- Cancel Label
- Create Bulk Download
- Manifest

How to set up the Webhooks

You can follow this to set up the webhooks in your account.

The webhook envelope contains:

{
  "event": "calculate_rates",
  "date_time": "2016-09-26T07:15:30+00:00",
  "meta": {
    // meta and body are the same as our API envelope
  },
  "data": {
    // ...
  }
}

Events

Example calculated rate webhook:

{
  "event": "calculate_rates",
  "date_time": "2016-09-26T07:15:30+00:00",
  "meta": {
    "code": 4713,
    "message": "All or partial failed in rate request.",
    "details": [],
    "retryable": false
  },
  "data": {
    "created_at": "2016-09-26T07:15:26+00:00",
    "id": "xxxxxx-3b12-4376-bc3d-33333333",
    "updated_at": "2016-09-26T07:15:30+00:00",
    "status": "calculated",
    "rates": [
      {
        "shipper_account": {
          "id": "yyyyyy-1b7a-45da-8e1f-67898736567",
          "slug": "fedex",
          "description": "fedex testing account"
        },
        "service_type": null,
        "service_name": null,
        "pickup_deadline": null,
        "booking_cut_off": null,
        "delivery_date": null,
        "transit_time": null,
        "error_message": null,
        "info_message": null,
        "charge_weight": null,
        "total_charge": null,
        "detailed_charges": []
      }
    ]
  }
}

Retry Webhooks

Postmen sends the events to each webhook URL with POST requests. Our suggestion is to simply include a secret key in the URL that you provide and check the secret GET parameter in your scripts.
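Building on that suggestion, here is a minimal receiver sketch in Python with Flask; the route, the secret value, and the choice of Flask itself are assumptions for illustration only:

from flask import Flask, request, abort

app = Flask(__name__)
EXPECTED_SECRET = "change-me"  # hypothetical secret embedded in the registered webhook URL

@app.route("/webhooks", methods=["POST"])
def postmen_webhook():
    # Reject calls that lack the secret query parameter from the registered URL.
    if request.args.get("secret") != EXPECTED_SECRET:
        abort(403)
    envelope = request.get_json(force=True)
    if envelope.get("event") == "calculate_rates":
        rates = envelope.get("data", {}).get("rates", [])
        print("received", len(rates), "rates")
    return "", 200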
https://docs.postmen.com/webhooks.html
Live Licensing

InsightVM will only store assessment data for your assets up to the licensed maximum. For the benefit of awareness, the Security Console tracks license usage information and will display alerts when your assessed asset count nears the currently allotted asset limit. Assets assessed by agents are also factored into this total. This feature will allow you to plan ahead when it comes to determining what license requirements are suitable for your organization. Your organization's license determines the number of assets that can be assessed for vulnerabilities or policy compliance for storage in the console. Your license does not limit the number of assets that can be discovered on your network.

Access to license usage data

Only global administrators can view the extent of an organization's license limit in full detail. Non-admin user access is limited to overall and discovered asset counts, as well as license limit banner notifications.

NOTE: The permissions of the current user role will determine overall and discovered asset count visibility. A user with access to only a subset of their organization's assets will be limited to the totals of that subset.

What counts as an asset?

The application is licensed for unique assessed assets. An asset is considered assessed when its vulnerability or policy assessment data is stored in the Security Console. The application uses correlation heuristics to determine whether an asset is unique based on the following hierarchy of attributes:
- Universally Unique Identifier (UUID), weighted at 1.0
- Hostname(s), weighted at 0.49
- Media Access Control (MAC) address(es), weighted at 0.35
- IP address(es), weighted at 0.16

Assets identified and successfully correlated are only counted once. A single asset that has been assessed by both an agent and a credentialed scan will not be double-counted as a result. To correlate agent assets with unauthenticated scans, see our documentation on correlating assets with Insight Agent UUIDs.

Assets in AWS and Microsoft Azure

Each instance or virtual machine within these environments counts as one asset until it is terminated. Rapid7's unique cloud connections with AWS and Azure facilitate the immediate removal of these terminated assets from your license count as soon as they are decommissioned. Discovery connections for these environments automatically know when an asset has been decommissioned. As a result, stale assets will not count toward your license limit as you continue to scan your network.

Containers

For containers, each host counts as one asset. A single host running multiple containers will still count as one asset. Your license count will not be affected by assessments run on a per-image basis, or by data used for this assessment.

Viewing your asset count

REMINDER: Data concerning an organization's license limit is only available to global administrators.

Assets page: Click the Assets tab in the Security Console. Your organization's license usage will be shown in the form of a progress bar located below the overall asset count.

Administration page: Click the Administration tab in the Security Console. In the "Global and Console Settings" frame, click Administer. The "Security Console Configuration" screen will be shown. Click the Licensing tab on this page to view your organization's license details.

License usage notifications

NOTE: These notifications will appear for both admin and non-admin users.
90% usage

The Security Console will display an orange notification when licensed assets exceed 90% of the allotted maximum.

100% usage

The Security Console will display a red notification when licensed assets exceed 100% of the allotted maximum. In the event that your licensed assets exceed the license limit, you will be allowed temporary flexibility to scan above your license limit in order to remain secure.

110% usage

The Security Console will display a red notification banner when licensed assets exceed 110% of the allotted maximum. This banner cannot be dismissed. It will stop appearing only after your licensed asset count is brought within your license limit. If no action is taken toward increasing your license at this stage, further scans may trigger automatic enforcement.

Automatic enforcement

Automatic enforcement may take place if you continue to scan assets after exceeding 110% of your allotted maximum. Should automatic enforcement occur, new scan data will not be retained. In order to avoid the potential loss of new scan data, clean up duplicate or stale assets in your Security Console or contact your Rapid7 CSM to increase your license. Automatic enforcement will deactivate after your asset count is brought within your licensed maximum (this can be done by removing assets or increasing the limit of your license). New scan data will be saved as usual once this has been resolved. Scan data that was not saved during automatic enforcement will not be available after your asset count is brought within your license limit. Scan data that is collected during that time is not recoverable.
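To make the attribute weighting from "What counts as an asset?" concrete, here is a hedged Python sketch of how weighted correlation could work; the scoring function, matching rules, and threshold idea are illustrative assumptions, not Rapid7's actual algorithm:

# Attribute weights from the documentation's hierarchy (illustrative use only).
WEIGHTS = {"uuid": 1.0, "hostname": 0.49, "mac": 0.35, "ip": 0.16}

def correlation_score(asset_a, asset_b):
    """Sum the weights of attributes that match between two asset records."""
    return sum(
        weight
        for attr, weight in WEIGHTS.items()
        if asset_a.get(attr) and asset_a.get(attr) == asset_b.get(attr)
    )

scan = {"hostname": "web01", "mac": "00:11:22:33:44:55", "ip": "10.0.0.5"}
agent = {"uuid": None, "hostname": "web01", "mac": "00:11:22:33:44:55", "ip": "10.0.0.9"}

# 0.49 + 0.35 = 0.84; above some assumed threshold, the two records
# would be treated as one licensed asset rather than two.
print(correlation_score(scan, agent))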
https://docs.rapid7.com/insightvm/live-licensing/
Using the Quick Filter

You can apply the Quick Filter in the following ways:
- Filtering by field values
- Filtering by an excluded field's value
- Filtering with the Filter Row
- Filtering by custom criteria

Once you apply a quick filter, the filtering condition appears in the Condition Panel. From the Condition Panel, you can perform the following tasks:

NOTE: By default, Alloy Discovery converts the Quick Filter into the Advanced Filter when you save a grid view (for details, see Working with views). To disable automatic conversion, clear the check box in the Advanced Filter Options section of the Advanced Preferences.
https://docs.alloysoftware.com/alloydiscovery/8/help/using-grids/using-the-quick-filter.htm
Customizing Theme

Articles
- Creating a custom Homepage
- Customizing Theme Options
- How to modify Footer?
- How to set up Menu?
- How to manage Site Title, Logo and Site Icon?
- How to manage Widget Area?
- How to use Elementor?
- Custom Elementor Elements
- How to manage Additional CSS?
https://docs.codethemes.co/docs/robolist/customizing-theme/
System Design / Components

Components

- Base Models: There are base abstract models which hold the attributes and methods that the system expects. Those attributes and fields are very generic. For reference: Base Classes
- Admin Site and Models: A custom admin site and admin.ModelAdmin subclasses that hold the functionality needed for a smooth framework dashboard experience. For reference: The Admin
- Report Registry and Views: A registry for the reports created. It's also responsible for report menu generation. ReportView is a subclass of slick_reporting.ReportView with many additions, like caching and ajax.
- Front End Report/Widget Loader: A collection of JavaScript/jQuery functions and wrappers to easily create chart/report widgets and take full control of how they are displayed.

Why use the admin?

It's much simpler, especially around CRUD-intensive apps (like ERPs):
- With just one class you can manage all CRUD operations.
- Easily have your URLs named and reversed.
- Multiple formsets supported out of the box.
- It's unified; no need to learn new terminologies. If you've worked with the admin, you'll know your way around here.
- Admin goodies (list filter, auto form generation, save and continue, save and add new buttons, discovery of related objects that would get deleted).
- Out-of-the-box permissions handling.

Now, imagine having to write all of this, again, in class-based views... that's a pain that no one should face. A minimal sketch of the single-class pattern follows this section.

"But the admin should be for site admins only." Well, that's an old phrase that gets passed around which I don't find convincing enough. Maybe it was true in the old days; but now, you don't have to be a staff member to be able to log in to an admin dashboard. Also, the Ra dashboard is a custom admin site (independent from your typical admin).

Reporting

The reporting engine itself was moved from this package to an independent package, Django Slick Reporting. Ra framework, apart from the calculation itself, holds the functionality of organizing the reports and creating HTML widgets out of those reports, which can be controlled, and by default supports showing results in tables and different kinds of charts, all with speed.
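As a sketch of the "one class for all CRUD" point, here is stock django.contrib.admin usage; the Product model and its fields are hypothetical, and Ra's own base admin classes layer more functionality on top of this:

from django.contrib import admin
from myapp.models import Product  # hypothetical model

@admin.register(Product)
class ProductAdmin(admin.ModelAdmin):
    # One class drives the list, create, edit, and delete views,
    # plus named/reversible URLs and permission checks.
    list_display = ("name", "price", "created_at")
    list_filter = ("created_at",)
    search_fields = ("name",)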
https://ra-framework.readthedocs.io/en/latest/getting_started/components.html
Network Security Page last updated: This topic describes some of the networking and routing security options for your Ops Manager deployment. Securing Traffic and Controlling Routes You can enable and configure a number of customization options to secure traffic in and out of your Ops Manager deployment. - TLS Connections in Ops Manager Deployments - Securing Traffic into TAS for VMs - Providing a Certificate for Your TLS Termination Point - Enabling TCP Routing Using the IPsec Add-On The IPsec add-on for Ops Manager provides additional security to the network layer for each BOSH-deployed virtual machine (VM). The Ops Manager IPsec add-on secures network traffic within an Ops Manager deployment and provides internal system protection if a malicious actor breaches your firewall. - Securing Data in Transit with the IPsec Add-on - Rotating IPsec Credentials - Installing the Ops Manager IPsec Add-On Network Communication Paths in Ops Manager - BOSH DNS Network Communications - Cloud Controller Network Communications - Container-to-Container Network Communications - CredHub Network Communications - Diego Network Communications - Loggregator Network Communications - MySQL Network Communications - NATS Network Communications - Routing Network Communications - UAA Network Communications
https://docs.pivotal.io/ops-manager/2-9/security/networking/index.html
Installing Right Click Tools Enterprise Standalone

Right Click Tools Basic System Requirements
- Windows Operating System
- ASP.net 4.7 or higher
- Functional ConfigMgr Client

Download the newest version of Right Click Tools. You can download the latest installer for Right Click Tools by logging into the Recast Portal site.

Installation

Close all open ConfigMgr consoles on your chosen computer. Then double-click the installer .msi to install. The installer will open, and you should click "Next" to continue.

Choose Right Click Tools Version

In the screen that follows, select Recast RCT Enterprise Standalone. Note that your license expires at some time in the future; if you attempt to run an Enterprise tool after your license has expired, you will receive the error "Invalid Recast Permissions."

Licensing

Licenses can be downloaded during your Right Click Tools installation by entering your Recast Portal email address and password and clicking Download License. The license information should show up in the right pane if the retrieval was successful. If the computer with your ConfigMgr console does not have internet access, you can use the Browse for License button to browse the filesystem for a license file that has been exported from the Portal.

Click Install to finish the installation.

Verification

To verify that your Right Click Tools have installed correctly, open the ConfigMgr console on the computer where you just installed Right Click Tools. Click on the Recast node on the left-hand side of the screen. On the right side of the resulting screen it should show the Right Click Tools version, and the Enterprise License box should be checked.

Configuration Settings

There are some settings that will be useful to add in the Configure Recast RCT application so that Right Click Tools will run smoothly. Click the Start menu and open Configure Recast RCT.

Under the SQL Configuration tab, entering the SQL Server and SQL Database information for your ConfigMgr server may speed up actions and help with WMI quota violations. Entering the MBAM information in the lower half of this window will help with the MBAM Dashboards and with the Security Tools management of BitLocker keys.

Under the Interactive Command Prompt tab, downloading PsExec will allow you to use the Interactive Command Prompt action.
https://docs.recastsoftware.com/features/Installation/enterprise_standalone/index.html
2020-11-24T04:32:48
CC-MAIN-2020-50
1606141171077.4
[array(['media/RCT_First_install_screen.png', 'Right Click Tools Enterprise Server Installation'], dtype=object) array(['media/RCT_second_install_standalone.png', 'Right Click Tools Enterprise Server Installation'], dtype=object) array(['media/RCT_third_install_standalone.png', 'Right Click Tools Enterprise Server Installation'], dtype=object) array(['media/RCT_verify.png', 'Right Click Tools Enterprise Server Installation'], dtype=object) array(['media/Configure_Recast_RCT.png', 'Configure Recast RCT'], dtype=object) array(['media/SQL_Configuration_RCT.png', 'SQL Configuration RCT'], dtype=object) array(['media/Interactive_command_Prompt.png', 'Interactive Command Prompt'], dtype=object)]
docs.recastsoftware.com
Release Notes Contributor Guide¶ Release notes for StarlingX projects are managed using Reno, allowing release notes to go through the same review process used for managing code changes. Release documentation comes from YAML source files stored in the project repository, which, when built in conjunction with RST source files, generate HTML files. More details about the Reno Release Notes Manager can be found at: Locations¶ StarlingX release notes documentation exists in the following projects: starlingx/clients: StarlingX Client Libraries starlingx/config: StarlingX System Configuration Management starlingx/distcloud: StarlingX Distributed Cloud starlingx/distcloud-client: StarlingX Distributed Cloud Client starlingx/fault: StarlingX Fault Management starlingx/gui: StarlingX Horizon plugins for new StarlingX services starlingx/ha: StarlingX High Availability/Process Monitoring/Service Management starlingx/integ: StarlingX Integration and Packaging starlingx/metal: StarlingX Bare Metal and Node Management, Hardware Maintenance starlingx/nfv: StarlingX NFVI Orchestration starlingx/tools: StarlingX Build Tools starlingx/update: StarlingX Installation/Update/Patching/Backup/Restore starlingx/upstream: StarlingX Upstream Packaging Directory structures¶ The directory structure of release documentation under each StarlingX project repository is fixed. This example shows the stx-config project: releasenotes/ ├── notes │ └── release-summary-6738ff2f310f9b57.yaml └── source ├── conf.py ├── index.rst └── unreleased.rst The initial modifications and additions to enable the release notes documentation service in each StarlingX project are as follows: .gitignore Modifications to ignore the build directories and HTML files for the release notes. .zuul.yaml Modifications to add jobs to build and publish the release notes document. releasenotes/notes Directory created to store your release notes files in YAML format. releasenotes/source Directory created to store your release notes source files. releasenotes/source/conf.py Configuration file to determine the HTML theme, Sphinx extensions, and project information. releasenotes/source/index.rst Source file used to create your index RST source file. releasenotes/source/unreleased.rst Source file to avoid breaking the real release notes build job on the master branch. doc/requirements.txt Modifications to add the os-api-ref Sphinx extension. tox.ini Modifications to add the configuration to build the release notes locally. See stx-config [Doc] Release Notes Management as an example of this first commit: Once the Release Notes Documentation service has been enabled, you can create new release notes. Release notes files¶ The following shows the YAML source file for the stx-config project: stx-config/releasenotes/ ├── notes │ └── release-summary-6738ff2f310f9b57.yaml To create a new release note that documents your code changes, use the tox newnote environment: $ tox -e newnote hello-my-change A YAML source file is created with a unique name under the releasenotes/notes/ directory: stx-config/releasenotes/ ├── notes │ ├── hello-my-change-dcef4b934a670160.yaml The content is grouped into logical sections based on the default template used by Reno: features issues upgrade deprecations critical security fixes other Modify the content in the YAML source file using reStructuredText format. Developer workflow¶ Start the common development workflow to create your change: "Hello My Change".
Create its release notes; this requires little effort, since the title and content can often be reused from the Git commit information. Add your change, including its release notes, and submit it for review. Release team workflow¶ Start the development work to prepare the release. This might include a Git tag. Generate the Reno report. Add your change and submit it for review.
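For illustration, a minimal release note file might look like the following sketch. The section names come from Reno's default template described above; the note text itself is a placeholder:

# releasenotes/notes/hello-my-change-dcef4b934a670160.yaml
features:
  - |
    Added the "Hello My Change" capability.
fixes:
  - |
    Fixed an issue where ... (describe the fix here).

Only include the sections that apply to your change; sections that are absent are simply omitted from the built report.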
https://docs.starlingx.io/contributor/release_note_contribute_guide.html
2020-11-24T04:18:51
CC-MAIN-2020-50
1606141171077.4
[]
docs.starlingx.io
The activation limit allows a license to be activated against LicenseSpot. If this limit isn't present, you'll get an error message when trying to activate a license, indicating that it doesn't support activation. This limit can be combined with any of the other limits. The following options are available when configuring the activation limit: - Max Activations: indicates how many times the license or serial number can be activated. This is useful, for example, if you want to allow the same license to be activated on more than one computer or by more than one user. - Grace Period: this is the period the user is allowed to use the license without having to activate it, expressed as a number of days. If you set this to zero, the license validation process will fail unless the license is activated; in essence, it'll force the user to activate right away. - Use Hardware Lock: this option will add a hardware ID code to the license automatically. This hash code is generated based on hardware components and it'll prevent the license from being copied from one computer to another. Example of an activation To activate a license using LicenseSpot, you can use the following code. C# ExtendedLicense license = ExtendedLicenseManager.GetLicense(typeof(Form1), this, "PUBLIC_KEY"); //paste your public key from Key Pairs license.Activate("serial number"); VB.NET Dim license as ExtendedLicense license = ExtendedLicenseManager.GetLicense(Me.GetType(), Me, "your public key") license.Activate("serial number")
http://docs.licensespot.com/license/activation/
2018-03-17T05:59:10
CC-MAIN-2018-13
1521257644701.7
[]
docs.licensespot.com
Advanced tutorial¶ Here we have a more advanced bioinformatics pipeline that adds some new concepts. This is a simple script that takes an input file and returns the file size and the number of sequencing reads in that file. This example uses a function from the built-in NGSTk toolkit. In particular, this toolkit contains a few handy functions that make it easy for a pipeline to accept inputs of various types. So, this pipeline can count the number of reads from files in BAM, fastq, or fastq.gz format. You can also use the same functions from NGSTk to develop a pipeline to do more complicated things, and handle input of any of these types. First, grab this pipeline. Download count_reads.py, make it executable (chmod 755 count_reads.py), and then run it with ./count_reads.py. You can grab a few small data files from the microtest repository. Run a few of these files like this: ./count_reads.py -I ~/code/microtest/data/rrbs_PE_R1.fastq.gz -O $HOME -S sample1 ./count_reads.py -I ~/code/microtest/data/rrbs_PE_fq_R1.fastq -O $HOME -S sample2 ./count_reads.py -I ~/code/microtest/data/atac-seq_SE.bam -O $HOME -S sample3 This example is a documented vignette, so just read it and run it to get an idea of how things work. #!/usr/bin/env python """ Counts reads. """ __author__ = "Nathan Sheffield" __email__ = "[email protected]" __license__ = "GPL3" __version__ = "0.1" from argparse import ArgumentParser import os, re import sys import subprocess import yaml import pypiper parser = ArgumentParser( description="A pipeline to count the number of reads and file size. Accepts" " BAM, fastq, or fastq.gz files.") # First, add standard arguments from Pypiper. # groups="pypiper" will add all the arguments that pypiper uses, # and adding "common" adds arguments for --input and --sample-name # and "output_parent". You can read more about your options for standard # arguments in the pypiper docs (section "command-line arguments") parser = pypiper.add_pypiper_args(parser, groups=["pypiper", "common", "ngs"], args=["output-parent", "config"], required=['sample-name', 'output-parent']) # Add any pipeline-specific arguments if you like here. args = parser.parse_args() if not args.input or not args.output_parent: parser.print_help() raise SystemExit if args.single_or_paired == "paired": args.paired_end = True else: args.paired_end = False # args for `output_parent` and `sample_name` were added by the standard # `add_pypiper_args` function. # A good practice is to make an output folder for each sample, housed under # the parent output folder, like this: outfolder = os.path.abspath(os.path.join(args.output_parent, args.sample_name)) # Create a PipelineManager object and start the pipeline pm = pypiper.PipelineManager(name="count", outfolder=outfolder, args=args) # NGSTk is a "toolkit" that comes with pypiper, providing some functions # for dealing with genome sequence data. You can read more about toolkits in the # documentation # Create a ngstk object ngstk = pypiper.NGSTk(pm=pm) raw_folder = os.path.join(outfolder, "raw/") fastq_folder = os.path.join(outfolder, "fastq/") # Merge/Link sample input and Fastq conversion # These commands merge (if multiple) or link (if single) input files, # then convert (if necessary, for bam, fastq, or gz format) files to fastq.
# We'll start with a timestamp that will provide a division for this section # in the log file pm.timestamp("### Merge/link and fastq conversion: ") # Now we'll rely on 2 NGSTk functions that can handle inputs of various types # and convert these to fastq files. local_input_files = ngstk.merge_or_link( [args.input, args.input2], raw_folder, args.sample_name) cmd, out_fastq_pre, unaligned_fastq = ngstk.input_to_fastq( local_input_files, args.sample_name, args.paired_end, fastq_folder) # Now we'll use another NGSTk function to grab the file size from the input files pm.report_result("File_mb", ngstk.get_file_size(local_input_files)) # And then count the number of reads in the file # (filter() is wrapped in list() so that len() also works under Python 3) n_input_files = len(list(filter(bool, local_input_files))) raw_reads = sum([int(ngstk.count_reads(input_file, args.paired_end)) for input_file in local_input_files]) / n_input_files # Finally, we use the report_result() function to print the output and # log the key-value pair in the standard stats.tsv file pm.report_result("Raw_reads", str(raw_reads)) # Cleanup pm.stop_pipeline()
http://pypiper.readthedocs.io/en/latest/tutorials-advanced.html
2018-03-17T06:02:50
CC-MAIN-2018-13
1521257644701.7
[]
pypiper.readthedocs.io
Kazoo Debug# To be able to debug your setup you need to check the logs. The command tail -f xxx.log will open the logfile and present a live running view. Essential log files are: /var/log/2600hz/kazoo.log The main log of the Kazoo platform; it tells you roughly what is happening on your systems in terms of Kazoo. It requires some getting used to, but after that it's your best friend. If you need it to show you more details you can set the verbose level: /opt/kazoo/utils/sup/sup whistle_maintenance syslog_level debug /var/log/haproxy.log An underestimated tiny log file, really descriptive. It tells you if haproxy is doing the needed magic; if not, your system won't run nicely. Used by Kazoo to access multiple systems as if they were one (BigCouch DB): /var/log/kamailio/kamailio.log Kamailio is your SBC; it receives registration requests (plus some others) and validates them: /var/log/freeswitch/debug.log FreeSWITCH should be obvious: all calls are handled by it. This file will give you a lot of info. One could also use fs_cli on a FreeSWITCH box: /var/log/rabbitmq/rabbit.log RabbitMQ is the communication tool used by Kazoo to communicate internally. Typical usage: Use case: Inbound call fails: Just imagine what should happen for a call to be accepted: a call needs to be placed by someone, then delivered to the Kazoo platform, then accepted by FreeSWITCH, then dealt with. So... can you confirm that someone (you?) is dialing the right number? Is the number configured at the DID provider to be routed to Kazoo? Are you sure? OK, so let's first shut down one FS box or tail -f /var/log/freeswitch/debug.log on all FS boxes. Place a test call... do you see an INVITE coming in that file? Yes? Great... find the Call-ID and close the log file, then grep CALL_ID /var/log/freeswitch/debug.log The result of that action should be the relevant log lines for that call. Check it line by line to see what happened and why it did not do what you expected. Most errors in the early stage of the Kazoo learning curve have to do with ACLs. Also check Inbound Calls Fail and this page. No INVITE? That suggests that the call is not delivered to your systems. Can you confirm that someone (you?) is dialing the right number? Is the number configured at the DID provider to be routed to Kazoo? Are you sure? If so, please check that you don't have a firewall in place that's messing things up. One easy but dangerous way to test is to disable the firewall for a while. Still nothing? If you are using a SIP address when directing calls to Kazoo, please check DNS and DNS propagation; if unsure, try to use the IP address instead of the domain name. Still nothing? You could check the Kazoo log; I don't think it will contain anything, but you (actually I) never know.
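As a rough illustration of that grep workflow (the Call-ID below is a made-up placeholder; copy the real one from the INVITE you see in the live log):

tail -f /var/log/freeswitch/debug.log
# place the test call, note the Call-ID of the INVITE, then:
grep "d41d8cd9-1234@10.0.0.5" /var/log/freeswitch/debug.log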
https://docs.2600hz.com/sysadmin/doc/kazoo/kazoo-debug/
2018-03-17T06:30:48
CC-MAIN-2018-13
1521257644701.7
[]
docs.2600hz.com
Creating Themes Themes are a set of view files used to render your blog. Because you actually have access to the CodeIgniter super object, your theme files can act like any other CodeIgniter view file. The $blog variable, which is an instance of the Fuel_blog class, is passed to each main view file as well. You can even directly load any of the blog's models (posts, categories, comments, links and author). Below are the steps to creating a new theme: - Duplicate the default theme and rename the folder - In the settings module, change the Theme location parameter to the path to your new theme folder - Edit the files listed below to your liking View Files The following files can be found in your renamed theme folder for you to begin editing to your liking: Main View Files - index.php - the main homepage of the blog. Displays post excerpts (similar to posts.php below) - archives.php - displays a list of older posts grouped by month - author.php - displays the author's bio information - authors.php - displays a list of authors for the blog - categories.php - displays a list of categories - category.php - displays all the posts for a given category - post.php - displays the contents of a single post - posts.php - displays post excerpts - search.php - displays the search results Blocks View Files - about.php - displays the description found in the settings - archives.php - displays side menu links for older blog posts - author.php - displays the various authors of the blog and the number of posts associated with them - categories.php - displays side menu links of blog categories - comment.php - displays a single comment - comment_form.php - displays the comment form for a post - comment_thanks.php - displays the thanks information after a successful comment has been made on a post - header.php - the HTML header content which normally includes the RSS feed and css links - post_unpublished.php - displayed when a blog post is not currently published (and you are logged in to FUEL) - posts.php - used to render post excerpts and is used for both the posts and category main views - search.php - displays the side menu search box - sidemenu.php - displays all the contents for the side menu
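As a rough sketch of how a theme view can use the $blog variable, note that the accessor and property names below are hypothetical, chosen only to illustrate that theme files behave like ordinary CodeIgniter views:

<!-- posts.php (illustrative only; get_posts() is a hypothetical method name) -->
<?php foreach ($blog->get_posts() as $post) : ?>
	<h2><?php echo $post->title; ?></h2>
	<div class="excerpt"><?php echo $post->excerpt; ?></div>
<?php endforeach; ?>

Consult the Fuel_blog class and the blog's models for the actual methods available to your views.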
http://docs.getfuelcms.com/modules/blog/themes
2018-03-17T06:11:17
CC-MAIN-2018-13
1521257644701.7
[]
docs.getfuelcms.com
Entities in LUIS Entities are key data in your application's domain. Entities represent data you want to pull from the utterance. This can be a name, date, product name, or any group of words. Entities are optional but highly recommended. Entities are shared across intents. Types of entities LUIS offers many types of entities: prebuilt entities, custom machine-learned entities, and list entities. Machine-learned entities work best when tested via endpoint queries and by reviewing endpoint utterances. Regular expression entities use the open-source Recognizers-Text project. There are many examples of the regular expressions in the /Specs directory for the supported cultures. If your specific culture or regular expression isn't currently supported, contribute to the project. Exact-match entities use the text provided in the entity to make an exact text match. Composite vs hierarchical entities Composite entities and hierarchical entities both have parent-child relationships and are machine learned. The machine learning allows LUIS to understand the entities based on different contexts (arrangement of words). Composite entities are more flexible because they allow different entity types as children. A hierarchical entity's children are only simple entities. Data matching multiple entities If a word or phrase matches more than one entity, the endpoint query returns each entity. If you add both the prebuilt number entity and the prebuilt datetimeV2 entity, and have the utterance create meeting on 2018/03/12 for lunch with wayne, LUIS recognizes all the entities and returns an array of entities as part of the JSON endpoint response: { "query": "create meeting on 2018/03/12 for lunch with wayne", "topScoringIntent": { "intent": "Calendar.Add", "score": 0.9333419 }, "entities": [ { "entity": "2018/03/12", "type": "builtin.datetimeV2.date", "startIndex": 18, "endIndex": 27, "resolution": { "values": [ { "timex": "2018-03-12", "type": "date", "value": "2018-03-12" } ] } }, { "entity": "2018", "type": "builtin.number", "startIndex": 18, "endIndex": 21, "resolution": { "value": "2018" } }, { "entity": "03/12", "type": "builtin.number", "startIndex": 23, "endIndex": 27, "resolution": { "value": "0.25" } } ] } Best practices See Entity best practices to learn more. Next steps See Add entities to learn more about how to add entities to your LUIS app.
https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-concept-entity-types
2018-03-17T06:37:42
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
How to Create a Style Kit This guide is intended to help novices and pros alike, but assumes you know a little CSS. It offers best-practices advice aimed at Style Kits intended for sale or client projects with regard to format and naming conventions. If you're just getting started with using CSS to customize WordPress sites, see How to Customize Your Site with CSS Style Kits are composed of three main parts: - Page builder layouts and their widgets' settings - Custom CSS code - The content (XML) We recommend starting on a fresh WordPress install with no pre-existing content. We also recommend setting up your demo on a live server before doing the export so others can reference what the intended outcome should be, and images come over in the content import without additional tweaking or manual upload. Creating Page Builder Presets - Build your pages starting with a blank page. When designing for the general public, only Layers widgets should be used, as 3rd party widget data is not saved by our export function. - Add custom styling that is specific to the widget or page layout using the Advanced option of the widget Design Bar. - On the Page Editor view in your admin, ensure the page's Title and Slug match and clearly describe the kind of layout it is. Example: Portfolio, Features, Team Members, etc. - Export the page See How to Export Page Builder Layouts Build Your Custom CSS The following will help you decide where to best place custom styling and whether the Advanced option in the widget should be used vs the main CSS area. See All Layers Customization Tutorials Advanced Widget CSS Each widget has an Advanced option on the Design Bar for creating a custom class. This class is added to the main container of the widget, so any properties applied to the class alone will only affect that container (or any other widget with the same class assigned) and not necessarily everything in it. This allows you to apply a style to just this widget or page layout and is the only way you can add a custom class name to builder elements in Layers without touching HTML. Without it, you would need to use much more complex selectors to target specific elements or widgets. Targeting elements in CSS by ID is also not a good idea, as those IDs may be in use already with inline styles, anchors or Javascript. The benefit of using this method is that custom classes and styles added to the Advanced options in widget setups are saved along with the other data when exporting a page file. This makes importing the styling aspect a little more foolproof. See our Advanced Custom CSS Tutorial for a detailed walkthrough of using this feature to customize your presets. Our example kit uses advanced techniques covered in the Advanced Tutorial to replace the large button in widgets with an app store image hosted on the app providers' websites. Additionally, it adds some global CSS to the main CSS area, which is not saved in the page export. Custom CSS added to the main CSS area is not associated with any one page, so it needs manual export and import, explained next. Custom CSS Custom CSS is used to add global custom styling to the CSS tab under the main Customizer menu if you need to provide additional styling changes. When building your custom CSS, keep the following in mind: - Never copy entire styles from the inspector or core stylesheets – your declarations should contain only changed properties and new additions. - Use waterfall formatting to make it easier to read and edit.
- Lint your CSS before finalizing it and saving it to your kit (an example declaration is shown at the end of this guide). - If your custom CSS is really extensive, it may be time to consider a child theme. Style kit CSS should be as simple as possible and focused on basic styling changes like colors or element styles. It should not contain advanced CSS like transforms, layout hacks or hard-coded font sizing that might interfere with the user's font or widget choices in the normal Layers options. To prepare your Custom CSS for your kit, simply copy it into a plain-text document named css.txt. Mac users, hit Shift+Cmd+T in TextEdit to switch to plain-text mode. View the example kit css.txt file Export WordPress Content Including a content XML is an important step in ensuring the base pages used for your presets and the images used in the widgets can be easily imported by the user. Before exporting your content, check the following: - There are no tags under Posts → Tags - If your content includes demo posts, ensure any categories you created are relevant to your kit, otherwise create only one, such as "Demo". This makes customization and cleanup of the content easier for end-users and reduces the risk of having a duplicate category with a bad slug. - Pages other than Layers builder pages should not use "Home", "Portfolio", "Shop" or "Blog" as the page title. - Empty the trash on your posts and page indexes - Remove all comments - Ensure all plugins are deactivated to avoid import of CPT data, etc. Export the content from Tools → Export and name the file your_kit-content.xml where your_kit is your kit name. Example: kittn-content.xml Create the Readme Your readme should include general information about you as the author, a link to the demo, if one exists, and some basic instructions for use. You may copy the demo kit's readme and customize it for your kit, if desired. If you will be hosting your kit on GitHub, don't forget to add a readme.md file. Pack It Up Your kit should use a simple structure like this one: - folder - images - css.txt - layout-one.json - layout-two.json - kit-content.xml - readme.txt - Test the import process on a live server to ensure everything works! - Re-export all the data if your original files were staged locally. This ensures others can import images without uploading them manually. - Zip the folder and name it kitname_layers-style-kit.zip where kitname is your Style Kit name. Test It Go through How to Import a Style Kit to verify your Style Kit works as expected on a fresh install. Example Kit Download Example See the Demo
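For reference, a declaration in your css.txt might look like the following sketch: only changed properties, waterfall-formatted. The class name is an example of one you might have assigned via the widget's Advanced option:

/* kittn: restyle the hero button (.kittn-hero is an example custom class) */
.kittn-hero .button {
	background: #2bb673;
	border-radius: 3px;
}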
http://docs.layerswp.com/doc/how-to-create-a-style-kit/
2018-03-17T06:09:54
CC-MAIN-2018-13
1521257644701.7
[]
docs.layerswp.com
Social Link Workflow With Social Link your existing users can link their account to one or more social networks or identity providers such as Facebook, Google, Twitter and Yahoo. These users may then log in either with their already-existing credentials or by using our Social Login service. Once a user is authenticated, our system will provide a consistent user_token, allowing you to tie the social network account to an existing account in your database. Following user authentication, you can also use our REST API for advanced social network integration features.
http://docs.oneall.com/services/workflow/social-link/
2018-03-17T06:34:27
CC-MAIN-2018-13
1521257644701.7
[array(['http://public.oneallcdn.com/img/docs/flowcharts/social-link-implementation.png', 'Social Link Implementation'], dtype=object) ]
docs.oneall.com
CFEngine Enterprise Can I use an existing PostgreSQL installation? No. Although CFEngine keeps its assumptions about Postgres to a bare minimum, CFEngine should use a dedicated PostgreSQL database instance to ensure there is no conflict with an existing installation. Do I need experience with PostgreSQL? PostgreSQL is highly configurable, and you should have some in-house expertise to properly configure your database installation. The defaults are well tuned for common cases, but you may find optimizations depending on your hardware and OS. What is the system user for the CFEngine dedicated PostgreSQL database? The database runs under the cfpostgres user. What are the requirements for installing CFEngine Enterprise? General Information Users and Permissions - CFEngine Enterprise makes an attempt to create the local users cfapache and cfpostgres, as well as the group cfapache, during install. How does Enterprise scale? See best practices on scalability Is it normal to have many cf-hub processes running? What steps should I take after installing CFEngine Enterprise? There are general steps outlined in Post-Installation Configuration. In addition to this, Enterprise uses the local mail relay, and it is assumed that the server where CFEngine Enterprise is installed has proper mail setup.
https://docs.cfengine.com/docs/3.10/guide-faq-enterprise.html
2017-05-22T23:10:02
CC-MAIN-2017-22
1495463607242.32
[]
docs.cfengine.com
The Divide field acts as a divider between other fields. Arguments NOTE: When using the Divide field with required, the divider cannot be hidden by default. It’s best only to use the required argument with this field when the fold is shown by default. Example Declaration $fields = array( 'id' => 'divider_1', 'desc' => __('This is the description field, again good for additional info.', 'redux-framework-demo'), 'type' => 'divide' );
https://docs.reduxframework.com/core/fields/divide/
2017-05-22T23:17:02
CC-MAIN-2017-22
1495463607242.32
[array(['https://f.cloud.github.com/assets/3412363/1568018/c18bbb38-50a7-11e3-8ea2-82de146fcc90.png', 'Divide Field'], dtype=object) ]
docs.reduxframework.com
Glossary¶ - DBAPI - PEP 249 – Python Database API Specification v2.0 - ipdb - ipdb exports functions to access the IPython debugger, which features tab completion, syntax highlighting, better tracebacks, better introspection with the same interface as the pdb module. - MySQL A popular database server. - pep8 Python style guide checker pep8 is a tool to check your Python code against some of the style conventions in PEP 8 – Style Guide for Python Code. - pyflakes passive checker of Python programs A simple program which checks Python source files for errors. Pyflakes analyzes programs and detects various errors. It works by parsing the source file, not importing it, so it is safe to use on modules with side effects. It’s also much faster. - PyMySQL Pure-Python MySQL client library. The goal of PyMySQL is to be a drop-in replacement for MySQLdb and work on CPython, PyPy, IronPython and Jython. - sqlalchemy The Python SQL Toolkit and Object Relational Mapper.
https://aiomysql.readthedocs.io/en/v0.0.8/glossary.html
2019-08-17T23:19:47
CC-MAIN-2019-35
1566027313501.0
[]
aiomysql.readthedocs.io
How to Animate Objects and the Camera In Harmony, you can animate objects by drawing them on their individual layer, then positioning them at different locations on different keyframes across the timeline, creating a motion path. The same principle can be applied to the scene's camera, since it is a layer itself. Animating a Layer You can create a motion path directly on layers (animated layers). You can control and define a trajectory using several different parameters, including: - X, Y and Z positions (3D Path or Separate Positions) - Angle (rotation) - Skew - X and Y Scales 3D objects and 3D-enabled layers also have these extra parameters: - Euler Angle or Quaternion Angle (when the 3D option is enabled) - Z Scale (when the 3D option is enabled). Animating the Camera A scene's camera can be manipulated and animated just like any other layer. It is listed in the Timeline view and you can use the same tools and selection modes to offset or animate it. However, the camera layer itself is static, which means it keeps the same position and angle throughout the whole scene. In order to be able to animate the camera, you need to connect it to a peg layer, which can be animated, and which will directly affect the position and angle of the camera. You can animate your camera movements directly in the Camera view. Alternatively, you can use the Side or Top views, which can be especially useful when animating a camera in a multiplane scene, where each layer is positioned at a different distance from the camera. - Do one of the following: - From the top menu, select Windows > Top or Side. - From any view already open, click the Add View button at the top-right corner and select Top or Side. - By default, new scenes do not have a camera layer. To add a camera layer, do one of the following: - From the top menu, select Insert > Camera. - From the Layers toolbar, click the Add Layers button and select Camera. - From the Node Library view, select a Camera node and drag it to the Node view. A new camera layer is added to the scene and appears in the Timeline view. - Select the Camera layer in the Timeline view and click the Add Peg button. If the new Peg layer did not appear directly above the camera, you may have clicked elsewhere in the scene, which deactivated the layer on which you want to add the Peg layer. To fix this: - Select the Camera layer and drag and drop it under the new Peg layer. Or delete the misplaced Peg layer, select the Camera layer and click the Add Peg button again. - From the Node Library view, select a Peg node and drag it to the Node view. Then connect the peg's output port to the camera's input port. You can also press Ctrl + P (Windows/Linux) or ⌘ + P (macOS) to create a peg and connect it to the camera. - In the Tools toolbar, enable the Animate mode.
https://docs.toonboom.com/help/harmony-15/premium/getting-started-guide/camera.html
2019-08-17T22:49:09
CC-MAIN-2019-35
1566027313501.0
[array(['../Resources/Images/HAR/Stage/Paths/an_animatinglayer5.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/SceneSetup/HAR11/HAR11_AccessParametersTimeline.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/Steps/HAR11_AnimateLayer_001.png', 'Set first Keyframe Set first Keyframe'], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/Steps/HAR11_timeline1.png', None], dtype=object) array(['../Resources/Images/EDU/HAR/Student/Steps/an_animatinglayer3.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/Steps/HAR11_AnimateLayer_002.png', 'Select Last Frame Select Last Frame'], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/Steps/HAR11_timeline2.png', None], dtype=object) array(['../Resources/Images/EDU/HAR/Student/Steps/an_animatinglayer5_0.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR12/HAR12_camera_layer.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR12/HAR12_select_cam_layer.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR12/HAR12_collapsed_expanded_peg_layers.png', None], dtype=object) array(['../Resources/Images/EDU/HAR/Student/anp_addpegtocamera.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/select_kf1_peg.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/add_kf1_peg.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR12/HAR12_add_kf_peg.png', None], dtype=object) ]
docs.toonboom.com
Changelog for package rocon_app_manager 0.6.1 (2013-09-11) report details of currently running app. disable uuid arg shunting was not enabled for concert clients. 0.6.0 (2013-08-30) disable uuids by default, also fire up the paired invitations by default for convenience. use a proper regular expression for the target. zeroconf name should match app manager name. bugfix remaps which shouldn't remap. pass on screen parameter settings from rocon_launch. missed an update for the new resource finding rapp lists. protect services from initialising in parallel. diagnostic flips for pairing mode. 0.5.4 (2013-08-07) public is now 11311 now private master is 11312 apply rosparm to set zeroconf parameter add gateway and hub as dependeny 0.5.3 (2013-07-22) install concert directory adding install rule installing pairing_master 0.5.2 (2013-07-17) force faster initialisation of the gateway advertisements in standalone and public pairing. push application namespace underneath the node name in standalone mode to match remote control mode styles - for android apps. app manager icon parameters as resource names. use resource names for rapp lists instead of full paths. flag for disabling the cleanup watchdog and consolidating services locally. pairing mode cleanup when android device is gone. manual pairing invitations now working. convenience pause to ensure small apps flip promptly. no longer need app manager robot_xxx parameters. bugfix missing shutdown of start and stop app services when remote control changes. pairing clients infra. bugfix the list apps service to respond with correct running apps signature. make the default application namespace with a gateway underneath the gateway name, not root. publish an icon with the platform information. fix publishing of listed/running apps. renamed paired launchers to be less confusing. remove trivial debug print about to move on start app latched list apps publisher 0.5.1 (2013-06-10) 0.5.0 0.5.0 (2013-05-27) Point to correct license file Removed (now) incorrect comments fix bad reference to non-exsistant parameter file. fix bad reference to non-exsistant parameter file. fix remappings to match roslaunch style Merge pull request #41 <> from robotics-in-concert/fix_app_list_file_not_found Fix app list file not found warnings and errors if app list file not found, fixes #40 <> . app list to rapp list app_lists args to rapp_lists trivial cleanup of a comment. auto invite false in paired master. trivial comment. eliminating duplicated code between paired and concert client launchers. minor reorginisation of app manager launchers (more modular). android can now finnd us via robot type and name parameters. close down quietly if gateway shut down before the app manager. flip with default application namespace remove old services before updating with new. don't do the hard work of advertisements. pairing updates. a few bugfixes starting the pairing starting to add components for pairing. return values from error status was wrong better errors messages for stop app. fix stop app for naturally terminating apps. create a useful pointer to the running rapp in the manager while it runs. better errors messages for stop app. fix stop app for naturally terminating apps. create a useful pointer to the running rapp in the manager while it runs. apps starts with human readable namespace standalone app manager. 0.4.0 gateway info now a msg. minor pep8 stuff. 
robot namespace back robot namespacing fix now it supports action_client and action_server public interface remove screen flag in concert_client/gateway logs out app compatibility. 0.3.0 (2013-02-05 15:23) 0.2.0 (2013-02-05 13:18) adding rocon_apps dependency .app -> .rapp correcting wiki url no more concert client taking the concert client out of the loop concert status -> app manager status, part of first redesign. has its own status now, labelled statusd till concert client swaps its own out. remote_control -> invite, start on general app design concert_msgs dependency removed parameter cleanup common create_rule code moved to rocon_utilities much minor refactoring. collapse advertisements. 0.1.1 (2013-01-31) advertising list apps, also correcting advertising behaviour in the client. remove unused logger. stop flipping the platform info. advertising the platform info service. platform info to rocon_app_manager_msgs revert loginfo Rapp->App Manager launch apps under a unique namespace so caller_id's are guaranteed to be unique. refactoring app->rapp.
http://docs.ros.org/hydro/changelogs/rocon_app_manager/changelog.html
2019-08-18T00:01:22
CC-MAIN-2019-35
1566027313501.0
[]
docs.ros.org
Are you looking to provide a discount like the scenarios below? - Buy 3 for $10 - Buy 6 for $20 In short, this actually means you want to offer: - 3 quantities of the same or different products for $10 - 6 quantities of the same or different products for $20 Example: - You are selling a "Hair Band". - Each Hair Band costs $4 - Buying 3 will cost $12. But you want to provide 3 for $10 (that is, a $2 discount) Limitations Before getting into this scenario, consider the following limitations: - This will work for ONE specific product. Example: Buy 3 of Product A for $10 - If you want to apply this to different products, then all of them should have the same price. Example: Buy any 3 from "Hair Accessory" for $10. In this case, all products in the Hair Accessory category should have the same price. If Product A is $4, then Product B should also be $4 Only if your store satisfies the above conditions will this discount scenario work. Otherwise, it will NOT. Let us see this with an example: Cost of a CAP is $100 Buy 2 for $150 (that is, the customer gets a $25 discount per quantity) IMPORTANT: You need to work out the individual quantity discount General: Condition: Specific Products rules. Discount tab: Buy 2 for $150 (that is, the customer gets a $25 discount per quantity) IMPORTANT: You need to work out the individual quantity discount. Woo Discount Rules works based on discounts, so you can only define the discount amount. If you want the discount to scale (for example, a larger discount as the quantity increases), define the discount amount for each quantity range accordingly. Here is the Cart page: Here is a Video Tutorial: Quick links: To know more about the Discount table -> Click here Still unclear? Click the live chat icon button that is visible in the right bottom corner or you can submit a support request to [email protected] We are always happy to assist you :)
https://docs.flycart.org/en/articles/2419662-fixed-price-discounts
2019-08-17T22:50:13
CC-MAIN-2019-35
1566027313501.0
[array(['https://downloads.intercomcdn.com/i/o/75121284/c502d4b12850e76bf8dea55d/screenshot-demo.flycart.org-2018.09.06-13-11-48.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/81018330/5d778a3b703f203d97015e6d/general.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/81019473/2f3b7512f11fc5f0085555ca/Specific+product.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/81019707/dea7a929c9066c2a0befa1c5/firstset.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/81020870/c63cbcb4078293e17bf638f5/secondset.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/81573139/2a1682cdee9e9be5969a94e4/2for150.png', None], dtype=object) ]
docs.flycart.org
Starting with Xcode 10, XCLOC is the new format for project localization. The Xcode Localization Catalog is designed to provide additional contextual information for translators. It also provides support for a wide range of Localization Resources like .strings (.stringsdict), images, markdown, and others. It is built on top of the XLIFF format and still uses it for storing localized strings. Localization Resources are stored in the Localized Contents directory inside the Catalog. Lokalise offers support only for text resources exported as XLIFF files. Generating You can generate an XCLOC file by selecting your project in Xcode and running the Editor -> Export for localization... action. Importing In order to import a newly exported localization, you need to open the Localized Contents directory inside your Localization Catalog (XCLOC) and select the generated XLIFF file. Exporting When exporting translations from Lokalise, select the Apple XLIFF (.xliff) format. You can import generated XLIFF files into an Xcode project the same way you would import a Localization Catalog. Resources Apple XLIFF (.xliff) support details. Internationalization resources by Apple. New Localization Workflows in Xcode 10 from WWDC 2018.
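If you prefer the command line to the Editor menu, the same export can typically be produced with xcodebuild; the project name, output path, and language below are placeholders:

xcodebuild -exportLocalizations -project MyApp.xcodeproj -localizationPath ./localizations -exportLanguage de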
https://docs.lokalise.co/en/articles/2500996-xcode-localization-catalog-xcloc
2019-08-17T22:42:01
CC-MAIN-2019-35
1566027313501.0
[array(['https://downloads.intercomcdn.com/i/o/86063288/e61a91ee2f1bc64ba7b41f09/LocalizationResources.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/86041505/699eeb67591fed9ea2061567/GeneratingXCLOC.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/86063175/2b1156d195728dc779511f79/Importing.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/86063204/0af9a148ebe3e4419f0ca255/Exporting.png', None], dtype=object) ]
docs.lokalise.co
Application.DecimalSeparator property (Project) Gets the character that separates the whole and fractional parts of a number. Read-only String. Syntax expression.DecimalSeparator expression A variable that represents an Application object. Remarks Project sets the DecimalSeparator property equal to the corresponding value in the Regional and Language Options dialog box of the Microsoft Windows Control Panel. Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
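A minimal usage sketch; because the property is read-only, you can only inspect its value:

Sub ShowDecimalSeparator()
    ' Displays the character used between the whole and fractional parts, e.g. "." or ","
    MsgBox Application.DecimalSeparator
End Sub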
https://docs.microsoft.com/en-us/office/vba/api/project.application.decimalseparator
2019-08-17T22:55:08
CC-MAIN-2019-35
1566027313501.0
[]
docs.microsoft.com
View type and member definitions Developers often need to view the source code definitions for types or class members they use in their code. In Visual Studio, the Go To Definition and Peek Definition features enable you to easily view the definition of a type or member. If the source code is not available, metadata is displayed instead. Go To Definition The Go To Definition feature navigates to the source of a type or member, and opens the result in a new tab. If you are a keyboard user, place your text cursor somewhere inside the symbol name and press F12. If you are a mouse user, either select Go To Definition from the right-click menu or use the Ctrl-click functionality described in the following section. Ctrl-click Go To Definition Ctrl+click is a shortcut for mouse users to quickly access Go To Definition. Symbols become clickable when you press Ctrl and hover over the type or member. To quickly navigate to the definition of a symbol, press the Ctrl key and then click on it. It's that easy! You can change the modifier key for mouse-click Go To Definition by going to Tools > Options > Text Editor > General, and selecting either Alt or Ctrl+Alt from the Use modifier key drop-down. You can also disable mouse-click Go To Definition by unchecking the Enable mouse click to perform Go To Definition checkbox. Peek Definition The Peek Definition feature lets you preview the definition of a type without leaving your current location in the editor. If you are a keyboard user, place your text cursor somewhere inside the type or member name and press Alt + F12. If you are a mouse user, you can select Peek Definition from the right-click menu. To enable Ctrl+click functionality, go to Tools > Options > Text Editor > General. Select the option Open definition in peek view and click OK to close the Options dialog box. Then, press Ctrl (or whichever modifier key is selected in Options), and click on the type or member. If you peek another definition from the popup window, you start a breadcrumb path that you can navigate using the circles and arrows that appear above the popup. For more information, see How to: View and edit code by using Peek Definition (Alt+F12). View metadata as source code (C#) When you view the definition of C# types or members whose source code is not available, their metadata is displayed instead. You can see the declarations of the types and members, but not their implementations. When you run the Go To Definition or Peek Definition command for an item whose source code is unavailable, a tabbed document that contains a view of that item's metadata, displayed as source code, appears in the code editor. The name of the type, followed by [from metadata], appears on the document's tab. For example, if you run the Go To Definition command for Console, metadata for Console appears in the code editor as C# source code. The code resembles its declaration, but does not show an implementation. Note When you try to run the Go To Definition or Peek Definition command for types or members that are marked as internal, Visual Studio does not display their metadata as source code, regardless of whether the referencing assembly is a friend or not. View decompiled source definitions instead of metadata (C#) You can set an option to see decompiled source code when you view the definition of a C# type or member whose source code is unavailable. To turn on this feature, choose Tools > Options from the menu bar. 
Then, expand Text Editor > C# > Advanced, and select Enable navigation to decompiled sources. Note Visual Studio reconstructs method bodies using ILSpy decompilation. The first time you access this feature, you must agree to a legal disclaimer regarding software licensing and copyright and trademark laws. See also Feedback
https://docs.microsoft.com/en-us/visualstudio/ide/go-to-and-peek-definition?view=vs-2019
2019-08-17T23:42:24
CC-MAIN-2019-35
1566027313501.0
[array(['media/click_gotodef.gif?view=vs-2019', 'Mouse click go to definition animation'], dtype=object) array(['media/editor_options_mouse_click_gotodef.png?view=vs-2019', 'Enabling mouse-click go to definition'], dtype=object) array(['media/editor_options_peek_view.png?view=vs-2019', 'Setting the mouse-click peek definition option'], dtype=object) array(['media/peek_definition.gif?view=vs-2019', 'Peek definition animation'], dtype=object) array(['media/metadatasource.png?view=vs-2019', 'Metadata as Source'], dtype=object) array(['media/go-to-definition-decompiled-sources.png?view=vs-2019', 'Viewing a decompiled definition'], dtype=object) ]
docs.microsoft.com
QuickTime Movie Parser Filter This component has been removed from Windows Vista and later operating systems. It is available for use in the Microsoft Windows 2000, Windows XP, and Windows Server 2003 operating systems. The QuickTime Movie Parser filter splits Apple® QuickTime® data into audio and video streams. It supports QuickTime 2.0 and earlier. The input pin connects to a source filter such as the Async File Source filter or the URL File Source filter. The Parser uses the AVI Decompressor or QT Decompressor filter to decompress QuickTime files. The filter creates one output pin for the video stream and one output pin for the audio stream. Related topics
https://docs.microsoft.com/en-us/windows/win32/directshow/quicktime-movie-parser-filter
2019-08-18T00:24:51
CC-MAIN-2019-35
1566027313501.0
[]
docs.microsoft.com
Compatible VAST Wrapper Players As explained in How to implement In-Stream Part 1 - Setting up the Ad zone, the Wrapper element is the code below in between the <Wrapper> and </Wrapper> tags. It will point to a secondary ad server where the ad and the other VAST elements are placed. This basically means that the VAST file will look like this: VAST Wrapper Coding Example: <VAST version="3.0"> <Ad id="1"> <Wrapper> <AdSystem>Sample Ad Wrapper</AdSystem> <VASTAdTagURI><![CDATA[]]></VASTAdTagURI> <Impression><![CDATA[]]></Impression> <Error><![CDATA[[ERRORCODE]&idzone=00000]]></Error> <Creatives> <Creative sequence="1" id="1"> <Linear skipoffset="00:00:05"> <TrackingEvents> <Tracking event="progress" offset="00:00:10.000"><![CDATA[]]> </Tracking> </TrackingEvents> <VideoClicks> <ClickTracking><![CDATA[]]></ClickTracking> </VideoClicks> </Linear> </Creative> </Creatives> </Wrapper> </Ad> </VAST> You must ensure that the player you are using to display ads is Wrapper compatible, meaning that it can get and read the content within the Wrapper element. The players listed in How to implement In-Stream Part 2 - Implementation examples are 100% VAST compatible with our ads, which means that they will also work for Wrapper VAST tags. The players are: - - - - However, here is an explanation of how to integrate VAST in FlowPlayer. The code shared is just to guide you and it is not guaranteed that it will work as-is. Please also check the links provided in order to check the status of the player and how to integrate VAST ad tags properly in each case. Steps to follow for VAST Wrapper incompatibilities: - Check the Player with a test wrapper zone: - Check the player with the third-party VAST tag (refer to How to implement In-Stream Part 3 - Testing the ad zone) - If it is not compatible: Update or change your Player. Review the compatible Players listed in How to implement In-Stream Part 2 - Implementation examples If you are unable to update or to change the Player: Disable VAST Tag campaigns on zone level (enabled by default). The main reason for doing this is that if you are using an incompatible Player your ads will be displayed, but no impressions or views will be counted. This would be misleading for our advertisers and they might stop targeting your ad zone, causing a negative impact on your revenues.
https://docs.exoclick.com/docs/faqs/publishers/tutorials/compatible-vast-wrapper-players/
2020-09-18T11:45:51
CC-MAIN-2020-40
1600400187390.18
[array(['/img/media/0EbigZDHURJDA256K8RurtRuzHgxr7fP.png', None], dtype=object) ]
docs.exoclick.com
Using the development key in your app or the API will tell Leanplum to send the data to the development/test pipeline. This data is processed in real time, and the last 50 requests (track, start, setUserAttributes, etc.) from the last 24 hours are displayed in the Debugger. Access the Debugger from Side Navigation -> More -> Debugger.
https://docs.leanplum.com/docs/debugger
2020-09-18T09:42:26
CC-MAIN-2020-40
1600400187390.18
[array(['https://files.readme.io/1c9e606-debugger.png', 'debugger.png'], dtype=object) array(['https://files.readme.io/1c9e606-debugger.png', 'Click to close...'], dtype=object) ]
docs.leanplum.com
The Ortho command restricts the movement of the cursor to multiples of a specified angle from the last point created. The OrthoAngle command specifies the angle the movement of the cursor is restricted to when ortho mode is active. The SetOrtho command turns ortho mode on, off, or toggles the current state. This is useful for inclusion in a script file for the ReadCommandFile command. Use modeling aids Model with precision Cursor constraints Rhinoceros 6 © 2010-2020 Robert McNeel & Associates. 12-Sep-2020
https://docs.mcneel.com/rhino/6/help/en-us/commands/ortho.htm
2020-09-18T10:29:26
CC-MAIN-2020-40
1600400187390.18
[]
docs.mcneel.com
Scala# This guide covers configuring Scala projects on Semaphore. If you're new to Semaphore, we recommend reading the guided tour first. Hello world# The following Semaphore configuration file compiles and executes a Scala program: # .semaphore/semaphore.yml version: v1.0 name: Using Scala in Semaphore agent: machine: type: e1-standard-2 os_image: ubuntu1804 blocks: - name: Choose version task: jobs: - name: Choose Scala version commands: - checkout - sem-version scala 2.12 - scala hello.scala The contents of the Scala program are as follows: // hello.scala object HW { def main(args: Array[String]) { println("Hello World!") } } Scala Play example project# Semaphore provides a tutorial and demo Play application with a working CI pipeline that you can use to get started quickly: Supported Scala versions# The supported Scala versions via the Ubuntu1804 image are: - 2.11.11 - 2.12.6 Changing Scala version# You can choose the Scala version to use with the help of the sem-version utility. Choosing Scala version 2.11 can be done with the following command: sem-version scala 2.11 In order to go back to Scala 2.12, which is the default version on the Semaphore Virtual Machine, you should execute the following command: sem-version scala 2.12 Dependency management# You can use Semaphore's cache tool to store and load any files or Scala libraries that you want to reuse between jobs. System dependencies# Scala projects might need packages like database drivers. As you have full sudo access on each Semaphore 2.0 VM, you are free to install all required packages.
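Regarding the cache tool mentioned under Dependency management: for example, a job could restore previously cached dependencies before compiling and store them afterwards. This is a sketch only; cache restore and cache store auto-detect common dependency paths, and the sbt step assumes an sbt-based project:

# fragment of a job's commands list
- checkout
- cache restore
- sem-version scala 2.12
- sbt compile
- cache store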
https://docs.semaphoreci.com/programming-languages/scala/
2020-09-18T10:04:36
CC-MAIN-2020-40
1600400187390.18
[]
docs.semaphoreci.com
ReportServer Element (ASSL) Contains the name of the Microsoft SQL Server Reporting Services instance that is used by the ReportAction. Syntax <ReportAction> ... <ReportServer>...</ReportServer> ... </ReportAction> Element Characteristics Element Relationships Remarks The element that corresponds to the parent of ReportServer in the Analysis Management Objects (AMO) object model is ReportAction.
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2012/ms126974(v=sql.110)
2020-09-18T11:55:38
CC-MAIN-2020-40
1600400187390.18
[]
docs.microsoft.com
The Developer perspective in the web console provides you the following options from the Add view to create applications and associated services and deploy them on OpenShift Dedicated: From Git: Use this option to import an existing codebase in a Git repository to create, build, and deploy an application on OpenShift Dedicated. Container Image: Use existing images from an image stream or registry to deploy them on OpenShift Dedicated. To create applications using the Developer perspective, ensure that: You have logged in to the web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Dedicated. The following procedure walks you through the Import from Git option in the Developer perspective to create an application. Create, build, and deploy an application on OpenShift Dedicated using an existing codebase in GitHub as follows: In the Add view, click From Git to see the Import from Git form. In the Resources section, select: Deployment, to create an application in plain Kubernetes style. Deployment Config, to create an OpenShift style application. Knative Service, to create a microservice. Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running. Click the Labels link to add custom labels to your application. Click Create to create the application and see its build status in the Topology view.
https://docs.openshift.com/dedicated/4/applications/application_life_cycle_management/odc-creating-applications-using-developer-perspective.html
2020-09-18T11:18:19
CC-MAIN-2020-40
1600400187390.18
[]
docs.openshift.com
Before creating a list, it might be helpful to think through some of the logistics of how your list will work in your simulation model: You'll primarily use the Toolbox to create and manage lists. See Using the Toolbox for general information about working with tool components like lists. This topic will explain how to delete and rename lists and perform other related tasks. To create a list: Fields keep track of data about each entry on the list. They are the building blocks that you will use to create queries to filter and prioritize list items. (See Key Terms - Fields for more information.) For that reason, choosing the list's fields is one of the most important aspects of building a list. Be aware that you can use the pre-built fields or create custom fields of your own. To add fields: To move or reorganize the fields, simply drag the handle on the left of the field to re-position it: To delete fields, click the Remove button next to the field. You can use menu options (picklists) on various properties or triggers found in the properties window to connect a 3D object to a list. The method that you use to connect the 3D object will depend on whether you are pushing a flow item, task, or object to a list. The following sections will explain the most common methods. Fixed resources can push or pull flow items to a list. You'll primarily use the Flow tab on the fixed resource's property window to push or pull flow items. To push flow items to a list: To pull flow items from a list: You can push a task executer to a list whenever the task executer is free to be assigned to another task. The list can then prioritize and assign tasks to the available task executers. To push task executers to a list, you need to set the task executers so that they push themselves onto a list at the beginning of a simulation run and every time they are available to work on another task: When a fixed resource has a task that needs to be completed, it can pull a task executer from a list: If you want to push and pull tasks or fixed resources from a list, you should consider using the Process Flow tool. See the following section about Using Lists in Process Flow for more information. You can use the Process Flow tool to create custom list logic if you want to use lists in ways that haven't yet been pre-built into FlexSim. The Process Flow tool is especially effective for creating task sequences and working with lists that use abstract values such as numbers and strings. Additionally, Process Flow gives you more flexibility in controlling objects like task executers when you need to do more than just push and pull from lists. See Overview of the Process Flow Interface for more information about the Process Flow tool. Also, see Process Flow Tutorial 1 and Process Flow Tutorial 4 for examples of lists in Process Flow. To open a list and view its entries during a simulation run: Although queries are optional, they are the key to getting the most out of your lists. Queries are the custom logic that will determine which list entries should get pulled first from a list or which back orders should get fulfilled first. Queries can also filter or restrict which list entries will get pulled from a list. See List Features and Key Terms for examples of possible queries. You don't add queries to the list itself in the list properties window. You add a query to the object or Process Flow activity that is pulling from the list.
For example, if you set a processor to pull flow items from a list, you would create a query by opening the processor's property window. On the Flow tab, you'd add the query to the Pull Strategy property: If you were to add a query in the Process Flow tool, you'd add the query to the Query property on a Pull From List activity: List queries use SQL syntax. SQL is a programming language that has mostly been used for querying, filtering, and prioritizing data contained in tables and databases. Since this is very similar to how lists function in FlexSim, SQL is a good fit for querying lists. If you're already comfortable with SQL, you'll find it very easy to create queries. If you're not, then just be assured that you'll only need a very introductory level of SQL syntax in order to create most of the list queries you're interested in. You'll primarily use two clauses in this context to create a list query, as explained in the following table: See Introduction to SQL Queries for a deeper explanation of how to create SQL queries. Any time you see a property where you can enter a query, there is a picklist that can help you. Picklists can give you hints and suggestions about how to create a valid list query. To create a query using picklists: Imagine your simulation model has 3 queues and these queues are pushing flow items that have a label named type that can have a value of 1, 2, or 3. The queues push the items to an item list with the following fields: When a processor pulls an item from this list, it could possibly use the query:

WHERE type == 2 ORDER BY age ASC

The WHERE clause in this query filters the flow items by the type field. In this case, it restricts the processor so that it only pulls items with an item type that is equal to 2. Therefore, the processor will only pull items that have a type of 2. The ORDER BY clause then uses the age field to prioritize which items should get pulled from the list. The age field contains data about how long the flow item has been on the list. The ASC keyword means ascending: the query will pull the items that have been waiting the least amount of time on the list first. If you wanted to pull the oldest items first, you would use DESC (which means descending) instead. In this example, the processor would pull /Queue3/Box~2 first because it has a type of 2 and has been on the list for less time than /Queue3/Box. Before applying a query to a list, you might want to test it to see if it's filtering and prioritizing the list entries correctly. To test a particular query: In the same way that you can use a query to prioritize which list entries should get pulled first, you can also prioritize back orders so that the list will fulfill back orders that have specific attributes first. For example, you could prioritize back orders based on a label on a fixed resource or based on some condition in the model. When you create a back order queue strategy, you might want to pay attention to when the list evaluates the list of back orders and re-organizes it. You might want to be mindful of certain conditions in the simulation model that can cause the back orders to be evaluated in different ways. To create a back order queue strategy:
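Returning to the query example above, the two clauses combine with standard SQL operators; the following hypothetical pull strategy reuses the type and age fields from that example:

WHERE type == 2 AND age > 100 ORDER BY age DESC

This restricts the pull to items of type 2 that have been on the list for more than 100 time units, oldest first.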
https://docs.flexsim.com/en/20.0/ConnectingFlows/Lists/WorkingWithLists/WorkingWithLists.html
2020-09-18T11:13:43
CC-MAIN-2020-40
1600400187390.18
[]
docs.flexsim.com
FlexSim is designed to run on most modern Windows systems. Unique or complex modeling situations may carry additional hardware requirements. Review the FlexSim Answers article Recommended System Requirements for an in-depth discussion regarding the hardware components that make the most difference when building and running your simulation models. Most desktop and laptop computers produced in the last few years meet FlexSim’s minimum requirements. A system that meets these minimum requirements will allow you to: For the best experience - and for larger, more graphically intense, customized, or complex models, including using features such as importing CAD drawings, custom 3D objects, using the experimenter or optimizer, etc. - we recommend certain upgrades above the specifications listed in the minimum requirements. Our recommendations are included in the following table. A high-end computer system produced within the last two years will typically include such specifications. Some modern low power processors, such as Intel’s Atom, have lower performance in highly demanding apps like FlexSim than mainstream desktop and notebook processors. Consequently, software responsiveness will suffer and simulations will take longer to complete on these low power CPUs. ARM processors are not supported. Once an operating system reaches its end of extended support with Microsoft, FlexSim no longer actively supports that OS. Windows on ARM is not supported. FlexSim uses functionality provided by Microsoft’s latest .NET framework. A compatible .NET framework is already installed on most computers, though if necessary, it can be downloaded directly from Microsoft. In addition, for FlexSim versions prior to 20.0.1, the OptQuest Add-on requires the .NET Framework version 3.5, which can be installed through the "Turn Windows features on or off" feature in Windows or can be downloaded directly from Microsoft. See The Experimenter and Optimizer for more information. OptQuest with FlexSim versions 20.0.1 and later uses Microsoft's latest .NET framework.
https://docs.flexsim.com/en/20.0/Reference/SystemRequirements/SystemRequirements.html
2020-09-18T11:35:20
CC-MAIN-2020-40
1600400187390.18
[]
docs.flexsim.com
Managing Two-Factor Authentication Users who are unable to sign in to the Admin with two-factor authentication (2FA) can try to sync or troubleshoot the problem. You can also reset the authenticator associated with the account. When reset, the user must sign in again and reconfigure the authenticator. If you have trouble signing in with 2FA, consider the following: - Some mobile apps include options to sync. This option reconnects the app and server, and synchronizes the time settings on the device. - Blocking cookies prevents some authenticators, such as Google Authenticator, from completing the verification process. Add a rule to your browser that allows cookies for your Magento instance. To reset authenticators from the command line and for more advanced troubleshooting information, see Two-Factor Authentication in the Magento developer documentation. Reset authenticators per user account To reset 2FA providers for other users, you must be an administrator or have custom permission under Stores > Settings > Configuration > Two Factor Auth. To learn more, see User Roles. On the Admin sidebar, go to Stores > Settings > All Users. Select the user and open the account in edit mode. Scroll down to the Current User Identity Verification section and enter Your Password. In the left panel, click 2FA. In the Configuration reset section, click Reset [provider]. When prompted, click OK to confirm. If the user wants to restore the 2FA solution to their account, it must be reconfigured from the Sign On page. When complete, click Save User.
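As an illustration of the command-line path mentioned above, recent Magento versions ship a CLI command for resetting a user's 2FA provider; the username and provider code here are examples:

bin/magento security:tfa:reset admin_user google

This detaches the Google Authenticator configuration from the admin_user account, so the user must reconfigure it at the next sign-in.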
https://docs.magento.com/user-guide/stores/security-two-factor-authentication-manage.html?_ga=2.252072335.789038167.1595312466-1471288439.1590563599
2020-09-18T11:52:29
CC-MAIN-2020-40
1600400187390.18
[]
docs.magento.com
MB-Lab Documentation MB-Lab is a project aimed at creating a powerful 3D humanoid editor. It is based on the popular ManuelBastioniLAB and is now a community-developed and community-supported project. Contents - About - Requirements - Installation - Getting Started - Modeling Process - Creation Tools - Skin Editor - Finalize - After Creation Tools - Save and Export - Developer Index - License
https://mb-lab-docs.readthedocs.io/en/latest/
2020-09-18T09:46:34
CC-MAIN-2020-40
1600400187390.18
[array(['_images/hexdna_logo_03.png', '_images/hexdna_logo_03.png'], dtype=object) ]
mb-lab-docs.readthedocs.io
AWS Lambda Deployments Overview This topic describes the concept of a Harness AWS Lambda deployment by describing the high-level steps involved. For a quick tutorial, see the AWS Lambda Quickstart. Before You Begin Before learning about Harness AWS Lambda deployments, you should have an understanding of Harness Key Concepts. What Does Harness Need Before You Start? A Harness AWS Lambda deployment requires the following: - AWS account - An AWS account you can connect to Harness. - Lambda function file stored on an artifact server, typically AWS S3. Artifact Source Support Harness supports the following artifact sources with Lambda: What Does Harness Deploy? Setting up a Lambda deployment is as simple as adding your function zip file, configuring function compute settings, and adding aliases and tags. Harness takes care of the rest of the deployment, making it consistent, reusable, and safe with automatic rollback. Basically, the Harness setup for Lambda is akin to using the AWS CLI aws lambda create-function, update-function-code, and update-function-configuration commands, as well as the many other commands that are needed. The benefit of Harness is that you can set up your Lambda deployment once, with no scripting, and then have your Lambda functions deployed automatically as they are updated in your AWS S3 bucket. You can even templatize the deployment Environment and Workflow for use by other DevOps engineers and developers on your team. Furthermore, Harness manages Lambda function versioning to perform rollbacks when needed. What Does a Harness AWS Lambda Deployment Involve? The following list describes the major steps of a Harness AWS Lambda deployment: Next Steps Read the following topics to build on what you've learned: - AWS Lambda Quickstart tutorial
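For context, the kind of AWS CLI call that Harness automates when a new function archive lands in S3 looks roughly like the following; the function name, bucket, and key are placeholders:

aws lambda update-function-code \
    --function-name my-function \
    --s3-bucket my-artifact-bucket \
    --s3-key releases/function.zip

Harness runs the equivalent operations on each deployment and tracks the resulting function versions so it can roll back automatically if needed.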
https://docs.harness.io/article/96mqftt93v-aws-lambda-deployments-overview
2020-09-18T10:10:28
CC-MAIN-2020-40
1600400187390.18
[array(['https://files.helpdocs.io/kw8ldg1itf/articles/z24n8ut61d/1578087633031/image.png', None], dtype=object) ]
docs.harness.io
Accengage, Europe’s #1 Push Notification Technology provider, is joining forces with Airship (formerly Urban Airship), the world’s pioneer and largest company in push notifications and digital customer engagement. Together, we are the global leader in mobile customer engagement with nearly 300 employees worldwide, more than 1,500 customers and Europe’s largest team of over 100 local engineering, service and customer success professionals, all dedicated to your success. To learn more, read the press release.
http://docs.accengage.com/display/GEN/2019/04/
2020-09-18T11:18:03
CC-MAIN-2020-40
1600400187390.18
[]
docs.accengage.com
Set up the Visual Studio project and build the Windows app An image displays the directory structure of the windows folder in the src folder. Setting up the environment For Windows devices, you need: - Microsoft Windows 8.1 or Windows 10 - Microsoft Visual Studio 2015 - Microsoft Visual Studio Tools for Apache Cordova Setting up the Visual Studio project for the AEM Forms app Perform the following steps to set up the AEM Forms app project in Visual Studio. - Copy the adobe-lc-mobileworkspace-src-<version>.zip archive to the %HOMEPATH%\Projects folder on the Windows 8.1 or Windows 10 device with Visual Studio 2015 installed and configured. - Extract the archive in the %HOMEPATH%\Projects\MobileWorkspace directory. - Navigate to the %HOMEPATH%\Projects\MobileWorkspace\adobe-lc-mobileworkspace-src-<version>\windows directory. - Open the CordovaApp.sln file using Visual Studio 2015 and proceed to building the AEM Forms app. Build the AEM Forms app Perform the following steps to build and deploy the AEM Forms app. Data stored on the Windows file system for the AEM Forms app is not encrypted. It is recommended that you use a third-party tool like Windows BitLocker Drive Encryption to encrypt disk data. - In the Visual Studio Standard Toolbar, select Release from the build mode drop-down. - Select Windows-AnyCPU, Windows-x64, or Windows-x86 based on your platform. Windows-AnyCPU is recommended. - In the Create App Packages wizard, select whether or not you want to upload your app to the Windows Store and then click Next. - Make the changes in the parameters, such as the version and output location of the app build, as required. - After the project is built, you can install the app using Windows PowerShell or Visual Studio. The .appx package requires the following items to install successfully: - Windows PowerShell - Visual Studio The directory Platforms\windows\AppPackages\CordovaApp.Windows_3.0.2.0_anycpu_Test contains the four main components: - .appx file - Certificate (currently a self-signed certificate generated by Apache Cordova; ensure that the package comes with a self-signed certificate or a public certificate signed by a trusted authority such as VeriSign) - Dependency folder (including the WinJS library) - PowerShell file (.ps1 extension) Deploying an app using Windows PowerShell There are two ways to install the application on a Windows device: by acquiring the developer license, or by using enterprise-owned devices. Deploying an app using Visual Studio
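As a sketch of the PowerShell path, you would typically run the .ps1 script shipped in the package directory (the script name below is the conventional Visual Studio name and is an assumption here), or install the package directly with the built-in Add-AppxPackage cmdlet:

# run from the AppPackages output directory
.\Add-AppDevPackage.ps1
# or, once the certificate and developer license are in place:
Add-AppxPackage -Path .\CordovaApp.Windows_3.0.2.0_anycpu.appx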
https://docs.adobe.com/content/help/en/experience-manager-64/forms/aem-forms-app/setup-visual-studio-project-build-installer.html
2020-09-18T12:06:17
CC-MAIN-2020-40
1600400187390.18
[array(['/content/dam/help/experience-manager-64.en/help/forms/using/assets/mws-content-2.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/forms/using/assets/win-dir.png', None], dtype=object) ]
docs.adobe.com
Cut-Copy-Paste Cut Keyboard shortcut: Ctrl+X. Moves the selected structure to the clipboard. Copy Keyboard shortcut: Ctrl+C. Creates a copy of the selected structure on the clipboard. Paste Keyboard shortcut: Ctrl+V. Places the copied or cut structure onto the canvas. Direct copy-paste between Marvin JS and other applications is available in MRV or MDL Molfile (V2000) formats without web services. If web services are available, direct copy-paste between Marvin JS and other applications is possible in every text format. Please note that in some browsers, copying and pasting content from/to Marvin JS is possible only through keyboard shortcuts. In these browsers, after clicking the Cut, Copy, or Paste toolbar buttons, a message appears on the canvas.
https://docs.chemaxon.com/display/docs/Cut-Copy-Paste.html
2020-09-18T10:45:44
CC-MAIN-2020-40
1600400187390.18
[]
docs.chemaxon.com
2019-2020 Wisconsin Legislator Briefing Book Chapter 1 - Wisconsin's Structure of Government and Law Chapter 2 - Legislative Process, Resources, and Glossary Chapter 3 - Legislative Agencies, Staff, and Organizations Chapter 4 - Administrative Rulemaking Chapter 5 - Open Meetings Law Chapter 6 - Open Records Law Chapter 8 - Alcohol Beverages Chapter 9 - Criminal Justice, Corrections, and Juvenile Justice Chapter 10 - Economic Development Chapter 11 - Education, Elementary-Secondary Chapter 12 - Education, Post-Secondary Chapter 13 - Environmental Protection and Natural Resources Chapter 14 - Ethics, Lobbying, Elections, and Campaign Finance Chapter 15 - Family Law Chapter 16 - Financial Institutions Chapter 17 - General Insurance Chapter 18 - Health Care and Health Insurance Chapter 19 - Housing and Landlord-Tenant Law Chapter 20 - Human Services and Aging Chapter 21 - Labor and Employment Law Chapter 22 - Municipal and County Government Chapter 23 - Privacy Chapter 24 - State-Tribal Relations Chapter 25 - Taxes, Revenue, and the Budget Process Chapter 26 - Transportation Chapter 27 - Utilities and Energy Chapter 28 - Veterans, Military Affairs, and Emergency Management
https://docs.legis.wisconsin.gov/misc/lc/briefing_book
2020-09-18T11:30:26
CC-MAIN-2020-40
1600400187390.18
[]
docs.legis.wisconsin.gov
Connect to OpenSearch® cluster with Python# You can interact with your cluster with the help of the Python OpenSearch® client. It provides a convenient syntax to send commands to your OpenSearch cluster. Follow its README file for installation instructions. To connect with your cluster, you need the Service URI of your OpenSearch cluster. You can find the connection details in the Overview section of the Aiven web console. Alternatively, you can retrieve it via the avn service get command with the Aiven CLI. Notice that service_uri contains credentials; therefore, it should be treated with care. The Service URI has information in the following format: https://<user>:<password>@<host>:<port> For security reasons, it is recommended to use environment variables to save your credential information. You can use the dotenv Python library to manage your environment variables. Follow its README file for installation instructions. After it is installed, create a .env file in the root directory of your project with the SERVICE_URI in it: SERVICE_URI=https://<user>:<password>@<host>:<port> And import it in your Python script:

import os
from dotenv import load_dotenv

load_dotenv()
SERVICE_URI = os.getenv("SERVICE_URI")

Now you can use it as a string variable saved in SERVICE_URI. You can import OpenSearch and create an instance of the class to connect with your cluster. In this example, we will be giving the full path and enabling use_ssl to secure our connection.

from opensearchpy import OpenSearch

opensearch = OpenSearch(SERVICE_URI, use_ssl=True)

Note There are other ways to create your OpenSearch instance, configuring parameters such as verify_certs, ssl_assert_hostname, and other authentication details. Check the client documentation for more details. Those are the steps to create a Python OpenSearch client instance that can be used to send requests to and receive responses from your cluster.
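To verify that the connection works, a small sketch like the following indexes a document and searches for it; the index name and document fields are arbitrary examples:

# index a sample document (refresh=True makes it searchable immediately)
opensearch.index(index="test-index", body={"title": "hello"}, refresh=True)

# run a simple match query against the same index
results = opensearch.search(index="test-index", body={"query": {"match": {"title": "hello"}}})
print(results["hits"]["hits"])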
https://docs.aiven.io/docs/products/opensearch/howto/connect-with-python.html
2022-09-24T23:25:32
CC-MAIN-2022-40
1664030333541.98
[]
docs.aiven.io
Model Selects the color space for the number fields below: RGB, HSV/HSL, or Hex. Note In Blender, the RGB and HSV/HSL values are in Scene Linear color space, and are therefore not Gamma corrected. On the contrary, Hex values are automatically Gamma corrected for the sRGB color space. For more information, see Color Management. - If the color has an Alpha Channel, another slider "A" is added. - Eyedropper (pipette icon) Samples a color from inside the Blender window using the Eyedropper. Shortcuts Ctrl-LMB (drag) snaps to hue. Shift-LMB (drag) allows precision motion. Wheel adjusts the brightness. Backspace resets the value.
https://docs.blender.org/manual/en/2.93/interface/controls/templates/color_picker.html
2022-09-24T21:51:50
CC-MAIN-2022-40
1664030333541.98
[]
docs.blender.org
Date Picker The component The date picker allows you to select any date range, which will be used in other components for filtering data sets. The component has two main features: - Quick and easy navigation - Selection of any kind of time range Depending on the theme, you will see different colors, but they use the same scheme: - The darkest colors indicate the start and end time of the selected date range - The lightest colors indicate the time inside the selected date ranges - The darker color indicates 'today' Date navigation Looking at the image above, this date picker shows one calendar month. Browsing to another month can be done in two ways: - The shift icons (the arrows next to the month and year) will go to the previous and next calendar month - The quick pickers allow you to navigate to the month and/or year directly There are two quick pickers in this component. By clicking either on month... ... or year... ... you can easily browse for the date period you are interested in. Selecting a date range In the date picker you always have to select a start date and an end date. This may be one day, but it will always be a range. There are two ways in which you can select a date range. Date selection Click on a day you are interested in, then move your mouse (without holding the mouse button) to the last day of the range and then click on that day. You will notice the custom range is automatically and immediately applied to all the planning boards. Before you confirm the date range (by clicking on the second date), a tooltip will tell you the duration of the date range you are about to confirm: Note that it is perfectly possible to select a range spanning multiple months. Clicking the shift icons won't result in a loss of focus of the start date selection: As mentioned earlier, it is possible to select a range of just one day. You'd just have to click the same day twice. Week selection The first column in the date picker component shows the week number. These aren't just informational, because they allow a quick selection of the week as a range:
https://docs.dimescheduler.com/docs/en/user-manual/components/components-datepicker
2022-09-24T23:33:15
CC-MAIN-2022-40
1664030333541.98
[array(['/docs/assets/user-manual/components/date-picker/date-picker.png', 'Date picker'], dtype=object) array(['/docs/assets/user-manual/components/date-picker/date-picker-month.png', 'Event editor details'], dtype=object) array(['/docs/assets/user-manual/components/date-picker/date-picker-year.png', 'Event editor details'], dtype=object) array(['/docs/assets/user-manual/components/date-picker/date-picker-tooltip.png', 'Event editor details'], dtype=object) array(['/docs/assets/user-manual/components/date-picker/date-picker-multiple-months.png', 'Event editor details'], dtype=object) array(['/docs/assets/user-manual/components/date-picker/date-picker-week.png', 'Event editor details'], dtype=object) ]
docs.dimescheduler.com
Structure Structure is a powerful add-on that lets you create pages, generate navigation, manage content through a simple interface, and build robust sites faster than ever. It forgoes the default template_group/template setup and creates “static” and “listing” pages that are all editable through a tree sitemap view. With Structure enabled, traditional page-style content and multiple entry pages can live within the same area. Note: Documentation for Structure is still being migrated. Until this is complete, please reference the Structure documentation on EEHarbor’s website.
https://docs.expressionengine.com/latest/add-ons/structure/overview.html
2022-09-24T23:20:26
CC-MAIN-2022-40
1664030333541.98
[]
docs.expressionengine.com
Getting Started with Jamf Now This guide contains instructions on how to initially set up Jamf Now. Ensure you are using a computer to set up your Jamf Now account. You can access Jamf Now from a mobile device, but setup requires a Mac or Windows computer. Ensure you are using Safari, Chrome, or Firefox. Create an account at signup.jamfnow.com. You will receive an activation email from [email protected]. If you do not receive the email, check your junk or spam folder. Enroll in Apple Business Manager or Apple School Manager. This process can take up to five business days for approval. - Step 1: Set Up Your Apple Push Notification Service (APNs) APNs enables you to create a trusted relationship between your devices, Apple, and Jamf. APNs setup is required before you can enroll devices. To configure APNs, log in to Jamf Now and click APNs. For more information, see Setting Up Apple Push Notification Service (APNs) Certificate. - Step 2: Set Up Automated Device Enrollment Automated Device Enrollment using Apple Business Manager or Apple School Manager allows you to automatically download Jamf Now management settings to Apple devices immediately upon activation. To set up Automated Device Enrollment, log in to Jamf Now and click Auto-Enrollment. For more information, see Setting Up Automated Device Enrollment. - Step 3: Set Up Volume Purchasing App distribution with Jamf Now is a simple process when you integrate with volume purchasing (Apps and Books) in Apple School Manager or Apple Business Manager. Apps and Books provides the best way for organizations to centrally purchase and deploy apps. When you purchase apps for managed distribution, your organization can assign apps to an employee's Apple ID or device while retaining ownership of the app. Jamf Now requires a volume purchasing integration to connect to Apps and Books and deploy paid iPad or iPhone apps, or any Mac or Apple TV apps. To set up volume purchasing, log in to Jamf Now and click Volume Purchasing. For more information, see Setting Up Volume Purchasing. - Step 4: Configure and Assign Blueprints Blueprints allow you to easily bundle applications, settings, and restrictions. After configuring a Blueprint, you can assign devices to that Blueprint to deploy the selected bundled settings to those devices. To configure Blueprints, log in to Jamf Now and click Blueprints. Note: While configuring Blueprints, be aware of the features that require supervision. Supervision allows for a higher level of device management. For more information, see Supervision. For more information on configuring Blueprints, see Creating a New Blueprint. For more information on assigning Blueprints to devices, see Preassigning Blueprints to Devices Enrolled with Automated Device Enrollment or Designating a Blueprint to Assign to Devices Enrolled with Open Enrollment. - Step 5: Enroll Devices Enrollment is the process of adding devices and establishing a connection between devices and Jamf Now. This allows you to perform inventory, configuration, security management, and distribution tasks on the devices. You can enroll devices using Automated Device Enrollment or Open Enrollment. Open Enrollment is a lower form of management that is most commonly used when employees want to bring personal devices into the workplace (known as BYOD). Note: Jamf Now is an Apple-only MDM platform. You can access your Jamf Now account from any operating system, but only Apple devices with current operating systems can enroll in Jamf Now.
For more information on operating system requirements, see System Requirements.
https://docs.jamf.com/jamf-now/documentation/Getting_Started_with_Jamf_Now.html
2022-09-24T22:21:59
CC-MAIN-2022-40
1664030333541.98
[]
docs.jamf.com
Important The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy. Notes - Instrumentation of Servlet 3.0 async processing. Processing initiated by startAsync is included in metrics and transaction traces. - Custom instrumentation configured through an XML file. For details, see the documentation. - The request attribute for setting the app name now allows multiple app names. - Bug fix: auto RUM inserts header and footer into script tags.
https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/java-release-notes/java-agent-2100
2022-09-24T22:47:15
CC-MAIN-2022-40
1664030333541.98
[]
docs.newrelic.com
TAS for VMs Concepts Page last updated: VMware Tanzu Application Service for VMs (TAS for VMs) is based on Cloud Foundry, which is an open-source cloud app platform, providing a choice of clouds, developer frameworks, and app services. TAS for VMs makes it faster and easier to build, test, deploy, and scale apps. It is an open-source project and is available through a variety of private cloud distributions and public cloud instances. For more information, see Cloud Foundry on GitHub. This documentation presents an overview of how TAS for VMs works and a discussion of key concepts. To learn more about TAS for VMs fundamentals, see the following topics:
https://docs.pivotal.io/application-service/2-12/concepts/index.html
2022-09-24T22:55:35
CC-MAIN-2022-40
1664030333541.98
[]
docs.pivotal.io
Creating Tasks With a data model defined, you are able to add tasks to your workflow. There are a few different ways to do so, depending on your use case. Manually creating a task To create a new task directly in Flowdash, simply click the + New Task button on the workflow table page. You'll notice a modal pop-up with all previously defined custom fields. Import from CSV In cases where you want to migrate an existing data set to Flowdash, you can bulk import tasks using a CSV import. Import from Form Sometimes, you want a public-facing form that anyone can use to submit data to Flowdash. In those cases, you can design a web form within Flowdash that will automatically push responses to your workflow. Import from API Another way to create tasks is programmatically through the API. This is useful when integrating with your core application, or when using Zapier to integrate with third-party software. Learn more about using the API to create tasks.
https://docs.flowdash.com/docs/creating-tasks
2022-09-24T23:41:43
CC-MAIN-2022-40
1664030333541.98
[array(['https://files.readme.io/4beb20d-add-new.png', 'add-new.png 1094'], dtype=object) array(['https://files.readme.io/4beb20d-add-new.png', 'Click to close... 1094'], dtype=object) array(['https://files.readme.io/90184ee-add-new-user.png', 'add-new-user.png 2650'], dtype=object) array(['https://files.readme.io/90184ee-add-new-user.png', 'Click to close... 2650'], dtype=object) ]
docs.flowdash.com
Expressions Panel The Expressions panel displays the properties of the selected item. Not all properties can be configured using an expression. To configure a property with an expression, select the ... at the end of the property. Selecting this button opens the Expression Editor. In the Expression Editor you can configure the expression for the property. For more information on configuring properties with expressions, see Expressions. Print Template Items List
https://docs.vertigisstudio.com/printing/latest/help/Content/shr/help/expressions-panel.htm
2022-09-24T21:46:20
CC-MAIN-2022-40
1664030333541.98
[array(['../../Resources/Images/shr/icons/expressions-22.png', None], dtype=object) array(['../../Resources/Images/prn5/print-template-items-list.png', None], dtype=object) ]
docs.vertigisstudio.com
wsgidav.prop_man.mongo_property_manager Description Implements a property manager based on MongoDB. Usage: add these lines to wsgidav.conf:

from wsgidav.prop_man.mongo_property_manager import MongoPropertyManager
prop_man_opts = {}
property_manager = MongoPropertyManager(prop_man_opts)

Valid options are (sample shows defaults):

opts = {"host": "localhost",       # MongoDB server
        "port": 27017,             # MongoDB port
        "dbName": "wsgidav-props", # Name of DB to store the properties
        # These options are used with `mongod --auth`
        # The user must be created with db.addUser()
        "user": None,              # Authenticate with this user
        "pwd": None,               # ... and password
        }

Classes Functions Other Members
https://wsgidav.readthedocs.io/en/latest/_autosummary/wsgidav.prop_man.mongo_property_manager.html
2022-09-24T21:57:39
CC-MAIN-2022-40
1664030333541.98
[]
wsgidav.readthedocs.io
site and clicks “I Like Ponies” he will inadvertently click on the online store’s “Buy Now” button and unknowingly purchase the item. To set the same X-Frame-Options value for all responses in your site, add 'django.middleware.clickjacking.XFrameOptionsMiddleware' to MIDDLEWARE_CLASSES:

MIDDLEWARE_CLASSES = (
    ...
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    ...
)

The X-Frame-Options header will only protect against clickjacking in a modern browser. Older browsers will quietly ignore the header and need other clickjacking prevention techniques. A complete list of browsers supporting X-Frame-Options.
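Django also ships per-view decorators in django.views.decorators.clickjacking for overriding the site-wide policy; a brief sketch, where the view itself is a placeholder:

from django.http import HttpResponse
from django.views.decorators.clickjacking import xframe_options_exempt

@xframe_options_exempt
def embeddable_widget(request):
    # this response may be framed even though the middleware is active
    return HttpResponse("This page can be embedded in a frame.")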
http://django-docs-zh.readthedocs.io/zh_CN/latest/ref/clickjacking.html
2018-03-17T05:56:15
CC-MAIN-2018-13
1521257644701.7
[]
django-docs-zh.readthedocs.io
FSCTL_QUERY_RETRIEVAL_POINTERS control code The FSCTL_QUERY_RETRIEVAL_POINTERS control code retrieves a mapping between virtual cluster numbers (VCN, offsets within the file/stream space) and logical cluster numbers (LCN, offsets within the volume space), starting at the beginning of the file up to the map size specified in InputBuffer. FSCTL_QUERY_RETRIEVAL_POINTERS is similar to FSCTL_GET_RETRIEVAL_POINTERS. However, FSCTL_QUERY_RETRIEVAL_POINTERS only works in kernel mode on local paging files or the system hives. The paging file is guaranteed to have a one-to-one mapping from the VCN in a volume to the LCNs that refer more directly to the underlying physical storage. You must not use FSCTL_QUERY_RETRIEVAL_POINTERS with files other than the page file, because they might reside on volumes, such as mirrored volumes, that have one-to-many mappings of VCNs to LCNs. To perform this operation, call FltFsControlFile or ZwFsControlFile with the following parameters. Parameters FileObject FltFsControlFile only. A file object pointer for the paging file or hibernation file. This parameter is required and cannot be NULL. FileHandle ZwFsControlFile only. A file handle for the paging file or hibernation file. This parameter is required and cannot be NULL. FsControlCode The control code for the operation. Use FSCTL_QUERY_RETRIEVAL_POINTERS for this operation. InputBuffer A user-supplied buffer that contains a pointer to a quadlet that specifies the map size. The map size is the number of clusters to map. InputBufferLength Length, in bytes, of the input buffer at InputBuffer. OutputBuffer A pointer to a buffer of paged pool memory that contains an array of elements of the following type:

struct {
    LONGLONG SectorLengthInBytes;
    LONGLONG StartingLogicalOffsetInBytes;
} MappingPair;

This array of quadlet pairs defines the disk runs of the file. The value of the SectorLengthInBytes member in the last element of the array is zero. OutputBufferLength The size, in bytes, of the buffer pointed to by the OutputBuffer parameter. Status block FltFsControlFile and ZwFsControlFile both return STATUS_SUCCESS or an appropriate NTSTATUS error value. Remarks ReFS: This code is not supported. Requirements See also FSCTL_GET_RETRIEVAL_POINTERS
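A minimal kernel-mode sketch of the call follows. Assumptions: FileHandle refers to an open paging file, the input quadlet is modeled as a LONGLONG, and a fixed-size local array stands in for the paged-pool output buffer a production driver would allocate and size properly.

#include <ntifs.h>

typedef struct {
    LONGLONG SectorLengthInBytes;            // zero in the last element
    LONGLONG StartingLogicalOffsetInBytes;
} MAPPING_PAIR;

NTSTATUS QueryPagingFileRuns(HANDLE FileHandle) {
    IO_STATUS_BLOCK iosb;
    LONGLONG mapSize = 1024;                 // number of clusters to map (assumed width)
    MAPPING_PAIR runs[64];                   // receives the disk runs of the file

    return ZwFsControlFile(FileHandle, NULL, NULL, NULL, &iosb,
                           FSCTL_QUERY_RETRIEVAL_POINTERS,
                           &mapSize, sizeof(mapSize),
                           runs, sizeof(runs));
}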
https://docs.microsoft.com/en-us/windows-hardware/drivers/ifs/fsctl-query-retrieval-pointers
2018-03-17T06:49:03
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
IPService - Global
Encapsulates an IP Service. Use this class during the discovery scanning phase.
IPService - creates
creates (String): The table where this service creates entries.
IPService - description
description (String): Description of the IPService.
IPService - getFromArrayList(Array list)
Returns an array of IPService instances specified by a Java ArrayList of sys_ids.
Parameter: list (Array): List of sys_ids.
Returns: Array: IPService instances.
IPService - IPService(Object source)
Creates an instance of the IPService class.
Parameter: source (Object): Either a GlideRecord instance or a sys_id string.
IPService - port
port (String): The TCP or UDP port used by the service.
IPService - protocol
protocol (String): The protocol used by the service ("UDP", "TCP", or "TCP/UDP").
IPService - name
name (String): A short name or handle for the IPService.
IPService - serviceName
serviceName (String): A long, descriptive English name for the IPService.
IPService - sysID
sysID (String): The sys_id of this record.
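A hypothetical server-side snippet exercising the documented members is shown below; whether a Discovery sensor sets the fields exactly this way is an assumption, and only members listed above are used:

// wrap an existing GlideRecord (or sys_id string) in an IPService
var svc = new IPService(current);
svc.name = "https";      // short handle
svc.port = "443";        // TCP or UDP port
svc.protocol = "TCP";    // "UDP", "TCP", or "TCP/UDP"
gs.log("Service " + svc.serviceName + " creates entries in " + svc.creates);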
https://docs.servicenow.com/bundle/kingston-application-development/page/app-store/dev_portal/API_reference/IPService/concept/c_IPServiceAPI.html
2018-03-17T06:07:18
CC-MAIN-2018-13
1521257644701.7
[]
docs.servicenow.com
Cloud Code with Obj-C In this guide you’ll learn how to use Cloud Code with your Obj-C app. Step 1 – Create Cloud Code file Create a new file and name it ‘main.js’. Add the following code to the file:

Parse.Cloud.define("test", function(request, response) {
  var text = "hello world";
  var jsonObject = { "answer": text };
  response.success(jsonObject);
});

The first parameter of the Cloud Code function is the function name. It is important to use the very same name you define inside the Cloud Code file in your app. You can also pass parameters to your function from your app. You can get these parameters by accessing them within the ‘request.params’ object. Step 2 – Upload your code to Cloud Code Go to your Dashboard, find the following option and click on it: Next, click on “Choose file”, select the .js file, then click on “Save”: After that, your file will be uploaded. Then, it is necessary to add some code to your app. Step 3 – Add Objective-C Code To call the function, we have to create the parameters that will be passed to the function, pass in the name of the function, pass the parameters, and then a function callback. The following code calls the function:

// This function calls the Cloud Code
// params has to be defined already
[PFCloud callFunctionInBackground:@"test"
                   withParameters:params
                            block:^(NSString *myAlertMsg, NSError *error) {
    if (!error) {
        // Everything went alright
    } else {
        // Something went wrong
    }
}];

The first parameter of callFunctionInBackground is the function name in the Cloud Code and the second one is the dictionary that has every parameter that will be used by the function. The third argument is the block that will be executed after the function has been called. Here’s the git repository for the guide:
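For completeness, a minimal params dictionary for the call above might look like this; the key and value are arbitrary, since the sample Cloud Code ignores its parameters:

// Obj-C: parameters passed to the "test" cloud function
NSDictionary *params = @{@"name": @"value"};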
https://docs.back4app.com/docs/cloud-code-with-obj-c/
2018-03-17T06:21:11
CC-MAIN-2018-13
1521257644701.7
[array(['https://docs.back4app.com/wp-content/uploads/2016/08/Screen-Shot-2016-08-09-at-10.41.05-PM.png', 'Screen Shot 2016-08-09 at 10.41.05 PM'], dtype=object) array(['https://docs.back4app.com/wp-content/uploads/2016/10/cloudcode.png', 'cloudcode'], dtype=object) ]
docs.back4app.com
PowerShell Updated: April 17, 2012 Applies To: Windows Vista, Windows XP, Windows Server 2008, Windows 7, Windows Server 2003 with SP2, Windows Server 2008 R2, Windows Server 2012, Windows 8 Windows PowerShell™. Using PowerShell.exe You can use the PowerShell.exe command-line tool to start a Windows PowerShell session in a Command Prompt window. For a complete list of the PowerShell.exe command-line parameters, see about_PowerShell.Exe. Other Ways to Start Windows PowerShell For information about other ways to start Windows PowerShell, see Starting Windows PowerShell. Remarks See Also about_PowerShell.Exe about_PowerShell_Ise.exe Windows PowerShell Scripting with Windows PowerShell
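For example, a typical invocation from a Command Prompt window looks like this; the command being run is an arbitrary illustration:

PowerShell.exe -NoProfile -Command "Get-Process | Sort-Object CPU -Descending | Select-Object -First 5"

This starts a session that skips profile scripts, runs the given pipeline, and then exits.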
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/ff950685(v=ws.11)
2018-03-17T06:56:05
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
Basic tutorial Now, download basic.py and run it with python basic.py (or, better yet, make it executable (chmod 755 basic.py) and then run it directly with ./basic.py). This example is a documented vignette, so just read it and run it to get an idea of how things work.

#!/usr/bin/env python
"""Getting Started: A simple sample pipeline built using pypiper."""

# This is a runnable example. You can run it to see what the output
# looks like.

# First, make sure you can import the pypiper package
import os
import pypiper

# Create a PipelineManager instance (don't forget to name it!)
# This starts the pipeline.
pm = pypiper.PipelineManager(name="BASIC", outfolder="pipeline_output/")

# Now just build shell command strings, and use the run function
# to execute them in order. run needs 2 things: a command, and the
# target file you are creating.

# First, generate some random data
# specify target file:
tgt = "pipeline_output/test.out"
# build the command
cmd = "shuf -i 1-500000000 -n 10000000 > " + tgt
# and run with run().
pm.run(cmd, target=tgt)

# Now copy the data into a new file.
# first specify target file and build command:
tgt = "pipeline_output/copied.out"
cmd = "cp pipeline_output/test.out " + tgt
pm.run(cmd, target=tgt)

# You can also string multiple commands together, which will execute
# in order as a group to create the final target.
cmd1 = "sleep 5"
cmd2 = "touch pipeline_output/touched.out"
pm.run([cmd1, cmd2], target="pipeline_output/touched.out")

# A command without a target will run every time.
# Find the biggest line
cmd = "awk 'n < $0 {n=$0} END{print n}' pipeline_output/test.out"
pm.run(cmd, lock_name="lock.max")

# Use checkprint() to get the results of a command, and then use
# report_result() to print and log key-value pairs in the stats file:
last_entry = pm.checkprint("tail -n 1 pipeline_output/copied.out")
pm.report_result("last_entry", last_entry)

# Now, stop the pipeline to complete gracefully.
pm.stop_pipeline()

# Observe your outputs in the pipeline_output folder
# to see what you've created.
http://pypiper.readthedocs.io/en/latest/tutorials-basic.html
2018-03-17T05:58:05
CC-MAIN-2018-13
1521257644701.7
[]
pypiper.readthedocs.io
This module allows you, i.e. to the right, so that the velocity is taken into account when the particles are created. Use the Rotation Velocity's Layer Properties panel to adjust the effect's parameters. Related Topics
https://docs.toonboom.com/help/harmony/Content/HAR/Stage/019_Effects/129_H3_Rotation_Velocity.html
2018-03-17T06:25:35
CC-MAIN-2018-13
1521257644701.7
[]
docs.toonboom.com
Project Management T-SBADV-002-001 To distribute work to different storyboard artists, it is necessary to split up a large storyboard into different files. Once the work on all of the parts is completed, you must reassemble them into the same project to complete the storyboard. Once the various parts of your project are complete and you are ready to bring it all back into your master project, you can easily merge and replace the changed scenes.
https://docs.toonboom.com/help/storyboard-pro-5/storyboard/supervision/project-management.html
2018-03-17T06:29:58
CC-MAIN-2018-13
1521257644701.7
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
.
https://docs.toonboom.com/help/toon-boom-studio-81/Content/TBS/Getting_Started/007a_H1_InstantMotion.html
2018-03-17T06:25:41
CC-MAIN-2018-13
1521257644701.7
[array(['../../Resources/Images/TBS/User_Guide/Scene/instant_motion.png', None], dtype=object) array(['../../Resources/Images/TBS/User_Guide/Scene/instant_motion_screen.png', None], dtype=object) array(['../../Resources/Images/TBS/User_Guide/Scene/arrow_cursor.png', None], dtype=object) array(['../../Resources/Images/TBS/User_Guide/Scene/scale_handle.png', None], dtype=object) array(['../../Resources/Images/TBS/User_Guide/Scene/rot_handle1.png', None], dtype=object) array(['../../Resources/Images/TBS/User_Guide/Scene/path_recorded.png', None], dtype=object) array(['../../Resources/Images/TBS/User_Guide/MotionPath/tbs8_motionstacking.png', None], dtype=object) ]
docs.toonboom.com
You can access virtual machine applications, guest operating system functions, and Fusion functions from the applications menu icon that appears in the Apple menu bar. With the Fusion applications menu, you can think about your virtual computing environment in terms of applications rather than virtual machines. The applications menu is a single source for finding every application on every virtual machine on your Mac. The applications menu is useful when you use Unity view as your working environment. You can access the contents of the Windows start menu without having the taskbar visible and can access the Virtual Machine and View menu without having Fusion be the active application.
https://docs.vmware.com/en/VMware-Fusion/8.0/com.vmware.fusion.using.doc/GUID-8B11DE12-F9EA-4176-95BC-ACF4CFAC1F67.html
2018-03-17T06:41:58
CC-MAIN-2018-13
1521257644701.7
[]
docs.vmware.com
You can decide whether a Host Profile component is applied or considered during a compliance check. This allows administrators to eliminate non-critical attributes from consideration or ignore values that, while part of the Host Profile, are likely to vary between hosts. Procedure - Edit a Host Profile. - Expand the Host Profile Component hierarchy until you reach the desired component or component element.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.hostprofiles.doc/GUID-CC00594A-6426-4D4F-990F-53C9494C5EBB.html
2018-03-17T06:41:53
CC-MAIN-2018-13
1521257644701.7
[]
docs.vmware.com
If you plan to set up and use an iSCSI LUN as the boot device for your host, you need to follow certain general guidelines. The following guidelines apply to booting from independent hardware iSCSI and iBFT. Review any vendor recommendations for the hardware you use in your boot configuration. For installation prerequisites and requirements, review vSphere Installation and Setup. Use static IP addresses to reduce the chances of DHCP conflicts. Use different LUNs for VMFS datastores and boot partitions. Configure proper ACLs on your storage system. The boot LUN should be visible only to the host that uses the LUN. No other host on the SAN should be permitted to see that boot LUN. If a LUN is used for a VMFS datastore, it can be shared by multiple hosts. ACLs on the storage systems can allow you to do this. Configure a diagnostic partition. To collect your host's diagnostic information, use the vSphere ESXi Dump Collector on a remote server. For information about the ESXi Dump Collector, see vSphere Installation and Setup and vSphere Networking.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-6B38E0C1-877A-4410-AB2C-028C5D85419A.html
2018-03-17T06:41:50
CC-MAIN-2018-13
1521257644701.7
[]
docs.vmware.com
You can view tasks that are associated with a single object or all objects in the vSphere Web Client. About this task. Procedure - Navigate to an object in the inventory. - Click the Monitor tab, then click Tasks. The task list contains tasks performed on the object and detailed information, such as target, task status, initiator, and start/completion time of the task. - (Optional) To view related events for a task, select the task in the list.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.vcenterhost.doc/GUID-996B7673-C9B9-4F7E-BD1D-57362668F170.html
2018-03-17T06:41:45
CC-MAIN-2018-13
1521257644701.7
[]
docs.vmware.com
After you redirect the direct console to a serial port, you can make that setting part of the host profile that persists when you reprovision the host with Auto Deploy. Prerequisites The serial port must not already be in use for serial logging and debugging. Procedure - From the vSphere Web Client, connect to the vCenter Server. - Select the host in the inventory. - Click the Manage tab. - Select Settings. - Select Advanced System Settings. Results The setting to redirect the direct console to a serial port is stored by vCenter Server and persists when you reprovision the host with Auto Deploy.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.install.doc/GUID-7651C4A2-C358-40C2-8990-D82BDB8127E0.html
2018-03-17T06:41:36
CC-MAIN-2018-13
1521257644701.7
[]
docs.vmware.com
This database view contains product metadata, including that for operating systems and applications. Table 1. VUMV_PRODUCTS
PRODUCT_ID: Unique ID for the product, generated by the Update Manager server
NAME: Name of the product
VERSION: Product version
FAMILY: Windows, Linux, ESX host, Embedded ESXi host, or Installable ESXi host
Parent topic: Database Views
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.update_manager.doc/GUID-7BAB39C2-213C-4B2E-ADC0-FE8AA911C497.html
2018-03-17T06:41:38
CC-MAIN-2018-13
1521257644701.7
[]
docs.vmware.com
Out of the box, the TPAC includes a number of placeholder texts and links. For example, there is a set of links cleverly named Link 1, Link 2, and so on in the header and footer of every page in the TPAC. Let’s customize that for our templates_BR1 skin. To begin with, we need to find the page(s) that contain the text in question. The simplest way to do that is with the handy utility ack, which is much like grep but with built-in recursion and other tricks. On Debian-based systems, the command is ack-grep, as ack conflicts with an existing utility. In the following example, we search for files that contain the text "Link 1". Searching for text matching "Link 1":

bash$ ack-grep "Link 1" /openils/var/templates/opac
/openils/var/templates/opac/parts/topnav_links.tt2
4: <a href="">[% l('Link 1') %]</a>

Next, we copy the file into our overrides directory and edit it with vim. Copying the links file into the overrides directory:

bash$ cp /openils/var/templates/opac/parts/topnav_links.tt2 \
/openils/var/templates_BR1/opac/parts/topnav_links.tt2
bash$ vim /openils/var/templates_BR1/opac/parts/topnav_links.tt2

Finally, we edit the links and link text in our copy of the file. NOTE: As Evergreen supports multiple languages, any customizations to Evergreen’s default text must use the localization function. Also, note that the localization function supports placeholders such as [_1], [_2] in the text; these are replaced by the contents of variables passed as extra arguments to the l() function. Once we have edited the link and link text to our satisfaction, we can load the page in our Web browser and see the live changes immediately (assuming we are looking at the BR1 overrides, of course).
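For instance, a hypothetical edited link using the localization function with a placeholder might read as follows; the URL and text are examples only:

<a href="http://example.com">[% l('Visit [_1]', 'Example Library') %]</a>

Here [_1] is replaced at render time by the extra argument passed to l(), so translators can reposition it within the translated string.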
http://docs.evergreen-ils.org/2.6/_changing_some_text_in_the_tpac.html
2018-03-17T06:26:45
CC-MAIN-2018-13
1521257644701.7
[]
docs.evergreen-ils.org
Push API \ Publish Post On LinkedIn PHP SDK Push API Resources Workflow Request: the code to send to the API Send a POST request with the data below to the endpoint /push/identities/<identity_token>/linkedin/post.json to share a new message on behalf of a LinkedIn user. The <identity_token> is obtained whenever one of your users connects using a social network account. To be able to use this endpoint, LinkedIn must be fully configured for your OneAll Site and the user must have given consent to publish content on his behalf when he logged in with LinkedIn. POST data to include in the request

{
  "request": {
    "push": {
      "post": {
        "title": "#title#",
        "description": "#description#",
        "message": "#message#",
        "link": "#link#",
        "picture": "#picture#"
      }
    }
  }
}

Result: the code returned by the API Resultset Example

{
  "response": {
    "request": {
      "date": "Fri, 22 Sep 2017 12:00:53 +0200",
      "resource": "/push/identities/498ac3a3-ec9d-4a56-ba4a-0a3f3960145d/linkedin/post.json",
      "status": {
        "flag": "success",
        "code": 200,
        "info": "Your request has been processed successfully"
      }
    },
    "result": {
      "data": {
        "provider": "linkedin",
        "object": "post",
        "post_id": "UPDATE-111111111-6316934079757451264",
        "post_location": ""
      }
    }
  }
}
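A sketch of the full request with curl is shown below; the site subdomain, API credentials, and message value are placeholders, and the basic-auth scheme shown is an assumption about your OneAll site setup:

curl -X POST \
  -u "<public_key>:<private_key>" \
  -H "Content-Type: application/json" \
  -d '{"request":{"push":{"post":{"message":"Hello from the Push API"}}}}' \
  "https://<subdomain>.api.oneall.com/push/identities/<identity_token>/linkedin/post.json"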
http://docs.oneall.com/api/resources/push/linkedin/post/
2018-03-17T06:26:08
CC-MAIN-2018-13
1521257644701.7
[]
docs.oneall.com
Creating and Accessing a Cache When you create a native client cache, you are creating a native client cache instance. You must provide some basic configuration information, such as a connection name and cache initialization parameters, for the native client’s cache instance. When you create a cache, you provide the following input: - Connection name. Used in logging to identify both the distributed system connection and the cache instance. If you do not specify a connection name, a unique (but non-descriptive) default name is assigned. - cache.xml to initialize the cache (if the initialization is not done programmatically). To modify the cache structure, edit cache.xml in your preferred text editor. No changes to the application code are required. If you do not specify a cache initialization file, you need to initialize the cache programmatically. The cache.xml file contains XML declarations for cache, region, and region entry configuration. This XML declares server connection pools and regions:

<cache>
  <region name="clientRegion1" refid="PROXY">
    <region-attributes pool-name="serverPool1"/>
  </region>
  <region name="clientRegion2" refid="PROXY">
    <region-attributes pool-name="serverPool2"/>
  </region>
  <region name="localRegion3" refid="LOCAL"/>
  <pool name="serverPool1">
    <locator host="host1" port="40404"/>
  </pool>
  <pool name="serverPool2">
    <locator host="host2" port="40404"/>
  </pool>
</cache>

When you use the regions, the client regions connect to the servers through the pools named in their configurations. This file can have any name, but is generally referred to as cache.xml. For a list of the parameters in the cache.xml file, including the DTD, see Cache Initialization File. To create your cache, call the CacheFactory create function. The cache object it returns gives access to the native client caching API. For example:

CacheFactoryPtr cacheFactory = CacheFactory::createCacheFactory();
CachePtr cachePtr = cacheFactory->create();

Note: For more information on how to create a cache, see the Pivotal GemFire Native Client C++ API section on creating a cache or the Pivotal GemFire Native Client .NET API section on creating a cache.
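Once the cache exists, a brief sketch of looking up one of the regions declared above and writing an entry (the key and value are arbitrary):

// C++: access a region declared in cache.xml and put an entry
RegionPtr regionPtr = cachePtr->getRegion("clientRegion1");
regionPtr->put("key1", "value1");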
http://gemfire-native-90.docs.pivotal.io/native/client-cache/create-access-cache.html
2018-03-17T06:16:06
CC-MAIN-2018-13
1521257644701.7
[]
gemfire-native-90.docs.pivotal.io
Quick Start Guide to BonusChimp

Welcome to BonusChimp - the place to acquire valuable bonuses to add to your affiliate promotions, so you get more buyers through your link and increase your commissions!

When you log in, you will be on your Starter Bonus Page. You have 12 bonuses to start with. For each one you can:

- Preview how it will look on your bonus page
- Download it to use yourself - you get the images and the PDF
- Get the links to the images and the PDF on our hosting, so you don't have to host them yourself

Hosted Links
https://docs.monkeywebapps.com/article/908-quick-start-guide-bonuschimp
2018-03-17T05:58:37
CC-MAIN-2018-13
1521257644701.7
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568c4563c69791436155bbd4/images/59d64a3d042863379ddc6eb8/file-9g14dp3ySW.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568c4563c69791436155bbd4/images/59d64b31042863379ddc6ebf/file-mTwaCU2ZSD.png', None], dtype=object) ]
docs.monkeywebapps.com
Prerequisites

- Download Slack
- (Recommended) Review the Integrations Overview

Key Features

- Customized alerts: See if your incidents have updated without opening the BigPanda console. Updates to your incidents are sent to your configured Slack channel in real time.
- Streamlined workflows: Invite your team members to the Slack alert channel(s) specifically pertaining to them, clearly designating responsibilities and saving time.
- Optimized communication: The invited members of Slack alert channels can all weigh in on alert causes and fixes, capitalizing on a diverse knowledge base of expertise to tackle each issue.

How It Works

Once installed, the Slack integration sends HTTP POST requests with a JSON payload containing alert information to a Slack Incoming Webhook. By default, the integration sends notifications to the channel for new alerts, comments, and invitations. You also have the option of manually sending messages containing pertinent incident updates.

Installing the Integration

- Log in to BigPanda and, from the Integrations tab, click New Integration.
- In the Create a New Integration screen, find Slack and click Integrate.
- Follow the instructions in the BigPanda console to install and configure your Slack integration.

To learn more about integrating monitoring services, see Installing an Integration.

Creating an Incoming Webhook for BigPanda in Slack

Share BigPanda incidents with a channel:

- In your BigPanda workspace in Slack, go to Slack Home > Slack Integrations and select Incoming WebHooks.
- Click the create a new channel link under the dropdown menu.
- Name the channel, invite the team members you wish to include, and click Create Channel.
- Click Add Incoming WebHooks Integration.
- Copy the Webhook URL to the form.
- Click Save Settings.

Your new channel is now visible in your BigPanda Integrations list. The area to the right of the list displays the name and app key used to create any selected integration.

Deleting the Integration

Delete the integration in BigPanda to remove the Slack integration from your UI.
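To make the mechanism concrete, here is a minimal Python sketch of the kind of POST the integration performs. The webhook URL is a placeholder, and the payload is a generic Slack message rather than BigPanda's actual alert schema:

import requests

# Placeholder webhook URL -- use the one generated when you add the
# Incoming WebHooks integration to your Slack channel.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

# A generic Slack message payload; BigPanda's real payload carries
# structured alert information, this only shows the POST mechanism.
message = {"text": "BigPanda: incident updated - 2 new alerts"}

resp = requests.post(WEBHOOK_URL, json=message, timeout=10)
resp.raise_for_status()  # Slack answers 200 with body "ok" on success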
https://docs.bigpanda.io/docs/slack?utm_source=site-partners-page
2020-09-18T13:39:48
CC-MAIN-2020-40
1600400187899.11
[array(['https://files.readme.io/afa2f79-Slack.jpeg', None], dtype=object) array(['https://files.readme.io/afa2f79-Slack.jpeg', 'Click to close...'], dtype=object) ]
docs.bigpanda.io
- If Button.js and Modal.js both import Theme.js, editing Theme.js will update both components.
- If you make an error during a session (for example, writing Style.create instead of StyleSheet.create), the Fast Refresh session will continue once you fix the error. The redbox will disappear, and the module will be updated.
- Local state is not preserved for components produced by higher-order components such as createNavigationContainer(MyScreen). If the returned component is a class, state will be reset.
- To force components to be remounted, add the comment // @refresh reset anywhere in the file you're editing. This directive is local to the file, and instructs Fast Refresh to remount components defined in that file on every edit.
- It can be handy to put console.log or debugger; into the components you edit during a Fast Refresh session.
https://docs.expo.io/versions/v36.0.0/react-native/fast-refresh/
2020-09-18T14:09:07
CC-MAIN-2020-40
1600400187899.11
[]
docs.expo.io
Crate cythan

The Cythan machine emulator library.

The Cythan machine is a mathematical Turing-complete computer. The machine is composed of one vector. Each value of the vector is a positive integer, "pointing" to another value.

For every iteration of the machine:

- The first case (the pointer) is incremented by 2.
- The two cases pointed to by the pointer's value before the incrementation are "executed".
- In a pair of executed cases, the case whose index is the second value is set to the value of the case whose index is the first value.

For instance, 1,5,3,0,0,999 will copy the content of the case at index 5 (999) into the case at index 3. The result after one iteration will be 3,5,3,999,0,999.

Example

use cythan::Cythan;

let mut cythan = Cythan::new(vec![1,9,5,10,1,0,0,11,0,1,20,21]);
println!("Cythan start: {:?}", cythan);
for a in 0..10 {
    cythan.next();
    println!("Cythan iteration {}: {:?}", a, cythan);
}
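To make the iteration rule concrete, here is a small Python sketch of one machine step, written directly from the description above. It is an independent re-implementation of the stated rules, not the crate's code; edge cases such as a copy targeting the pointer itself may behave differently in the crate.

def step(mem):
    """Run one iteration of the Cythan machine described above."""
    p = mem[0]                     # pointer value before the increment
    src, dst = mem[p], mem[p + 1]  # the two "executed" cases
    mem[0] += 2                    # the pointer is incremented by 2
    mem[dst] = mem[src]            # copy case `src` into case `dst`
    return mem

print(step([1, 5, 3, 0, 0, 999]))  # -> [3, 5, 3, 999, 0, 999]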
https://docs.rs/cythan/0.1.0/cythan/
2020-09-18T12:55:58
CC-MAIN-2020-40
1600400187899.11
[]
docs.rs
iPassport v3.5.4 brings a new Change Control module as well as a number of small improvements.

This is a new, process-driven module, managing a change throughout its lifecycle from submission to triage, implementation and then (optionally) review and verification. The Change Control module also introduces a more in-depth risk assessment element, which we will be building upon later in the year when we introduce a standalone Risk Assessment module. Check out the video below for a quick overview or refer to the user guide for full details.

This release also adds a number of small improvements around the Equipment, Non Compliance, Meetings and Controlled Document areas.

It is now possible to keep track of where an asset is and maintain a clear log of its movements via the new 'Usage' tab. Find out more in the updated user guide.

The equipment record's Delivery tab already had a lifespan field, but we have added an Estimated Replacement Date field. This is searchable in the Equipment Advanced Search area. For more information, please refer to the updated user guide's section on Equipment Delivery.

To make it easier to track when warranties expire, we have added a new 'Warranty Expiration' field. If the asset has a Responsible User defined, then that user will be notified one month before the warranty expiration date. Find out more in the updated user guide.

Adding a Standard to the Non Compliance General tab now automatically links the standard with the non compliance. The Standard will be visible on the Non Compliance's Links tab, and the Non Compliance will be visible on the Standard's Links tab.

A new Non Compliance tab provides an area to capture lessons learned.

Another new tab allows a follow-up review to be scheduled in order to establish whether the corrective action was successful.

You can now clearly identify who attended and who was absent from a meeting. Meeting invitations are sent as usual, but previously only users who had accepted the invite were identified as having attended the meeting. Now the Attendees tab acts more as a register, allowing the meeting leader to accurately record who attended the meeting. Meeting notes will now be sent to everyone on the invite list, not only those who attended. Full details can be found in the new Meetings user guide.

We have changed the colours used on the Meetings agenda area, and agenda text is now displayed without having to click to expand.

When completing a step in an audit, it is now possible to attach evidence of compliance. More information can be found in the updated user guide.

A new widget has been added to show the top 20 Internal Audits due in the next month. To add this to your dashboard, edit the dashboard and add the Internal Audits due within the next month (First 20) widget.

It is now possible to set in advance the date on which a document should be inactivated. Find out more in the updated user guide.

We have added a preference to require password verification when uploading a new user signature. This can be configured on an account-wide basis via Admin > Settings > Miscellaneous Settings > Require password to upload new signature. When set, any user who wants to upload a new signature will have to first enter their password. This is discussed further in the User Management user guide.

Validation documents are not included with minor releases. Full validation documentation for these new developments will be published when we launch the next major update of iPassport (v3.6.0) later this year.
https://ipassport-docs.genialcompliance.com/release_notes/030504/
2020-09-18T13:42:32
CC-MAIN-2020-40
1600400187899.11
[]
ipassport-docs.genialcompliance.com
sf.apps.qchem.vibronic.sample

sample(t, U1, r, U2, alpha, n_samples, loss=0.0)

Generate samples for computing vibronic spectra. The following gates are applied to input vacuum states:

- Two-mode squeezing on all \(2N\) modes with parameters t
- Interferometer U1 on the first \(N\) modes
- Squeezing on the first \(N\) modes with parameters r
- Interferometer U2 on the first \(N\) modes
- Displacement on the first \(N\) modes with parameters alpha

A sample is generated by measuring the number of photons in each of the \(2N\) modes. In the special case that all of the two-mode squeezing parameters t are zero, only \(N\) modes are considered, which speeds up calculations.

Example usage:

>>> formic = data.Formic()
>>> w = formic.w
>>> wp = formic.wp
>>> Ud = formic.Ud
>>> delta = formic.delta
>>> T = 0
>>> t, U1, r, U2, alpha = gbs_params(w, wp, Ud, delta, T)
>>> sample(t, U1, r, U2, alpha, 2, 0.0)
[[0, 0, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]

Parameters:

- t (array) – two-mode squeezing parameters
- U1 (array) – unitary matrix for the first interferometer
- r (array) – squeezing parameters
- U2 (array) – unitary matrix for the second interferometer
- alpha (array) – displacement parameters
- n_samples (int) – number of samples to be generated
- loss (float) – loss parameter denoting the fraction of generated photons that are lost

Returns: a list of samples from GBS

Return type: list[list[int]]
https://strawberryfields.readthedocs.io/en/latest/code/api/api/strawberryfields.apps.qchem.vibronic.sample.html
2020-09-18T14:44:55
CC-MAIN-2020-40
1600400187899.11
[]
strawberryfields.readthedocs.io
Deleting a backup

An automatic backup is automatically deleted when its retention limit expires. If you delete a cluster, all of its automatic backups are also deleted. If you delete a replication group, all of the automatic backups from the clusters in that group are also deleted.

ElastiCache provides a deletion API operation that lets you delete a backup at any time, regardless of whether the backup was created automatically or manually. Because manual backups don't have a retention limit, manual deletion is the only way to remove them.

You can delete a backup using the ElastiCache console, the Amazon CLI, or the ElastiCache API. The following procedure deletes a backup using the ElastiCache console.

To delete a backup

- Sign in to the Amazon Web Services Management Console and open the ElastiCache console.
- In the navigation pane, choose Backups. The Backups screen appears with a list of your backups.
- Choose the box to the left of the name of the backup you want to delete.
- Choose Delete.
- If you want to delete this backup, choose Delete on the Delete Backup confirmation screen. The status changes to deleting.

Using the Amazon CLI

Use the delete-snapshot Amazon CLI operation with the following parameter to delete a backup:

- --snapshot-name – Name of the backup to be deleted.

The following code deletes the backup myBackup:

aws elasticache delete-snapshot --snapshot-name myBackup

For more information, see delete-snapshot in the Amazon CLI Command Reference.

Using the ElastiCache API

Use the DeleteSnapshot API operation with the following parameter to delete a backup:

- SnapshotName – Name of the backup to be deleted.

The following code deletes the backup myBackup:

?Action=DeleteSnapshot
&SignatureVersion=4
&SignatureMethod=HmacSHA256
&SnapshotName=myBackup
&Timestamp=20150202T192317Z
&Version=2015-02-02
&X-Amz-Credential=<credential>

For more information, see DeleteSnapshot in the Amazon ElastiCache API Reference.
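If you script against the API from Python instead, the same operation is available through boto3's ElastiCache client. This is a minimal sketch: the backup name is a placeholder, and region and credential configuration are assumed to come from your environment.

import boto3

client = boto3.client("elasticache")

# Delete the backup named "myBackup"; this works for both manual and
# automatic backups, just like the CLI and API operations above.
response = client.delete_snapshot(SnapshotName="myBackup")
print(response["Snapshot"]["SnapshotStatus"])  # e.g. "deleting"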
https://docs.amazonaws.cn/en_us/AmazonElastiCache/latest/red-ug/backups-deleting.html
2022-09-25T05:35:56
CC-MAIN-2022-40
1664030334514.38
[]
docs.amazonaws.cn
cloudtrail-s3-dataevents-enabled

Checks whether at least one Amazon CloudTrail trail is logging Amazon S3 data events for all S3 buckets. The rule is NON_COMPLIANT if no trail is configured to log data events for S3 buckets.

Identifier: CLOUDTRAIL_S3_DATAEVENTS_ENABLED

Trigger type: Periodic

Amazon Web Services Region: All supported Amazon Regions except Asia Pacific (Jakarta), Middle East (UAE), and Asia Pacific (Osaka)

Parameters:

- S3BucketNames (Optional) - Type: String - Comma-separated list of S3 bucket names for which data event logging should be enabled. The default behavior checks all S3 buckets.

Amazon CloudFormation template

To create Amazon Config managed rules with Amazon CloudFormation templates, see Creating Amazon Config Managed Rules With Amazon CloudFormation Templates.
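Besides CloudFormation, the managed rule can also be created programmatically. The sketch below uses boto3's Config service client; the bucket names in InputParameters are hypothetical, and the parameter can be omitted entirely to keep the default behavior of checking all buckets.

import boto3
import json

config = boto3.client("config")

# Create (or update) the managed rule. Scoping to specific buckets via
# the optional S3BucketNames parameter; the names here are placeholders.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "cloudtrail-s3-dataevents-enabled",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "CLOUDTRAIL_S3_DATAEVENTS_ENABLED",
        },
        "InputParameters": json.dumps(
            {"S3BucketNames": "example-bucket-1,example-bucket-2"}
        ),
    }
)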
https://docs.amazonaws.cn/en_us/config/latest/developerguide/cloudtrail-s3-dataevents-enabled.html
2022-09-25T04:55:11
CC-MAIN-2022-40
1664030334514.38
[]
docs.amazonaws.cn