Dataset columns (name: type / string length range):
content: stringlengths 0 to 557k
url: stringlengths 16 to 1.78k
timestamp: timestamp[ms]
dump: stringlengths 9 to 15
segment: stringlengths 13 to 17
image_urls: stringlengths 2 to 55.5k
netloc: stringlengths 7 to 77
Welcome to the BLOCKS SDK documentation. Here you will find all the information required to start creating BLOCKS applications. A brief summary of the main sections is below. This section describes how to get started writing LittleFoot scripts and how to obtain the SDK source code, either via the JUCE framework or The standalone BLOCKS SDK. Lightpad and Control Blocks communicate with a computer over USB-C or Bluetooth, and communicate with each other via magnetic connections on their sides. This section contains instructions for configuring your setup so that all the components can communicate with each other. Once you have connected your device to your computer you need to be able to discover it from your application. This section outlines the procedure for Lightpad and Control Block discovery and provides some simple example code which monitors for new connections. This section explains how to capture touch events from a compatible device and, building on the Discovering BLOCKS section, displays some example code. Getting control button events Lightpad and Control Blocks have control buttons, either a mode button on their side or labelled buttons on top, and this section shows you how to obtain button pressed and button released events. This section explains how to control the LED grid on a Lightpad. Control Blocks have a strip of lights running along one side and this section provides instructions for controlling the individual LEDs. Controlling control buttons As well as providing button pressed and button released events, control buttons also have LEDs. This section explains how to change the colour of different buttons. Getting started with BLOCKS CODE Learn how to use the BLOCKS CODE IDE to develop LittleFoot programs that run on BLOCKS devices. Advanced SDK users can specify specialised programs to run on Lightpad Blocks. These programs must be written in the LittleFoot language, which is described in this section. The standalone BLOCKS SDK The easiest way to get started using the SDK is via the JUCE framework, but if you want to integrate BLOCKS functionality into your existing application then it may be more convenient to use The standalone BLOCKS SDK. This section gives an overview of building and using the BLOCKS SDK as a library. Example LittleFoot Scripts This section provides examples of LittleFoot scripts that can be loaded onto the BLOCKS hardware. Example BLOCKS Integrations This section gives an example of how to integrate BLOCKS features into an existing application. Example JUCE Applications This section contains examples of BLOCKS applications that make use of the full potential of the JUCE library.
https://docs.juce.com/blocks/index.html
2020-02-16T22:54:31
CC-MAIN-2020-10
1581875141430.58
[]
docs.juce.com
SQL Server Query Execution Part 1: Reading a page from the buffer cache is called a logical IO. The smaller a record is, the more records can be read with the same number of logical IOs. (See full post here) Part 2: SQL Server: Scans and seeks The most primitive operation in SQL Server is retrieving from a table a set of rows that satisfies a given search predicate. This can be achieved using two basic strategies: scans and seeks. (See full post here) Part 3: SQL Server: Data access strategies SQL Server can use different data access strategies when retrieving rows from the table. The strategy that will be used depends on the columns of the table, the available indexes, the query, the data in the table, and the statistics. There are 7 basic data access strategies. (See full post here)
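The scan-versus-seek distinction above can be illustrated outside SQL Server. The following is a conceptual Python sketch only, not how the storage engine is implemented: a "scan" reads every row, while a "seek" uses an ordered key list (standing in for an index) to jump straight to the qualifying row. All table and column names are made up.

    import bisect

    # A toy "table" of 100,000 rows and an ordered key list standing in for an index on id.
    rows = [{"id": i, "name": f"customer {i}"} for i in range(1, 100_001)]
    index_keys = [r["id"] for r in rows]

    def table_scan(predicate):
        # A scan touches every row; cost grows linearly with table size.
        return [r for r in rows if predicate(r)]

    def index_seek(key):
        # A seek uses the ordered keys to jump to the qualifying row;
        # cost grows with log(table size).
        pos = bisect.bisect_left(index_keys, key)
        if pos < len(index_keys) and index_keys[pos] == key:
            return rows[pos]
        return None

    print(len(table_scan(lambda r: r["id"] % 10_000 == 0)))  # 10 rows found, 100,000 rows touched
    print(index_seek(42_424)["name"])                        # 1 row found, ~17 key comparisons

The point mirrors the articles: the same predicate can be satisfied either way, and the optimizer's choice between them is what the "data access strategies" part is about.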
https://docs.microsoft.com/en-us/archive/blogs/gpde/sql-server-query-execution
2020-02-16T23:33:13
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
WMI Schemas While the WMI Object Model defines how programs work with WMI, the WMI schemas define the actual implementation of WMI objects. Consider an analogy of a driving manual versus a map. A driving manual explains the techniques of driving a car, whereas a map illustrates where the destinations are and how to get to them. The driving manual is analogous to the object model, while maps are analogous to schemas. Understanding WMI schemas allows you to understand the relationships among the objects that WMI manages. Part of a WMI schema is illustrated in Figure B.3. In this case, specific types of network adapters are defined by extending a general definition of network adapters (CIM_NetworkAdapter). Figure B.3 Part of a WMI schema. Some important concepts to understand about WMI schemas are: Namespace Contains classes and instances. Namespaces are not physical locations; they are more like logical databases. Namespaces can be nested. Within a namespace, there can be other namespaces that define subsets of objects. Class A definition of objects. Classes define the properties, their data types, methods, associations, and qualifiers for both the properties and the class as a whole. Instance A particular manifestation of a class. Instances are more commonly thought of as data. Because instances are objects, the two terms are often used interchangeably. However, instances are usually thought of in the context of a particular class, whereas objects can be of any class. MOF Managed Object Format. MOF file A definition of namespaces, classes, instances, or providers, but in a text file. For more information, see the "Using MOF Files" section later in this appendix. MOF compiling Parsing a MOF file and storing the details in the WMI repository. CIM Common Information Model. For more information, see the Desktop Management Task Force Web site at. Association A WMI-managed relationship between two or more WMI objects. You can extend a schema by adding new classes and properties that are not currently provided by the schema. For information about extending the WMI schema, see the WMI Tutorial at. For More Information Did you find this information useful? Please send your suggestions and comments about the documentation to [email protected].
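To make the namespace/class/instance terminology concrete, here is a small sketch using the third-party Python "wmi" package (not mentioned in the appendix above; it requires Windows and pywin32, and is just one convenient client for the same schema concepts):

    # Requires Windows and "pip install wmi". Illustrates the schema concepts:
    # connect to a namespace, ask for a class, and iterate over its instances.
    import wmi

    conn = wmi.WMI(namespace=r"root\cimv2")       # namespace: a logical database of classes
    for adapter in conn.Win32_NetworkAdapter():   # class: Win32_NetworkAdapter, which extends CIM_NetworkAdapter
        print(adapter.Name, adapter.MACAddress)   # instance: one managed network adapter (its data)

Each printed line corresponds to one instance of the class, which is exactly the "instances are data" idea described above.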
https://docs.microsoft.com/en-us/previous-versions/system-center/configuration-manager-2003/cc180287(v=technet.10)?redirectedfrom=MSDN
2020-02-16T21:32:31
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
Despite the emerging platforms and technologies, email remains one of the most powerful tools in marketing. And it’s easy to see why: virtually all people have an email address. Done the right way, email marketing can increase ROI, build a better customer experience and turn one-time customers into lifelong customers. After all, no company wants to remain in business for only a couple of months, or a year. Today, the digital landscape can change in the blink of an eye. New technologies emerge, hype grows and sometimes it’s easy to get carried away. After all, who doesn’t want to try a fancy piece of technology? Everybody is doing it, right? While new technologies can really be useful if used the right way, more often than not they cause you to lose your focus on what really matters. Sometimes, companies fall for the hype; they implement a technology they don’t understand. Even the most remarkable technology falls short if it is used in the wrong way. Fortunately, there is one piece of communication technology that has stood the test of time, proving its worth time and time again: email. You are probably familiar with the saying “email is king.” The saying may have become a cliché, but it is safe to say that it’s true. There’s a reason why companies and individuals alike keep using email as a primary means of communication: it works. While email may not be as fancy as social media channels like Facebook, Instagram or Twitter, it does have some huge advantages over social media and other communication platforms. For one thing, almost everyone uses email. Social media is generally geared toward young people. Email is used by almost all demographics, thus giving companies a wider reach. Email has been around for decades, but it has only gotten more sophisticated and easier to use with the passing of time. Today, companies use email to create innovative marketing campaigns, drive conversion rates, establish a personal relationship with their customers and deliver a better customer experience. Email marketing, unlike some other forms of marketing, is permission-based. For example, let’s say a potential customer visits your page and reads one or two articles about your product. If you want to send a newsletter about your product, offers and other news, you can invite the visitor to fill in their name and email address through a form. If the visitor enters their name and email address, you have ‘won’ the permission to send an email. The visitor wants to know more and expects an email from your company. In fact, many customers opt for email as the primary form of communication with their favorite brands. It isn’t enough, however, to simply devise and launch email marketing campaigns. They have to be personalized; they have to treat each potential customer as a unique human being, with highly personal needs and desires. Gone are the days when you could send hundreds or thousands of emails, hoping someone would take an interest in what you have to say. That’s why permission-based email marketing is the way to go. If you can offer timely and personalized content to your target audience, you’re on the right way to unlocking the key to customer success. Personalized email marketing will not only help you gather new leads and turn them into loyal customers, but it can also help you build meaningful relationships with your existing customers. How can you do that? Well, there are lots of ways. 
You can send birthday cards, offer discounts on certain products, send links to download a free e-book that might be of interest to the customer, etc. These are just some examples of personalized email marketing. Email technology has gotten better and better over the years, but what makes personalized email marketing so powerful is related to rapid advances in marketing automation technology. Combined, email and marketing automation tools can help you build personalized email marketing campaigns, respond to changes in consumer behavior in real time and deliver an amazing customer experience. The market is awash with marketing automation solutions, affordable for every company, regardless of size and budget. What is more, email marketing is cost-effective. Once again, it is highly effective when combined with marketing automation technologies. Imagine if you could get insights, in the form of reports, about the performance of your email marketing campaigns. Or tinker with your messages. Or quickly respond to a change in consumer behavior. Customer Relationship Management (CRM) systems have turned these into a reality. In Flexie CRM, for example, you can run A/B tests to see which variant has the best results and which needs to be refined further. You can also create custom reports about the performance of your email marketing campaigns. Personalized email marketing is the future. Email may be an ‘old’ communication technology, but its worth has been proven time and time again. Used the right way, it can and will help you reach and exceed your goals. To stay updated with the latest features, news and how-to articles and videos, please join our Facebook group, Flexie CRM Academy, and subscribe to our YouTube channel, Flexie CRM.
https://docs.flexie.io/docs/marketing-trends/personalized-email-marketing-more-powerful-than-ever/
2020-02-16T23:43:27
CC-MAIN-2020-10
1581875141430.58
[array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-7.png', 'Welcome email'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-8.png', 'Report'], dtype=object) ]
docs.flexie.io
Understanding Namespaces & Keys Overview Custom Fields is a Shopify Metafield creator and editor. Metafields are made up of a namespace, a key, and a value. Below, pictured is a Custom Fields value print statement. Note: Your Custom Fields machine name will appear as the key for your metafield statement and the namespace will always be 'custom_fields'. Accessing the raw value stored in the metafield In order to access this information, first go to the Configure Fields page and click "Edit" on any Shopify item. At the bottom of the page, you will see a "Metafield Value" statement that you can use to print the value of the field that you set in Custom Fields. You can print this statement in your Shopify code, but it will only print the value of the field. If you want to use the whole snippet code, see below. Printing the code entered into the editor, including the HTML, Liquid code and raw value At the bottom of the edit screen for your field, the "include statement" is the syntax used to print the entirety of the code in the app's code editor for that field. The difference between this include statement and directly printing the stored value of the metafield is that the include prints all the custom code that appears inside a snippet, rather than just the stored value.
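The article above shows how to print the stored value from Liquid. As a complementary sketch, the same metafield can also be read programmatically over the Shopify Admin REST API. The shop domain, access token, product ID and API version below are placeholders, and this is an illustration of the namespace/key idea rather than part of the Custom Fields app itself:

    # Placeholders throughout: shop domain, token, product ID, API version.
    # Custom Fields stores its data under the "custom_fields" namespace with
    # the field's machine name as the metafield key.
    import requests

    SHOP = "example-shop.myshopify.com"
    TOKEN = "shpat_xxx"               # Admin API access token for a private/custom app
    PRODUCT_ID = 1234567890

    resp = requests.get(
        f"https://{SHOP}/admin/api/2021-07/products/{PRODUCT_ID}/metafields.json",
        headers={"X-Shopify-Access-Token": TOKEN},
    )
    resp.raise_for_status()
    for mf in resp.json()["metafields"]:
        if mf["namespace"] == "custom_fields":
            print(mf["key"], "=", mf["value"])   # the raw stored value, without the snippet's HTML/Liquid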
https://custom-fields.docs.bonify.io/article/153-developer-info
2021-07-23T22:47:09
CC-MAIN-2021-31
1627046150067.51
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5ecefaee0428632fb900694b/img-15938-1590622353-578961450.png', 'Metafield_Info.png'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5ecef8932c7d3a3dea3d20a0/img-15938-1590622353-584401282.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5ecefaef0428632fb900694c/img-15938-1590622354-1592868103.png', 'coffee_tv.png'], dtype=object) ]
custom-fields.docs.bonify.io
After purchasing this theme please download the package from ThemeForest (to do this you must be logged in ThemeForest). Move your mouse over your ThemeForest login name in right top corner and then click Download. On the Downloads page you will find all the items that you have purchased. This theme will also be here. In the next step click the Download All files & documentation button and save package on your computer. Please unpack the entire package after downloading. The Main Files includes the following folders: - Demo Content – Contains demo content - Documentation – Contains documentation - zante – Contains theme - zante-child – Contains child theme - zante.zip – Contains theme (ZIP folder for installation) - zante-child.zip – Contains child theme (ZIP folder for installation) Install via WordPress Theme Manager Step 1. Login to your WordPress admin page. Step 2. Navigate to Appearance Themes. Step 3. Press Add New button. Step 4. Press Upload Theme button. Step 5. Press Choose File button. Step 6. Select the theme zip file zante.zip. Step 7. Then press Install Now button. The installation process starts… Please Note: If at this step you see some error (for example: The uploaded file exceeds the upload_max_filesize directive in php.ini), go to Install via FTP section. Step 8. After the installation is complete, click Activate the theme. Please Note: If you intend to make changes on theme style or theme files (now or in the future) we strongly recommend you to install & activate also the Zante Child Theme (zante-child.zip). Install Required Plugins Step 1. The theme is now activated. You should now install and activate the required plugin by clicking on Begin installing plugins on top of the screen. Step 2. Select Required plugins and then choose Install from the dropdown menu. Press Apply button to continue the installation. Step 3. After the installation is complete, click on Return to Required Plugins Installer. Step 4. Select Installed plugins and then choose Activate from the dropdown menu. Press Apply button to continue the activation. Install via FTP If you have any problems with installing the theme from the WordPress Dashboard (for example, you see this error: “The uploaded file exceeds the upload_max_filesize directive in php.ini” or “Are you sure you want to do this?”), then you need to install it via FTP. For this you need to upload non-zipped theme folder called zante to /wp-content/themes/ folder in your WordPress installation folder on your server. Next, login to your WordPress admin page and activate the theme (Appearance Themes). After the successful activation see Step 9 of this documentation. Installation Video - FTP Clients – - Adding New Themes – - Activating the Theme –
https://docs.eagle-themes.com/kb/zante/theme-installation/
2021-07-23T21:52:28
CC-MAIN-2021-31
1627046150067.51
[array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/download.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/install.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/upload.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/choose.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/select.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/install_button.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/activate.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/install_plugins.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/install_plugins_2.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/install_plugins_3.jpg', None], dtype=object) array(['https://docs.eagle-themes.com/wp-content/uploads/2019/10/activate_plugins.jpg', None], dtype=object) ]
docs.eagle-themes.com
URL: Retrieves the collected column statistics for the specified table(s). Input Parameter Description Output Parameter Description The GPUdb server embeds the endpoint response inside a standard response structure which contains status information and the actual response to the query. Here is a description of the various fields of the wrapper:
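Because the page only describes the wrapper abstractly, here is a hedged sketch of calling the endpoint over HTTP from Python. The host, port (9191 is a common Kinetica REST port), the /show/statistics path and the request/response field names are assumptions, not taken from the page above; confirm them against the Input/Output parameter tables and your server version:

    # Sketch only: endpoint path, port and all field names are assumptions.
    import requests

    resp = requests.post(
        "http://kinetica-host:9191/show/statistics",
        json={"table_names": ["my_table"], "options": {}},
    )
    wrapper = resp.json()            # the standard GPUdb response wrapper
    print(wrapper.get("status"))     # status information for the call
    print(wrapper.get("message"))    # error text when the call did not succeed
    print(wrapper.get("data_str") or wrapper.get("data"))   # the actual endpoint response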
https://docs.kinetica.com/7.1/api/rest/show_statistics_rest/index.html
2021-07-23T21:08:00
CC-MAIN-2021-31
1627046150067.51
[]
docs.kinetica.com
This guide shows how to use Behavior Trees to set up an AI character that will patrol or chase a player. Behavior Tree assets in Unreal Engine 4 (UE4) can be used to create artificial intelligence (AI) for non-player characters in your projects. While the Behavior Tree asset is used to execute branches containing logic, to determine which branches should be executed, the Behavior Tree relies on another asset called a Blackboard which serves as the "brain" for a Behavior Tree. The Blackboard contains several user-defined Keys that hold information used by the Behavior Tree to make decisions. For example, you could have a Boolean Key called Is Light On which the Behavior Tree can reference to see if the value has changed. If the value is true, it could execute a branch that causes a roach to flee. If it is false, it could execute a different branch where the roach maybe moves randomly around the environment. Behavior Trees can be as simplistic as the roach example given, or as complex as simulating another human player in a multiplayer game that finds cover, shoots at players, and looks for item pickups. If you are new to Behavior Trees in UE4, it is recommended that you go through the Behavior Tree Quick Start guide to quickly get an AI character up and running. If you are already familiar with the concept of Behavior Trees from other applications, you may want to check out the Essentials section which contains an overview of how Behavior Trees work in UE4, a User Guide to working with Behavior Trees and Blackboards, as well as reference pages for the different types of nodes available within Behavior Trees.
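The roach example can be sketched in plain Python to show the decision flow. This is not UE4 code or its API, just a minimal stand-in for how a Blackboard key drives which branch of a behavior tree runs:

    # Not UE4 API: a minimal sketch of the Blackboard-driven roach example above.
    class Blackboard:
        def __init__(self):
            self._keys = {}
        def set(self, key, value):
            self._keys[key] = value
        def get(self, key, default=None):
            return self._keys.get(key, default)

    def flee(bb):
        return "roach scurries for cover"

    def wander(bb):
        return "roach wanders the environment"

    def roach_behavior_tree(bb):
        # selector: the "Is Light On" key decides which branch executes
        if bb.get("is_light_on", False):
            return flee(bb)
        return wander(bb)

    bb = Blackboard()
    bb.set("is_light_on", True)
    print(roach_behavior_tree(bb))   # flee branch
    bb.set("is_light_on", False)
    print(roach_behavior_tree(bb))   # wander branch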
https://docs.unrealengine.com/4.26/en-US/InteractiveExperiences/ArtificialIntelligence/BehaviorTrees/
2021-07-23T21:46:10
CC-MAIN-2021-31
1627046150067.51
[]
docs.unrealengine.com
Domain Edit Subdomain Name To edit the subdomain name: Dashboard > Domain Click the Edit icon (Pencil) Change the subdomain name Click the Save icon (Tick) Add Domain Name - Part 1 To add a domain name: Dashboard > Domain > CONNECT DOMAIN > CONNECT EXISTING DOMAIN Enter the domain name Click CONNECT Add Domain Name - Part 2 Log in to your domain registrar (the site where you purchased your domain) Edit Zone Record (DNS) > Add A Record Add OR Modify the First A Record: Host: @ Points To: 35.233.19.92 You may need to delete any existing CNAME with the www Host first. Then Add a 2nd A Record: Host: www Points To: 35.233.19.92 Add Domain Name - Part 3 Head to: Dashboard > Domain > Check DNS Records Confirm the Status for both @ and www is 'Connected' Add Domain Name - Part 4 Head to: Dashboard > Domain > Check Settings SSL will automatically be activated within 24 hours. You can decide whether the domain will be www or non-www by switching the www option off or on.
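If you want to double-check the A records outside the dashboard, a small standard-library Python snippet can resolve both hosts and compare them with the IP given above. The domain name below is a placeholder for your own domain:

    # example.com is a placeholder for your own domain. Checks that the apex (@)
    # and the www host both resolve to the documented A-record IP.
    import socket

    EXPECTED_IP = "35.233.19.92"

    for host in ("example.com", "www.example.com"):
        try:
            ips = {info[4][0] for info in socket.getaddrinfo(host, None, socket.AF_INET)}
        except socket.gaierror:
            ips = set()
        status = "Connected" if EXPECTED_IP in ips else "Not connected yet (DNS may still be propagating)"
        print(f"{host}: {sorted(ips) or 'no A record found'} -> {status}")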
https://docs.worldinternetacademy.com/article/2-domain
2021-07-23T21:43:48
CC-MAIN-2021-31
1627046150067.51
[array(['https://d258lu9myqkejp.cloudfront.net/users_profiles/7028/medium/wia_logo_v02_150x150.png?1536905377', 'Avatar'], dtype=object) array(['https://cl.ly/7e6ded432053/Screen%20Recording%202019-08-10%20at%2008.06.56.52%20PM.gif', None], dtype=object) array(['https://cl.ly/a27c8cc1ea92/Screen%20Recording%202019-08-10%20at%2008.23.56.31%20PM.gif', None], dtype=object) array(['https://cl.ly/ee7ccd622009/Add%20Domain%20Name%20-%20Part%202.gif', None], dtype=object) array(['https://cl.ly/e4a9f7d3db56/Screen%20Recording%202019-08-10%20at%2008.55.02.54%20PM.gif', None], dtype=object) array(['https://cl.ly/c1fdb416b34b/Screen%20Recording%202019-08-10%20at%2009.02.06.48%20PM.gif', None], dtype=object) ]
docs.worldinternetacademy.com
If you want to learn more about developing websites with Kentico, try our hands-on e-learning course targeted at ASP.NET MVC 5 developers. - In the administration interface, create a content structure based on content-only pages. - Develop the MVC application - Set up features that can help build your website. Automatic features for MVC projects All Kentico MVC applications (i.e. projects with the Kentico.AspNet.Mvc NuGet integration package installed) automatically use several features, which are not part of standard ASP.NET MVC 5. These features may modify how the application renders the output of pages or responds to certain types of requests. Relative URL resolving – The system runs an output filter that automatically resolves all virtual relative URLs (~/<link path>) in page content. The resolving occurs on the side of the MVC live site application, based on the environment where the site is actually running. This filter prevents content editors from creating invalid links in cases where your MVC application does not explicitly resolve URLs (by processing content using the Html.Kentico().ResolveUrls method). If you wish to disable the output filter and resolve URLs manually in your code, set the CMSMVCResolveRelativeUrls key to false in the appSettings section of your MVC project's Web.config file: <add key="CMSMVCResolveRelativeUrls" value="false"/> - Resource sharing with the administration domain – Only enabled automatically after applying hotfix 12.0.30 or newer. When the MVC application processes requests sent from the domain of the connected Kentico administration application, it sends a response with the Access-Control-Allow-Origin header, which enables Cross-origin resource sharing and prevents potential errors.
https://docs.xperience.io/k12sp/developing-websites/mvc-development-overview?_ga=2.195420908.1342230849.1617700835-555597562.1610106901
2021-07-23T23:22:38
CC-MAIN-2021-31
1627046150067.51
[]
docs.xperience.io
IN THIS ARTICLE Click the button Add Email Account. Choose Google as your email provider Woodpecker has native, one-click integration with Google’s Gmail. Note that when you select Gmail you will be informed about the sending limits for this provider. The default limit in Woodpecker will be set according to your choice. Choose Automatic connection Choose Automatic connection and the App will auto-detect settings and quickly configure the email for you. Now just click the Connect button below. Allow Woodpecker access to your Google account Once you click Add email, you will be asked to allow Woodpecker to access your Google Account and read, send, delete as well as manage your email. Remember to turn off 2-step verification for the time of the connection. Once your account is connected, you can turn it back on. Troubleshooting Woodpecker may connect your email address conditionally. Check what that means and how to fix it here. In case of any other issues, have a look at our article I can't connect my mailbox. Click Save … and give Woodpecker a few moments to complete the setup. Afterward, you will be able to select this email as a sender in your campaign. Your account has been connected Now you are able to select your Gmail address as a sender in your campaign. Finalize the setup Click OK to continue creating your first campaign. Click DELIVERABILITY REPORT to check information about the correctness of SPF and DKIM when connecting the mailbox. Check your inbox for the Testing your email connection message. Check Google instructions on how to set up SPF here and DKIM here. Q: What SPF should I add? If you're using Google Apps (G Suite) to send your messages, you should include the following record: v=spf1 include:_spf.google.com ~all
https://docs.woodpecker.co/en/articles/5214063-how-to-connect-gmail-gsuite-to-woodpecker
2021-07-23T23:26:37
CC-MAIN-2021-31
1627046150067.51
[array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/334051405/911c612c118d8cf55392aaea/file-HKazSIR8EN.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/334051410/c3f38ee3488bba3c4e0952b1/file-5XDtL4AYnB.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/334051413/9aeee1897da2bd0b3db51958/file-bSrKv5JCIb.jpg', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/334047990/3d47b7017e8b125a9d37445b/obraz.png', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/334051415/333696419a06a8bcb0d1aa76/file-fP25KDMxSE.png', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/334051418/3c05b476dbc334c24e36e9f8/file-VySYpPCav1.png', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/334051419/e69d0bdaa964084aecf43ff2/file-r1SDbJBw3a.png', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/334051423/882a384992e0ea31141dcb19/file-DmDTOIcozM.png', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/334051427/a5414e90710813a9f0d767fb/file-R0RbUcwcDI.png', None], dtype=object) ]
docs.woodpecker.co
The Sprite Shader This is a special shader designed to allow particle-dependent effects to be applied to X-Particles sprites. It will also affect objects produced by the Generator object but particle-dependent effects are not possible with such objects. The shader only affects sprites at render time and its effects are not visible in the editor. For more details, see the section 'Using the Sprite Shader' below. Interface This is the shader's interface: For the buttons at the bottom of the interface, please see the 'Common interface elements' page. Parameters Note: although we talk about colour in the details below, remember that this shader can also be used in channels such as transparency, bump, displacement, etc. Emitter This link field accepts an X-Particles Emitter which you drag in from the object manager. The shader obtains data on particle age, etc. from this emitter, so without it you cannot make any changes to the shader. If placed in the colour channel the shader will render sprites white in the absence of an emitter. Mode This drop-down menu has five options: Use Particle Color The sprites will have the same colour as the particle. This is particularly useful if the particles were emitted from a texture and took their colour from that texture. Random Color The sprites will each have a randomly-selected colour. Note: if you select this option the sprites may have an odd speckled colour if rendered in the preview render (i.e. the interactive renderer). This is only in the preview render and is not seen if you render the scene to the editor or picture viewer. Random (From Gradient) This option will enable the Color gradient and will randomly colour the sprites using a colour taken from the gradient. Parameter-Dependent Sprites will be coloured using colours taken from the Color gradient, the colour depending on the particle parameter selected. Modifier Sets Value In this mode the color values are set by the Sprite Shader modifier. For more details, see the explanation below. Color Gradient The colour is chosen from this gradient when 'Mode' is set to 'Random (From Gradient)' or 'Parameter-Dependent'. Particle parameter This enables you to select the parameter used for determining the colour from the Color gradient. There are five options: Particle Life This is the most commonly-used option. It selects a gradient colour using the age of the particle compared to its lifespan. Brand-new particles are coloured with the colour from the left of the gradient, while particles at the end of their life are coloured with a colour from the right of the gradient. Age This is a simple cutoff switch which changes the particle colour when it reaches the age give in the Minimum age setting (see below). As an example, if you had a blue-to-white gradient, and entered '40' into the 'Minimum Age' field, particles with an age of less than 40 would be coloured blue, while those of age 40 or above would be coloured white. There is no transition, just a one-step change. Age Range This is similar to Particle life, but instead of the colour changing over the entire life of the particle, the colour changes between the values in the Minimum age and Maximum age settings. So with the blue-to-white gradient, if 'Minimum Age' is set to 30 and 'Maximum Age' to 60, all particles with age under 30 would be blue, all particles aged over 60 would be white, and those from 30 to 60 would be coloured with a colour from the gradient depending on their age. Speed Similar to Age, but using particle speed instead. 
Particles with a speed less than that in the 'Minimum Speed' field are coloured using the colour at the left-hand edge of the gradient, those with a speed greater than this with the colour from the right-hand edge of the gradient. There is no transition, just a one-step change. Speed Range Similar to 'Age Range' but using speed instead. The speed range is obtained from the 'Minimum Speed' and 'Maximum Speed' values. Invert Effect This button simply reverses the colour selection from the gradient so that when the colour would normally be taken from the left of the gradient, when this button is checked the colour from the right of the gradient is used instead, etc. Minimum Age, Maximum Age Used for the Age and Age Range parameters (see above). Minimum Speed, Maximum Speed Used for the Speed and Speed Range parameters (see above). Texture This is only available when the Parameter-Dependent mode is selected. You can put any other Cinema 4D channel shader, or a bitmap, in here. The colour returned from this shader or bitmap will blend with the Gradient color according to the Blend mode setting. A common way of using this setting is to set the Gradient color to a black-to-white gradient and the Blend mode setting to multiply. This will cause the colour to get brighter as the particle ages (if you chose the Particle Life parameter, for example). If you just want to use the colour from the texture without blending with the gradient, just set the gradient to be plain white and Blend to multiply. But then, of course, you lose all the parameter dependent effects so you needn't have bothered with the sprite shader in the first place! Blend Mode This setting selects how the Texture blends with the Color gradient. There are three modes, which are the same as in other Cinema 4D shaders: - Multiply: the most common setting; the two colours are multiplied together. - Add: the colours are added together. The result will be clamped at white. - Subtract: the gradient colour is subtracted from the texture colour. The result will be clamped at black. Using the shader This shader gives you extensive control over the material applied to a sprite. It does this by varying the colour (or transparency, or whichever channel you choose) according to selected attributes of each particle. For example, you can easily change the sprite colour over the particle’s lifetime, or change its transparency over a specified age range, or its alpha channel when a particle exceeds a certain speed. To use the shader, add it to any material channel which can take shaders. You will find it in the ‘X-Particles’ sub-menu in the material editor interface's shader list. You can also add it to a Cinema 4D Layer or Filter shader or any other shader you like. Because the shader depends on data from the particle emitter which is not available until the animation is played, the particles will appear white in the editor; the colours used will only appear when each frame is rendered. Using 'Modifier Sets Value' Using this mode can be a little confusing. What happens is that the Sprite Shader Modifier changes the colour value to use, but it needs to be set up correctly in order to work Assume you have a scene with a sprite object, a material applied to it with a Sprite Shader in the Color channel, mode set to 'Modifier Sets Value' and a Sprite Shader Modifier in the scene with the mode is set to 'RGB'. When you play and render the animation, the sprites will be plain white with no colour change over time. Why? 
The default colour set by the Sprite Shader is white. The modifier will, by default, increment the colour. However (again by default) the colour is clamped to white in the modifier. Since the sprite colour is already white, nothing happens. In the modifier, now set the Green and Blue rates of change to zero, and the Red to -2%. Then set 'Clamp To' to black. When you render, you will see that the sprite colour changes from white to cyan, as the red is reduced to zero leaving only green and blue. If you want to set the rates of change to a positive value, to have any effect the colour must start out less than white. So, change the Red rate of change to 2%, change the 'Clamp To' colour to white, and in the Sprite Shader add a Color shader to the 'Texture' slot. Set this colour to black. This will mean that the sprites start black, and the red component increases so that the sprites turn bright red.
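To make the 'Age Range' behaviour described above concrete, here is a plain-Python sketch of the colour selection, not Cinema 4D or X-Particles code: ages below the minimum take the left edge of the gradient, ages above the maximum take the right edge, and ages in between are interpolated. The numbers reuse the blue-to-white, 30-to-60 example from the text:

    # Not X-Particles code: a sketch of the 'Age Range' mode, mapping particle
    # age onto a two-colour gradient with hard clamps at the range edges.
    def age_range_colour(age, min_age, max_age, left_rgb, right_rgb):
        if age <= min_age:
            return left_rgb                      # younger than the range: left edge of gradient
        if age >= max_age:
            return right_rgb                     # older than the range: right edge of gradient
        t = (age - min_age) / (max_age - min_age)
        return tuple(l + (r - l) * t for l, r in zip(left_rgb, right_rgb))

    blue, white = (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)
    for age in (10, 30, 45, 60, 90):
        print(age, age_range_colour(age, 30, 60, blue, white))

The plain 'Age' and 'Speed' modes are the degenerate case of the same idea: a single threshold with no interpolation in between.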
http://docs.x-particles.net/html/sprshader.php
2021-07-23T21:09:14
CC-MAIN-2021-31
1627046150067.51
[array(['../images/spriteshader1.jpg', None], dtype=object)]
docs.x-particles.net
AWS Network Firewall logging destinations This section describes the logging destinations that you can choose from for your Network Firewall logs. Each section provides guidance for configuring logging for the destination type and information about any behavior that's specific to the destination type. After you've configured your logging destination, you can provide its specifications to the firewall logging configuration to start logging to it.
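As a sketch of "providing the destination specifications to the firewall logging configuration", the boto3 network-firewall client can attach a destination to a firewall. The firewall name and bucket below are placeholders, and you should confirm the exact request shape for your destination type against the boto3 and Network Firewall documentation:

    # Sketch using boto3's network-firewall client; names are placeholders.
    import boto3

    client = boto3.client("network-firewall")

    client.update_logging_configuration(
        FirewallName="my-firewall",
        LoggingConfiguration={
            "LogDestinationConfigs": [
                {
                    "LogType": "FLOW",                  # or "ALERT"
                    "LogDestinationType": "S3",         # or "CloudWatchLogs" / "KinesisDataFirehose"
                    "LogDestination": {
                        "bucketName": "my-firewall-log-bucket",
                        "prefix": "flow-logs",
                    },
                }
            ]
        },
    )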
https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-logging-destinations.html
2021-07-23T21:04:24
CC-MAIN-2021-31
1627046150067.51
[]
docs.aws.amazon.com
The E State Automation CellVisor by default. Add any additional columns containing part of the timestamp and specify the format. By default you will need to add the UTCTime column to the list of Joins and change the Format field to use the time format HH:mm:ss. Click Apply and Next. Note You can reconfigure the file name match/sample file, parser configuration and parameter assignment from the Data Source properties dialog after creation. Use the CellVisor software to configure the logger for communication with eagle.io. Refer to your CellVisor user manual for connection instructions. The following Server settings should be assigned:
https://docs.eagle.io/en/latest/topics/device_configuration/cellvisor/index.html
2021-07-23T21:35:30
CC-MAIN-2021-31
1627046150067.51
[]
docs.eagle.io
git_remote_branch Returns the name of the current git remote default branch. If no default remote branch could be found, this action will return nil. This is a wrapper for the internal action Actions.git_default_remote_branch_name 2 Examples git_remote_branch # Query git for first available remote name git_remote_branch(remote_name:"upstream") # Provide a remote name Parameters * = default value is dependent on the user's system Documentation To show the documentation in your terminal, run fastlane action git_remote_branch CLI It is recommended to add the above action into your Fastfile; however, sometimes you might want to run one-offs. To do so, you can run the following command from your terminal: fastlane run git_remote_branch To pass parameters, make use of the : symbol, for example fastlane run git_remote_branch remote_name:"upstream"
https://docs.fastlane.tools/actions/git_remote_branch/
2021-07-23T22:51:05
CC-MAIN-2021-31
1627046150067.51
[]
docs.fastlane.tools
xcode_server_get_assets Downloads Xcode Bot assets like the .xcarchive and logs. This action downloads assets from your Xcode Server Bot (works with Xcode Server using Xcode 6 and 7). By default, this action downloads all assets, unzips them and deletes everything except for the .xcarchive. If you'd like to keep all downloaded assets, pass keep_all_assets: true. This action returns the path to the downloaded assets folder and puts into shared values the paths to the asset folder and to the .xcarchive inside it. 1 Example xcode_server_get_assets( host: "10.99.0.59", # Specify Xcode Server's Host or IP Address bot_name: "release-1.3.4" # Specify the particular Bot ) Parameters * = default value is dependent on the user's system Lane Variables Actions can communicate with each other using a shared hash lane_context, which can be accessed in other actions, plugins or your lanes: lane_context[SharedValues::XYZ]. The xcode_server_get_assets action generates the following Lane Variables: To get more information check the Lanes documentation. Documentation To show the documentation in your terminal, run fastlane action xcode_server_get_assets CLI It is recommended to add the above action into your Fastfile; however, sometimes you might want to run one-offs. To do so, you can run the following command from your terminal: fastlane run xcode_server_get_assets To pass parameters, make use of the : symbol, for example fastlane run xcode_server_get_assets host:"10.99.0.59"
https://docs.fastlane.tools/actions/xcode_server_get_assets/
2021-07-23T23:15:33
CC-MAIN-2021-31
1627046150067.51
[]
docs.fastlane.tools
- Delete everything and start over - Migration wrangling - Manually access the database - Access the GDK database with Visual Studio Code Troubleshooting and Debugging Database This section is to help give some copy-pasta you can use as a reference when you run into some head-banging database problems. An easy first step is to search for your error in Slack, or search for GitLab <my error> with Google. Available RAILS_ENV: production (generally not for your main GDK database, but you may need this for other installations such as Omnibus). development (this is your main GDK db). test (used for tests like RSpec). Delete everything and start over If you just want to delete everything and start over with an empty DB (approximately 1 minute): bundle exec rake db:reset RAILS_ENV=development If you just want to delete everything and start over with sample data (approximately 4 minutes). This also does db:reset and runs DB-specific migrations: bundle exec rake db:setup RAILS_ENV=development If your test DB is giving you problems, it is safe to delete everything there too: bundle exec rake db:reset RAILS_ENV=test Manually access the database You can access the development database directly with either of these commands: bundle exec rails dbconsole -e development or bundle exec rails db -e development Access the GDK database with Visual Studio Code Use these instructions for exploring the GitLab database while developing with the GDK: - Install or open Visual Studio Code. - Install the PostgreSQL VSCode Extension. - In Visual Studio Code click on the PostgreSQL Explorer button in the left toolbar. - In the top bar of the new window, click on the + to Add Database Connection, and follow the prompts to fill in the details: - Hostname: the path to the PostgreSQL folder in your GDK directory (for example /dev/gitlab-development-kit/postgresql). - PostgreSQL user to authenticate as: usually your local username, unless otherwise specified during PostgreSQL installation. - Password of the PostgreSQL user: the password you set when installing PostgreSQL. - Port number to connect to: 5432 (default). - Use an SSL connection? This depends on your installation. Options are: - Use Secure Connection - Standard Connection (default) - (Optional) The database to connect to: gitlabhq_development. - The display name for the database connection: gitlabhq_development. Your database connection should now be displayed in the PostgreSQL Explorer pane and you can explore the gitlabhq_development database. If you cannot connect, ensure that GDK is running. For further instructions on how to use the PostgreSQL Explorer Extension for Visual Studio Code, read the usage section of the extension documentation. ActiveRecord::PendingMigrationError with Spring When running specs with the Spring pre-loader, you may run into this error. db:migrate database version is too old to be migrated error Users receive this error when db:migrate detects that the current schema version is older than the MIN_SCHEMA_VERSION defined in the Gitlab::Database library module. Over time we clean up/combine old migrations in the codebase, so it is not always possible to migrate GitLab from every previous version. In some cases you may want to bypass this check. For example, if you were on a version of the GitLab schema later than the MIN_SCHEMA_VERSION, and then rolled back to an older migration from before it. In this case, in order to migrate forward again, you should set the SKIP_SCHEMA_VERSION_CHECK environment variable. bundle exec rake db:migrate SKIP_SCHEMA_VERSION_CHECK=true
https://docs.gitlab.com/ee/development/database_debugging.html
2021-07-23T23:35:56
CC-MAIN-2021-31
1627046150067.51
[]
docs.gitlab.com
Configuration In this article, we'll discuss how to configure Fiddler Everywhere on your system. By default, the Fiddler Everywhere client intercepts insecure traffic (HTTP) only and needs an account with administrative rights to capture secure traffic (HTTPS). The Fiddler Everywhere client acts as a man-in-the-middle (against the HTTPS traffic). To enable capturing and decrypting HTTPS traffic, you will need to explicitly install a root trust certificate through the HTTPS sub-menu in Settings. Configure on macOS Start Fiddler Everywhere on the device that will capture the traffic. Go to Settings > HTTPS and click the Trust Root Certificate button. A keychain user and password box appears. Enter your machine administrative credentials. Select the Capture HTTPS traffic checkbox to enable HTTPS traffic capturing. Click the Save button to save the changes. Configure on Windows Start Fiddler Everywhere on the device that will capture the traffic. Go to Settings > HTTPS and click the Trust Root Certificate button. A trust certificate popup appears to confirm and add the certificate. Select the Capture HTTPS traffic checkbox to enable HTTPS traffic capturing. - Click the Save button to save the changes. Configure on Linux Many Linux distributions use different security features and different ways of adding a root certificate. For such cases, Fiddler Everywhere provides means to export the trusted root certificate so that you can manually import it in your Linux OS. Use the Export Root Certificate to Desktop and Trust Certificate option as follows: Start Fiddler Everywhere on the device that will capture the traffic. Go to Settings > HTTPS and expand the Advanced Settings sub-menu. Click the Export Root Certificate to Desktop button. Import and trust the exported certificate. To install the Fiddler Everywhere certificate, you need to follow some additional steps on Linux: Create a directory and copy the certificate (exported in the previous steps). The last command will start the tool to upgrade the certificates. $ sudo mkdir /usr/share/ca-certificates/extra $ sudo cp ~/Desktop/FiddlerRootCertificate.crt /usr/share/ca-certificates/extra $ sudo dpkg-reconfigure ca-certificates The above commands assume that your Linux distribution uses the dpkg-reconfigure command. If that is not applicable to your Linux distro, please check the article about configuring the Fiddler certificate on Fedora, CentOS and RedHat. From the prompt select Yes to install new certificates - Choose the FiddlerRootCertificate.crt and press OK - The certificates are then updated The Capture HTTPS traffic checkbox is now active. Check the box to enable capturing HTTPS traffic. Click the Save button to save the changes. If your Linux distribution does not have a Desktop directory, you can easily work around this issue by creating a folder called Desktop in your home directory ( mkdir ~/Desktop) and then exporting the certificate to the newly created directory. Once the certificate is installed, you can safely remove the directory. For more information about Fiddler Everywhere settings, visit the Settings page. Additional Resources Once the client is configured, you can start using its features. Get to know how to:
https://docs.telerik.com/fiddler-everywhere/get-started/configuration
2021-07-23T23:43:50
CC-MAIN-2021-31
1627046150067.51
[array(['../images/settings/settings-trust-root-certificate.png', 'default https settings'], dtype=object) ]
docs.telerik.com
IN THIS ARTICLE 1. Select your apps 2. Grant access 3. Configure your sync 4. Finalize your setup 1. Select your apps In your main Integromat dashboard click on Create a new scenario in the top right corner. On the next screen, find your desired app using the search bar and select it by clicking on it. After that, choose the trigger or action that should come first in your scenario from the list shown: 2. Grant access Generate the API key » from Woodpecker and copy-paste it to Integromat. Integromat needs this key to gain access to the prospect database, which will be synced for you. 1. Go to Woodpecker and click Settings from the dropdown menu in the upper right corner. 2. Next go to Marketplace → Integrations. 3. Click the API Keys tab. 4. Click the green Create a Key button. 5. Copy the API Key. 6. Paste it into the Connection field in Integromat. 7. Click on the Continue button and you’re good to go and expand your scenario. In order to include an additional app in your scenario in Integromat, you need to add the next module. To do so, click the right handle of the Woodpecker module. It will create an empty module and open a menu that allows you to add another app to your scenario. 3. Configure your sync 3.1 Set up a filter Filters are a great solution to control the data flow between your modules in order to sync only the information which you need. To add a filter between two modules, click on the connecting line between them. This will bring up a panel where you can enter the name for the filter that is to be created and define one or more filter conditions. 3.2 Schedule your scenario Click the clock icon to open a dialog where you can schedule a scenario » Here, you can set when and how often your activated scenario is to be executed. If you want to learn about more advanced features in Integromat such as Flow Control » or Scenario Settings », check out their help articles » 4. Finalize your setup Now all that is left is to start syncing! Before that, you just need to test your scenario and activate it. To test if your configuration is correct, click on the Run once button in the left bottom corner of your dashboard. If your test comes back with no error, you can start syncing your data. To do so, click the Scheduling switch below the Run once button. That’s it! Integromat will now transfer data between your apps for you. Go visit their page to find out more about Integromat possibilities »
https://docs.woodpecker.co/en/articles/5223232-integromat-quickstart-guide
2021-07-23T22:28:20
CC-MAIN-2021-31
1627046150067.51
[array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424889/ba177302ca918d72319a4020/file-l2oRYe3Vin.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424892/5991f2c3a40aac77122e468f/file-XyS9orHa4Q.gif', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424894/126d09c8d69164acb6cf0213/file-10U8ZPAzK0.gif', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424896/e15df4b3e35aa9fd1163ccbf/file-PRArKN9Req.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424900/085e3a834f04f7f9500e5c9c/file-S5cfvgsb8y.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424905/f73393bb9b4e278ee380247a/file-NT9JG07xC7.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424913/b21c85f58c95307d1f6f3468/file-hgkostdxWs.gif', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424917/cc67055e61449316c38bf5c3/file-7NemFAEx0Q.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424920/0aa9d08f1c6a86df40d87bd2/file-E3jxaVqREI.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424930/974726be06235b806ebcc1ad/file-uCY5A10yCi.gif', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424940/084201a806a6afb14f076777/file-SUKSIS9MoA.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335424943/695efbf7b8cdd70e58c67089/file-AyrUHOQzwY.gif', None], dtype=object) ]
docs.woodpecker.co
You can encrypt 3270, 5250, and Open Systems session documents to protect them against unauthorized changes. Encryption effectively scrambles the data in a session document, helping to prevent unauthorized users from reading and changing the file's contents. For best results, use document encryption in conjunction with the encryption options in the Permissions Manager. In Reflection, you can easily encrypt sessions by saving them in the Encrypted Session Document format. Alternatively, you can encrypt documents using a command-line program installed with Reflection, FileEncrypt.exe. With this program, you can also determine whether session documents are encrypted, and if they are, you can decrypt them. To encrypt a session in Reflection: if you are using the Microsoft Office 2007 "look and feel", on the Reflection button choose Save As; if you are using the Microsoft Office 2010 "look and feel", on the File menu choose Save As. To encrypt, decrypt, or test sessions using FileEncrypt.exe: to encrypt a document, type fileencrypt /e [file_in] [file_out]; to decrypt a document, type fileencrypt /d [file_in] [file_out]; to test a document for encryption, type fileencrypt /t [file_in]. where: [file_in] = The filename, including the extension and relative path. [file_out] = (Optional) A new name for the output file. For example: fileencrypt /e Session.rd3x SessionEncrypted.rd3x Note: FileEncrypt.exe searches only the current directory for session files, and requires administrative credentials to encrypt or decrypt a file. Related Topics Permissions Manager
https://docs.attachmate.com/Reflection/2011/r3/help/en/user-html/admin_encrypt_pr.htm
2021-07-23T23:06:58
CC-MAIN-2021-31
1627046150067.51
[]
docs.attachmate.com
The General section provides an overview of the account and allows you to change your contact details and set up quality code information. The account owner or manager can change the account name or view a summary of current usage and billing charges for non-managed accounts. Note Only the account owner or manager can change the account name or close the account. Please ensure your contact details are always up to date so that we can contact you when required. You can optionally specify a different support phone number and email address for any public enquiries or customer support requests. The phone number should follow the standard international format starting with ‘+’ and including country and area code. The email address is used for all account-related correspondence with eagle.io. You should use your company admin email rather than your personal email address, e.g. [email protected] The Quality Codes section displays a list of the historic Qualities for this Account. Quality Code settings apply to all Workspaces in the account. Use the Add button to create custom quality codes. Source and Quality Codes must be whole numbers between 0 and 65535. Refer to the Quality reference for further details. Note System qualities cannot be removed, but can be updated with new Quality Codes.
https://docs.eagle.io/en/latest/topics/account_settings/general/index.html
2021-07-23T22:51:44
CC-MAIN-2021-31
1627046150067.51
[]
docs.eagle.io
The Amazon order synchronization is currently supported for Shopify, Magento, Magento 2, WooCommerce, Lightspeed, Shoplo, Prestashop, BigCommerce and CCV Shop platforms. Update your Koongo plan Prior to the Amazon order sync setup, you need to upgrade your plan and add a connection. For more details, please check the My Plan & Pricing manual. This step does not apply to the CCV Shop platform. Order sync setup To activate the Amazon order synchronization, please follow the steps below: Add connection Click the Add Connection button. Select the Amazon connection Select the Amazon connection from the list Fill user details Select your Amazon account Marketplace and fill in the Seller Id and MWS Authorization Token. Set Activate Order Synchronization to ACTIVE. Save the settings. For more details about the Seller Id and MWS Authorization Token, please check the Integration over API manual. Types Order syncing will transfer only orders with the selected fulfillment method. Product Identification Attribute This indicates the attribute which is mapped to the Amazon Seller SKU in the Amazon feed. Default Shipping Service - Store Shipment The default value will be used in case the order shipping service in the shipment from the store doesn't match Amazon's allowed services. Default Order Shipping Method The default shipping method for Amazon orders - e.g. DHL DE. Leave empty if you'd like to use values imported from Amazon. The value is also used for the Carrier Name during shipment upload to Amazon, if the Carrier Name is not filled in by the store. Order Cancellation Reason The default reason for order cancellation at the Amazon channel. Ignore Amazon Tax Ignore Amazon tax lines during order item processing. Tax Is Included In Prices Tax is included in the base item and shipping price, which comes from Amazon. Order details You can check the details of each order.
https://docs.koongo.com/display/koongo/Amazon+order+sync+manual
2021-07-23T22:34:46
CC-MAIN-2021-31
1627046150067.51
[]
docs.koongo.com
mopidy.backend — Backend API¶ The backend API is the interface that must be implemented when you create a backend. If you are working on a frontend and need to access the backends, see the mopidy.core — Core API instead. URIs and routing of requests to the backend¶ When Mopidy’s core layer is processing a client request, it routes the request to one or more appropriate backends based on the URIs of the objects the request touches on. The objects’ URIs are compared with the backends’ uri_schemes to select the relevant backends. An often used pattern when implementing Mopidy backends is to create your own URI scheme which you use for all tracks, playlists, etc. related to your backend. In most cases the Mopidy URI is translated to an actual URI that GStreamer knows how to play right before playback. For example: Spotify already has its own URI scheme ( spotify:track:..., spotify:playlist:..., etc.) used throughout their applications, and thus Mopidy-Spotify simply uses the same URI scheme. Playback is handled by pushing raw audio data into a GStreamer appsrc element. Mopidy-SoundCloud created its own URI scheme, after the model of Spotify, and uses URIs of the following forms: soundcloud:search, soundcloud:user-..., soundcloud:exp-..., and soundcloud:set-.... Playback is handled by converting the custom soundcloud:.. URIs to playable URIs immediately before they are passed on to GStreamer for playback. Mopidy differentiates between file:// URIs handled by Mopidy-Stream and local:... URIs handled by Mopidy-Local. Mopidy-Stream can play file:// URIs pointing to tracks and playlists located anywhere on your system, but it doesn’t know a thing about the object before you play it. On the other hand, Mopidy-Local scans a predefined local/media_dir to build a meta data library of all known tracks. It is thus limited to playing tracks residing in the media library, but can provide additional features like directory browsing and search. In other words, we have two different ways of playing local music, handled by two different backends, and have thus created two different URI schemes to separate their handling. The local:... URIs are converted to file:// URIs immediately before they are passed on to GStreamer for playback. If there isn’t an existing URI scheme that fits your backend’s purpose, you should create your own, and name it after your extension’s ext_name. Care should be taken not to conflict with URI schemes already in use. It is also recommended to design the format such that tracks, playlists and other entities can be distinguished easily. However, it’s important to note that outside of the backend that created them, URIs are opaque values that neither Mopidy’s core layer nor Mopidy frontends should attempt to derive any meaning from. The only valid exception to this is checking the scheme. Backend class¶ - class mopidy.backend.Backend[source]¶ Backend API If the backend has problems during initialization it should raise mopidy.exceptions.BackendError with a descriptive error message. This will make Mopidy print the error message and exit so that the user can fix the issue. - Parameters config (dict) – the entire Mopidy configuration audio ( pykka.ActorProxy for mopidy.audio.Audio) – actor proxy for the audio subsystem - library: Optional[LibraryProvider] = None¶ The library provider. An instance of LibraryProvider, or None if the backend doesn’t provide a library. - playback: Optional[PlaybackProvider] = None¶ The playback provider. An instance of PlaybackProvider, or None if the backend doesn’t provide playback. 
- playlists: Optional[PlaylistsProvider] = None¶ The playlists provider. An instance of PlaylistsProvider, or class:None if the backend doesn’t provide playlists. Playback provider¶ - class mopidy.backend.PlaybackProvider(audio: Any, backend: Backend)[source]¶ - Parameters audio (actor proxy to an instance of mopidy.audio.Audio) – the audio actor backend ( mopidy.backend.Backend) – the backend - change_track(track: Track) → bool[source]¶ Switch to provided track. MAY be reimplemented by subclass. It is unlikely it makes sense for any backends to override this. For most practical purposes it should be considered an internal call between backends and core that backend authors should not touch. The default implementation will call translate_uri()which is what you want to implement. - Parameters track ( mopidy.models.Track) – the track to play - Return type Trueif successful, else False - get_time_position() → int[source]¶ Get the current time position in milliseconds. MAY be reimplemented by subclass. - Return type - - is_live(uri: Uri) → bool[source]¶ Decide if the URI should be treated as a live stream or not. MAY be reimplemented by subclass. Playing a source as a live stream disables buffering, which reduces latency before playback starts, and discards data when paused. - Parameters uri (string) – the URI - Return type - - pause() → bool[source]¶ Pause playback. MAY be reimplemented by subclass. - Return type Trueif successful, else False - play() → bool[source]¶ Start playback. MAY be reimplemented by subclass. - Return type Trueif successful, else False - prepare_change() → None[source]¶ Indicate that an URI change is about to happen. MAY be reimplemented by subclass. It is extremely unlikely it makes sense for any backends to override this. For most practical purposes it should be considered an internal call between backends and core that backend authors should not touch. - resume() → bool[source]¶ Resume playback at the same time position playback was paused. MAY be reimplemented by subclass. - Return type Trueif successful, else False - seek(time_position: int) → bool[source]¶ Seek to a given time position. MAY be reimplemented by subclass. - should_download(uri: Uri) → bool[source]¶ Attempt progressive download buffering for the URI or not. MAY be reimplemented by subclass. When streaming a fixed length file, the entire file can be buffered to improve playback performance. - Parameters uri (string) – the URI - Return type - - stop() → bool[source]¶ Stop playback. MAY be reimplemented by subclass. Should not be used for tracking if tracks have been played or when we are done playing them. - Return type Trueif successful, else False - translate_uri(uri: Uri) → Optional[Uri][source]¶ Convert custom URI scheme to real playable URI. MAY be reimplemented by subclass. This is very likely the only thing you need to override as a backend author. Typically this is where you convert any Mopidy specific URI to a real URI and then return it. If you can’t convert the URI just return None. - Parameters uri (string) – the URI to translate - Return type string or Noneif the URI could not be translated Playlists provider¶ - class mopidy.backend.PlaylistsProvider(backend: mopidy.backend.Backend)[source]¶ A playlist provider exposes a collection of playlists, methods to create/change/delete playlists in this collection, and lookup of any playlist the backend knows about. 
- Parameters backend ( mopidy.backend.Backendinstance) – backend the controller is a part of - as_list() → List[Ref][source]¶ Get a list of the currently available playlists. Returns a list of Refobjects referring to the playlists. In other words, no information about the playlists’ content is given. - Return type list of mopidy.models.Ref New in version 1.0. - create(name: str) → Optional[Playlist][source]¶ Create a new empty playlist with the given name. Returns a new playlist with the given name and an URI, or Noneon failure. MUST be implemented by subclass. - Parameters name (string) – name of the new playlist - Return type mopidy.models.Playlistor None - delete(uri: Uri) → bool[source]¶ Delete playlist identified by the URI. Returns Trueif deleted, Falseotherwise. MUST be implemented by subclass. - Parameters uri (string) – URI of the playlist to delete - Return type - Changed in version 2.2: Return type defined. - get_items(uri: Uri) → Optional[List[Ref]][source]¶ Get the items in a playlist specified by uri. Returns a list of Refobjects referring to the playlist’s items. If a playlist with the given uridoesn’t exist, it returns None. - Return type list of mopidy.models.Ref, or None New in version 1.0. - lookup(uri: Uri) → Optional[Playlist][source]¶ Lookup playlist with given URI in both the set of playlists and in any other playlist source. Returns the playlists or Noneif not found. MUST be implemented by subclass. - Parameters uri (string) – playlist URI - Return type mopidy.models.Playlistor None - save(playlist: Playlist) → Optional[Playlist][source]¶ Save the given playlist. The playlist must have an uriattribute set. To create a new playlist with an URI, use create(). Returns the saved playlist or Noneon failure. MUST be implemented by subclass. - Parameters playlist ( mopidy.models.Playlist) – the playlist to save - Return type mopidy.models.Playlistor None Library provider¶ - class mopidy.backend.LibraryProvider(backend: mopidy.backend.Backend)[source]¶ - Parameters backend ( mopidy.backend.Backend) – backend the controller is a part of - browse(uri: Uri) → List[Ref][source]¶ See mopidy.core.LibraryController.browse(). If you implement this method, make sure to also set root_directory. MAY be implemented by subclass. - get_distinct(field: DistinctField, query: Optional[Query[DistinctField]] = None) → Set[str][source]¶ See mopidy.core.LibraryController.get_distinct(). MAY be implemented by subclass. Default implementation will simply return an empty set. Note that backends should always return an empty set for unexpected field types. - get_images(uris: List[Uri]) → Dict[Uri, List[Image]][source]¶ See mopidy.core.LibraryController.get_images(). MAY be implemented by subclass. Default implementation will simply return an empty dictionary. - lookup(uri: Uri) → Dict[Uri, List[Track]][source]¶ See mopidy.core.LibraryController.lookup(). MUST be implemented by subclass. - refresh(uri: Optional[Uri] = None) → None[source]¶ See mopidy.core.LibraryController.refresh(). MAY be implemented by subclass. - root_directory: Optional[Ref] = None¶ mopidy.models.Ref.directoryinstance with a URI and name set representing the root of this library’s browse tree. URIs must use one of the schemes supported by the backend, and name should be set to a human friendly value. MUST be set by any class that implements LibraryProvider.browse(). - search(query: Query[SearchField], uris: Optional[List[Uri]] = None, exact: bool = False) → List[SearchResult][source]¶ See mopidy.core.LibraryController.search(). 
MAY be implemented by subclass. New in version 1.0: The exactparam which replaces the old find_exact. Backend listener¶ - class mopidy.backend.BackendListener[source]¶ Marker interface for recipients of events sent by the backend actors. Any Pykka actor that mixes in this class will receive calls to the methods defined here when the corresponding events happen in a backend actor. This interface is used both for looking up what actors to notify of the events, and for providing default implementations for those listeners that are not interested in all events. Normally, only the Core actor should mix in this class. - playlists_loaded() → None[source]¶ Called when playlists are loaded or refreshed. MAY be implemented by actor. Backend implementations¶ See the extension registry.
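To make the API above concrete, here is a minimal sketch of a backend that follows the pattern described in this page: a custom URI scheme plus a PlaybackProvider whose translate_uri() converts the custom URI into something GStreamer can play. The example: scheme, the class names, and the lookup_stream_url() helper are hypothetical illustrations, not part of Mopidy; only Backend, PlaybackProvider, and translate_uri() come from the API documented above.

```python
import pykka

from mopidy import backend


class ExampleBackend(pykka.ThreadingActor, backend.Backend):
    """Hypothetical backend handling a made-up ``example:`` URI scheme."""

    uri_schemes = ["example"]

    def __init__(self, config, audio):
        super().__init__()
        # Only the playback provider is implemented here; library and
        # playlists stay None, as allowed by the Backend class above.
        self.playback = ExamplePlaybackProvider(audio=audio, backend=self)
        self.library = None
        self.playlists = None


class ExamplePlaybackProvider(backend.PlaybackProvider):
    def translate_uri(self, uri):
        # Convert example:<track-id> into a URI GStreamer can play.
        # Returning None tells core the URI could not be translated.
        track_id = uri.split(":", 1)[1]
        return lookup_stream_url(track_id)


def lookup_stream_url(track_id):
    # Stand-in for a call to the hypothetical service's API; a real
    # backend would resolve the track ID to an http(s):// or file:// URI.
    return None
```

In a real extension the backend class would be registered through the extension's setup hook so that Mopidy starts the actor; that wiring is omitted here.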
https://docs.mopidy.com/en/latest/api/backend/
2021-07-23T21:06:13
CC-MAIN-2021-31
1627046150067.51
[]
docs.mopidy.com
PRINTING INDUSTRY. The i-DOCS formatting procedures allow the document to include any kind of general or customized formatting, based on predefined rules. The printing house will also be able to accept incoming statements formatted in HTML, so that they can be subsequently formatted in PostScript (PS), AFP or PDF, which include all the control features that printing procedures require. The following printing modes and features are supported: Duplex Simplex One-Up Two-Up Continuous Feed / Cut Sheets Barcodes Barcode Strings Global Counters OCR Strings Data integrity engine that compares the input with the output Full roll back and continue functionality Automated graphical user interface Grid computing on process level (1 of 10 = 10 of 1) Distributed databases (transparent on-line access to multiple archived DBs) Complete XML architecture with Java implementation i-DOCS enterprise edition for printing houses is ideally suited for handling thousands of documents and provides the following capabilities: Storage and Retrieval Since statements are stored in their final format, printing houses will have the ability to offer a complete e-billing solution on their customer's behalf. Given that the documents are reformattable, they can, if necessary, be returned to the customer to complete the verification and certification procedure. Intratemporal data consistency Even if applications and data formats, or even the documents themselves, change over time, the document database will continue to be consistent and searchable, and the documents can be reproduced in their original form without data conversions or application modifications. Easy Administration Mapping input documents into XML schemas, as well as the design of the output forms and the definition of the related work flows, can be done by non-technical personnel given a minimum of training on the tools included in the solution. This way, the printing house customer, or the printing house itself, will be able to respond quickly to a frequently changing regulatory or business environment. Auditability i-DOCS facilitates internal and external auditing mechanisms, which require that the customer have access to both recent and older documents, can quickly locate specific documents, and can run statistical analysis or other kinds of processing on them if necessary. Compliance and legal service support Locating specific customer statements ad hoc, for example at the request of the legal authorities, can be time consuming and costly; i-DOCS makes such documents quick to locate and retrieve.
https://www.i-docs.com/printing
2021-07-23T22:02:09
CC-MAIN-2021-31
1627046150067.51
[array(['https://static.wixstatic.com/media/b9abd5_3fa448bf865d49009eb6583c91832145~mv2.jpg/v1/fill/w_91,h_46,al_c,q_80,usm_0.66_1.00_0.01/i-docs.jpg', 'i-docs.jpg'], dtype=object) array(['https://static.wixstatic.com/media/e859e64b2ac644a0a6a6a8d7335dd05a.jpg/v1/fill/w_779,h_312,al_c,q_80,usm_0.66_1.00_0.01/e859e64b2ac644a0a6a6a8d7335dd05a.jpg', None], dtype=object) ]
www.i-docs.com
OIDC User Pool IdP Authentication Flow When your user signs in to your application using an OIDC IdP, this is the authentication flow. Your user lands on the Amazon Cognito built-in sign-in page, and is offered the option to sign in through an OIDC IdP such as Salesforce. Your user is redirected to the OIDC IdP's authorization endpoint. After your user is authenticated, the OIDC IdP redirects to Amazon Cognito with an authorization code. Amazon Cognito exchanges the authorization code with the OIDC IdP for an access token. Amazon Cognito creates or updates the user account in your user pool. Amazon Cognito issues your application bearer tokens, which might include identity, access, and refresh tokens. Requests that are not completed within 5 minutes will be cancelled, redirected to the login page, and then display a Something went wrong error. OIDC is an identity layer on top of OAuth 2.0, which specifies JSON-formatted (JWT) identity tokens that are issued by IdPs to OIDC client apps (relying parties). See the documentation for your OIDC IdP for information about how to add Amazon Cognito as an OIDC relying party. When a user authenticates, the user pool returns ID, access, and refresh tokens. The ID token is a standard OIDC identity token in JWT format.
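When the application itself uses the authorization code grant with the Amazon Cognito hosted UI, it redeems the returned code at the user pool's /oauth2/token endpoint to obtain the bearer tokens mentioned above. The sketch below illustrates that exchange; the domain, client ID, client secret, and redirect URI are placeholders, and error handling beyond raise_for_status() is omitted.

```python
import base64

import requests

# Placeholder values: substitute your own user pool domain, app client
# settings, and the authorization code returned to your redirect URI.
COGNITO_DOMAIN = "https://example-domain.auth.us-east-1.amazoncognito.com"
CLIENT_ID = "example-client-id"
CLIENT_SECRET = "example-client-secret"  # leave empty if the app client has no secret
REDIRECT_URI = "https://example.com/callback"


def exchange_code_for_tokens(auth_code: str) -> dict:
    """Redeem an authorization code for ID, access, and refresh tokens."""
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    if CLIENT_SECRET:
        basic = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
        headers["Authorization"] = f"Basic {basic}"

    response = requests.post(
        f"{COGNITO_DOMAIN}/oauth2/token",
        headers=headers,
        data={
            "grant_type": "authorization_code",
            "client_id": CLIENT_ID,
            "code": auth_code,
            "redirect_uri": REDIRECT_URI,
        },
        timeout=10,
    )
    response.raise_for_status()
    # The response body contains id_token, access_token, and refresh_token.
    return response.json()
```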
https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-oidc-flow.html
2021-07-23T23:14:30
CC-MAIN-2021-31
1627046150067.51
[array(['images/flow-cup-oidc-endpoints.png', 'User pool OIDC IdP authentication flow'], dtype=object)]
docs.aws.amazon.com
The URL can be used to link to this page Your browser does not support the video tag. AG 15-021 tETUR N'I'O: EXT: CITY OF FEDERAL WAY LAW DEPARTMENT ROUTING FORM 1. ORIGINATING DEPT/DIV: PRCS/ '` i f I_- _ q 2. ORIGINATING STAFF PERSON: K t m ^,� J S v�e-t�"T�Y1. EXT: 6613'2- 3. DATE REQ.BY: 1/1 la I I . ' 1 . _ ', (,\ f� f^ OTHER (4Y 6 e rv1 r i(a-(-1 r5Y\ yYI- &I 111 0 ' " ' • ç GLI e `t'TL. 5. PROJECT NAME: Fa IM l(u) `- L R\ ,et- 1 Cat✓Jr 6. NAME OF CONTRACTOR: t-( JL.k 11 Pli.k I lC ADDRESS: L tA 1 Li '_ - }� C CO 6 TELEPHONE:0-6(z E-MAIL: FAX: SIGNATURE NAME: V I G 1e-k_ S0✓1 Z Gi -Arr,e iV TITLE:: I 71 I I I C COMPLETION DATE: / l I t 9. TOTAL COMPENSATION:$ C REVIEWED INITIAL/DATE APPROVED PROJECT MANAGER 0,3 V2---V3 I I i't ❑ SUPERVISOR ❑ DIRECTOR ❑ RISK MANAGEMENT (IF APPLICABLE) LAW DEPT 4 L/1%`/i r ' 11. COUNCIL APPROVAL(IF APPLICABLE) COMMITTEE APPROVAL DATE: COUNCIL APPROVAL DATE: 12. CONTRACT SIGNATURE ROUTING ❑ SENT TO VENDOR/CONTRACTOR DATE SENT: DATE REC'D: ❑ ATTACH: SIGNATURE AUTHORITY,INSURANCE CERTIFICATE,LICENSES,EXHIBITS INITIAL/DATE SIGNED ❑ LAW DEPT All s/ SIGNATORY(MAYOR OR DIRECTOR) �� CITY CLERK / V fr ,JTj i ASSIGNED AG# AG# - -•. _ n SIGNED COPY RETURNED DATE SENT: III ❑RETURN ONE ORIGINAL COMMENTS: P h l . c !A. A. ' .i i &! . t- � , t► 4/1.4 (S" t)0 11-0 • we 11/9 FEDERAL WAY FAMILY HEALTH & SAFETY FAIR INDEMNIFICATION AND INSURANCE AGREEMENT This Indemnification and Insurance Agreement ("Agreement") is dated effective the later date indicated below with the signature. The parties ("Partie ") to this A reeme t are the City of Federal Way, a Washington municipal corporation("City") and ridge-al /' Gci (. 0' 0. X00 3. Providers who are only handing out informational brochures and not providing any screening, �tiagnosis, • CITY OF FEDERAL WAY ATTEST n 7713 . r• b Jim Ferrell, yo R' , , CMC, Ci C :,rk 5*talc Nutt Date: HEALTH CARE PROVIDER/VENDOR (/cry—cg2._ Cf-2(/` V ( ignature) (Pe)AMm fA/o w - suP iS.ok I// ice {2 rinted Name and Title) r V MG (49, p u 0 howl/ (Organization Name) 33¢3/ /37h N. c . e Way a/11 90°_3' (Address) (2 & ! 177 4 g°7 (Phon Date: //7&(•C. -- _ STATE OF WASHINGTON ) ) ss. COUNTY OF <6 2.9 ) On this day personally appeared before me ■ ig N" ° — , to me known to be the 1 ,(.pe1ry►5bi'' of - _ -(. . 1 i • e v that executed the foregoing instrument,and acknowledged the said instrument to be the free and voluntary act and deed of said corporation,for the uses and purposes therein mentioned, and on oath stated that he/€as authorized to execute said instrument and that the seal affixed, if any, is the corporate seal of said corporation. GIVEN my hand and official seal this 1-111 day of )(AA( 1a r-y , 201.5. ` otary's signatu'�"' *A A∎11111 ' Notary Public Votary's printed name V t 3 t I State of Washington 4Totary Public in and for tie Stateof Washington. I KYONG JUN I My Appointment Expires Sep 14,2017 y commission expires,S __-pt. V-i ZD (—( Health Fair Indemnification 10/2013 -2- 1
https://docs.cityoffederalway.com/WebLink/DocView.aspx?id=596904&dbid=0&repo=CityofFederalWay
2021-07-23T21:48:14
CC-MAIN-2021-31
1627046150067.51
[]
docs.cityoffederalway.com
Main panel¶ The main panel allows for easy selection of content elements listed under the HEADER, CONTENT and FOOTER menus. Tags, Styles and Document properties can also be accessed from this panel. The report editor is used to build the report including the document properties, content, layout and styles. Quick Links The DOCUMENT PROPERTIES menu contains general page and display options. The Page format and Orientation options specify the output page settings for PDF documents. Typically a report would use DIN A4. Select Own dimensions to specify a custom width and height. Increasing the Content height value will adjust the available vertical space in the report layout so that more elements can be added. Content will automatically span over multiple pages as necessary. Alternatively insert a page break element to force content to flow to a new page. Click or drag-drop an element icon from the editor toolbar to insert it into the page designer. Elements include text, horizontal line, image, bar code, table, frame, section and page break. Text elements are used to add text to your report. This includes headings, labels and paragraphs. Add images to your report. Choose file from the detail panel to upload a local JPG or PNG file from your device. Alternatively specify a URL or tag as the image Source. Charts and attachments from the workspaces tree can be added to the report via tags. Insert CODE 128 bar codes into your report. Use the Text field to enter the alphanumeric characters for the bar code or use tags for dynamic content. Tables make it easy to align related content. Insert a table and select the element from the main panel to access and edit the content of individual cells. Note: you can add current parameter data values to individual cells using tags however historic data in a table is not yet supported. Use frames to group other content elements or add backgrounds and borders to sections of your report. Drag elements into the frame to group them, then simply position the frame on the page as necessary. Sections are used to group similar content and can be used to include a special header and footer that will appear on every page the content spans over. Note: the use of sections is currently very limited. Click or drag-drop an element icon from the editor toolbar to insert it into the page designer. An element can be removed by clicking the x delete button next to the element in the main panel or selecting the element and pressing the DELETE key. Copy and Paste elements using CTRL+C and CTRL+V shortcut keys to quickly duplicate selected elements. Select the element by either clicking directly on the element’s bounding box in the page designer, click and drag a selection box around elements, or clicking the element in the main panel. Additional elements can be added to the selection by holding the SHIFT key. Drag the element(s) around the page designer to reposition or use the Arrow keys for small adjustments. Click and drag the bounding box handles to resize an element. Note: elements will automatically snap to the designer grid during reposition and resize operations but you can temporarily disable this behaviour by toggling the grid off. When multiple elements are selected, the alignment toolbar is available to quickly align elements together on the page. When an element is selected its properties, display, style and conditional style options are displayed in the detail panel. The general properties section includes the element content, position and size. 
DISPLAY section options customise the behaviour of the element in the PDF, including preventing the content from being split over multiple pages, adding hyperlinks, etc. STYLE allows a predefined style to be applied, or the appearance of elements to be customised individually. CONDITIONAL STYLE allows a different style (predefined or individually applied) to be set when a user-defined Condition is met. An example of a conditional statement is 11 > 10, or using tags {{Parameter.currentValue}} > 10. The + add style button in the main panel adds new predefined styles to the report. Headings, paragraphs and other content elements generally share similar styles. You can set the styles on every element individually, but this can be tedious. It's more efficient to predefine a set of styles and then simply select the Style name on each element you want to apply it to.
https://docs.eagle.io/en/latest/topics/reports/editor/index.html
2021-07-23T22:14:34
CC-MAIN-2021-31
1627046150067.51
[array(['../../../_images/reports_editor_mainpanel.jpg', '../../../_images/reports_editor_mainpanel.jpg'], dtype=object)]
docs.eagle.io
Using MathJax Components in Node¶ This page is still under construction. It is possible to use MathJax in a node application in essentially the same way that it is used in a browser. In particular, you can load MathJax components and configure MathJax using a global MathJax object and loading the startup component or a combined component file via node's require() command. See the MathJax node demos for examples of how to use MathJax from a node application. In particular, see the component-based examples for illustrations of how to use MathJax components in a node application. More information will be coming to this section in the future.
https://docs.mathjax.org/en/latest/server/components.html
2021-07-23T21:22:19
CC-MAIN-2021-31
1627046150067.51
[]
docs.mathjax.org
Installation Guide help EJB CDI, and then adds in JSF, persistence and transactions, EJB, Bean Validation, RESTful web services and more. Jakarta.
https://docs.wildfly.org/22/
2021-07-23T21:47:42
CC-MAIN-2021-31
1627046150067.51
[array(['images/splash_wildflylogo_small.png', 'WildFly'], dtype=object)]
docs.wildfly.org
Clear Linux* OS on DigitalOcean*¶ This guide explains how to import a Clear Linux* OS image to DigitalOcean and then deploy a VM instance. Prerequisites¶ Set up a DigitalOcean account. Create an SSH key on your client system that you will use to remote into the VM. You can follow the DigitalOcean’s SSH key creation guide. Add Clear Linux OS Image to DigitalOcean¶ Before you can deploy a Clear Linux OS instance on DigitalOcean, you need to add an image since it’s currently not available in its marketplace. You can use our pre-built image or you can build your own custom image. Use pre-built image¶ Note Our cloud images (clear-<release version>-digitalocean.img.gz) for DigitalOcean are considered Beta until we finish setting up our automated testing of the images against the DigitalOcean environment. Apart from the initial version, clear-31870-digitalocean.img.gz, we cannot guarantee that future versions and updates to the initial version is problems-free. Copy the URL for clear-31870-digitalocean.img.gz. Build custom image¶ For this method, you need a Clear Linux OS system to generate an image using the clr-installer tool. Add the clr-installer and gzip bundles. sudo swupd bundle-add clr-installer gzip Create an image configuration YAML file. See Installer YAML Syntax for more information on the clr-installer configuration YAML syntax. cat > clear-digitalocean.yaml << EOF #clear-linux-config # switch between aliases if you want to install to an actual block device # i.e /dev/sda block-devices: [ {name: "bdevice", file: "clear-digitalocean.img"} ] targetMedia: - name: \${bdevice} size: "800M" type: disk children: - name: \${bdevice}1 fstype: ext4 options: -O ^64bit mountpoint: / size: "800M" type: part bundles: [ bootloader, openssh-server, os-cloudguest, os-core, os-core-update, systemd-networkd-autostart ] autoUpdate: false postArchive: false postReboot: false telemetry: false legacyBios: true keyboard: us language: en_US.UTF-8 kernel: kernel-kvm version: 0 EOF The settings that are required in order to make the image work on DigitalOcean are: os-cloudguest bundle: Allows DigitalOcean to provision the image with settings such as hostname, resource (CPU, memory, storage) sizing, and user creation. legacyBios: true: The image need to support legacy BIOS to boot on DigitalOcean. Generate the image. sudo clr-installer -c clear-digitalocean.yaml The output should be clear-digitalocean.img. Compress the image with gzip to save bandwidth and upload time. gzip clear-digitalocean.img The output should be clear-digitalocean.img.gz. Note bzip2 is the other compression format DigitalOcean accepts. Upload image¶ On DigitalOcean’s website, go to. See Figure 1. Select an upload method. Set the DISTRIBUTION type as Unknown. See Figure 3. Choose your preferred datacenter region. Click Upload Image. Wait for the upload to finish before proceeding to the next section. Create and Deploy a Clear Linux OS Instance¶ On DigitalOcean’s website, go to Create Droplet.and then click See Figure 4. Under Choose an image, select Custom images. See Figure 5. Select your uploaded Clear Linux OS image. Under Choose a plan, select your preferred plan. See Figure 6. Under Choose a datacenter region, select the region you want the instance deployed to. See Figure 7. Assign SSH key to default clear user. By default, the user clear will be added to the instance and an SSH key must be assigned to this account. Under Authentication, select SSH keys and click New SSH Key. See Figure 8. 
Copy and paste your SSH public key in the SSH key content text field. See Figure 9. Give a name for the SSH key. Click Add SSH Key. Note If you need to add additional users to the instance, you can do that wth a YAML-formatted cloud-config user data script. For more information on cloud-config scripting for Clear Linux OS, see our subset implementation of cloud-init called micro-config-drive. Under Select additional options, select User data. Add your YAML-formatted cloud-config user data in the field below. Here is a simple example: #cloud-config users: - name: foobar gecos: Foo B. Bar homedir: /home/foobar ssh-authorized-keys: - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC65OihS4UP27xKOpqKWgT9 mgUNwEqhUEpTGGvopjT65Y/KU9Wfj6EYsdGzbHHcMUhFSTxAUAV4POH5d0LR MzI7sXMe528eCmpm2fTOHDDkVrurP/Jr2bjB9IrfSMkBYS8uRd603xNg/RDq EH3XzVeEDdEAxoej0mzsJ2UkQSBi+PD1J7JeCbX2lsb55x2yWzaUa+BTai7+ /TU4UabTRDtFTiXhx2rImSSguofDISVll6W5TTzbGmHdoEI+8DIAFU66ZgC9 SzL75LQi1YAWlj5XG+dXhN6Ev6KFM34odvWdxeCj0jcx5UIXcieBfOuLujEH dVybwNLG7hxDy/67BA1j [email protected] sudo: - [ "ALL=(ALL) NOPASSWD:ALL" ] Under Finalize and create: Set the number of instances you want to deploy. Set the hostname for the instance. See Figure 10. Click Create Droplet to deploy the instance. Connect to Your Clear Linux OS Instance¶ On DigitalOcean’s website, go to. See Figure 11. Get the IP address of your Clear Linux OS instance. On your client system, SSH into your instance. For example: ssh clear@<IP-address-of-instance> -i <SSH-private-key>
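The steps above use the DigitalOcean control panel. If you prefer to script the deployment, the same droplet creation can be done with DigitalOcean's v2 REST API once the custom image has been uploaded. This is only a sketch: the API token, image name, region, size, SSH key fingerprint, and user data are placeholders, and the region must match the one the custom image was uploaded to.

```python
import requests

API = "https://api.digitalocean.com/v2"
TOKEN = "your-digitalocean-api-token"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Look up the uploaded custom image by name (placeholder name shown here).
images = requests.get(
    f"{API}/images", headers=HEADERS, params={"private": "true"}, timeout=30
).json()["images"]
image_id = next(i["id"] for i in images if i["name"] == "clear-31870-digitalocean")

# Create a droplet from the custom image; region, size, and key are examples.
droplet = {
    "name": "clearlinux-demo",
    "region": "nyc3",
    "size": "s-1vcpu-1gb",
    "image": image_id,
    "ssh_keys": ["aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99"],
    "user_data": "#cloud-config\nusers:\n  - name: foobar\n",
}
resp = requests.post(f"{API}/droplets", headers=HEADERS, json=droplet, timeout=30)
resp.raise_for_status()
print("Droplet ID:", resp.json()["droplet"]["id"])
```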
https://docs.01.org/clearlinux/latest/get-started/cloud-install/digitalocean.html
2021-07-23T21:27:52
CC-MAIN-2021-31
1627046150067.51
[]
docs.01.org
Content Services can be configured as a single-instance, multi-tenant environment supporting a limited set of multi-tenancy (MT) features. Multi-tenancy allows multiple, independent tenants to be hosted on a single instance, which can be installed either on a single server or across a cluster of servers. The Content Services instance is logically partitioned such that it’ll appear to each tenant that they’re accessing a completely separate instance of the Alfresco repository. This can be useful for SaaS providers who make Content Services available to their customers under an OEM agreement. Enabling multi-tenancy will restrict what repository features can be used, as described in Features not supported in a multi-tenant environment. Enable multi-tenancy The multi-tenancy feature is pre-configured out-of-the-box, although it’s not enabled by default. When you install Content Services, multi-tenancy is disabled. The multi-tenancy feature is automatically enabled when the first tenant is created. Note: Only an Administrator user can create tenants. Note: If you have pre-existing user logins with syntax <name>@<domain>, you should not create a tenant with that domain name. This will break the login functionality of the existing users with logins <name>@<domain>. However, if you wish to disable multi-tenancy, you need to delete all the tenants. See Managing tenants for more information. Manage tenants The default administrator user has access to the default environment and can be considered to be a “super tenant”. Use the Tenant Console in the Admin Console to manage tenants. Open the Tenant Console and create a tenant by specifying the tenant domain, the tenant administrator password and, optionally, the root content store directory, for example: create customer_tenant.com l3tm31n /usr/tenantstores/customer_tenant. If the root content store directory is not specified, or does not exist, the repository default root content store will be used (as specified by the dir.contentstore property). Note: If you have pre-existing user logins with syntax <name>@<domain>, you should not create a tenant with that domain name. This will break the login functionality of the existing users with logins <name>@<domain>. Multi-tenancy administration When a tenant is created and enabled, the tenant administrator can access the administration features, like the Repo Admin Console and the Share Admin Tools. Multi-tenancy implementation To implement multi-tenancy, Content Services has been logically partitioned such that each tenant has access to their own set of tenant-specific stores. These stores are typically routed to their own physical root directory. This also means that indexes are partitioned, since Content Services maintains an index per store. All related services are partitioned including node services, security services, workflow services, search and index services, and dictionary services. To support Alfresco Share in a multi-tenant environment, additional partitioned services include site services, activity services, invite services, and AVM services. The metadata is logically partitioned within the database schema. Logging enables nested diagnostic context (NDC). For a single tenant environment, the log output will show the user name context. For a multi-tenant environment, the log output also shows the tenant context. Modules Content Services supports the ability to pre-package AMPs (Alfresco Module Packages) into the Content Services WAR, which are installed into the default domain on start up. In a multi-tenant environment, the module is also installed into each tenant domain when the tenant is created or imported. 
Features not supported in a multi-tenant environment There are some features and components that are not supported in a multi-tenant production environment. Using multi-tenancy you can configure multiple, independent tenants on a single Content Services instance. However, multi-tenancy is not supported in the following products and features: - Alfresco Desktop Sync - Alfresco Governance Services - Smart Folders - Content replication - Encrypted Content Store - Document Transformation Engine - EMC Centera Connector - Alfresco Mobile Applications (they use the default tenant and can’t switch between tenants) - Alfresco Outlook Integration - Alfresco Media Management - Activiti Workflow Console Multi-tenancy is also not supported for the following methods: - Any authentication methods other than alfrescoNtlm - Inbound email - IMAP
https://docs.alfresco.com/content-services/6.2/admin/multi-tenancy/
2021-07-23T23:26:02
CC-MAIN-2021-31
1627046150067.51
[]
docs.alfresco.com
Installing Commerce Kickstart 2.x on a Windows Server running WampServer is a great way to get your shop up and going on the latest 64 bit server technology. There are a few pointers below to help you get the most out of this setup. Software versions What follows are two problem areas where we present the issue and our solution. The stock Drupal Commerce Kickstart 2.x enables a number of modules that require a cURL library, so it needs to be enabled. Enabling the cURL library in WampServer can easily be done by using the tray menu and selecting it from the list of PHP extensions. However, it never gets loaded properly because of an outdated dll file. We figured out this was the problem by examining the PHP error log. To enable a working version of the cURL extension (php_curl-5.3.13-VC9-x64.zip), you must first download the new extension from the end of this blogpost and replace the one at [path-to-wamp]\bin\php\php5.3.13\ext. Symptom: The page doesn’t load sometimes and the browser displays the following error: Error 101 (net::ERR_CONNECTION_RESET) Additionally, items like these appear in the Apache error log. Solution: Disable XDebug to avoid the error by manually editing php.ini. You can do this by searching the php.ini file found at path-to-wamp\bin\apache\apache2.2.22\bin\ for the word “xdebug” and commenting out the appropriate lines. Be sure to restart your server for the configuration to take effect. Drupal.org issues which might be related:
https://docs.drupalcommerce.org/commerce1/commerce-kickstart-2/installing-and-upgrading/installing-using-wampserver
2021-07-23T21:07:16
CC-MAIN-2021-31
1627046150067.51
[]
docs.drupalcommerce.org
Storage usage quota - Introduced in GitLab 12.0. - Moved to GitLab Free. A project’s repository has a free storage quota of 10 GB. When a project’s repository reaches the quota it To help manage storage, a namespace’s owner can view: - Total storage used in the namespace - Total storage used per project To view storage usage, from the namespace’s page go to Settings > Usage Quotas and select the Storage tab. The Usage Quotas statistics are updated every 90 minutes. If your namespace shows N/A as the total storage usage, push a commit to any project in that namespace to trigger a recalculation. A stacked bar graph shows the proportional storage used for the namespace, including a total per storage item. Click on each project’s title to see a breakdown per storage item. Storage usage statistics - Introduced in GitLab 13.7. - It’s deployed behind a feature flag, enabled by default. - It’s enabled on GitLab SaaS. - It’s recommended for production use. The following storage usage statistics are available to an owner: - Total namespace storage used: Total amount of storage used across projects in this namespace. - Total excess storage used: Total amount of storage used that exceeds their allocated storage. - Purchased storage available: Total storage that has been purchased but is not yet used. Excess () beside their name. Excess).
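The Usage Quotas page described above is the primary interface, but storage figures for an individual project can also be read programmatically through the GitLab REST API's project statistics (a token with sufficient permissions is required). This is a sketch; the instance URL, project ID, and token are placeholders, and the exact fields returned can vary between GitLab versions.

```python
import requests

GITLAB_URL = "https://gitlab.com"     # or your self-managed instance
PROJECT_ID = 12345                    # placeholder project ID
TOKEN = "your-personal-access-token"  # needs read_api scope

resp = requests.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}",
    headers={"PRIVATE-TOKEN": TOKEN},
    params={"statistics": "true"},
    timeout=30,
)
resp.raise_for_status()
stats = resp.json()["statistics"]
print("repository_size:", stats["repository_size"])
print("storage_size:   ", stats["storage_size"])
```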
https://docs.gitlab.com/ee/user/usage_quotas.html
2021-07-23T22:25:30
CC-MAIN-2021-31
1627046150067.51
[]
docs.gitlab.com
CollisionObject2D¶ Inherits: Node2D < CanvasItem < Node < Object Inherited By: Area2D, PhysicsBody2D Category: Core Signals¶ Emitted when an input event occurs and input_pickable is true. See _input_event for details. - mouse_entered ( ) Emitted when the mouse pointer enters any of this object’s shapes. - mouse_exited ( ) Emitted when the mouse pointer exits all this object’s shapes. Member Variables¶ Description¶. Member Function Description¶ - void _input_event ( Object viewport, InputEvent event, int shape_idx ) virtual Accepts unhandled InputEvents. shape_idx is the child index of the clicked Shape2. Returns true if collisions for the shape owner originating from this CollisionObject2D will not be reported to collided with CollisionObject2Ds. Removes the given shape owner. Returns the owner_id of the given shape. Adds a Shape2D to the shape owner. Removes all shapes from the shape owner. Returns the parent object of the given shape owner. Returns the Shape2D with the given id from the given shape owner. Returns the number of shapes the given shape owner contains. Returns the child index of the Shape2D with the given id from the given shape owner. - Transform2D shape_owner_get_transform ( int owner_id ) const Returns the shape owner’s Transform2D. Removes a shape from the given shape owner. If true disables the given shape owner. If enable is true, collisions for the shape owner originating from this CollisionObject2D will not be reported to collided with CollisionObject2Ds. - void shape_owner_set_transform ( int owner_id, Transform2D transform ) Sets the Transform2D of the given shape owner.
https://docs.godotengine.org/en/3.0/classes/class_collisionobject2d.html
2021-07-23T21:58:49
CC-MAIN-2021-31
1627046150067.51
[]
docs.godotengine.org
TeX Input Processor Options¶ The options below control the operation of the TeX input processor that is run when you include 'input/tex', 'input/tex-full', or 'input/tex-base' in the load array of the loader block of your MathJax configuration, or if you load a combined component that includes the TeX input jax. They are listed with their default values. To set any of these options, include a tex section in your MathJax global object. The Configuration Block¶ MathJax = { tex: { packages: ['base'], // extensions to use inlineMath: [ // start/end delimiter pairs for in-line math ['\\(', '\\)'] ], displayMath: [ // start/end delimiter pairs for display math ['$$', '$$'], ['\\[', '\\]'] ], processEscapes: true, // use \$ to produce a literal dollar sign processEnvironments: true, // process \begin{xxx}...\end{xxx} outside math mode processRefs: true, // process \ref{...} outside of math mode digits: /^(?:[0-9]+(?:\{,\}[0-9]{3})*(?:\.[0-9]*)?|\.[0-9]+)/, // pattern for recognizing numbers tags: 'none', // or 'ams' or 'all' tagSide: 'right', // side for \tag macros tagIndent: '0.8em', // amount to indent tags useLabelIds: true, // use label name rather than tag for ids maxMacros: 1000, // maximum number of macro substitutions per expression maxBuffer: 5 * 1024, // maximum size for the internal TeX string (5K) baseURL: // URL for use with links to tags (when there is a <base> tag in effect) (document.getElementsByTagName('base').length === 0) ? '' : String(document.location).replace(/#.*$/, '')), formatError: // function called when TeX syntax errors occur (jax, err) => jax.formatError(err) } }; Note that some extensions make additional options available. See the TeX Extension Options section below for details. Note The default for processEscapes has changed from false in version 2 to true in version 3. Note Prior to version 3.2, the multlineWidth option used to be in the main tex block, but it is now in the ams sub-block of the tex block. Version 3.2 includes code to move the configuration from its old location to its new one, but that backward-compatibility code will be removed in a future vesion. Option Descriptions¶ packages: ['base'] This array lists the names of the packages that should be initialized by the TeX input processor. The input/tex and input/tex-full components automatically add to this list the packages that they load. If you explicitly load addition tex extensions, you should add them to this list. For example: MathJax = { loader: {load: ['[tex]/enclose']}, tex: { packages: {'[+]': ['enclose']} } }; This loads the enclose extension and acticates it by including it in the package list. You can remove packages from the default list using '[-]'rather than [+], as in the followiong example: MathJax = { tex: { packages: {'[-]': ['noundefined']} } }; This would disable the noundefined extension, so that unknown macro names would cause error messages rather than be displayed in red. If you need to both remove some default packages and add new ones, you can do so by including both within the braces: MathJax = { loader: {load: ['[tex]/enclose']}, tex: { packages: {'[-]': ['noundefined', 'autoload'], '[+]': ['enclose']} } }; This disables the noundefined and autoload extensions, and adds in the enclose extension. inlineMath: [['\(','\)']] This is an array of pairs of strings that are to be used as in-line math delimiters. The first in each pair is the initial delimiter and the second is the terminal delimiter. You can have as many pairs as you want. 
For example, inlineMath: [ ['$','$'], ['\\(','\\)'] ] would cause MathJax to look for $...$ and \(...\) as delimiters for in-line mathematics. (Note that the single dollar signs are not enabled by default because they are used too frequently in normal text, so if you want to use them for math delimiters, you must specify them explicitly.) displayMath: [ ['$$','$$'], ['\[','\]'] ] This is an array of pairs of strings that are to be used as delimiters for displayed equations. The first in each pair is the initial delimiter and the second is the terminal delimiter. You can have as many pairs as you want. processEscapes: false When set to true, you may use \$ to represent a literal dollar sign, rather than using it as a math delimiter, and \\ to represent a literal backslash (so that you can use \\\$ to get a literal \$ or \\$...$ to get a backslash just before in-line math). When false, \$ will not be altered, and its dollar sign may be considered part of a math delimiter. Typically this is set to true if you enable the $ ... $ in-line delimiters, so you can type \$ and MathJax will convert it to a regular dollar sign in the rendered document. processRefs: true When set to true, MathJax will process \ref{...} outside of math mode. processEnvironments: true When true, tex2jax looks not only for the in-line and display math delimiters, but also for LaTeX environments ( \begin{something}...\end{something}) and marks them for processing by MathJax. When false, LaTeX environments will not be processed outside of math mode. digits: /^(?:[0-9]+(?:\{,\}[0-9]{3})*(?:\.[0-9]*)?|\.[0-9]+)/ This gives a regular expression that is used to identify numbers during the parsing of your TeX expressions. By default, the decimal point is . and you can use {,} between every three digits before that. If you want to use {,} as the decimal indicator, use MathJax = { tex: { digits: /^(?:[0-9]+(?:\{,\}[0-9]*)?|\{,\}[0-9]+)/ } }; tags: 'none' This controls whether equations are numbered and how; it can be set to 'none', 'ams' or 'all' (see the comment in the configuration block above). tagSide: 'right' This specifies the side on which \tag{} macros will place the tags, and on which automatic equation numbers will appear. Set it to 'left' to place the tags on the left-hand side. tagIndent: "0.8em" This is the amount of indentation (from the right or left) for the tags produced by the \tag{} macro or by automatic equation numbers. useLabelIds: true This controls whether element IDs for tags use the \label name or the equation number. When true, use the label, when false, use the equation number. maxMacros: 10000 Because a definition of the form \def\x{\x} \x would cause MathJax to loop infinitely, the maxMacros constant is used to limit the number of macro substitutions allowed in any single expression. maxBuffer: 5 * 1024 For the same reason, the maxBuffer constant is used to limit the size of the string being processed by MathJax. It is set to 5KB, which should be sufficient for any reasonable equation. baseURL: (document.getElementsByTagName('base').length === 0) ? '' : String(document.location).replace(/#.*$/, '')) This is the base URL to use when creating links to tagged equations (via \ref{} or \eqref{}) when there is a <base> element in the document that would affect those links. You can set this value by hand if MathJax doesn’t produce the correct link. formatError: (jax, err) => jax.formatError(err) This is a function that is called when the TeX input jax reports a syntax or other error in the TeX that it is processing. The default is to generate an <merror> MathML element with the message indicating the error that occurred. 
You can override the function to perform other tasks, like recording the message, replacing the message with an alternative message, or throwing the error so that MathJax will stop at that point (you can catch the error using promises or a try/carchblock). The remaining options are described in the Options Common to All Input Processors section. Developer Options¶ In addition to the options listed above, low-level options intended for developers include the following: FindTeX: null The FindTeXobject instance that will override the default one. This allows you to create a subclass of FindTeXand pass that to the TeX input jax. A nullvalue means use the default FindTeXclass and make a new instance of that. TeX Extension Options¶ Several of the TeX extensions make additional options available in the tex block of your MathJax configuration. These are described below. Note that the input/tex component, and the combined components that load the TeX input jax, include a number of these extensions automatically, so some these options will be available by default. For example, the configmacros package adds a macros block to the tex configuration block that allows you to pre-define macros for use in TeX espressions: MathJax = { tex: { macros: { R: '\\mathbf{R}' } } } The options for the various TeX packages (that have options) are described in the links below: Setting Options from within TeX Expressions¶ It is sometimes convenient to be able to change the value of a TeX or TeX extension option from within a TeX expression. For example, you might want to change the tag side for an individual expression. The setoptions extension allows you to do just that. It defines a \setOptions macro that allows you to change the values of options for the TeX parser, or the options for a given TeX package. Because this functionality can have potential adverse consequences on a page that allows community members to enter TeX notation, this extension is not loaded by default, and can’t be loaded by require{}. You must load it and add it to the tex package list explicitly in order to allow the options to be set. The extension has configuration parameters that allow you to control which packages and options can be modified from within a TeX expression, and you may wish to adjust those if you are using this macro in a community setting.
https://docs.mathjax.org/en/latest/options/input/tex.html
2021-07-23T21:27:23
CC-MAIN-2021-31
1627046150067.51
[]
docs.mathjax.org
Can't open SharePoint documents in a local client (rich client) from Chrome when NPAPI plug-in is missing Symptoms When you try to open a file from an on-premises installation of Microsoft SharePoint Server or from SharePoint Online in classic view, the browser defaults to either downloading a local copy or trying to open the file in the browser. This issue occurs regardless of your library settings. Additionally, many other SharePoint integration features don't work. These include the following: - The ability to open documents in a rich client app when a document library link is selected - Lync Presence information - The ability to export to Excel from a document library or a SharePoint list - The Edit Library button and functionality - The **New Quick Step **option - The Create a workflow in SharePoint Designer option under Workflow Settings - The Open with Access and Open with Project options - The Edit Lists feature Cause This issue occurs because Netscape Plugin API (NPAPI) support is disabled. In September 2015, NPAPI support was permanently removed from Chrome and Chromium based browsers (such as Microsoft Edge). Installed extensions that require NPAPI plugins can no longer load those plugins. For more information, see The final countdown for NPAPI. Workarounds - Internet Explorer 11 relies on ActiveX controls instead of NPAPI, and will open Office files in an Office rich client application. - You can open the file in the browser by using Office Web Apps, and then select Edit in Word from the web editor to open the file in a rich client application. - You can also open the file in the rich Office client application from the Document library directly in Chrome or Microsoft Edge with the Edit button from the document library's file ellipsis menu in the preview pane. More Information SharePoint Server 2013 has reached the end of its support lifecycle and is no longer receiving feature fixes or new functionality. Microsoft is providing extended support for SharePoint Server 2013 until April 11, 2023. For more information about the servicing policy, see Product Servicing Policy for SharePoint 2013. Microsoft recommends that customers migrate to current product versions before the support end date. This way enables the customers to take advantage of the latest product innovations, and ensures uninterrupted support from Microsoft. We encourage customers to evaluate transition to Microsoft 365 with the help of their Microsoft representatives or technology partner. Third-party information disclaimer The third-party products that this article discusses are manufactured by companies that are independent of Microsoft. Microsoft makes no warranty, implied or otherwise, about the performance or reliability of these products. Still need help? Go to SharePoint Community.
https://docs.microsoft.com/en-US/sharepoint/troubleshoot/lists-and-libraries/cant-open-sp-documents-from-chrome
2021-07-23T23:47:32
CC-MAIN-2021-31
1627046150067.51
[]
docs.microsoft.com
Dashboards¶ Dashboards are a collection of charts assembled to create a single unified display of your data. Each chart shows data from a single MongoDB collection or view, so dashboards are essential to attain insight into multiple focal points of your data in a single display. Dashboards can be shared with other users. Dashboards Tab¶ The Dashboards tab shows all dashboards you have access to view. To learn more about dashboard permissions in MongoDB Charts, see Dashboard Permissions. Each dashboard shows the following information: - Title - Description - A preview of the first three charts in the dashboard, including the chart title and type - When the dashboard was last modified By default, the most recently modified dashboards are shown first in the list. You can change the sort order by using the Sort By dropdown menu. Create a New Dashboard¶ To create a new dashboard: - From the Dashboards tab, click the New Dashboard button. - In the New Dashboard dialog, enter a title for your dashboard. - (Optional) Enter a description for your dashboard. - Click Create. After after clicking the Create button you are taken to the page for your newly created dashboard, where you are prompted to add the first chart to your dashboard: Refresh Dashboard Data¶ You can refresh dashboard data to update all charts on the dashboard with the most current data from their respective data sources. When MongoDB Charts loads charts on a dashboard, it does not consistently query the data source for each chart. Instead, MongoDB Charts queries the data source when the dashboard first loads, and that data is stored in the browser cache and used to render the charts. MongoDB Charts provides options to both manually refresh dashboard data and configure the dashboard to automatically refresh at a specified time interval. These options allow you to control how current the data displayed on your dashboard is. By default, when you first create a dashboard, it is configured to refresh its data every hour. The Auto text next to the icon signifies that auto refresh is enabled. Manually Refresh Dashboard Data¶ To manually refresh dashboard data, first select a dashboard from the Dashboards tab, then click the button at the top-right of the dashboard. Configure Auto Refresh Settings¶ You can configure auto refresh settings to change the interval at which the dashboard data is refreshed. The dashboard shows the time its data was last updated and when the next update will occur at the top-right of the view. To configure auto refresh settings for a dashboard: - From the dashboard view, click the arrow next to the button and click Auto Refresh Settings. - Select the desired refresh interval. - Click Save. Auto refresh settings are stored in the local browser state. Settings dictating whether auto refresh is enabled and its configured interval are not persisted with the dashboard or shared with other users. Disable and Enable Auto Refresh¶ To disable auto refresh, click the arrow next to the button and click Disable Auto Refresh. To enable auto refresh, click the arrow next to the button and click Enable Auto Refresh. Fullscreen View¶ MongoDB Charts provides a fullscreen view for dashboards. In this view, MongoDB Charts hides the main navigation bar and exapands the dashboard to show the title, description, time of last modification, and charts in the entire space of the screen. 
To open a dashboard in fullscreen view, first select a dashboard from the Dashboards tab, then click the expanding arrows at the top-right of the dashboard: You can still remove, resize, rearrange, and access editing for charts in fullscreen view by hovering over the desired chart. Additionally, in fullscreen view you can still configure auto-refresh settings and manually refresh chart data using the refresh button. To exit fullscreen view, either click the contracting arrows at the top-right of the dashboard or press the escape key.
https://docs.mongodb.com/charts/current/dashboards/
2021-07-23T21:23:30
CC-MAIN-2021-31
1627046150067.51
[array(['/charts/current/images/charts/charts-dashboard-landing.png', 'Charts Dashboard Tab'], dtype=object) array(['/charts/current/images/charts/charts-dashboard-new.png', 'Charts New Dashboard Example'], dtype=object) array(['/charts/current/images/charts/full-screen-view.png', 'Fullscreen Arrows'], dtype=object) ]
docs.mongodb.com
$ atomic-openshift-installer install. This installation method is provided to make the installation experience easier by interactively gathering the data needed to run on each host. The installer is a self-contained wrapper intended for usage on a Red Hat Enterprise Linux (RHEL) 7 system. While RHEL Atomic Host is supported for running containerized OpenShift Container Platform services, the installer is provided by an RPM not available by default in RHEL Atomic Host, and must therefore be run from a RHEL 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift Container Platform cluster, but it can be. In addition to running interactive installations from scratch, the atomic-openshift-installer command can also be run or re-run using a predefined installation configuration file. This file can be used with the installer to: run an unattended installation, add nodes to an existing cluster, reinstall the OpenShift Container Platform cluster completely. Alternatively, you can use the advanced installation method for more complex environments. The installer allows you to install OpenShift Container Platform master and node components on a defined set of hosts. interested in installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see RPM vs Containerized to ensure that you understand the differences between these methods, then return to this topic to continue..3 Container Platform cluster. You can uninstall OpenShift Container Platform on all hosts in your cluster using the installer by running: $ atomic-openshift-installer uninstall See the advanced installation method for more options. Now that you have a working OpenShift Container Platform instance, you can: Configure authentication; by default, authentication is set to Deny All. Configure the automatically-deployed integrated Docker registry. Configure the automatically-deployed router.
https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html
2021-07-23T23:21:27
CC-MAIN-2021-31
1627046150067.51
[]
docs.openshift.com
The basic units of OpenShift Enterprise applications are called containers. OpenShift Enterprise and Kubernetes add the ability to orchestrate Docker-formatted containers across multi-host installations. Though you do not directly interact with the Docker CLI or service when using OpenShift Enterprise, understanding their capabilities and terminology is important for understanding their role in OpenShift Enterprise and how your applications function inside of containers. The docker RPM is available as part of RHEL 7, as well as CentOS and Fedora, so you can experiment with it separately from OpenShift Enterprise. Refer to the article Get Started with Docker Formatted Container Images on Red Hat Systems for a guided introduction. Containers in OpenShift Enterprise can provide redundancy and horizontal scaling for a service packaged into an image. You can use the Docker CLI directly to build images, but OpenShift Enterprise can also supply its own internal registry for managing custom container images. The relationship between containers, images, and registries is depicted in the following diagram:
https://docs.openshift.com/enterprise/3.2/architecture/core_concepts/containers_and_images.html
2021-07-23T22:34:21
CC-MAIN-2021-31
1627046150067.51
[]
docs.openshift.com
Using push collections with multiple query processors Push collections can be setup to have multiple query processors. We will refer to the machine which allows addition and deletes of documents as the admin. We refer to the machine that a query processors connects to as a query processor’s master. You can setup multiple query processors to connect to a single admin as well as daisy chain query processors to other query processors. Thus a query processor can be a master. The query processors will only be able to fetch data from admin once the data has been committed. Query processors only replicate indexes, document data as well as some internal Push files. You should look at supporting_multiple_query_processors to find out about fetching query and click logs from query processors to the admin machine for better search quality and analytics. Caveat: This feature is not supported on Windows. Setting up a query processor Push query processors work on a pull model which means the admin does not know about its query processors, so almost all of the configuration will be on the query processors. If you wish to daisy chain slaves you should connect the slaves closest to the master server and work your way out. To setup a Push query processor: Ensure the master server and the query processor share the same server secret, see server secret (global.cfg). Create a push collection on the the query processors with the same name as the push collection you wish to replicate on master. Create the push collection on the the query processors following the multi-server setup steps. See: initial publication Before you make any changes to the Push collection you must first set the push.initial-mode to push.initial-mode=SLAVE. Now you must configure the query processor to talk to its master. Edit the following options in the query processor’s collection.cfg. You must set the push.replication.master.hostname to the hostname of the master e.g. push.replication.master.hostname=<master> You may need to set the push.replication.master.push-api.port to the jetty.admin_port configured in global.cfgon the master. If the slave is running on the same port as master this does not need to be set. You must now tell Push to start syncing through the push API. POST /push-api/v1/sync/collections/<collection>/state/start You can check if the Push collection is trying to synchronise with master by checking that its SyncState is Sync, by making the following request to the Push API: GET /push-api/v1/sync/collections/<collection>/state Promoting a query processor to Admin If your Admin machine suffers a failure you can promote one of your query processors to be the new Admin machine. To do this make the following call to the PushAPI on the query processors you wish to promote: POST /push-api/v1/collections/test-push2/mode/?mode=DEFAULT As query processors are not synchronised with the Admin machine, your new Admin machine may be missing some data. You should ensure that all of your query processors now point to the new master. To do this just edit the push.replication.master.* options as above and Push will automatically connect to the new master. 
Demoting an Admin to a query processor To demote an admin machine to a query processor you must first empty the push collection by making the following API request: DELETE /push-api/v1/collections/<collection> You must then change the mode to SLAVE by making the following API request: POST /push-api/v1/collections/<collection>/mode/?mode=SLAVE Reducing network load Currently Funnelback supports ignoring some files to reduce the load on your network, at the cost of reduced usability. You may ignore the following: Document data: resulting in cache copies being unavailable on the query processor. Delete lists: has no effect on the query processor. Important! If any of the push.replication.ignore.* options are set to true, you should not attempt to promote a query processor to Admin, as that will result in a corrupt Admin machine. Deleting a Push collection with slaves If a Push collection has slaves it can be difficult to delete that push collection. The easiest way to delete a Push collection with slaves is to first delete the Push collection on each of its slaves, i.e. delete the Push collections in the reverse order you set them up. This should be done because a Push collection must be stopped before it can be deleted, and a slave will constantly make requests to the Push collection, effectively preventing the Push collection from being stopped.
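As a concrete illustration of the Push API calls described above, here is a minimal Python sketch that starts synchronisation on a query processor and then checks its SyncState. The base URL, port and lack of authentication are assumptions — use the jetty.admin_port and credentials configured in your environment — and the collection name is hypothetical.

import requests

# Hypothetical query processor host/port; adjust to your environment.
BASE = "http://queryprocessor.example.com:8443/push-api/v1"
COLLECTION = "example-push"

# Tell the query processor to start synchronising with its master.
resp = requests.post(f"{BASE}/sync/collections/{COLLECTION}/state/start")
resp.raise_for_status()

# Confirm that the collection's SyncState is now Sync.
state = requests.get(f"{BASE}/sync/collections/{COLLECTION}/state")
print(state.status_code, state.text)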
https://docs.squiz.net/funnelback/docs/latest/build/data-sources/push/multiple-query-processors.html
2021-07-23T22:52:07
CC-MAIN-2021-31
1627046150067.51
[]
docs.squiz.net
You can configure the Broker to run in a secure manner. The use of a secure Broker results in the following changes: - The consoles prompt for a username and password to connect to the Broker. Without a secure Broker, consoles connect to the Broker without authenticating. - The other servers and clients use their respective clientConnect.conf files to determine what credentials to send to the Broker, just as they use clientConnect.conf to determine what credentials to send to a server. In particular, you can configure the clientConnect.conf files so that clients and servers prompt for connections to the Broker, as the console does, or specify the password in clientConnect.conf. Procedure - Choose a unique Smart Assurance username and password for the secure Broker credentials. You could use the username SecureBroker and the password Secure. The new username and password will be used by both servers and clients: - Servers will use these credentials to register with the Broker. - Clients will use these credentials to connect to the Broker and determine the location of a server. - Use the sm_edit utility to open a local copy of the clientConnect.conf file, located in BASEDIR/smarts/local/conf. Edit this file, used by all clients and servers, so that Smart Assurance programs send the SecureBroker/Secure credentials when connecting to the Broker. - Comment out the following line: *:<BROKER>:BrokerNonsecure:Nonsecure - Type a new line configuring a secure Broker. This new line is added below the BrokerNonsecure line that you commented out. #*:<BROKER>:BrokerNonsecure:Nonsecure *: <BROKER> : SecureBroker : Secure Conversely, you can configure clientConnect.conf so that clients and servers prompt for connections to the Broker, as well as other servers. In this example, it involves replacing the password Secure with <PROMPT>. *: <BROKER> : SecureBroker : <PROMPT> - Use sm_edit to make the following changes to the local serverConnect.conf file used by the Broker: - Delete the line granting <DEFAULT>/<DEFAULT> access to the Broker. - Change the BrokerNonsecure/Nonsecure line to grant Ping access rather than All access. Do not, however, delete this authentication record. - Add a new authentication record that grants All access to the SecureBroker/Secure credentials. This new record must be below the BrokerNonsecure/Nonsecure record. <BROKER>:BrokerNonsecure:Nonsecure:Ping <BROKER> : SecureBroker : Secure : All
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.5/sm-pub-smarts-security-config-guide-1015/GUID-5F3D8A3D-E74D-4E26-B3EC-F76FC3FD7896.html
2021-07-23T23:26:44
CC-MAIN-2021-31
1627046150067.51
[]
docs.vmware.com
IN THIS ARTICLE 1. How to set it up 2. How to export prospects into Woodpecker The API Key, which is necessary to connect Leadpresso with Woodpecker, is a part of the API Key and Integrations add-on. Click here to learn how to get it on Marketplace » 1. How to set it up In order to connect Woodpecker to Leadpresso, you need to start with generating an API Key from our app. Once the API Key is generated, copy it. Moving on to Leadpresso Once you have the API Key from Woodpecker, go to your Leadpresso dashboard, to the Integrations tab. There you’ll see a box for your API Key from Woodpecker on the right. Paste it there and click Save. If you’ve done everything correctly you should see the green success bar, which means you can now look for your new prospects in Leadpresso. Visit the Leadpresso webpage for more information on how to look for your perfect prospects and organize them in the app » How to export prospects into Woodpecker Once you have prepared the list of contacts that you want to move over to Woodpecker, simply click on the Export button in the top right corner of your list. Next, you’ll see a pop-up window with the export options. You can choose here whether you want to export prospects into a certain campaign or only to the main prospect base. You can also switch on the Update existing button if you want Leadpresso to update data about already existing prospects in Woodpecker. Besides that, you can also choose to export valid email addresses and set their Leadpresso status after exporting. Note that only contacts with email addresses can be exported to Woodpecker. After choosing your export settings just click on Confirm and your prospects will appear in Woodpecker. That’s it! Now you just need to find your potential customers and start sending campaigns to them. Kick off looking for your perfect prospects now and head over to Leadpresso »
https://docs.woodpecker.co/en/articles/5223207-how-to-integrate-leadpresso-with-woodpecker
2021-07-23T22:02:42
CC-MAIN-2021-31
1627046150067.51
[array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420127/e939a5c277b566650d671e79/file-ZYRBC3XBbX.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420133/df73c130bab25e4f99ddae15/file-2gvQupwkpa.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420138/73672f55653fb40f15d034c0/file-j8Q21IIIeS.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420141/755d15ae21b639fcb1d28bd5/file-ds2cuu5IP7.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420145/f09550a85373b0e05130d5d3/file-QCjPTP4yI1.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420147/a09ec12c04416a2258ea3c58/file-rPb0ziHr9x.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420154/28a76ff61529d9a8d7c35525/file-t4aZuwngeM.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420157/e13dd1c35ed84aaa0ab733d9/file-mPDBcVXDZi.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420169/936739c567e01fcd41eda625/file-NiCGKw4ITb.gif', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335420170/5a40249e0e2a537256073efd/file-IkkyqRPQhf.jpg', None], dtype=object) ]
docs.woodpecker.co
Many definitions of ontologies have been given in the last decade, but one that, in our opinion, best characterizes the essence of an ontology is based on the related definitions by [8]: An ontology is a formal, explicit specification of a shared conceptualisation. A conceptualisation refers to an abstract model of some phenomenon in the world which identifies the relevant concepts of that phenomenon. Explicit means that the type of concepts used and the constraints on their use are explicitly defined. Formal refers to the fact that the ontology should be machine understandable, i.e. the machine should be able to interpret the semantics of the information provided. Shared reflects the notion that an ontology captures consensual knowledge, that is, it is not restricted to some individual, but accepted by a group. Ontologies will allow structural and semantic definitions of documents, providing completely new possibilities: intelligent search instead of keyword matching, query answering instead of information retrieval, document exchange via ontology mappings, and definition of views on documents. RDF Schema [3]. Tim Berners-Lee calls this layered architecture the Semantic Web [2]. At the lowest level of the Semantic Web a generic mechanism for expressing machine readable semantics of data is required. The Resource Description Framework (RDF) [12] is this foundation for processing metadata, providing a simple data model and a standardized syntax for metadata. Basically, it provides the language for writing down factual statements. The next layer is the schema layer (provided by the RDF Schema specification [3]). We will show how a formal knowledge representation language can be used as the third, logical, layer. We will illustrate this by defining the ontology language OIL [6,10] as an extension of RDF Schema. OIL (Ontology Inference Layer), a major spin-off from the IST project On-To-Knowledge1 [7], is a Web-based representation and inference layer for ontologies, which unifies three important aspects provided by different communities: formal semantics and efficient reasoning support as provided by Description Logics, epistemological rich modeling primitives as provided by the Frame community, and a standard proposal for syntactical exchange notations as provided by the Web community. The content of the paper is organized as follows. In section 2 we provide a short introduction to RDF and RDF Schema. Section 3 provides a very brief introduction into OIL. Section 4 illustrates in detail how RDF Schema can be extended, using OIL as an example knowledge representation language. The result is an RDF Schema definition of OIL primitives, which allows one to express any OIL ontology in RDF syntax. In section 5 we discuss how our approach enables the added benefits of OIL, such as reasoning support and formal semantics, to be used on the Web, while retaining maximal compatibility with `pure' RDF(S). Finally, we provide our conclusions in section 6. <rdf:RDF> <rdf:Description <Publisher>World Wide Web Consortium</Publisher> </rdf:Description> </rdf:RDF>states that (the subject) has as publisher (the predicate) the W3C (the object). Since both the subject and the object of a statement can be resources, these statements can be linked in a chain: <rdf:RDF> <rdf:Description <Creator rdf: </rdf:Description> <rdf:Description <Email>[email protected]</v:Email> </rdf:Description> </rdf:RDF>States that (the subject) is created by staff member no. 85740 (the object). 
In the next statement, this same resource (staff member 85740) plays the role of subject to state that his email address is [email protected]. Finally, RDF statements are also resources, so that statements can be applied recursively to statements, allowing their nesting. All this leads to the underlying datamodel being a labelled hyper-graph, with each statement being a predicate-labelled link between object and subject. The graph is a hyper-graph since each node can itself again contain an entire graph. Despite the similarity in their names, RDF Schema fulfills a different role than XML Schema does. XML Schema, and also DTDs, prescribes the order and combination of tags in an XML document. In contrast, RDF Schema only provides information about the interpretation of the statements given in an RDF data model, but it does not constrain the syntactical appearance of an RDF description. Therefore, the definition of OIL in RDFS that will be presented in this document will not provide constraints on the structure of an actual OIL ontology. In this section we will briefly discuss the overall structure of RDFS and its main modeling primitives. Figure 3: An example OIL ontology, modelling the animal kingdom This language has been designed so that: An ontology in OIL is represented via an ontology container and an ontology definition part. For the container, we adopt the components defined by Dublin Core Metadata Element Set, Version 1.15. The ontology-definition part consists of an optional import statement, an optional rule-base and class, slot and axiom definitions. A class definition (class-def) associates a class name with a class description. This class description in turn consists of the type of the definition (either primitive, which means that the stated conditions for class membership are necessary but not sufficient, or defined, which means that these conditions are both necessary and sufficient), a subclass-of statement and zero or more slot-constraints. The value of a subclass-of statement is a (list of) class-expression(s). This can be either a class name, a slot-constraint, or a boolean combination of class expressions using the operators and, or and not, with the standard DL semantics. In some situations it is possible to use a concrete-type-expression instead of a class expression. A concrete-type-expression defines a range over some data type. Two data types that are currently supported in OIL are integer and string. Ranges can be defined using the expressions (min X), (max X)), (greater-than X), (less-than X), (equal X) and (range X Y). For example, (min 21) defines the data type consisting of all the integers greater than or equal to 21. As another example, (equal "xyz") defines the data-type consisting of the string "xyz". A slot-constraint (or property restriction) is a list of one or more constraints (restrictions) applied to a slot (property). Typical constraints are: An axiom asserts some additional facts about the classes in the ontology, for example, that the classes carnivore and herbivore are disjoint (that is, have no instances in common). Valid axioms are: <?xml version='1.0' encoding='ISO-8859-1'?> <rdf:RDF xmlns:rdf="" xmlns:rdfs="" xmlns:oil="" xmlns:dc="" xmlns:dcq="" <!-- The ontology defined in OIL with RDFS syntax--> </rdf:RDF>It is important to notice that namespace definitions are not import statements, and are therefore not transitive. An actual ontology also has to define the namespaces for RDF and RDFS via ?xmlns:rdf? 
and ?xmlns:rdfs?, otherwise, all elements of OIL that directly correspond to RDF and RDFS elements would not be available. The ontology-container of OIL provides metadata describing an OIL ontology. Because the structure and RDF-format of the Dublin Core element set is used, it is enough to import the namespace of the Dublin Core element set. Note that the fact that an OIL ontology should provide a container definition is an informal guideline in its RDFS syntax, because it is not possible to enforce this in the schema definition. Apart from the container, an OIL ontology consists of a set of definitions. The import definition is a simple list of references to other OIL modules that are to be included in this ontology. We make use of the XML namespace mechanism to incorporate this mechanism in our RDFS specification. Notice again that, in contrast to the import statement in OIL, ?inclusion? via the namespace definition is not transitive. Figure 4: The OIL extensions to RDFS in the subsumption hierarchy. To illustrate the use of these extensions, we will walk through them by means of some example OIL class definitions that need to be represented in RDFS syntax: class-def defined herbivore subclass-of animal slot-constraint eats value-type (plant or (slot-constraint is-part-of has-value plant)) class-def elephant subclass-of herbivore mammmal slot-constraint eats value-type plant slot-constraint colour has-filler grey The first defines a class ``herbivore'', a subclass of animal, whose instances eat plants or parts of plants. The second defines a class ``elephant'', which is a subclass of both herbivore and mammal. <rdfs:Class rdf: </rdfs:Class>From this definition it is not yet clear that this class is a defined class. We chose to introduce two extra classes in the OIL namespace, named PrimitiveClass and DefinedClass. In a particular class definition, we can use one of these two ways to express that a class is a defined class: <rdfs:Class rdf: <rdf:type rdf: </rdfs:Class>or: <oil:DefinedClass rdf: </oil:DefinedClass>We will use the first method of serialization throughout this article, but it is important to realize that both model exactly the same. This way of making an actual class an instance of either DefinedClass or PrimitiveClass introduces a nice object-meta distinction between the OIL RDFS schema and the actual ontology: using rdf:type you can consider the class ``herbivore'' to be an instance of DefinedClass. In OIL in general, if it is not explicitly stated that a class is defined, the class is assumed to be primitive. <rdfs:Class rdf: <rdf:type rdf: <rdfs:subClassOf rdf: </rdfs:Class>However, if one wants to define a class as a subclass of a class expression, one should use the oil:subClassOf property. To overcome this problem, we introduce the oil:hasPropertyRestriction property, which is an rdf:type of rdfs:ConstraintProperty (analogous to rdfs:domain and rdfs:range). Here we take full advantage of the intended extensibility of RDFS. We also introduce oil:PropertyRestriction as a placeholder class6 for specific classes of slot constraints, such as has-value, value-type, cardinality and so on. These are all modeled in the OIL namespace as subclasses of oil:PropertyRestriction: <rdfs:Class rdf: <rdfs:subClassOf rdf: </rdfs:Class>and similar for the other slot constraints. For the three cardinality constraints, an extra property ``number'' is introduced, which is used to assign a concrete value to the cardinality constraints. 
To connect a ValueType slot constraint with its actual values, such as the property it refers to and the class it restricts that property to, we introduce a pair of helper properties. These helper properties have no direct counterpart in terms of OIL primitives, but they serve to connect two classes. We define a property oil:onProperty to connect a property restriction with the subject property, and a property oil:toClass to connect the property restriction to the its class restriction. In our example ontology, the first part of the slot constraint would be serialized using the primitives introduced above as follows: <rdfs:Class rdf: <rdf:type rdf: <rdfs:subClassOf rdf: <oil:hasPropertyRestriction> <oil:ValueType> <oil:onProperty rdf: <oil:toClass> </oil:toClass> </oil:ValueType> </oil:hasPropertyRestriction> </rdfs:Class>If we would want to restrict the value type of a property to a string or an integer, we could use the toConcreteType property: ... <oil:ValueType> <oil:onProperty rdf: <oil:toConcreteType rdf: </oil:ValueType> ... We introduce oil:Expression as a common placeholder, with oil:ConcreteTypeExpression and oil:ClassExpression as specialization placeholders. oil:BooleanExpression is introduced as a sibling of these two, since we want to be able to construct boolean expressions with either kind of expression. The specific boolean operators, `and', `or' and `not', are introduced as subclasses. Also, notice that since a single class is essentially a simple kind of class expression, rdfs:Class itself should be a subclass of oil:ClassExpression (see figure 4). The `and', `or' and `not' operators are connected to operands using the oil:hasOperand property. This property again has no direct equivalent in OIL primitive terms, but is a helper to connect two class expressions, because in the RDF data model one can only relate two classes by means of a Property. In our example, we need to serialize a boolean `or'. The RDF Schema definition of the operator looks like this: <rdfs:Class rdf: <rdfs:subClassOf rdf: </rdfs:Class>and the helper property is defined as follows: <rdf:Property rdf: <rdfs:domain rdf: <rdfs:range rdf: </rdf:Property>The fact that hasOperand is only to be used on boolean class expressions is expressed using the rdfs:domain construction. This type of modeling stems directly from the RDF property-centric approach. Now we apply what we defined above to the example: <rdfs:Class rdf: <rdf:type rdf: <rdfs:subClassOf rdf: <oil:hasPropertyRestriction> <oil:ValueType> <oil:onProperty rdf: <oil:toClass> <oil:Or> <oil:hasOperand rdf: <oil:hasOperand> <HasValue> <oil:onProperty rdf: <oil:toClass rdf: </HasValue> </oil:hasOperand> </oil:Or> </oil:toClass> </oil:ValueType> </oil:hasPropertyRestriction> </rdfs:Class>Observe that the HasValue property restriction is not related to the class by a hasPropertyRestriction property, but by a hasOperand property. This stems from the fact that the property restriction plays the role of a boolean operand here. The first bit is trivial: <rdfs:Class rdf: </rdfs:Class>Next, we need to translate the OIL subsumption statement to RDFS. In this statement, a list of superclasses is given. In the RDFS syntax, we model these as seperate subClassOf statements: <rdfs:Class rdf: <rdfs:subClassOf rdf: <rdfs:subClassOf rdf: </rdfs:Class>Next, we have two slot constraints. 
The first of these is a value-type restriction, and it is serialized in the same manner as we showed in the "herbivore" example: PropertyRestriction> </rdfs:Class> <oil:HasFiller> <oil:onProperty rdf: <oil:stringFiller>grey</oil:stringFiller> </oil:HasFiller>In RDF(S), there is unfortunately no direct way to constrain the value of a property to a particular datatype. Therefore, the range value of oil:stringFiller can not be constrained to contain only strings. Only for clarity we created two subclasses of rdfs:Literal, named oil:String and oil:Integer. <rdfs:Class rdf: <rdfs:comment> The subset of Literals that are strings. </rdfs:comment> <rdfs:subClassOf rdf: </rdfs:Class>The range of the filler properties can now be set to the appropriate class, although it is still possible to use any type of Literal. The semantics of rdfs:Literal are only that anything of this type is atomic, i.e. it will not be processed further by an RDF processor. The fact that in this case it should be a string value can only be made an informal guideline. <rdf:Property <rdfs:domain rdf: <rdfs:range rdf: </rdf:Property>Using all this, we get the following complete translation of the class ëlephant": Filler> <oil:onProperty rdf: <oil:stringFiller>grey</oil:stringFiller> </oil:HasFiller> </oil:hasPropertyRestriction> </rdfs:Class>Observe that it is allowed to have more than one property restriction within the hasPropertyRestriction element. In the next section, we will examine how to serialize global slot definitions.. Despite these semantics for domain, a Property can have at most one range restriction in RDFS. However, according to discussions on the rdf-interest mailinglist the semantics of domain and range will very likely change in the next release of RDFS. We already anticipated on such a change, and interpret both multiple domain and multiple range restrictions with conjunctive semantics. Secondly, in contrast to RDFS, OIL not only allows classes as range and domain of properties, but also class-expressions, and - for range - concrete-type expressions. It is not possible to reuse rdfs:range and rdfs:domain for these sophisticated expressions, because of the conjunctive semantics of multiple range statements: we cannot extend the range of rdfs:range or rdfs:domain, we can only restrict it. Therefore, we introduced two new ConstraintProperties oil:domain and oil:range. They have the same domain as their RDFS equivalent (i.e., rdf:Property), but have a broader range. For domain, class expressions are valid fillers, for range both class expressions and concrete type expressions may be used: <rdfs:ConstraintProperty rdf: <rdfs:domain rdf: <rdfs:range rdf: </rdfs:ConstraintProperty> <rdfs:ConstraintProperty rdf: <rdfs:domain rdf: <rdfs:range rdf: </rdfs:ConstraintProperty>When translating a slot definition, rdfs:domain and rdfs:range should be used for simple (one class) domain and range restrictions. 
For example: slot-def gnaws subslot-of eats domain Rodent will be translated into: <rdf:Property rdf: <rdfs:subPropertyOf rdf: <rdfs:domain rdf: </rdf:Property>For more complicated statements the oil:range or oil:domain properties should be used: slot-def age domain (elephant or lion) range (range 0 70) is in the RDFS representation: <rdf:Property rdf: <oil:domain> <oil:Or> <oil:hasOperand rdf: <oil:hasOperand rdf: </oil:Or> </oil:domain> <oil:range> <oil:Range> <oil:integerValue>0</oil:integerValue> <oil:integerValue>70</oil:integerValue> </oil:Range> </oil:range> </rdf:Property>To specify that the range of a property is string or integer, we use our definitions of oil:String and oil:Integer as subclasses of rdfs:Literal. For example, to state that the range of age is integer, one could say: <rdf:Property <rdfs:range rdf: </rdf:Property>However, global slot-definitions in OIL allow specification of more aspects of a slot than property definitions in RDFS do. Besides the domain and range restrictions, OIL slots can also have an ``inverse'' attribute and qualities like ``transitive'' and ``symmetric''. We therefore added a property ``inverseRelationOf'' with ``rdf:Property'' as domain and range. We also added the classes ``TransitiveProperty'', ``FunctionalProperty'' and ``SymmetricProperty'' to reflect the different qualities of a slot. In the RDFS-serialization of OIL, the rdf:type property can be used to add a quality to a property. For example, the OIL definition of: slot-def has-part inverse is-part-of properties transitive is in RDFS: <rdf:Property rdf: <rdf:type rdf: <oil:inverseRelationOf rdf: </rdf:Property>or, in the abbreviated syntax: <oil:TransitiveProperty rdf: <oil:inverseRelationOf rdf: </oil:TransitiveProperty>This way of translating the qualities of properties features the same nice object-meta distinction between the OIL RDFS schema and the actual ontology as the translation of the ``type'' of a class (see section 4.2). In an actual ontology, the property ``has-part'' can be considered as an instance of a TransitiveProperty. Note that it is allowed to make a property an instance of more than one class, and thus giving it multiple qualities. Note that this way of representing qualities of properties in RDFS follows the proposed general approach of modeling axioms in RDFS, presented in [15]. In this approach, the same distinction between language-level constructs and schema-level constructs is made. One alternative way of serializing the attributes of properties would be to define the qualities ``transitive'' and ``symmetric'' as subproperties of rdf:Property. Properties in the actual ontology (e.g. ``has-part'') would in their turn be defined as subProperties of these qualities (e.g. transitiveProperty). However, this would mix up the use of properties at the OIL-specification level and at the actual ontology level. A third way would be to model the qualities as subproperties of rdf:Property again, but to define properties in the actual ontology as instances (rdf:type) of such qualities. In this approach, the object-meta level distinction is preserved. However, we dislike the use of rdfs:subPropertyOf at the meta-level, because then rdfs:subPropertyOf has two meanings, at the meta-level and at the object-level. We therefore prefer the first solution because of the clean distinction between the meta and object level. RDF only knows binary relations (properties). Therefore, we cannot simply map OIL axioms to RDF properties. 
Instead, we chose to model axioms as classes, with helper properties connecting them to the class expressions involved in the relation. Since axioms can be considered objects, this is a very natural approach towards modeling them in RDF (see also [16,15]). Observe also that binary relations (properties) are modeled as objects in RDFS as well (i.e., any property is an instance of the class rdf:Property). We simply introduce a new primitive alongside rdf:Property for relations with higher arity (see figure 4). We introduce a placeholder class oil:Axiom, and model specific types of axioms as subclasses: <rdfs:Class <rdfs:subClassOf rdf: </rdfs:Class>and likewise for Equivalent. We also introduce a property to connect the axiom object with the class expressions it relates to each other: oil:hasObject is a property connecting an axiom with an object class expression. For example, to serialize the axiom that herbivores, omnivores and carnivores are (pairwise) disjoint: <oil:Disjoint> <oil:hasObject rdf: <oil:hasObject rdf: <oil:hasObject rdf: </oil:Disjoint>Since in a disjointness axiom (or an equivalence axiom) the relation between the class expressions is bidirectional, we can connect all class expressions to the axiom object using the same type of property. However, in a covering axiom (like cover or disjoint-cover), the relation between class expressions is not bidirectional: one class expression plays the role of covering, several other class expressions play the role of being part of that covering. For modeling covering axioms, we introduce a seperate placeholder class, oil:Covering, which is a subclass of oil:Axiom. The specific types of coverings available are modeled as subclasses of oil:Covering again: <rdfs:Class <rdfs:subClassOf rdf: </rdfs:Class> <rdfs:Class <rdfs:subClassOf rdf: </rdfs:Class>Furthermore, two additional properties are introduced: oil:hasSubject, to connect a covering axiom with its subject, and oil:isCoveredBy, which is a subproperty of oil:hasObject, to connect a covering axiom with the classes that cover the subject. For example, we serialize the axiom that the class animal is covered by carnivore, herbivore, omnivore, and mammal (i.e. every instance of animal is also an instance of at least one of the other classes). <oil:Cover> <oil:hasSubject rdf: <oil:isCoveredBy rdf: <oil:isCoveredBy rdf: <oil:isCoveredBy rdf: <oil:isCoveredBy rdf: </oil:Cover> First, there is a problem with datatypes. It cannot be enforced that instances of oil:String are really strings or that instances of oil:Integer are really integers. Consequently, it is syntactically possible to state: <rdf:Property rdf: <rdf:range> <oil:Min> <oil:integerValue>nonsense</oil:integerValue> </oil:Min> </rdf:range> </rdf:Property>This is due to the fact that the RDF Schema specification has (intentionally) not specified any primitive datatypes. According to the specification, the work on data typing in XML itself should be the foundation for such a capability. Second, the RDF Schema specification of OIL does not prevent the intertwining of boolean expressions of classes with boolean expressions of concrete data types. 
Although a statement like (dog and (min 0)) is not allowed in OIL, it is syntactically possible to state: <oil:And> <oil:hasOperand rdf: <oil:hasOperand> <oil:Min> <oil:integerValue>0</oil:integerValue> </oil:Min> </oil:hasOperand> </oil:And>To prevent this kind of mixing, we could have introduced separate boolean operators for class expressions and concrete type expressions, but in our opinion, this would have made the schema too convoluted. Finally, another kind of problem is that the schema cannot prevent the unnecessary use of the OIL variants of standard RDF Schema constructs, like oil:subClassOf, oil:range and oil:domain. Although this unnecessary use does not affect the semantics of the ontology, it limits the compatibility of ontologies with plain RDF Schema. As for any ontology language, we can distinguish three levels: First, the ontology language, the language in which to state for example class-definitions, subclass-relations, attribute-definitions etc. In our case, RDF Schema and OIL. Second, the ontological classes, for example the classes "giraffe" or "herbivore", their subclass relationships, and their properties (such as eats). These are of course expressed in the language of the first level. Third, the instances of the ontology, such as individual giraffes or lions that belong to classes defined at the second level. If we look at the existing W3C RDF/RDF Schema recommendation, these levels have the following form: <rdfs:Class rdf: <rdfs:subClassOf rdf: </rdfs:Class> <rdf:Property rdf: <rdf:Description <rdf:type rdf: </rdf:Description> <rdfs:Class rdf: <rdfs:subClassOf rdf: <oil:hasPropertyRestriction> <oil:ValueType> <oil:onProperty rdf: <oil:toClass> <oil:Or> <oil:hasOperand rdf: <oil:hasOperand> <oil:HasValue> <oil:onProperty rdf: <oil:toClass rdf: </oil:HasValue> </oil:hasOperand> </oil:Or> </oil:toClass> </oil:ValueType> </oil:hasPropertyRestriction> </rdfs:Class>the semantics of the hasPropertyRestriction statement will not be interpretable by an RDF Schema processor. The entire state is legal RDF syntax, so it can be parsed, but the intended semantics of the property restriction itself can only be understood by an OIL-aware application. Notice that the first subClassOf statement is still fully interpretable even by an OIL-unaware RDF Schema processor. First, we did not take into account a restriction on the rdfs:subClassOf statement, i.e. the restriction that no cycles are allowed in the subsumption hierarchy. We think that this restriction should be dropped: without cycles one cannot even represent equivalence between two classes - in our view this is an essential modeling primitive for any knowledge representation language. Moreover, these kinds of constraints modeling point of view, allowing more than one range restriction is a much cleaner solution. During the process of extending RDFS, we encountered a couple of peculiarities in the RDFS definition itself. The most striking of these is the non-standard object-meta model, as already discussed in section 2.2.1. The main problem with this non-standard model is that some properties have a dual role in the RDFS specification, both at the schema level and instance level (cf. [14]). This makes it quite a challenge for modelers to understand the RDFS specification. We tried to make this distinction clear. Of these, only the last three have been defined on top of RDF(S). Since DAML+OIL is essentially a merger between OIL and DAML-ONT, we will focus on the comparison of our own proposal (OIL) to DAML-ONT. 
DAML-ONT shares with our own proposal the principle that an ontology language should maintain maximum backwards compatibility with existing web standard languages, and in particular RDF Schema. The difference between OIL and DAML-ONT lies in the degree to which the languages succeed in maximising the ontological content that can be understood by an ``RDF Schema agent'' (i.e. an application that understands RDF Schema but does not recognise the language specific extensions, OIL or DAML-ONT). Unlike OIL, DAML-ONT is built on top of RDFS in a way that allows little if any ontology content to be understood by an RDFS agent. In OIL, for example, stating simple subclass relationships between classes is done using the RDFS subClassOf property: <rdfs:Class <rdfs:subClassOf rdf: </rdfs:Class>This part of OIL ontologies is therefore accessible to any RDFS agent. In contrast, DAML-ONT uses its own locally defined ``subClassOf'' property, for example: <daml:Class <daml:subClassOf </daml:Class>The DAML-ONT subClassOf property is then defined to be ``equivalentTo'' rdfs:subClassOf, but the definition of ``daml:equivalentTo'' itself relies cyclicly on the definition of daml:sub-PropertyOf. Therefore even simple subclass relationships in a DAML ontology are inaccessible to an RDFS agent. The situation is even worse when it comes to more complex class definitions. For example, the definition of the class ``TallMan'' is the intersection of the classes ``Man'' and ``TallThing'' is expressed in DAML-ONT as: <daml:Class <daml:intersectionOf <daml:Class <daml:Class </daml:intersectionOf> </daml:Class>This is completely opaque to an RDFS agent as it will not understand the semantics of ``daml:intersectionOf'' In OIL, the definition of TallMan would rely on the fact that intersection is implicit in the semantics of rdfs:subClassOf: <rdfs:Class <rdf:type rdf: <rdfs:subClassOf rdf: <rdfs:subClassOf rdf: </rdfs:Class>making the sub-class relationships accessible to any RDF Schema agent. In conclusion, we argue that: An important advantage of our approach is the maximization of the compatibility with RDFS: not only is every RDF Schema document a valid OIL ontology declaration, but every OIL ontology can be partially interpreted by a semantically poorer processing agent. This partial interpretation will of course be incomplete, but correct under the intented semantics of the ontology. We firmly believe that our way of extending is generally applicable across knowledge representation formalisms. 2Actually they correspond to binary predicates of ground terms, where, however, the predicates may be used as terms, as well. 3Note, that in this sense a property is an instance of a class. 4It is not really clear from the RDFS specification whether rdfs:subClassOf can be applied to rdf:Property. This seems possible because the latter is also an instance of rdfs:Class. 5See "" 6A placeholder class in the OIL RDFS specification is only used to apply domain- and range restrictions to a group of classes, and will not be used in the actual OIL ontology. 7For example, it is never possible to derive class membership from a domain statement when union semantics are used. 8With ``valid'' we mean: not allowed by the BNF grammer of OIL. From the logical point of view, there's nothing wrong with a statement like (dog and (min 0)), it just happens to be equivalent to the empty class. 9DAML-ONT Initial Release, "" 10DAML+OIL "" File translated from TEX by TTH, version 2.88. On 19 Feb 2001, 14:11.
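One practical consequence of this compatibility claim can be checked with a few lines of Python. The sketch below parses a small RDF/XML fragment in the style of the OIL-in-RDFS examples (the namespace URIs are placeholders, since the paper's serialized snippets omit the actual attribute values) and shows that a plain RDFS-aware agent, here the rdflib library, still recovers the ordinary subClassOf relationships even though it knows nothing about the OIL extensions.

from rdflib import Graph
from rdflib.namespace import RDFS

# Placeholder namespace URIs: the real URIs are not given in the paper's snippets.
data = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:oil="http://example.org/oil#">
  <rdfs:Class rdf:about="http://example.org/animals#elephant">
    <rdf:type rdf:resource="http://example.org/oil#DefinedClass"/>
    <rdfs:subClassOf rdf:resource="http://example.org/animals#herbivore"/>
    <rdfs:subClassOf rdf:resource="http://example.org/animals#mammal"/>
  </rdfs:Class>
</rdf:RDF>"""

g = Graph()
g.parse(data=data, format="xml")

# An RDFS-only agent ignores the oil:DefinedClass typing but still sees the
# plain subclass relationships -- the partial-interpretation property argued
# for in the paper.
for sub, sup in g.subject_objects(RDFS.subClassOf):
    print(f"{sub} is a subclass of {sup}")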
http://docs.huihoo.com/www10.org/cdrom/papers/291/index.html
2017-05-22T23:15:28
CC-MAIN-2017-22
1495463607242.32
[]
docs.huihoo.com
Installation Instructions If you want to experiment with TestFixtures, the easiest way to install it is to do the following in a virtualenv: pip install testfixtures If your package uses setuptools and you decide to use TestFixtures, then you should do one of the following: Specify testfixtures in the tests_require parameter of your package’s call to setup in setup.py. Add an extras_require parameter in your call to setup as follows: setup( # other stuff here extras_require=dict( test=['testfixtures'], ) ) Python version requirements This package has been tested with Python 2.6, 2.7, 3.2 to 3.4 on Linux, Mac OS X and Windows.
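For completeness, the first option (tests_require) looks like the sketch below; the package name is hypothetical, and only the tests_require line matters here.

from setuptools import setup

setup(
    name='your_package',  # hypothetical package name
    # other stuff here
    tests_require=['testfixtures'],
)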
http://testfixtures.readthedocs.io/en/latest/installation.html
2017-05-22T23:14:04
CC-MAIN-2017-22
1495463607242.32
[]
testfixtures.readthedocs.io
Items. Read the docs page for the respective binding to get more information about possible connections and examples. There are two methods for defining items. If the binding supports it, PaperUI can do this. Otherwise items must be defined in one or more files in the items folder. Files here must have the extension .items but you can make as many .items files as you need/want however each item must be unique across them all. Refer to the installation docs to determine your specific installations folder structure. Groups are also defined in the .items files. Groups can be nested inside other groups, and items can be in none, one, or multiple groups. Typically items are defined using the openHAB Designer by editing the items definition files. Doing so you will have full IDE support like syntax checking, context assist etc. Item Syntax Items are defined in the following syntax: itemtype itemname ["labeltext"] [<iconname>] [(group1, group2, ...)] [["tag1", "tag2", ...]] [{bindingconfig}] Note: Parts in square brackets ([]) are optional. Example: Number LivingRoom_Temperature "The Temperature is [%.1f °C]" <temperature> (gTemperature, gLivingRoom) ["TargetTemperature"] {knx="1/0/15+0/0/15"} The example above defines an item: - of type Number - with name LivingRoom_Temperature - formatting its output in format %.1f °C(See Formatting section for syntax explanation) - displaying icon temperature - belonging to groups gTemperatureand gLivingRoom - tagged as a thermostat (“TargetTemperature”) for usage with I/O addons like Hue Emulation - bound to the openHAB binding knxwith write group address 1/0/15and listening group address 0/0/15 Item Types The item type defines which kind of values can be stored in that item and which commands can be sent to it. Each item type has been optimized for certain components in your smart home. This optimization is reflected in the data types, and command types. An example: A Philips Hue RGB light bulb provides three pieces of information. Its on or off state, its current brightness, and the color. If you want to change one of these values you can use any of four item types. - Switch the bulb on or off ( Switchitem) - Increase or decrease the brightness ( Dimmeritem) - Set the brightness to a specific value ( Numberitem) - Change the bulb’s color ( Coloritem) All available openHAB2 item types and their relevant commands can be viewed here. Dimmers vs Switches While a Dimmer item can accept either On/Off, Increase/Decrease, or Percent updates, Dimmer items store their state as a Percent value. See the following example: item: Dimmer Light_FF_Office "Dimmer [%d %%]" {milight="bridge01;3;brightness"} Switch item=Light_FF_Office Slider item=Light_FF_Office When the Switch widget is used, it sends ON or OFF commands to the item, but these are mapped to 100% and 0%, respectively. When the slider widget is used, it sends Percent commands to the item, which are used as the item’s state. In the example above, if you move the Slider widget to 60%, move the Switch to OFF, and finally move the switch to ON, the item’s state will be 100%. Item Name The item name is the unique name of the item which is used in the .sitemap, .rule etc. files. The name must be unique across all item files. The name should only consist of letters, numbers and the underscore character. Spaces cannot be used. Item Label The label text has two purposes. First, this text is used to display a description of the specific item (for example, in the sitemap). 
Second, it can be used to format or transform output from the item (for example, making DateTime output more readable). If you want to display a special character you must mask the character with a ‘%’. So, to display one ‘%’ enter the text ‘%%’. Groups The item type group is used to define a group in which you can nest/collect other items, including other groups. You don’t need groups, but they are a great help for your openHAB configuration. Groups are supported in rules, functions, the bindingname.cfg files, and more places. In all these places you can either write every single applicable item, i.e. All temperature sensors, or if you have grouped your items, you just use this group instead. A simple example group definition is: Group TemperatureSensors Nested Groups To take this a step further you can begin to nest groups like in the example below: Group All Group gSensor (All) Group gTemperature (gSensor) Number Sensor_Temperature "The Temperature is [%.1f °C]" <temperature> (gTemperature) {knx="1/0/15+0/0/15"} The item Sensor_Temperature is a member of the group gTemperature, which is itself a member of the group gSensor, which is a member of the group All. The item will only be included into each group once, regardless of the number of times the group is nested. To give an example: the item Sensor_Temperature only exists once in the group All. Group item types Group items can also be used to easily determine one or more items with a defined value or can be used to calculate a value depending on all values within the group. Please note that this can only be used if all items in the group have the same type. The format for this is: Group:itemtype:function itemname ["labeltext"] [<iconname>] [(group1, group2, ...)] By default, if no function is provided to the group, the Group uses OR. So for a Group of switches the Group state will be ON if any of the members states are ON. But this means that once one Item in the group has its state change to ON, the Group’s state gets set. Each subsequent Item that changes state to ON will not trigger “myGroup changed” because the Group isn’t changing. This is not a bug, it is the expected and designed behavior. Because the group state is an aggregate, every change in the Item members does not necessarily result in a change to the Group’s state. Group functions can be any of the following: An example of this would be: Group:Contact:OR(OPEN,CLOSED) gMotionSensors (All) Formatting Formatting is done applying Java formatter class syntax, therefore the syntax is %[argument_index$][flags][width][.precision]conversion Only the leading ‘%’ and the trailing ‘conversion’ are mandatory. The argument_index$ must be used if you want to convert the value of the item several times within the label text or if the item has more than one value. Look at the DateTime and Call item in the following example. Number MyTemperature "The Temperature is [%.1f] °C" { someBinding:somevalue } String MyString "Value: [%s]" { someBinding:somevalue } DateTime MyLastUpdate "Last Update: [%1$ta %1$tR]" { someBinding:somevalue } The output would look like this: Temperature 23.2 °C Value: Lorem ipsum Last Update: Sun 15:26 Transforming Another possibility in label texts is to use a transformation. They are used for example to translate a status into another language or convert technical value into human readable ones. To do this you have to create a .map file in your transform folder. These files are typical key/value pair files. key1=value1 key2=value2 ... 
Let’s make a small example to illustrate this function. If you have a sensor which returns you the number 0 for a closed window and 1 for an open window, you can transform these values into the words “opened” or “closed”. Create a map file named window.map for example and add the desired keys and values. 0=closed 1=opened NULL=unknown -=unknown Next we define two items. One showing the raw value as it is provided from our sensor and one with transformed value. Number WindowRaw "Window is [%d]" { someBinding:somevalue } Number WindowTransformed "Window is [MAP(window.map):%s]" { someBinding:somevalue } The output will be: Window is 1 Window is opened Transform files use UTF-8 encoding, so Unicode symbols will also work. ARIES=♈ Aries TAURUS=♉ Taurus WAXING_CRESCENT=🌑→🌓 Waxing Crescent FIRST_QUARTER=🌓 First Quarter Icons OpenHAB provides you a set of basic icons by default. However if you wish to use custom icons you need to place them inside the conf/icons/classic/ folder. These icons will be used in all of the openHAB frontends. The images must be in .png or .svg format, and have a name with only small letters and a hyphen or underscore (if required). The PaperUI interface (or via the classicui.cfg or basicui.cfg files) allows you to define whether you use Vector (.svg) or Bitmap (.png) icon files. As an example, to use a custom icon called heatpump.svg the correct syntax is <heatpump>. Dynamic Icons You can dynamically change the icon depending on the item state. You have to provide a default file and one icon file per state with the states name append to the icons name. Example: switch.svg switch-off.svg switch-on.svg If you want to use the dynamically items just use the image name without the added states. Switch Light_FrontDoor "Front Door light is [MAP(en.map):%s]" <switch> {somebinding:someconfig} Binding Configuration The binding configuration is the most import part of an item. It defines from where the item gets it values, and where a given value/command should be sent. You bind an item to a binding by adding a binding definition in curly brackets at the end of the item definition { channel="ns:bindingconfig" } Where ns is the namespace for a certain binding like “network”, “netatmo”, “zwave” etc. Every binding defines what values must be given in the binding configuration string. That can be the id of a sensor, an ip or mac address or anything else. You must have a look at your Bindings configuration section to know what to use. Some typical examples are:" } When you install a binding through PaperUI it will automatically create a .cfg file in conf/services/ for the appropriate binding. Inside these files are a predefined set of. If you need to use legacy openHAB 1.x bindings then you need to enable this feature through the PaperUI menu by turning on “Include Legacy 1.x Bindings” found at /configuration/services/configure extension management/. After downloading the legacy .jar files, they need to be placed in the /addons/ folder. If further configuration is required then you will need to create an openhab.cfg file in /conf/services/ and paste the appropriate binding configuration into this. For all other native openHAB2 bindings, configuration is done through a bindingname.cfg file in the same location. Restore States When restarting your openHAB installation you may find there are times when your logs indicate some items are UNDEF. This is because, by default, item states are not persisted when openHAB restarts. 
To have your states persist across restarts you will need to install a Persistence extension. Specifically, you need to use a restoreOnStartup strategy for all your items. Then whatever state they were in before the restart will be restored automatically. Strategies { default = everyUpdate } Items { // persist all items on every change and restore them from the MapDB at startup * : strategy = everyChange, restoreOnStartup }
http://docs.openhab.org/configuration/items.html
2017-05-22T23:33:40
CC-MAIN-2017-22
1495463607242.32
[]
docs.openhab.org
.. _cp-config-consumer: ======================= Consumer Configurations ======================= This topic provides configuration parameters available for |cp|. The parameters are organized by order of importance, ranked from high to low. .. Imported from confluentinc/ce-kafka: .. raw:: html org.apache.kafka.common.serialization.Deserializerinterface. org.apache.kafka.common.serialization.Deserializerinterface.). session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. message.max.bytes(broker config) or max.message.bytes(topic config). See fetch.max.bytes for limiting the consumer request size. group.min.session.timeout.msand group.max.session.timeout.ms. use_all_dns_ipsthen, when the lookup returns multiple IP addresses for a hostname, they will all be attempted to connect to before failing the connection. Applies to both bootstrap and advertised servers. If the value is resolve_canonical_bootstrap_servers_onlyeach entry will be resolved and expanded into a list of canonical names. timeoutparameter. message.max.bytes(broker config) or max.message.bytes(topic config). Note that the consumer performs multiple fetches in parallel. read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted' (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. Messages will always be returned in offset order. Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in flight transactions. Further, when in read_committed the seekToEnd method will return the LSO group.instance.idwhich reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of session.timeout.ms. This mirrors the behavior of a static consumer which has shutdown. org.apache.kafka.clients.consumer.ConsumerPartitionAssignorinterface allows you to plug in a custom assignment strategy.; enable.auto.commitis set to true. org.apache.kafka.clients.consumer.ConsumerInterceptorinterface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors. org.apache.kafka.common.metrics.MetricsReporterinterface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. org.apache.kafka.common.security.auth.SecurityProviderCreatorinterface.
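To ground a few of these parameters, here is a minimal sketch using the confluent-kafka Python client. The broker address, group id and topic are hypothetical, and the chosen values (for example session.timeout.ms and isolation.level=read_committed) are illustrative rather than recommendations.

from confluent_kafka import Consumer

conf = {
    'bootstrap.servers': 'localhost:9092',   # hypothetical broker
    'group.id': 'example-group',             # hypothetical consumer group
    'session.timeout.ms': 10000,
    'enable.auto.commit': False,
    'auto.offset.reset': 'earliest',
    'isolation.level': 'read_committed',     # only return committed transactional messages
}

consumer = Consumer(conf)
consumer.subscribe(['example-topic'])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print('Consumer error:', msg.error())
            continue
        print(msg.topic(), msg.partition(), msg.offset(), msg.value())
        # Manual commit, since enable.auto.commit is false in this sketch.
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()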
https://docs.confluent.io/5.4.2/_sources/installation/configuration/consumer-configs.rst.txt
2021-02-25T08:10:42
CC-MAIN-2021-10
1614178350846.9
[]
docs.confluent.io
PyOTA¶ This is the official Python library for the IOTA Core. It implements both the official API, as well as newly-proposed functionality (such as signing, bundles, utilities and conversion). Join the Discussion¶ If you want to get involved in the community, need help with getting setup, have any issues related with the library or just want to discuss Blockchain, Distributed Ledgers and IoT with other people, feel free to join our Discord. You can also ask questions on our dedicated forum. If you encounter any issues while using PyOTA, please report them using the PyOTA Bug Tracker. Installation¶ To install the latest version: pip install pyota Optional C Extension¶ PyOTA has an optional C extension that improves the performance of its cryptography features significantly (speedups of 60x are common!). To install this extension, use the following command: pip install pyota[ccurl] Installing from Source¶ - Create virtualenv (recommended, but not required). git clone pip install -e . Running Unit Tests¶ To run unit tests after installing from source: python setup.py test PyOTA is also compatible with tox, which will run the unit tests in different virtual environments (one for each supported version of Python). To run the unit tests, it is recommended that you use the detox library. detox speeds up the tests by running them in parallel. Install PyOTA with the test-runner extra to set up the necessary dependencies, and then you can run the tests with the detox command: pip install -e .[test-runner] detox -v Documentation¶ PyOTA’s documentation is available on ReadTheDocs. If you are installing from source (see above), you can also build the documentation locally: Install extra dependencies (you only have to do this once): pip install '.[docs-builder]' Tip To install the CCurl extension and the documentation builder tools together, use the following command: pip install '.[ccurl,docs-builder]' Switch to the docsdirectory: cd docs Build the documentation: make html
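Once installed, a quick smoke test looks something like the sketch below; the node URL is hypothetical, so point it at any IOTA node you have access to.

from iota import Iota

# Hypothetical node URL; substitute a node you trust.
api = Iota('https://nodes.example.org:443')

node_info = api.get_node_info()
print(node_info['appName'], node_info['appVersion'])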
https://pyota.readthedocs.io/en/develop/
2018-09-18T16:02:31
CC-MAIN-2018-39
1537267155561.35
[]
pyota.readthedocs.io
Exporter¶ Note Now being updated for Qgis2threejs version 2.0. Contents Window¶ When the Qgis2threejs exporter window opens first time, Layers panel is on the left side of the window and preview is on the right side. In this plugin, the word “export settings” means all configuration settings for a 3D scene, which consist of world settings, camera settings, each layer settings and so on. You can configure them via Scene menu and Layers panel. In the Layers panel, each layer item has a checkbox on its left. Check the checkbox to add the layer to current scene. To open layer properties dialog and configure settings for the layer, double-click on the layer item or click on Properties from context menu (right click menu). Export settings are automatically saved to a .qto3settings file under the same directory as the project file if you are working with a project file. Later the export settings of the project will be automatically loaded into the exporter. If you don’t want to use preview, uncheck Preview checkbox in the lower right corner of the window. For example, you might want to uncheck it to avoid waiting for updating 3D objects in the scene for each export settings update, World Settings¶ World settings dialog controls some basic configuration settings for current scene. Click on Scene - World Settings... menu entry to open the dialog. - Scale and Shift Base size Size in 3D world that corresponds to the map canvas width. The default value is 100. Vertical exaggeration Vertical exaggeration factor. This value affects terrain shape and z positions of all vector 3D objects. This also affects 3D object height of some object types with volume. Object types to be affected:Point : Cylinder, Cube, ConePolygon : Extruded 3D objects of the following types have volume, but their heights aren’t affected by this factor:Point : SphereLine : Pipe, Cone, Box The default value is 1.0.. Background Select either sky-like gradient or a solid color for the background of scene. Default is Sky. Display of coordinates Camera Settings¶ - Perspective Camera - Shows distant objects as smaller. - Orthographic Camera Controls Settings¶ Only OrbitControls is available. DEM Layer Settings¶ Geometry¶ Resampling level Select a DEM resolution from several levels. This resolution is used to resample the DEM, but is not for texture. Surroundings of each surrounding block is doubled. It means that the number of grid points in the same area becomes 1/4. Clip DEM with polygon layer Clips the DEM with a polygon layer. If you have a polygon layer that represents the area that elevation data exist or represents drainage basins, you might want to use this option. Material¶ Display type You can choose from map canvas image, layer image, a image file or a solid color. Map canvas image Render a texture image with the current map settings for each DEM block. Layer image Render a texture image with the selected layer(s) for each DEM block. Image file Textures the main DEM block with existing image file such as PNG file and JPEG file. TIFF is not supported by some browser. See Image format support for details. Solid color To select a color, press the button on the right side. Resolution Increases the size of image applied to each DEM block. This option is enabled when either Map canvas imageor Layer imageis selected. You can select a ratio to map canvas size from 100, 200 and 400 (%). Image size in pixels follows the percent. Opaciy Sets opacity of the DEM. 100 is opaque, and 0 is transparent. 
Transparent background (With map canvas image or layer image) Makes background of the image to be rendered transparent. Enable transparency (With image file) Enables the image transparency. Enable shading Adds a shading effect to the DEM. Other Options¶ Build sides This option adds sides and bottom to each DEM block. The z position of bottom in the 3D world is fixed. You can adjust the height of sides by changing the value of vertical shift option in the World panel. If you want to change color, edit the output JS file directly. Build frame This option adds frame to the DEM. If you want to change color, edit the output JS file directly. Vector Layer Settings¶ Vector layers are grouped into three types: Point, Line and Polygon. Common settings for all types: Z coordinate Specifies object altitude above zero-level or a DEM surface. Altitude You can use a expression to specify altitude. The unit is that of the map CRS. When Z value or M value is selected, the evaluated value is added to it. - Z value This item can be selected when the layer geometries have z coordinates and the layer type is point or line. - M value This item can be selected when the layer geometries have m values and the layer type is point or line. Altitude Mode - Absolute Altitude is distance above zero-level. - Relative to DEM layer Altitude is distance above a DEM surface. Style Usually, there are options to set object color and transparency. Refer to the links below for each object type specific settings. The unit of value for object size is that of the map CRS.¶ Point layers in the project are listed as the child items. The following object types are available: Sphere, Cylinder, Cone, Box, Disk See Point Layer section in Object Types page for each object type specific settings. Line¶ Line layers in the project are listed as the child items. The following object types are available: Line, Pipe, Cone, Box, Profile See Line Layer section in Object Types page for each object type specific settings. Polygon¶ Polygon layers in the project are listed as the child items. The following object types are available: Extruded, Overlay See Polygon Layer section in Object Types page for each object type specific settings. Export to Web Dialog¶ Template Select a template from available templates: Output directory and HTML Filename Select output HTML file path. Usually, a js file with the same file title that contains whole data of geometries and images is output into the same directory, and some JavaScript library files are copied into the directory. Leave this empty to output into temporary directory. Temporary files are removed when you close the QGIS application. Export button Exporting starts when you press the Export button. When the exporting has been done and Open exported page in web browser option is checked, the exported page is opened in default web browser (or a browser specified in Exporter Settings). Exporter Settings¶ Browser path If you want to open web page exported from the exporter with a web browser other than the default browser, enter the web browser path in this input box. See Browser Support page. Optional Features Not available in version 2.0 yet.
https://qgis2threejs.readthedocs.io/en/docs/Exporter.html
2018-09-18T15:45:48
CC-MAIN-2018-39
1537267155561.35
[array(['_images/exporter1.png', '_images/exporter1.png'], dtype=object) array(['_images/world_settings.png', '_images/world_settings.png'], dtype=object) array(['_images/export_web.png', '_images/export_web.png'], dtype=object) array(['_images/plugin_settings.png', '_images/plugin_settings.png'], dtype=object) ]
qgis2threejs.readthedocs.io
There are a number of ways global scan settings get applied to agents. A particular scan setting can apply to all agents that the server manages or only to agents with certain scan privileges. For example, if you configure the postpone Scheduled Scan duration, only agents with the privilege to postpone Scheduled Scan will use the setting. A particular scan setting can apply to all or only to a particular scan type. For example, on endpoints with both the OfficeScan server and OfficeScan agent installed, you can exclude the OfficeScan server database from scanning. However, this setting applies only during Real-time Scan. A particular scan setting can apply when scanning for either virus/malware or spyware/grayware, or both. For example, assessment mode only applies during spyware/grayware scanning.
http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/scanning-for-securit/global-scan-settings.aspx
2018-09-18T15:09:43
CC-MAIN-2018-39
1537267155561.35
[]
docs.trendmicro.com
Extending / Storage Layer / Working with Repositories Note: You are currently reading the documentation for Bolt 3.6. Looking for the documentation for Bolt 3.5 instead? Repositories manage collections of entities. At a conceptual level where an entity represents a row of data in the database, the repository represents the table. When you request a repository in Bolt you will normally ask for it via the name of the entity, and you will receive back an object that will be able to perform find, save and delete operations on a collection of (or single) entities. Here are some of the built in ways to interact with a repository. Quick Links to Repository Methods¶ Overview¶ A repository in Bolt is the primary method used for interacting with an entity, or collections of entities. It's not recommended to create a repository directly, instead you request a repository instance from the entity manager, as in the following example: $repo = $app['storage']->getRepository('Bolt\Storage\Entity\Users'); You can also use short aliases for any of the built-in tables so the following is equivalent. $repo = $app['storage']->getRepository('users'); Once you have a repository instance then the operations you perform will interact with the specific storage table and will return objects of the entity type managed. It is also possible to define content structure through in your contenttypes.yml file. For more information about this see the section on Repository and Content defined through the contenttypes.yml file createQueryBuilder()¶ $repo = $app['storage']->getRepository('users'); $qb = $repo->createQueryBuilder(); Apart from more basic queries, which can use the simpler finder methods, the primary method of querying the database is via a QueryBuilder instance. This method fetches an instance of QueryBuilder that is preset to select on the managed storage table. The returned instance is always an object of type Doctrine\DBAL\Query\QueryBuilder. Much more in-depth documentation for using this can be found here. Once you have finished building your query then you can fetch results by calling ->execute() followed by one of ->fetch() or ->fetchAll(). For example the following fetches the ten most recent published entries and for reference is functionally identical to the example in the findBy method documentation below. $repo = $app['storage']->getRepository('entries'); $qb = $repo->createQueryBuilder(); $qb->where('status="published"') ->orderBy('datepublish', 'DESC') ->setMaxResults(10); $entries = $qb->execute()->fetchAll(); find($id)¶ $repo = $app['storage']->getRepository('users'); $user = $repo->find(1); This method finds a row by id from the table and returns a single Entity object. findBy(array $criteria, array $orderBy, $limit, $offset)¶ We can now graduate to more flexible querying on the storage layer. The findBy() method allows us to pass key value parameters to the query which in turn filters the results fetched from the storage layer. For example: $repo = $app['storage']->getRepository('users'); $users = $repo->findBy(['displayname' => 'Editor']); As you can see from the accepted parameter list you can also pass in order, limit and offset parameters to the method allowing you to perform most simple queries using this method. For instance here is a query that finds the 10 most recent published entries. 
$repo = $app['storage']->getRepository('entries'); $entries = $repo->findBy(['status' => 'published'], ['datepublish', 'DESC'], 10); findOneBy(array $criteria, array $orderBy)¶ This method works identically to the findBy method above but will return a single Entity object rather than a collection. This is most suited for when you want to guarantee a single result, for example: $repo = $app['storage']->getRepository('users'); $user = $repo->findOneBy(['username' => $postedUser, 'password' => $passHash]); $newestUser = $repo->findOneBy([], ['id', 'DESC']); findAll()¶ $repo = $app['storage']->getRepository('users'); $users = $repo->findAll(); The findAll method returns a collection of all the applicable entities unfiltered from the storage layer. In SQL terms it is identical to performing a SELECT * FROM tablename. findWith(QueryBuilder $qb)¶ $repo = $app['storage']->getRepository('entries'); $qb = $repo->createQueryBuilder(); $qb->where('status="published"') ->orderBy('datepublish', 'DESC') ->setMaxResults($limit); $entries = $repo->findWith($qb); This method lets you fetch data with a more complex query built beforehand with a QueryBuilder, and it automatically hydrates the results into Entity objects. findOneWith(QueryBuilder $qb)¶ This method works identically to the findWith method above but will return a single Entity object rather than a collection. This is most suited for when you want to guarantee a single result, for example: $repo = $app['storage']->getRepository('entries'); $qb = $repo->createQueryBuilder(); $qb->where('status="published"') ->orderBy('datepublish', 'DESC') ->setMaxResults(1); $entry = $repo->findOneWith($qb); save($entity)¶ $repo = $app['storage']->getRepository('users'); $user = $repo->find(1); $user->username = "A new username"; $result = $repo->save($user); This method takes a modified object and persists the changes back to the database. It returns false on failure and the successfully saved id on success. delete($entity)¶ $repo = $app['storage']->getRepository('users'); $user = $repo->find(1); $result = $repo->delete($user); This method takes the supplied entity object and deletes that row from the storage table. It returns the number of deleted rows if successful, and false on failure.
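For reference, the QueryBuilder can also bind values as parameters instead of embedding them in the query string. The sketch below assumes the same 'entries' repository used above; the status value and limit are only illustrative:
$repo = $app['storage']->getRepository('entries');
$qb = $repo->createQueryBuilder();
$qb->where('status = :status')          // bind the value rather than concatenating it into the SQL
   ->setParameter('status', 'published')
   ->orderBy('datepublish', 'DESC')
   ->setMaxResults(10);
$entries = $repo->findWith($qb);        // hydrates the rows into Entity objects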
https://docs.bolt.cm/3.6/extensions/storage/repositories
2018-09-18T15:55:35
CC-MAIN-2018-39
1537267155561.35
[]
docs.bolt.cm
This guide is intended to help security administrators and IT administrators manage Endpoint Encryption users, devices, policies, and agents. This documentation assumes general knowledge about encryption methods, device formatting and partioning, and client-server architecture. This guide is for assistance with managing Endpoint Encryption using Trend Micro Control Manager. If you intend to use PolicyServer MMC as your primary management console, see the Endpoint Encryption PolicyServer MMC Guide. Important help topics: Resolved and Known Issues
http://docs.trendmicro.com/en-us/enterprise/endpoint-encryption-50-patch-4-administrator-guide/home_ag.aspx
2018-09-18T15:48:57
CC-MAIN-2018-39
1537267155561.35
[]
docs.trendmicro.com
The operation of VMware Site Recovery requires certain ports to be open. The components that make up the VMware Site Recovery service, namely vCenter Server, vSphere Web Client, Site Recovery Manager Server, the vSphere Replication appliance, and vSphere Replication servers, require different ports to be open. You must ensure that all the required network ports are open for VMware Site Recovery to function correctly. vCenter Server and ESXi Server network port requirements for Site Recovery Manager 8.0 Site Recovery Manager requires certain ports to be open onvCenter Server, Platform Services Controller, and on ESXi Server. Site Recovery Manager Server 8.0 network ports The Site Recovery Manager Server instances on the protected and recovery sites require certain ports to be open. Site Pairing Port Requirements Network ports that must be open on Site Recovery Manager and vSphere Replication Protected and Recovery sites Site Recovery Manager and vSphere Replication require that the protected and recovery sites can communicate. vSphere Replication 8.0 appliance network ports vSphere Replication server network ports If you deploy additional vSphere Replication servers, ensure that the subset of the ports that vSphere Replication servers require are open on those servers.
https://docs.vmware.com/en/VMware-Site-Recovery/services/com.vmware.srmaas.install_config.doc/GUID-499D3C83-B8FD-4D4C-AE3D-19F518A13C98.html
2018-09-18T15:09:13
CC-MAIN-2018-39
1537267155561.35
[]
docs.vmware.com
RadChartView: PieSeries Example: Properties and customization Radius Factor pieSeries.RadiusFactor = 0.5f; If you want to see what the current radius factor is, you can use the getRadiusFactor() method. Angle Range pieSeries.AngleRange = new AngleRange(0, 180); And this is the result: In order to see what the current angle range is, you can use the getAngleRange() method. Styles The default colors used for PieSeries come from the default palette. You can change the palette as described in this article, or use the styles as demonstrated here. Slice Styles The SliceStyle class allows you to create a set of stroke and fill colors which you can easily apply to the slices in a pie chart. Once you have created one or more SliceStyle objects (style1 in the snippet below), you can apply them to the PieSeries instance: List<SliceStyle> styles = new List<SliceStyle>(); styles.Add(style1); pieSeries.SliceStyles = styles; Here's the result when we add a collection of four styles similar to the one in the example: Slice Offset As you may have noticed, there is a thin line between the segments. You can change its width through the method setSliceOffset(float). If you set it to 0, the line will be removed: pieSeries.SliceOffset = 0;
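Putting the snippets above together, here is a rough consolidated sketch. It assumes a PieSeries instance named pieSeries has already been created and bound to data, and that style1 is a SliceStyle you have configured elsewhere with the stroke and fill colors you want:
// Uses only the members shown on this page.
pieSeries.RadiusFactor = 0.5f;                     // draw the pie at half of the available radius
pieSeries.AngleRange = new AngleRange(0, 180);     // semicircle instead of a full circle
pieSeries.SliceOffset = 0;                         // remove the thin line between segments

List<SliceStyle> styles = new List<SliceStyle>();  // styles are applied to the slices in order
styles.Add(style1);                                // style1: a SliceStyle configured elsewhere
pieSeries.SliceStyles = styles;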
https://docs.telerik.com/devtools/xamarin/nativecontrols/android/chart/series/chart-series-pie.html
2018-09-18T15:50:34
CC-MAIN-2018-39
1537267155561.35
[array(['images/chart-series-pie-1.png', 'Demo of Pie chart with PieSeries. TelerikUI-Chart-Series-Pie'], dtype=object) array(['images/chart-series-pie-2.png', 'Demo of Pie chart with PieSeries in a semicircle. TelerikUI-Chart-Series-Pie-Semicircle'], dtype=object) array(['images/chart-series-pie-3.png', 'Demo of Pie chart with custom slice styles. TelerikUI-Chart-Series-Pie-Styles'], dtype=object) ]
docs.telerik.com
.? A. meansto share the source data. If you encrypt the data then you will need to provide a means to share the decryption keys. Application-side Programming Model¶ Transaction execution result: - How do application clients know the outcome of a transaction? A.. Ledger queries: - How do I query the ledger data? A.?.? A.? A.? A.? A.. Differences in Most Recent Releases¶ - As part of the v1.0.0 release, what are the highlight differences between v0.6 and v1.0? A. The differences between any subsequent releases are provided together with the Release Notes. Since Fabric is a pluggable modular framework, you can refer to the design-docs for further information of these difference. - Where to get help for the technical questions not answered above? - Please use StackOverflow.
https://hyperledger-fabric.readthedocs.io/en/v1.0.5/Fabric-FAQ.html
2018-09-18T15:14:04
CC-MAIN-2018-39
1537267155561.35
[]
hyperledger-fabric.readthedocs.io
Prerequisites¶ Docker version 17.03.0-ce or greater is required. - Older versions of Windows: Docker Toolbox - again, Docker version 17.03.0-ce or greater is required.
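A quick way to check that an existing installation meets this requirement (both commands are standard Docker CLI calls):
docker --version
docker-compose --version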
https://hyperledger-fabric.readthedocs.io/en/v1.0.5/prereqs.html
2018-09-18T15:53:47
CC-MAIN-2018-39
1537267155561.35
[]
hyperledger-fabric.readthedocs.io
What's New in Pyramid 1.7¶ This article explains the new features in Pyramid version 1.7 as compared to its predecessor, Pyramid 1.6. It also documents backwards incompatibilities between the two versions and deprecations added to Pyramid 1.7, as well as software dependency changes and notable documentation additions. Backwards Incompatibilities¶ The default hash algorithm for pyramid.authentication.AuthTktAuthenticationPolicyhas changed from md5to sha512. If you are using the authentication policy and need to continue using md5, please explicitly set hashalg='md5'. If you are not currently specifying the hashalgoption in your apps, then this change means any existing auth tickets (and associated cookies) will no longer be valid, users will be logged out, and have to login to their accounts again. This change has been issuing a DeprecationWarning since Pyramid 1.4. See Python 2.6 and 3.2 are no longer supported by Pyramid. See and The pyramid.session.check_csrf_token()function no longer validates a csrf token in the query string of a request. Only headers and request bodies are supported. See A global permission set via pyramid.config.Configurator.set_default_permission()will no longer affect exception views. A permission must be set explicitly on the view for it to be enforced. See Feature Additions¶ A new View Derivers concept has been added Added a require_csrfview option which will enforce CSRF checks on requests Checking CSRF Tokens Automatically, and Added a new method, pyramid.config.Configurator.set_csrf_default_options(), for configuring CSRF checks used by the require_csrf=Trueview option. This method can be used to turn on CSRF checks globally for every view in the application. This should be considered a good default for websites built on Pyramid. It is possible to opt-out of CSRF checks on a per-view basis by setting require_csrf=Falseon those views. See Checking CSRF Tokens Automatically and()API for validating the origin or referrer headers against the request's domain. See Subclasses of pyramid.httpexceptions.HTTPExceptionwill now take into account the best match for the clients Acceptheader, and depending on what is requested will return text/html, application/jsonor text/plain. The default for */*is still text/html, but if application/jsonis explicitly mentioned it will now receive a valid JSON response. See A new event, pyramid.events.BeforeTraversal, and interface pyramid.interfaces.IBeforeTraversalhave been introduced that will notify listeners before traversal starts in the router. See Request Processing as well as and A new method, pyramid.request.Request.invoke_exception_view(), which can be used to invoke an exception view and get back a response. This is useful for rendering an exception view outside of the context of the EXCVIEWtween where you may need more control over the request. See A global permission set via pyramid.config.Configurator.set_default_permission()will no longer affect exception views. A permission must be set explicitly on the view for it to be enforced. See Allow a leading =on the key of the request param predicate. For example, '=abc=1'is equivalent down to request.params['=abc'] == '1'. See Allow using variable substitutions like %(LOGGING_LOGGER_ROOT_LEVEL)sfor logging sections of the .ini file and populate these variables from the pservecommand line -- e.g.: pserve development.ini LOGGING_LOGGER_ROOT_LEVEL=DEBUG This support is thanks to the new global_confoption on pyramid.paster.setup_logging(). 
See The pyramid.tweens.EXCVIEWtween will now re-raise the original exception if no exception view could be found to handle it. This allows the exception to be handled upstream by another tween or middleware. See Deprecations¶ - The check_csrfview predicate has been deprecated. Use the new require_csrfoption or the pyramid.require_default_csrfsetting to ensure that the pyramid.exceptions.BadCSRFTokenexception is raised. See - Support for Python 3.3 will be removed in Pyramid 1.8. Scaffolding Enhancements¶ - A complete overhaul of the alchemyscaffold to show more modern best practices with regards to SQLAlchemy session management, as well as a more modular approach to configuration, separating routes into a separate module to illustrate uses of pyramid.config.Configurator.include(). See Documentation Enhancements¶ A massive overhaul of the packaging and tools used in the documentation was completed in. A summary follows: - All docs now recommend using pipinstead of easy_install. - The installation docs now expect the user to be using Python 3.4 or greater with access to the python3 -m venvtool to create virtual environments. - Tutorials now use py.testand pytest-covinstead of noseand coverage. - Further updates to the scaffolds as well as tutorials and their src files. Along with the overhaul of the alchemy scaffold came a total overhaul of the SQLAlchemy + URL dispatch wiki tutorial tutorial to introduce more modern features into the usage of SQLAlchemy with Pyramid and provide a better starting point for new projects. See for more. Highlights were: - New SQLAlchemy session management without any global DBSession. Replaced by a per-request request.dbsessionproperty. - A new authentication chapter demonstrating how to get simple authentication bootstrapped quickly in an application. - Authorization was overhauled to show the use of per-route context factories which demonstrate object-level authorization on top of simple group-level authorization. Did you want to restrict page edits to only the owner but couldn't figure it out before? Here you go! - The users and groups are stored in the database now instead of within tutorial-specific global variables. - User passwords are stored using bcrypt.
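To illustrate the require_csrf view option listed under Feature Additions above, here is a minimal sketch; the route name, renderer and view body are illustrative rather than taken from the release notes:
from pyramid.view import view_config

@view_config(route_name='edit_page', renderer='json', require_csrf=True)
def edit_page(request):
    # Requests that fail the CSRF check are rejected with
    # pyramid.exceptions.BadCSRFToken before the view body runs.
    return {'csrf_token': request.session.get_csrf_token()}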
https://pyramid.readthedocs.io/en/latest/whatsnew-1.7.html
2018-09-18T16:19:17
CC-MAIN-2018-39
1537267155561.35
[]
pyramid.readthedocs.io
A new screen displays. > < * ^ | & ? \ / For example, if you are creating an expression for ID numbers, type a sample ID number. This data is used for reference purposes only and will not appear elsewhere in the product. None Specific characters Suffix Single-character separator.
http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/using-data-loss-prev/osce-osce_digital_as/dac_expressions/dac_expressions_cust/dac_expressions_cust1.aspx
2018-09-18T15:09:12
CC-MAIN-2018-39
1537267155561.35
[]
docs.trendmicro.com
Use this option if you have a properly-formatted .dat file containing the expressions. You can generate the file by exporting the expressions from either the server you are currently accessing or from another server. The .dat expression files generated by this Data Loss Prevention version are not compatible with previous versions. A message appears, informing you if the import was successful. If an expression to be imported already exists, it will be skipped.
http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/using-data-loss-prev/osce-osce_digital_as/dac_expressions/dac_expressions_cust/dac_expressions_cust12.aspx
2018-09-18T15:10:32
CC-MAIN-2018-39
1537267155561.35
[]
docs.trendmicro.com
Turn on notifications in Project Web App Project. Get email reminders about your work in Project Web App To turn on notifications In Project Web App, choose Settings > PWA Settings. Under Operational Policies, choose Additional Server Settings. Under Notification Email Settings (at the bottom of the page), select the Turn on notifications check box, and then choose Save.
https://docs.microsoft.com/en-us/ProjectOnline/turn-on-notifications-in-project-web-app?redirectSourcePath=%252fcs-cz%252farticle%252fzapnut%2525C3%2525AD-ozn%2525C3%2525A1men%2525C3%2525AD-v-project-web-appu-f5ed1080-1cc6-4ab7-b00e-25cbfbea03f5
2018-09-18T16:00:39
CC-MAIN-2018-39
1537267155561.35
[]
docs.microsoft.com
Kaizala settings The Kaizala Settings page lets you map your Kaizala registered phone numbers to the Office 365 account, for which you would like to manage Groups, Actions, Connectors, View reports and do more. Add phone number To add a phone number in the Kaizala management portal: On the Kaizala management portal, select Settings. Select Phone number. Enter the phone number and then choose Generate PIN. Once you get the PIN on your mobile phone, return to this screen. Enter the PIN and finally select Verify PIN.
https://docs.microsoft.com/en-us/office365/kaizala/settings?redirectSourcePath=%252fen-ie%252farticle%252fkaizala-settings-8a223b8e-995e-4788-8935-05100486d765
2018-09-18T16:00:49
CC-MAIN-2018-39
1537267155561.35
[]
docs.microsoft.com
Pega for PCF This documentation describes the Pega® Platform Service Broker for Pivotal Cloud Foundry (PCF). The Pega Platform Service Broker for PCF enables developers to deploy Pega Platform clusters on PCF. Overview Pega customers can run the Pega® Platform and applications within the PCF environment. With the release of Pega Platform 7.3, Pega customers are provided greater choice on their path to the cloud. This choice includes creating a private cloud powered by PCF and continuing to leverage the value of Pega technology. Learn about the Pega® Platform here and register here to access Pega documentation. Key Features Pega for PCF includes the following key features: - Configuration interface for the Pega Service Broker - Pega system setting and Java argument configuration - Pega Platform cluster deployment - Pega Platform database connection from a PCF environment - Pega CRM applications on PCF: Pega Sales Automation, Pega Customer Service, and Pega Marketing - Load-balanced clusters - Leveraged external file sources for file listeners and Business Intelligence Exchange (BIX) Product Snapshot The following table provides version and version-support information about Pega for PCF. Feedback Please provide any bugs, feature requests, or questions to the Pivotal Cloud Foundry Feedback list or send an email to [email protected]. License Pega for PCF is available at no additional charge and with no additional license requirements to current Pega Platform customers.
https://docs.pivotal.io/partners/pega/index.html
2018-09-18T15:17:13
CC-MAIN-2018-39
1537267155561.35
[]
docs.pivotal.io
Extensibility in Visual Studio Note This article applies to Visual Studio 2015. If you're looking for Visual Studio 2017 documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2017. Download it here. The latest version of this topic can be found at Extensibility in Visual Studio. Visual Studio gives you a lot of extensibility options. You can create your own SDKs, use the Windows, Windows Phone, and Azure SDKs (which are installed as part of Visual Studio), and extend Visual Studio itself. Extend Visual Studio You can use the Visual Studio SDK to extend just about every part of Visual Studio: commands, menus, windows, editors, and projects. To find out more, see Visual Studio SDK. Create Your Own SDKs Find out how to create, package, and deploy your own platform and extension SDKs: Creating a Software Development Kit. Roslyn Extensibility Find out how to use Roslyn for extensibility: .NET Compiler Platform ("Roslyn") Extensibility. Azure SDK Find out how to enable the Azure SDK: Enabling the Azure SDK.
https://docs.microsoft.com/en-us/visualstudio/extensibility/extensibility-in-visual-studio?view=vs-2015
2018-09-18T15:16:37
CC-MAIN-2018-39
1537267155561.35
[]
docs.microsoft.com
Server includes a rich set of highly-scalable, distributed set of algorithms such as RevoscaleR, revoscalepy, and microsoftML that can work on data sizes larger than the size of physical memory, and run on a wide variety of platforms in a distributed manner. Learn more about the collection of Microsoft's custom R packages and Python packages included with the product. Machine Learning Server bridges these Microsoft innovations and those operationalization paradigms and environments end up investing much time and effort towards this area. Pain points often resulting in inflated costs and delays include: the translation time for models, iterations to keep them valid and current, regulatory approval, managing permissions through operationalization. Machine Learning Server offers best-in-class operationalization -- from the time a machine learning model is completed, it takes just a few clicks to generate web services APIs. These web services are hosted on a server grid on-premises or in the cloud and can be integrated with line-of-business applications.. Deep ecosystem engagements to deliver customer success with optimal total cost of ownership Individuals embarking on the journey of making their applications intelligent or simply wanting to learn the new world of AI and machine learning, need the right resources to help them get started. In addition to this documentation, Microsoft provides several learning resources and has engaged several training partners to help you ramp up and become productive quickly. Key features of.
https://docs.microsoft.com/es-es/machine-learning-server/what-is-machine-learning-server
2018-08-14T11:14:43
CC-MAIN-2018-34
1534221209021.21
[]
docs.microsoft.com
Workflows for SLA SLA typically uses workflows to send notifications. You can create and edit workflows with the Workflow Editor. The default workflow that is available with the Service level management plugin is Default SLA Workflow. The Default SLA Workflow creates the events that send out notifications. For example, it creates an event to send a notification to the user assigned to a task, such as an incident, when the task SLA reaches 50% of its allotted time. Related TopicsWorkflow editor
https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/service-level-management/concept/c_WorkflowsForSLA.html
2018-08-14T10:40:43
CC-MAIN-2018-34
1534221209021.21
[]
docs.servicenow.com
You can optimize the virtual memory on the Windows Server computers on which your View Connection Server instances are installed by changing the system page-file settings. About this task When Windows Server is installed, Windows calculates an initial and maximum page-file size based on the physical memory installed on the computer. These default settings remain fixed even after you restart the computer. If the Windows Server computer is a virtual machine, you can change the memory size through vCenter Server. However, if Windows uses the default setting, the system page-file size does not adjust to the new memory size. Procedure - On the Windows Server computer on which View Connection Server is installed, navigate to the Virtual Memory dialog box. By default, Custom size is selected. An initial and maximum page-file size appear. - Click System managed size. Results Windows continually recalculates the system page-file size based on current memory use and available memory.
https://docs.vmware.com/en/VMware-Horizon-7/7.2/com.vmware.horizon-view.installation.doc/GUID-DF78870D-6EF1-4ED1-9142-9AFD87AFA71A.html
2018-08-14T10:49:52
CC-MAIN-2018-34
1534221209021.21
[]
docs.vmware.com
Below is a “sanitized” version of the Install Manager change log. This information is posted as part of a conscious effort to be more transparent in the development process. Not all information relating to a particular build is presented on this page - some information is still considered private and is therefore not included. This is the channel where the “production ready” build is distributed to the general public. This is the channel where the builds that are not considered “production ready” yet are provided for testing by a select group of individuals that represent the user-base and serve as the “front line” or the “canary in a coal mine” for a time before the build is promoted. This channel typically provides a build that is in the BETA phase of development, but technically can provide a build in the ALPHA phase.
http://docs.daz3d.com/doku.php/public/software/install_manager/change_log
2018-08-14T10:32:20
CC-MAIN-2018-34
1534221209021.21
[]
docs.daz3d.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Adds an inbound (ingress) rule to an Amazon Redshift security group. Depending on whether the application accessing your cluster is running on the Internet or an Amazon EC2 instance, you can authorize inbound access to either a Classless Interdomain Routing (CIDR)/Internet Protocol (IP) range or to an Amazon EC2 security group. You can add as many as 20 ingress rules to an Amazon Redshift security group. If you authorize access to an Amazon EC2 security group, specify EC2SecurityGroupName and EC2SecurityGroupOwnerId. The Amazon EC2 security group and Amazon Redshift cluster must be in the same AWS region. If you authorize access to a CIDR/IP address range, specify CIDRIP. For an overview of CIDR blocks, see the Wikipedia article on Classless Inter-Domain Routing. You must also associate the security group with a cluster so that clients running on these IP addresses or the EC2 instance are authorized to connect to the cluster. For information about managing security groups, go to Working with Security Groups in the Amazon Redshift Cluster Management Guide. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to AuthorizeClusterSecurityGroupIngressAsync. Namespace: Amazon.Redshift Assembly: AWSSDK.Redshift.dll Version: 3.x.y.z Container for the necessary parameters to execute the AuthorizeClusterSecurityGroupIngress service method. This example authorizes access to a named Amazon EC2 security group. var response = client.AuthorizeClusterSecurityGroupIngress(new AuthorizeClusterSecurityGroupIngressRequest { ClusterSecurityGroupName = "mysecuritygroup", EC2SecurityGroupName = "myec2securitygroup", EC2SecurityGroupOwnerId = "123445677890" }); .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
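For comparison, a sketch of authorizing a CIDR/IP range instead of an EC2 security group, using the CIDRIP parameter described above (the CIDR block is only an example):
var response = client.AuthorizeClusterSecurityGroupIngress(new AuthorizeClusterSecurityGroupIngressRequest
{
    ClusterSecurityGroupName = "mysecuritygroup",
    // Clients in this range may connect once the group is associated with a cluster.
    CIDRIP = "192.0.2.0/24"
});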
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Redshift/MIRedshiftAuthorizeClusterSecurityGroupIngressAuthorizeClusterSecurityGroupIngressRequest.html
2018-08-14T11:10:07
CC-MAIN-2018-34
1534221209021.21
[]
docs.aws.amazon.com
Advanced Setup Usage Logging Usage stats for the longer running commands (build and resync) can be logged to by providing a write key, project id and event collection name when running the setup command. No stats will be logged if these are not provided. AnyBar Notifications To get a status notification for the longer running commands (watch and resync) on OSX you can install AnyBar and provide a port number to use for it during the setup command. Ignoring Files/Directories when Syncing Files/directories can be excluded from being synced by the watch, resync and fetch commands. This is done by adding the files/directories to ignore to a .cp-remote-ignore file in the project root. This uses the standard rsync excludes-from format.
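For example, a .cp-remote-ignore file in the project root might look like the following; the entries are illustrative and use the standard rsync exclude patterns mentioned above:
.git/
node_modules/
vendor/
*.log
.idea/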
https://docs.continuouspipe.io/remote-development/advanced-setup/
2018-08-14T10:28:54
CC-MAIN-2018-34
1534221209021.21
[]
docs.continuouspipe.io
Tax Quick Reference Configuration Options Some tax settings have a choice of options that determines the way the tax is calculated and presented to the customer. To learn more, see: International Tax Configurations. can set different priorities, and also select the Calculate off subtotal only checkbox. This produces correctly calculated tax amounts that appear as separate line items.
https://docs.magento.com/m1/ee/user_guide/tax/quick-reference.html
2018-08-14T10:39:27
CC-MAIN-2018-34
1534221209021.21
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.magento.com
Removing). - In the Timeline view, select a key exposure to remove. - Do one of the following: - In the Timeline toolbar, click the Remove Key Exposure button.button. - In the Timeline menu, select Exposure > Remove Key Exposure. - Right-click and select Exposure > Remove Key Exposure. The key exposure is removed and replaced by the preceding exposure.
https://docs.toonboom.com/help/harmony-15/advanced/timing/remove-key-exposure.html
2018-08-14T11:31:37
CC-MAIN-2018-34
1534221209021.21
[array(['../Resources/Images/HAR/Stage/Cut-out/HAR12/example_rem_key_exp.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
You can use Site Recovery Manager to protect virtual machines on which vSphere High Availability (HA) is enabled. HA protects virtual machines from ESXi host failures by restarting virtual machines from hosts that fail on new hosts within the same site. Site Recovery Manager protects virtual machines against full site failures by restarting the virtual machines at the recovery site. The key difference between HA and Site Recovery Manager is that HA operates on individual virtual machines and restarts the virtual machines automatically. Site Recovery Manager operates at the recovery plan level and requires a user to initiate a recovery manually. To transfer the HA settings for a virtual machine onto the recovery site, you must set the HA settings on the placeholder virtual machine before performing recovery, at any time after you have configured the protection of the virtual machine. You can replicate HA virtual machines by using array-based replication or vSphere Replication. If HA restarts a protected virtual on another host on the protected site, vSphere Replication will perform a full sync after the virtual machine restarts. Site Recovery Manager does not require HA as a prerequisite for protecting virtual machines. Similarly, HA does not require Site Recovery Manager.
https://docs.vmware.com/en/Site-Recovery-Manager/6.0/com.vmware.srm.admin.doc/GUID-08058554-98B8-499C-AEE4-F1AB2BE0C605.html
2018-08-14T10:52:21
CC-MAIN-2018-34
1534221209021.21
[]
docs.vmware.com
User Experience: SputnikNet Express Splash and Welcome Page Themes How to customize the appearance of your SputnikNet Express splash and welcome pages using your brand combined with location-aware page themes from Google, Yahoo, MSN, and Sputnik. Log into your SputnikNet Express Account.Log into your SputnikNet Express Account. Captive portal settings and venue graphics.Captive portal settings and venue graphics. Be sure the following fields are filled out: - "Update" for any changes to take effect. Upload graphics to splash and welcome pages.Upload graphics to splash and welcome pages. Update your splash and welcome pages.Update your splash and welcome pages. Splash page with your brand.Splash page with your brand. Welcome page with your brand - Google theme.Welcome page with your brand - Google theme. Since the Google theme was selected, after users connect to the internet they see your brand with the Google page theme. The Google page theme includes a map of your location and location-aware links for: - Weather - Movies - Local search SputnikNet Express welcome pages show advertising from our sponsors. If you want to control and monetize advertising on your site, see "Advertising with SputnikNet".
http://docs.sputnik.com/m/express/l/4055-user-experience-sputniknet-express-splash-and-welcome-page-themes
2018-08-14T11:15:44
CC-MAIN-2018-34
1534221209021.21
[]
docs.sputnik.com
How do you manage this project? We've had questions about this in the past, as a result we wanted to document it as it is a very complex project with a unique packaging structure. This answers how university developers are able to manage this project at scale internally while contributing 100% of the code out into the open, while still doing it securely. Branch management We use multiple branches with 1 repo because our project is public and we mirror the repo internally and sync against that. This allows the public version to run ahead of what PSU uses while also ensuring greater security as to the validity of the git repo source (were github to somehow get hacked). Testing builds We use travis CI (seen here ) to automatically test builds per commit. We generally work in 1 branch unless exploring major functionality changes. We also work on a 'fork' and pull-request mentality for junior devs to be able to participate but not be able to touch the main branch. Development workflow For dev / stage / prod; we have Vagrant for pure dev and have no merged development server (it's always distributed). As we have build scripts that setup our system and pull from the latest code repo, we can ensure with a reasonable sense of accuracy that: - If travis said the commit was good, it will build - If local vagrant instance builds, it will build - If staging accepts the new code and doesn't get angry, it will work Once we've got these assurances we'll pull a random site down into our staging instance (mind you we have 250+ sites running very similar installation profiles so if it fails in 1 place it isn't going to work else where, if it works one place we're again got a high degree of confidence it just works). We are working on increasing Behat and Simpletest test coverage for additional code quality assurance. We also use the following pre-commit script () in order ensure that no development statements leak into the project and that all PHP code added to the repo has valid syntax (minor but incredibly useful in reducing gotchas). Jenkins We use Jenkins to orchestrate deployments against the 8 or so ELMSLN instances at PSU as of this writing; including staging which again ensures it's going to play correctly with a reasonably high degree of confidence. Managing drupal contrib As far as contrib, we patch and include them in the repo directly, then use a combination of Features and Drush recipes in order to deploy and play the result. What that might look like is a basic bash script (I hate provisioner formats, just another thing to learn for no real gain unless you start hitting replication and scale of replication but to each their own) that's something like upgrade.sh: - Backup the code, back up the database - Forcibly pull code in from repo of the production branch (for security but make sure .gitignores are flawless) - drush rr -- rebuild registries (drupal.org/project/registry_rebuild) - drush updb - run drush recipe that plays the changes in question (recipe files are json,xml or yaml which can also live in version control). - drush cc all - drush hss node -- seeds caches (see) All these scripts are packaged with the system if you check out the project repo (actually all the projects above are too) but the two in question that do the above are: - -- tee up the upgrade of the code base - - loop through and play DRUP recipes (drush recipe upgrade) which is a special kind of recipe which uses timestamps in the name of the file to know which upgrades to play in the right order. Here are some examples:
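As a rough, generic sketch of the upgrade sequence described above (this is not the project's actual upgrade.sh; the backup path, branch name and recipe step are placeholders):
#!/bin/bash
set -e
drush sql-dump > /tmp/backup.sql                    # back up the database first
git fetch origin && git reset --hard origin/master  # forcibly pull the production branch
drush rr                                            # rebuild registries
drush updb -y                                       # apply pending database updates
# ...play the DRUP recipe(s) shipped with this release here...
drush cc all                                        # clear all caches
drush hss node                                      # seed caches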
http://elmsln.readthedocs.io/en/latest/development/General-Infrastructure/
2018-08-14T11:14:21
CC-MAIN-2018-34
1534221209021.21
[]
elmsln.readthedocs.io
Revision history of "JController::setMessageMessage/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JController::setMessage== ===Description=== Sets the internal message that is passed with a redirect. {{Description:JController::setMes..." (and the only contributor was "Doxiki2"))
https://docs.joomla.org/index.php?title=JController::setMessage/1.6&action=history
2016-04-29T02:59:54
CC-MAIN-2016-18
1461860110356.23
[]
docs.joomla.org
Changes related to "Chunk:Adding a link to a CSS file to the document" ← Chunk:Adding a link to a CSS file to the document This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&days=1&from=&target=Chunk%3AAdding_a_link_to_a_CSS_file_to_the_document
2016-04-29T03:36:23
CC-MAIN-2016-18
1461860110356.23
[]
docs.joomla.org
Menus Menu Item User Password Reset From Joomla! Documentation Revision as of 03:12, 8 June 2011 by Topazgb (Talk | contribs) (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) Default Reset Layout Allows the user to reset their password, as shown below. This Layout has no unique Parameters. Retrieved from ‘’
https://docs.joomla.org/index.php?title=Help17:Menus_Menu_Item_User_Password_Reset&oldid=59409
2016-04-29T02:47:52
CC-MAIN-2016-18
1461860110356.23
[]
docs.joomla.org
Cross-media professionals from across Europe are travelling to Potsdam this July for Pixel Lab 2012. Of the 35 participants confirmed, 18 are producers who will be bringing a range of exciting projects to the lab, combining film, interactive, live event, mobile, gaming, TV, publishing or online. Now in its third year, The Pixel Lab is Power to the Pixel’s unique cross-media workshop, a residential training event at which the participants, from 16 different European countries, come together to develop and exchange skills and ideas to create, produce and finance cross-media properties. Included on the list of producers this year is i-Docs 2012 delegate and speaker Andrew Pawlby from The London Quest Company with the project ‘A Tale of Two Johns’. Each project producer will be partnered with a participant not bringing a project; these participants include screenwriters, commissioning editors, financiers, story architects, and cross-media and interactive producers. Within the Lab, producers will also receive expert guidance and invaluable support from a team of cross-media tutors — producers, writers, funders and legal experts who are all international leaders in their field, which this year includes executive director of ARTE France Cinéma (FR), Sean Coleman and head of digital content & strategy, National Film Board of Canada (CAN), Adam Sigel. Commenting on the Lab, Liz Rosenthal, CEO & founder of PttP, said: "We’re thrilled to see the growing number, diversity and quality of projects and participants this year. We look forward to nurturing these innovative professionals towards the building of sustainable cross-media businesses and IP with higher awareness for 21st century audiences. Thanks to the longstanding generous support of the MEDIA Programme, Medienboard Berlin-Brandenburg, Creative Skillset, TorinoFilmLab and ARTE, Power to the Pixel is able to gather the world’s leading experts to help develop the exchange of skills and ideas, advance competitiveness and business acumen and create new networking opportunities for participants through collaborative project-driven work". The collaborative nature of the event means that the benefits for participants are potentially limitless; Savina Neirotti, director of TorinoFilmLab, said: "Every time I take part in a Power to the Pixel’s Lab, my head starts spinning, a number of ideas come to me, and the need to explore them further, test them, and make them real. This is what The Pixel Lab is about, transforming brilliant ideas into concrete projects." This residential week in July is the beginning of a four-month process. From July to September, producers will continue their work with the Power to the Pixel team, headed by Liz Rosenthal. Dan Simmons, head of film at Creative Skillset, said: "It is crucial that UK film businesses continue to be pioneering and entrepreneurial to maintain our competitive edge in this digital age. The Pixel Lab 2012 will give participants the skills to respond to new patterns of consumer behaviour, capture new audiences and enhance their business opportunities. We are delighted to continue our support for a scheme that unites talent from across the creative industries and brings innovative cross-media projects into the marketplace." The Lab seeks to increase competitiveness in the international marketplace, advance business acumen and create new collaborative and networking opportunities for participants through focused, project-driven work. Once again, it will receive generous backing from European funders who share these aims.
For the third year running, the MEDIA Programme of the European Union, TorinoFilmLab, and Creative Skillset’s Film Skills Fund are supporting The Pixel Lab. Medienboard Berlin-Brandenburg are supporting the event for the second time, and ARTE France has this year awarded bursaries to three French participants: Camille Duvelleroy, Chloé Jarry and Guillaume Podrovnik. To find out more, click here.
http://i-docs.org/2012/06/07/power-to-the-pixels-cross-media-powerhouse-comes-to-potsdam/
2016-04-29T01:51:01
CC-MAIN-2016-18
1461860110356.23
[]
i-docs.org
Information for "Needs portal styling" Basic information Display titleCategory:Needs portal styling Default sort keyNeeds portal styling Page length (in bytes)429 Page ID2523723:24, 27 August 2012 Latest editorTom Hutchison (Talk | contribs) Date of latest edit23:24, 27 August 2012 Total number of edits1 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (2)Templates used on this page: Template:Ambox (view source) (protected)Template:Cat info (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Category:Needs_portal_styling&action=info
2016-04-29T03:15:53
CC-MAIN-2016-18
1461860110356.23
[]
docs.joomla.org
Creating A VPS Testing Server From Joomla! Documentation Revision as of 21:33, 7 August 2012 by Jasondavis. On the next page, decide how much memory to dedicate to the Virtual Machine. Try to give CentOS at least 1 GB of RAM. Note: You can use the recommended 512 MB if you want; however, the CentOS installation process will be different.
https://docs.joomla.org/index.php?title=Creating_A_VPS_Testing_Server&oldid=70707
2016-04-29T03:37:21
CC-MAIN-2016-18
1461860110356.23
[]
docs.joomla.org
Information for "Extensions Module Manager Admin Multilang" Basic information Display titleHelp35:Extensions Module Manager Admin Multilang Default sort keyExtensions Module Manager Admin Multilang Page length (in bytes)277 Page ID25517:37, 15 September 2012 Latest editorJoomlaWikiBot (Talk | contribs) Date of latest edit11:27, 30 April 2014 Total number of edits6 Total number of distinct authors4 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (5)Templates used on this page: Template:Cathelp (view source) Template:Rarr (view source) Chunk30:Help screen module manager edit toolbar (view source) Chunk30:ModuleManager-How-to-acces-module (view source) Chunk30:ModuleManager-How-to-access-module (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Help32:Extensions_Module_Manager_Admin_Multilang&action=info
2016-04-29T03:35:38
CC-MAIN-2016-18
1461860110356.23
[]
docs.joomla.org
DelegatePerTargetObjectIntroductionInterceptor(java.lang.Class defaultImplType, java.lang.Class interfaceType): constructor. public java.lang.Object invoke(org.aopalliance.intercept.MethodInvocation mi) throws java.lang.Throwable: specified by invoke in interface org.aopalliance.intercept.MethodInterceptor; throws java.lang.Throwable. protected java.lang.Object doProceed(org.aopalliance.intercept.MethodInvocation mi) throws java.lang.Throwable: proceeds as a normal MethodInterceptor. Subclasses can override this method to intercept method invocations on the target object, which is useful when an introduction needs to monitor the object that it is introduced into. This method is never called for MethodInvocations on the introduced interfaces. Throws java.lang.Throwable.
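A rough usage sketch follows; TimestampProvider and TimestampProviderImpl are illustrative types, not part of Spring, and targetBean stands for whatever object you are advising:
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.aop.support.DefaultIntroductionAdvisor;
import org.springframework.aop.support.DelegatePerTargetObjectIntroductionInterceptor;

// Introduce TimestampProvider onto an advised object, with a fresh delegate per target.
DelegatePerTargetObjectIntroductionInterceptor introduction =
        new DelegatePerTargetObjectIntroductionInterceptor(TimestampProviderImpl.class, TimestampProvider.class);

ProxyFactory factory = new ProxyFactory(targetBean);
factory.addAdvisor(new DefaultIntroductionAdvisor(introduction, TimestampProvider.class));

TimestampProvider proxy = (TimestampProvider) factory.getProxy();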
http://docs.spring.io/spring-framework/docs/3.2.0.M2/api/org/springframework/aop/support/DelegatePerTargetObjectIntroductionInterceptor.html
2016-04-29T02:01:25
CC-MAIN-2016-18
1461860110356.23
[]
docs.spring.io
Difference between revisions of "Production Working Groups/Permanent Working Groups" From Joomla! Documentation Production Working Groups Revision as of 17:26, 16 November 2013 Contents Bug Squad For further information see the Bug Squad page and Bug Squad Portal. - Coordinators: Mark Dexter and Nick Savov - PLT Contact: Nick Savov: Tom Hutchison - PLT Translations Working Group For further information see Translations Working Group. - Coordinator: Jean-Marie Simonet - PLT Contact: Javier Gomez:
https://docs.joomla.org/index.php?title=Production_Working_Groups/Permanent_Working_Groups&oldid=105036&diff=prev
2016-04-29T03:13:02
CC-MAIN-2016-18
1461860110356.23
[]
docs.joomla.org
Installer From Joomla! Documentation Revision as of 09:33, 3 September 2010 by 219jondn (Talk | contribs) (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) Where does this belong? Can we get this article assigned to a specific spot in the documentation? This could be of great help if it were put into the correct categories for use as a resource. 219jondn 14:32, 3 September 2010 (UTC) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Talk:Installer&oldid=30452
2016-04-29T03:41:11
CC-MAIN-2016-18
1461860110356.23
[]
docs.joomla.org
Configuring the web crawler Introduction This page gives a guide to configuring the Funnelback web crawler. The web crawler is used to gather pages for indexing, by following hypertext links and downloading documents. The main collection creation page for a web collection describes the standard parameters used to configure the web crawler. These include giving it a start point, how long to crawl for, what domain to stay within and which areas to avoid. For most purposes the default settings for the other crawler parameters will give good performance. This page is for administrators who have particular performance or crawling requirements. A full list of all crawler related parameters (plus default values) is given on the configuration page. All of the configuration parameters mentioned can be modified by editing the collection.cfg file for a collection in the Administration interface. Speeding up web crawling The default behaviour of the web crawler is to be as polite as possible. This is enforced by requiring that only one crawler thread should be accessing an individual web server at any one time. This prevents multiple crawler threads from overloading a web server. It is implemented by mapping individual servers to specific crawler threads. If you have control over the web server(s) being crawled you may decide to relax this constraint, particularly if you know they can handle the load. This can be accomplished by using a site_profiles.cfg file, where you can specify how many parallel requests to use for particular web servers. Two general parameters which can also be tuned for speed are: crawler.num_crawlers=20 crawler.request_delay=250 Increasing the number of crawlers (threads) will increase throughput, as will decreasing the delay between requests. The latter is specified in milliseconds, with a default delay of one quarter of a second. We do not recommend decreasing this below 100ms. Warning: These parameters should be tuned with care to avoid overloading web servers and/or saturating your network link. If crawling a single large site we recommend starting with a small number of threads (e.g. 4) and working up until acceptable performance is reached. Similarly for decreasing the request delay i.e. work down until the overall crawl time has been satisfactorily reduced. Incremental crawling An incremental crawl updates an existing set of downloaded pages instead of starting the crawl from scratch. The crawler achieves this by comparing the document length provided by the Web server (in response to a HTTP head request) with that obtained in the previous crawl. This can reduce network traffic and storage requirements and speed up collection update times. The ratio of incremental to full updates can be controlled by the following parameter: schedule.incremental_crawl_ratio The number of scheduled incremental crawls that are performed between each full crawl (e.g. a value of '10' results in an update schedule consisting of every ten incremental crawls being followed by a full crawl). This parameter is only referenced by the update system when no explicit update options are provided by the administrator. An additional configuration parameter used in incremental crawling is the crawler.secondary_store_root setting. The webcrawler will check the secondary store specified in this parameter and not download content from the web which hasn't changed. 
When a web collection is created the Funnelback administration interface will insert the correct location for this parameter, and it will not normally need to be edited manually. Revisit policies Incremental crawls utilise a revisit policy to further refine the behaviour of an incremental crawl. The revisit policy is used by the crawler to make a decision of whether or not to revisit a site during an incremental crawl based on how frequently the site content has been found to change. Funnelback supports two types of revisit policies: - Always revisit policy: This is the default behaviour. Funnelback will always revisit a URL and check the HTTP headers when performing an incremental crawl. - Simple revisit policy: Funnelback tracks how frequently a page changes and will make a decision based on the some configuration settings on whether or not to skip the URL for the current crawl. See: Web crawler revisit policies Crawling critical sites/pages If you have a requirement to keep your index of a particular web site (or sites) as up-to-date as possible, you could create a specific collection for this area. For example, if you have a news site which is regularly updated you could specify that the news collection be crawled at frequent intervals. Similarly, you might have a set of "core" critical pages which must be indexed when new content becomes available. You could use some of the suggestions in this document on speeding up crawling and limiting crawl size to ensure that the update times and cycles for these critical collections meet your requirements. You could then create a separate collection for the rest of your content which may not change as often or where the update requirements are not as stringent. This larger collection could be updated over a longer time period. By using a meta collection you can then combine these collections so that users can search all available information. Alternatively, you could use an instant update. See "updating collections" for more details. Adding additional file types By default the crawler will store and index html, PDF, Microsoft Office, RTF and text documents. Funnelback can be configured to store and index additional file types. See: configure Tika to index additional supported file types Specify preferred server names For some collections you may decide you wish to control what server name the crawler uses when storing content. For example, a site may have been renamed from to, but because so many pages still link to the old name the crawler may store the content under the old name (unless HTTP or HTML redirects have been set up). A simple text file can be used to specify which name to use e.g.. This can also be used to control whether the crawler treats an entire site as a duplicate of another (based on the content of their home page). Details on how to set this up are given in the documentation for the crawler.server_alias_file parameter. Memory requirements The process which runs the webcrawler will take note of the gather.max_heap_size setting in the collection's collection.cfg file. This will specify the maximum size of the heap for the crawler process, in MB. For example the default is set at: gather.max_heap_size=640 This should be suitable for most crawls which crawl less than 250k URLs. For crawls over this size you should expect to increase the heap size up to at least 2000MB, subject to the amount of RAM available and what other large jobs might be running on the machine. 
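Pulling the figures above together, an illustrative collection.cfg excerpt for speeding up a large crawl of servers you control might look like the following: four crawler threads to start with, the minimum recommended 100 ms request delay, and a 2000 MB heap for crawls beyond roughly 250k URLs.
crawler.num_crawlers=4
crawler.request_delay=100
gather.max_heap_size=2000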
Crawling dynamic web sites In most cases Funnelback will crawl dynamically generated web sites by default. However, some sites (e.g. e-commerce, product catalogs etc.) may enforce the use of cookies and sessions IDs. These are normally used to track a human user as they browse through a site. By default the Funnelback webcrawler is configured to accept cookies by default, by having the following parameters set: crawler.accept_cookies=true crawler.packages.httplib=HTTPClient This turns on cookie storage in memory (and allows cookies to be sent back to the server), by using the appropriate HTTP library. Note that even if a site uses cookies it should still return valid content if a client (e.g. the crawler) does not make use of them. It is also possible to strip session IDs and other superfluous parameters from URLs during the crawl. This can help reduce the amount of duplicate or near-duplicate content brought back. This is configured using the following optional parameter (with an example expression): crawler.remove_parameters=regexp:&style(sheet)?=mediaRelease|&x=\d+ The example above will strip off style and stylesheet parameters, or x=21037536 type parameters (e.g. session IDs). It uses regular expressions (Perl 5 syntax) and the regexp: flag is required at the start of the expression. Note that this parameter is normally empty by default. Finally, the last parameter which you may wish to modify when crawling dynamic web sites is: crawler.max_files_per_area=10000 This parameter is used to specify the maximum number of files the crawler should download from a particular area on a web site. You may need to increase this if a lot of your content is served from a single point e.g. site.com/index.asp?page_id=348927. The crawler will stop downloading after it reaches the limit for this area. In this case you would need to increase the limit to ensure all the content you require is downloaded. Crawling password protected websites Crawling sites protected by HTTP Basic authentication or Windows Integrated authentication (NTLM) is covered in a separate document on crawling password protected sites. Sending custom HTTP request header fields In some circumstances you may want to send custom HTTP request header fields in the requests that the web crawler makes when contacting a web site. For example, you might want to send specific cookie information to allow the crawler to "log in" to a web site that uses cookies to store login information. The following two parameters allow you to do this: - crawler.request_header: Optional additional header to be inserted in HTTP(S) requests made by the webcrawler. - crawler.request_header_url_prefix: Optional URL prefix to be applied when processing the crawler.request_headerparameter Form-based authentication Some websites require a login using a HTML form. If you need to crawl this type of content you can specify how to interact with the forms using a crawler.form_interaction_file. Once the forms have been processed the webcrawler can use the resulting cookie to authenticate its requests to the site. Note: Form-based authentication is different from HTTP basic authentication. Details on how to interact with this are described in a separate document on crawling password protected websites. Crawling with pre-defined cookies In some situations you may need to crawl a site using a pre-defined cookie. Further information on this configuration option is available from the cookies.txt page. 
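Pulling together the cookie and URL-parameter settings from this section, a collection.cfg for a cookie-heavy dynamic site might include entries like the following; the regular expression is the illustrative one from above and the raised per-area limit is an example, not a recommended default:

crawler.accept_cookies=true
crawler.packages.httplib=HTTPClient
crawler.remove_parameters=regexp:&style(sheet)?=mediaRelease|&x=\d+
crawler.max_files_per_area=20000

Raising crawler.max_files_per_area is only worthwhile if you have confirmed that legitimate content, rather than session-ID duplicates, is being cut off at the default limit of 10000.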
Crawling HTTPS websites This is covered in a separate document: Crawling HTTPS websites. Crawling Sharepoint websites - If your Sharepoint site is password protected you will need to use Windows Integrated Authentication when crawling - see details on this in the document on crawling password protected sites. - You may need to configure "alternate access mappings" in Sharepoint so that it uses a fully qualified hostname when serving content e.g. serving content using rather than. Please see your Sharepoint administration manual for details on how to configure these mappings. Limiting crawl size In some cases you may wish to limit the amount of data brought back by the crawler. The usual approach would be to specify a time limit for the crawl: crawler.overall_crawl_timeout=24 crawler.overall_crawl_units=hr The default timeout is set at 24 hours. If you have a requirement to crawl a site within a certain amount of time (as part of an overall update cycle) you can set this to the desired value. You should give the crawler enough time to download the most important content, which will normally be found early on in the crawl. You can also try speeding up the crawler to meet your time limit. Another parameter which can be used to limit crawl size is: crawler.max_files_stored This is the maximum number of files to store on disk (default is unlimited). Finally, you can specify the maximum link distance from the start point (default is unlimited): For example, if max_link_distance = 1, only crawl the links on start_url. This could be used to restrict the crawl to a specific list of URLs, which were generated by some other process e.g. as a pre_gather command. Warning: Turning the max_link_distance parameter on drops the crawler down to single-threaded operation. Redirects The crawler stores information about redirects in a file called redirects.txt in the collection's log directory. This records information on HTTP redirects, HTML meta-refresh directives, duplicates, canonical link directives etc. This information is then processed by the indexer and used in ranking e.g. ensuring that anchortext is associated with the correct redirect target etc. Crawler couldn't access seed page In some scenarios you may see the following message in a collection's main update log: Crawler couldn't access seed page. This means the web crawler couldn't access any of the specified start URLs. To see why this is the case you should check the individual crawler logs in the "offline" view, which should give details on why it was unable to process the URL(s). Changing configuration parameters during a running crawl The web crawler monitor options provide a number of settings that can be dynamically adjusted while a crawl is running.
https://docs.funnelback.com/collections/collection-types/web/webcrawler.html
2019-03-18T18:03:56
CC-MAIN-2019-13
1552912201521.60
[]
docs.funnelback.com
This tutorial implements a polyphonic sine wave synthesiser that responds to MIDI input. This makes use of the Synthesiser class and related classes. Level: Intermediate Platforms: Windows, macOS, Linux Classes: Synthesiser, SynthesiserVoice, SynthesiserSound, AudioSource, MidiMessageCollector keyboard that can be used to play a simple sine wave synthesiser. Using keys on the computer keyboard the on-screen keyboard can be controlled (using keys A, S, D, F and so on to control musical notes C, D, E, F and so on). This allows you to play the synthesiser polyphonically. This tutorial makes use of the JUCE Synthesiser class to implement a polyphonic synthesiser. This shows you all the basic elements needed to customise the synthesiser with your own sounds in your own applications. There are various classes needed to get this to work and in addition to our standard MainContentComponent class, these are: SynthAudioSource: This implements a custom AudioSource class called SynthAudioSource, which contains the Synthesiser class itself. This outputs all of the audio from the synthesiser. SineWaveVoice: This is a custom SynthesiserVoice class called SineWaveVoice. A voice class renders one of the voices of the synthesiser mixing it with the other sounding voices in a Synthesiser object. A single instance of a voice class renders one voice. SineWaveSound: This contains a custom SynthesiserSound class called SineWaveSound. A sound class is effectively a description of the sound that can be created as a voice. For example, this may contain the sample data for a sampler voice or the wavetable data for a wavetable synthesiser. Our MainContentComponent class contains the following data members. The synthAudioSource and keyboardComponent members are initialised in the MainContentComponent constructor. See Tutorial: Handling MIDI events for more information on the MidiKeyboardComponent class. In order that we can start playing the keyboard from the computer's keyboard we grab the keyboard focus just after the application starts. To do this we use a simple timer that fires after 400 ms: The application uses the AudioAppComponent to set up a simple audio application (see Tutorial: Build a white noise generator for the most basic application). The three required pure virtual functions simply call the corresponding functions in our custom AudioSource class: The SynthAudioSource class does a little more work: getNextAudioBlock()function we pull buffers of MIDI data from the MidiKeyboardState object. SynthesiserSound objects can be shared between Synthesiser objects if you wish. The SynthesiserSound class is a type of ReferenceCountedObject class therefore the lifetime of SynthesiserSound objects is handled automatically. YourSoundClass::Ptrvariable for this memory management to work. Our sound class is very simple, it doesn't even need to contain any data. It just needs to report whether this sound should play on particular MIDI channels and specific notes or note ranges on that channel. In our simple case, it just returns true for both the appliesToNote() and appliesToChannel() functions. As mentioned above, the sound class might be where you would store data that is needed to create the sound (such as a wavetable). The SineWaveVoice class is a bit more complex. It needs to maintain the state of one of the voices of the synthesiser. For our sine wave, we need these data members: See Tutorial: Build a sine wave synthesiser for information on the first three. 
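Pieced together from the description above, the voice's state might be declared along these lines. This is a sketch only; the first three member names are assumed to match the sine wave synthesiser tutorial, and the overrides discussed below (canPlaySound(), startNote(), stopNote(), renderNextBlock()) are omitted here.

class SineWaveVoice : public juce::SynthesiserVoice
{
public:
    // ... the member functions described in the following sections ...

private:
    double currentAngle = 0.0;  // current phase of the sine wave for this voice
    double angleDelta   = 0.0;  // phase increment per sample, derived from the MIDI note
    double level        = 0.0;  // amplitude, taken from the note-on velocity
    double tailOff      = 0.0;  // greater than zero while the voice fades out after note-off
};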
The tailOff member is used to give each voice a slightly softer release to its amplitude envelope. This gives each voice a slight fade out at the end rather than stopping abruptly. The SynthesiserVoice::canPlaySound() function must be overridden to return whether the voice can play a sound. We could just return true in this case but our example illustrates how to use dynamic_cast to check the type of the sound class being passed in. A voice is started by the owning synthesiser by calling our SynthesiserVoice::startNote() function, which we must override: Again, most of this should be familiar to you from Tutorial: Build a sine wave synthesiser. The tailOff value is set to zero at the start of each voice. We also use the velocity of the MIDI note-on event to control the level of the voice. The SynthesiserVoice::renderNextBlock() function must be overridden to generate the audio. currentSample value with the value already at index startSample. This is because the synthesiser will be iterating over all of the voices. It is the responsibility of each voice to mix its output with the current contents of the buffer. tailOff value will be greater than zero. You can see the synthesis algorithm is similar. tailOff value is small we determine that the voice has ended. We must call the SynthesiserVoice::clearCurrentNote() function at this point so that the voice is reset and available to be reused. startSample argument. The synthesiser is very likely to call the renderNextBlock() function mid-way through one of its output blocks. This is because the notes may start on any sample. These start times are based on the timestamps of the MIDI data received. A voice is stopped by the owning synthesiser calling our SynthesiserVoice::stopNote() function, which we must override: This may include velocity information from the MIDI note-off message, but in many cases we can ignore this. We may be being asked to stop the voice immediately in which case we call the SynthesiserVoice::clearCurrentNote() function straight away. Under normal circumstances the synthesiser will allow our voices to end naturally. In our case we have the simple tail-off envelope. We trigger our tail-off by setting our tailOff member to 1.0. Let's add functionality to allow an external MIDI source to control our synthesiser in addition to the on-screen keyboard. First add a MidiMessageCollector object as a member of the SynthAudioSource class. This provides somewhere that MIDI messages can be sent and that the SynthAudioSource class can use them: In order to process the timestamps of the MIDI data the MidiMessageCollector class needs to know the audio sample rate. Set this in the SynthAudioSource::prepareToPlay() function [10]: Then you can pull any MIDI messages for each block of audio using the MidiMessageCollector::removeNextBlockOfMessages() function [11]: We'll need access to this MidiMessageCollector object from outside the SynthAudioSource class, so add an accessor to the SynthAudioSource class like this: In our MainContentComponent class we'll add this MidiMessageCollector object as a MidiInputCallback object to our application's AudioDeviceManager object. To present a list of MIDI input devices to the user, we'll use some code from Tutorial: Handling MIDI events. Add some members to our MainContentComponent class: Then add the following code to the MainContentComponent constructor. 
Add the setMidiInput() function that is called in the code above:

Notice that we add the MidiMessageCollector object from our SynthAudioSource object as a MidiInputCallback object [12] for the specified MIDI input device. We also need to remove the previous MidiInputCallback object for the previously selected MIDI input device if the user changes the selected device using the combo-box [13].

We need to position this ComboBox object and adjust the position of the MidiKeyboardComponent object in our resized() function:

Run the application again and it should look something like this:

Of course, the devices listed will depend on your specific system configuration. The code for this modified version of the tutorial can be found in the SynthUsingMidiInputTutorial_02.h file of the demo project.

This tutorial has introduced the Synthesiser class. After reading this tutorial you should be able to:
https://docs.juce.com/master/tutorial_synth_using_midi_input.html
2019-03-18T18:15:51
CC-MAIN-2019-13
1552912201521.60
[]
docs.juce.com
AppBar Class

Definition

public : class AppBar : ContentControl, IAppBar, IAppBar2, IAppBar3, IAppBar4, IAppBarOverrides, IAppBarOverrides3

struct winrt::Windows::UI::Xaml::Controls::AppBar : ContentControl, IAppBar, IAppBar2, IAppBar3, IAppBar4, IAppBarOverrides, IAppBarOverrides3

public class AppBar : ContentControl, IAppBar, IAppBar2, IAppBar3, IAppBar4, IAppBarOverrides, IAppBarOverrides3

Public Class AppBar Inherits ContentControl Implements IAppBar, IAppBar2, IAppBar3, IAppBar4, IAppBarOverrides, IAppBarOverrides3

XAML usage:
<AppBar .../>
-or-
<AppBar> content </AppBar>

Inheritance: ContentControl

Windows 10 requirements: (the device-family and API-contract details, along with the page's sample markup showing an AutoSuggestBox hosted in a Grid inside a page's top AppBar, are garbled in this extract.)

Remarks

Important: You should use the AppBar only when you are upgrading a Universal Windows 8 app that uses the AppBar, and need to minimize changes. For new apps in Windows 10, we recommend using the CommandBar control instead.

An app bar is a UI element that's typically used to present commands and tools to the user, but can also be used for navigation. An app bar can appear at the top of the page, at the bottom of the page, or both. By default, it's shown in a minimal state. Its content is shown or hidden when the user presses the ellipsis [•••], or performs a right-click that is not otherwise handled by the app.

Here's an app bar in its minimal state.

Here's the app bar when it's open.

You can open and close the app bar programmatically by setting the IsOpen property. You can use the Opening, Opened, Closing, and Closed events to respond to the app bar being opened or closed.
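As a rough reconstruction rather than the page's original sample, a top app bar wired up to the open and close events might be declared like this; the element name and handler names are placeholders:

<Page.TopAppBar>
    <AppBar x:Name="topBar"
            Opened="TopBar_Opened"
            Closed="TopBar_Closed">
        <!-- bar content, for example a Grid hosting an AutoSuggestBox -->
    </AppBar>
</Page.TopAppBar>

In code-behind you could then toggle it with topBar.IsOpen = !topBar.IsOpen;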
https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.Controls.AppBar
2019-03-18T17:44:15
CC-MAIN-2019-13
1552912201521.60
[array(['windows.ui.xaml.controls/images/appbar_closed_10.png', 'A closed app bar control'], dtype=object) array(['windows.ui.xaml.controls/images/appbar_open_10.png', 'An open app bar control'], dtype=object) ]
docs.microsoft.com
A guide outlining the base requirements and recommendations for employees looking to contribute to OpenStack.

How much to take on depends on your own plan, but try to cover the range of project services that you're using or plan to use for your services. Getting involved in these projects means you can:

The more people you have working upstream, the better attention your feature will get. Providing more reviewers will definitely help get your implementation merged into the projects. Code review is a bottleneck for landing patches; the more good reviews there are, the faster code can land.

There are many components and projects related to running and developing OpenStack, all of which run on top of Linux. So a developer will need:

There are a number of technical events that are held where community, project, and cross-project planning and networking happen in person. Although this planning and networking does happen online outside these events, you should consider sending developers along to be involved. Some technical events include:

For more information on such events see:
https://docs.openstack.org/contributors/organizations/index.html
2019-03-18T17:44:14
CC-MAIN-2019-13
1552912201521.60
[]
docs.openstack.org
You can import meshes and animations from Lightwave in two different ways:

Unity currently imports:

Detailed documentation for this from Lightwave is not available online but comes in the PDF manual included with your download.

Note: From Lightwave version 11 onwards the FBX 2012 exporter is included:

FBX Filename: This file can contain multiple objects. Choose a filename and save to your \assets directory.
Anim Layer: TBC
Type: Choose Binary to reduce file size, or ASCII for a text-editable FBX.
FBX Version: Select 201200 from the drop-down to ensure version 2012.1.
Export: Select all the elements you wish to include - note that cameras and lights are not read into Unity.
Mesh type: Choose Cage Subdivision if *, otherwise choose Subdivided to *.
Bake Motion Envelopes: TBC
Start frame / End frame
Scale Scene: Set a scale for your scene appropriate to match Unity.

To read a Lightwave scene you must use the Applink package provided with your Lightwave installation (version 11 onwards only), e.g. on Windows: C:\Program Files (x86)\Unity\Editor\Standard Packages (for machines running a 64-bit version of Windows), or on Mac: Applications/Unity/Standard Packages.

Your folder structure should look like this: Unity_Project Assets LightWave_Content Images Objects Scenes

Unity will then convert the LightWave scene to an FBX. Any changes to the FBX (Lightwave scene assets) will only be stored in Unity, so this is a uni-directional pipeline, but Unity will remember any material assignments and properties applied to the FBX scene even if you update from LightWave.
https://docs.unity3d.com/Manual/HOWTO-importObjectLightwave.html
2017-02-19T18:43:51
CC-MAIN-2017-09
1487501170249.75
[]
docs.unity3d.com
Polr API Documentation API keys To authenticate a user to Polr, you will need to provide an API key along with each request to the Polr API, as a GET or POST parameter. (e.g ?key=API_KEY_HERE) Assigning an API key To assign an API key, log on from an administrator account, head over to the "Admin" tab, and scroll to the desired user. From there, you can open the API button dropdown to reset, create, or delete the user's API key. You will also be prompted to set a desired API quota. This is defined as requests per minute. You may allow unlimited requests by making the quota negative. Once the user receives an API key, they will be able to see an "API" tab in their user panel, which provides the information necessary to interact with the API. Alternative method: You can also assign a user an API key by editing their entry in the users database table, editing the api_key value to the desired API key, api_active to the correct value ( 1 for active, 0 for inactive), and api_quota to the desired API quota (see above). Actions Actions are passed as a segment in the URL. There are currently two actions implemented: shorten- shortens a URL lookup- looks up the destination of a shortened URL Actions take arguments, which are passed as GET or POST parameters. See API endpoints for more information on the actions. Response Type The Polr API will reply in plain_text or json. The response type can be set by providing the response_type argument to the request. If not provided, the response type will default to plain_text. Example json responses: { "action": "shorten", "result": "" } { "action":"lookup", "result": { "long_url": "https:\/\/google.com", "created_at": { "date":"2016-02-12 15:20:34.000000", "timezone_type":3, "timezone":"UTC" }, "clicks":"0" } } Example plain_text responses: API Endpoints All API calls will commence with the base URL, /api/v2/. /api/v2/action/shorten Arguments: url: the URL to shorten (e.g) is_secret(optional): whether the URL should be a secret URL or not. Defaults to false. (e.g trueor false) custom_ending(optional): a custom ending for the short URL. If left empty, no custom ending will be assigned. Response: A JSON or plain text representation of the shortened URL. Example: GET Response: { "action": "shorten", "result": "" } Remember that the url argument must be URL encoded. /api/v2/action/lookup The lookup action takes a single argument: url_ending. This is the URL to lookup. If it exists, the API will return with the destination of that URL. If it does not exist, the API will return with the status code 404 (Not Found). Arguments: url_ending: the link ending for the URL to look up. (e.g 5ga) url_key(optional): optional URL ending key for lookups against secret URLs Remember that the url argument must be URL encoded. Example: GET Response: { "action": "lookup", "result": "" } HTTP Error Codes The API will return an error code if your request was malformed or another error occured while processing your request. HTTP 400 Bad Request This status code is returned in the following circumstances: - By the shortenendpoint - In the event that the custom ending provided is already in use, a 400error code will be returned and the message custom ending already in usewill be returned as an error. - By any endpoint - Your request will return a 400if it is malformed or the contents of your arguments do not fit the required data type. HTTP 500 Internal Server Error - By any endpoint - The server has encountered an unhandled error. 
This is most likely due to a problem with your configuration or your server is unable to handle the request due to a bug. HTTP 401 Unauthorized - By any endpoint - You are unauthorized to make the transaction. This is most likely due to an API token mismatch, or your API token has not be set to active. - By the lookupendpoint - You have not provided the valid url_keyfor a secret URL lookup. HTTP 404 Not Found By the lookupendpoint - Returned in the circumstance that the short URL to look up was not found in the database. HTTP 403 Forbidden - By the shortenendpoint - Your request was understood, but you have exceeded your quota. Error Responses Example json error response: { "error": "custom ending already in use" } Example plain_text error response: custom ending already in use Testing the API You may test your integrations on with the credentials "demo-admin"/"demo-admin". Keep in mind the instance is only a demo and may be cleared at any time.
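As an illustration of the endpoints above, a small client in Python might look like this. It is a sketch, not part of the official documentation; the instance URL and API key are placeholders you would replace with your own.

import requests

API_ROOT = "https://your-polr-instance.example.com/api/v2"  # placeholder instance URL
API_KEY = "YOUR_API_KEY"                                    # placeholder key from your API tab

# Shorten a URL and read the JSON response described above
response = requests.get(API_ROOT + "/action/shorten", params={
    "key": API_KEY,
    "url": "https://example.com/some/very/long/path",
    "response_type": "json",
})
response.raise_for_status()
short_url = response.json()["result"]
print(short_url)

# Look up the destination of the link we just created
lookup = requests.get(API_ROOT + "/action/lookup", params={
    "key": API_KEY,
    "url_ending": short_url.rsplit("/", 1)[-1],
    "response_type": "json",
})
print(lookup.json()["result"])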
http://docs.polr.me/en/latest/developer-guide/api/
2017-02-19T18:35:18
CC-MAIN-2017-09
1487501170249.75
[]
docs.polr.me
Coding Style Guide¶ This style guide sets out some best practices for writing programs to be used with Reconfigure.io. Template¶ We provide a stripped down version of our project code to help you get started creating your own projects. You can find the template here: examples/template. For more information on using the template, see Tutorial 4 – Structure and Communication. Code Organization¶ Splitting code between a CPU and FPGA usually involves a separation that’s different to what you would expect when using just a CPU. A CPU is flexible and good at sequential things, whereas the FPGA is good for static things that lend themselves to parallelism. The best separation will depend on you.
http://docs.reconfigure.io/style_guide.html
2018-01-16T11:06:41
CC-MAIN-2018-05
1516084886416.17
[]
docs.reconfigure.io
Docs Viewer - View your files on your iPad on the go with Docs Viewer. Viewing Word files, PDFs and PowerPoint presentations has never been easier. Try Docs Viewer today. Easy file import via iTunes: Docs Viewer automatically detects newly added files. Supported files:
- .pdf
- .doc
- .docx
- .dotx
- .xls
- .xlsx
- .ppt
- .pptx
- .pps
- .rtf
- .txt
- .xml
- .html
- .csv
http://docs-viewer.by-t-licious-software.qarchive.org/
2018-01-16T11:31:12
CC-MAIN-2018-05
1516084886416.17
[]
docs-viewer.by-t-licious-software.qarchive.org
Find Window This window allows you to search for a condition or marker inserted into a Microsoft Word source document using the WebWorks Transit menu for Microsoft Word. The fields are defined as follows: Item Specifies the type of item you want to find. You can search for a condition or a marker. Item Name Specifies the name of the item you want to find.
http://docs.webworks.com/ePublisher/2008.4/Help/04.Reference_Information/02.116.ePublisher_Window_Descriptions
2018-01-16T12:06:02
CC-MAIN-2018-05
1516084886416.17
[]
docs.webworks.com
Localization By configuring additional locales, content becomes translatable with the flick of a switch. Each language can run on its own URL, so you can have example.com and example.com/de or fr.example.com, or example.de. Lots of control. Lots of options. URL Structure The first step in localizing a Statamic site is to decide on the URL structure. A common convention is to create subfolders for each non-default locale. For example: English French German - and so on… In this guide, we will assume we’re using this subfolder approach in any examples. If you decide to go a different method, make sure any relative paths are updated for your situation. See below for the subdomain version. Creating the locale roots Following from our subfolder example, we’ll need to create those folders. For each locale, you should copy the index.php (and .htaccess if using Apache) into their respective folders. / / |-- assets/ |-- local/ |-- site/ |-- statamic/ |-- fr/ | |-- index.php | `-- .htaccess |-- de/ | |-- index.php | `-- .htaccess |-- index.php |-- .htaccess |-- ...etc... In the new index.php files, you should adjust a few variables: The relative path to the statamic folder needs to be updated to reflect the new location. $statamic = '../statamic'; The site root should should also be updated. Since we’re running in a subfolder, you should set this appropriately. (Take note of the trailing and leading slashes.) $site_root = '/fr/'; Lastly, the locale. This should correspond with the locale you will be adding to the system settings in the next step. $locale = 'fr'; Nginx If you’re using Nginx, you’ll want to slap one of these bad boys in for each locale. location @frrewrites { rewrite ^/fr/(.*)$ /fr/index.php last; } location /fr/ { try_files $uri $uri/ @frrewrites; } Adding Locales to Settings Next, Statamic will need to know we intend to localize our content. Head to System > Settings > System and add your locales to the Locales field. In this field you will need to provide: - The shorthand locale string. – This is what you added to $localein index.php. (eg. - The full locale. – This is what PHP uses to format date and other strings. (eg. fr_FR) - Name – This is what will be used throughout the Control Panel when referring to the locale. (eg. - URL – This is the URL of the homepage for that locale. (eg.) If you’re not using the CP, you can do the above by adding an array to site/settings/system.yaml, like so: locales: en: full: en_US name: English url: fr: full: fr_FR name: French url: Localizing your content Fields Once you’ve added your locales, you need to define which fields may be translated. You can do this by toggling the “Localizable” option for each field you wish to translate. If you aren’t using the CP, you can just add localizable: true to each field in your fieldset. Editing Now, when editing content, you should see a “Locales” list in the sidebar. This shows you all of your available locales. A checkmark indicates the locale you are currently editing, a green dot indicates that locale is available (published), and a white dot means the content is unavailable (draft). Selecting one of those locales will take you to edit the same page, but only fields marked as localizable will be available. Files For those of you that are not using the control panel, or if you are just interested, here’s how localizing works with files. 
site/content/ |-- pages/ | |-- biography/ | | |-- index.md | | |-- fr.index.md | | `-- de.index.md | `-- index.md |-- collections/ | `-- blog/ | |-- fr/ | | `-- 1.my-post.md | |-- de/ | | `-- 1.my-post.md | `-- 1.my-post.md `-- taxonomies/ `-- categories/ |-- fr/ | `-- news.md |-- de/ | `-- news.md `-- news.md In Pages, each folder represents a page. The default locale is simply named index.md. Any additional locales are named with their locale prefix. eg fr.index.md. In both Collections and Taxonomies, the localized entries/terms are all stored in subfolders with identical filenames to the default locale. eg. categories/news.md and categories/fr/news.md. For pages, entries, and taxonomy terms: slugs may be localized by adding a slug field to the front-matter. The filenames shouldn’t change. They should also all have id fields that match their default counterparts. Publish States / Statuses It’s possible to have a different status per locale. For instance, you may want to have a page that only exists in one locale (or multiple, but not all of them). In the Control Panel this should be straightforward: There is a toggle next to the locale you’re editing. Toggle it off to set it as a draft. In the files, the default locale’s status is defined in the filename as per usual (an underscore to make it a draft). For additional locales, you may add a published: true or false to their front matter. Keep in mind that additional locales will inherit the status of the default locale. If you change the status of the default locale, the other locales will also become drafts unless you already added a published value to their front-matter. Routing It’s possible to assign a different collection and/or taxonomy routes for each locale. For example, you may want /news/my-post for English, but /noticias/mi-publicacion in Spanish. Instead of a string for the route, you can simply specify an array with each locale, like so: collections: news: en: /news/{slug} es: /noticias/{slug} Displaying your content When browsing your site, the locale of content that will be displayed will be determined by the URLs defined in your system settings. For example, if we visit /about, it would load the default locale (English, in this example), and if we visit /fr/about, it would load the French locale. If you were to output a field that isn’t localizable, or a field that just hasn’t been localized, it would display the default value. Take this “biography” page ( site/content/pages/biography/index.md), for instance: --- id: 123 title: Biography portrait: me.jpg color: red --- I love bacon. And here’s the French equivalent ( site/content/pages/biography/fr.index.md): --- id: 123 title: Biographie slug: biographie --- J'adore le bacon Lastly, take this template: <h1 style="color: {{ color }}">{{ title }}</h1> <img src="{{ portrait }}" /> {{ content }} When visiting /biography, you’d see: <h1 style="color: red">Biography</h1> <img src="me.jpg" /> I love bacon And when visiting /fr/biographie, you’d see: <h1 style="color: red">Biographie</h1> <img src="me.jpg" /> J'adore le bacon There are 3 things to notice: - The URL for the French site would use the localized slug - We’ll assume the imagefield didn’t have localized: truein the fieldset. That means it wouldn’t have appeared in the publish form for the French locale. Also, since it’s not defined in the French file, it falls back to the default locale’s value. 
- Assuming {{ color }}simply was left blank, or it had the same value as the default locale, it will fall back to the default locale’s value. Subdomains The examples in this guide are for a typical subdirectory-based installation. As mentioned above, the steps outlined will be the same for any setup, but with any paths adjusted for your needs. Another common solution is to use subdomains. For example, mysite.com, fr.mysite.com, etc. Here’s a brief rundown of how to setup subdomain locales, assuming you have the same folder structure as mentioned above. - Instead of simply visiting mysite.com/fr/, you’d point your subdomain to the folder. - The $statamicpath wouldn’t need to change in this example, but if your frfolder is located elsewhere it will need to. - The $site_rootshould be "/"since there’s no subdirectory in the URL. - Links to your theme assets won’t work because they will be relative, and the files don’t actually exist there. You can symlink site/themes/to inside your frfolder. That’ll do the trick. - You can consider symlinking your public asset container folders in the same fashion. Translating the Control Panel You are able to translate the control panel. This is a separate process to localizing your site. The CP may be translated into languages other than what your site uses. Read about it on the Control Panel page. Troubleshooting I added a localized file but I get an error telling me the ID already exists. This will happen if you haven’t added your additional locale to your system.yaml file.
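For reference, a filled-in locales block in site/settings/system.yaml might look like the following sketch; the keys match those described earlier on this page, and the domain names are placeholders for your own URLs:

locales:
  en:
    full: en_US
    name: English
    url: http://example.com/
  fr:
    full: fr_FR
    name: French
    url: http://example.com/fr/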
https://docs.statamic.com/localization
2018-01-16T11:37:13
CC-MAIN-2018-05
1516084886416.17
[]
docs.statamic.com
Website Pre-Launch Check List
- Make a favicon
- Make sprites
- Combine, minify and compress CSS and JS
- Test all pages in all major browsers
- Install a warning message for IE 6
- Check all meta data
- Check to make sure you have styled 404 and 500 error pages
- Make sure you're using page caching
- Make sure you have Google Analytics installed
- Make sure the site works with JavaScript turned off, or that you have a JavaScript-required message
- Add a print style sheet
- Add a sitemap for search engines
- Proofread content
- Double check your links
- Test all forms for validation and functionality
- Optimize the site for performance
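For a few of the items above (favicon and print style sheet), the corresponding markup in the document head might look something like this; the paths are examples only:

<link rel="icon" href="/favicon.ico">
<link rel="stylesheet" href="/css/print.css" media="print">

A sitemap for search engines is usually an XML file (e.g. /sitemap.xml) that you reference from robots.txt with a line such as Sitemap: https://example.com/sitemap.xml.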
http://epicserve-docs.readthedocs.io/en/latest/web_dev/website-checklist.html
2017-09-19T16:54:04
CC-MAIN-2017-39
1505818685912.14
[]
epicserve-docs.readthedocs.io
Hampton Sides is a very engaging writer who has taken his readers through a number of diverse adventures. Whether hunting down James Earl Ray for the assassination of Martin Luther King; detailing the mission of US Army Rangers in January 1945 behind Japanese lines in the Philippines; ; or exploring the role of Kit Carson in the American west, Sides has always been able to break down each topic to capture the attention of his readers. His latest effort, IN THE KINGDOM OF ICE, is no exception as he tells the story of the USS Jeannette which set sail from San Francisco in July, 1879. Sides describes the origins of the voyage and its place in navigational history and he produces what might be his finest book yet. The narrative traces the development of the idea that there was a warm water path through the Arctic ice flows that would enable an expedition to reach the North Pole. The story begins with the disappearance of the Polaris, a ship Captained by Charles Hall, that was sailing north off of Greenland in 1872 and the failed rescue mission that was attempted by the “Little Juniata,” a much smaller ship. The rescue boat had traveled over 400 miles through large chunks of ice broken off icebergs and was Captained by George DeLong, an Annapolis graduate, who upon returning to New York explained to his wife Emma what he experienced and she realized that “the polar virus was in George’s blood to stay.” (10) DeLong knew he was bitten by the “arctic fever,” and began to seek funding for a return trip to the Arctic region in search of a passage to the North Pole. As Delong planned he came in contact with a number of important and colorful characters including James Gordon Bennett, Jr., the publisher, editor in chief and sole owner of the New York Herald, the largest and most influential newspaper in the world. Bennett was also the third richest man in America and believed that newspapers should not merely report stories, but should create them. His most famous involved sending a correspondent, Henry Stanley to Africa to locate Dr. David Livingstone, which took people by storm. Bennett believed that an Arctic voyage would create even more interest and newspaper sales. Another major figure was Professor August Heinrich Petermann, a German theoretician who concluded that the Open Polar Sea Theory was valid. Petermann believed that “the ice pack as a whole forms a mobile belt on whose polar side the sea is more or less ice free.” (60) Petermann also published numerous maps of the Arctic and Siberian region and was seen as the most reliable source of information for any polar excursion. Much to DeLong’s chagrin later in the narrative, the German theorist’s ideas were all wrong resulting in disastrous consequences. Once Bennett is convinced to finance a new voyage with DeLong in command the reader follows the preparation of a new vessel that is rechristened the Jeannette (named after Bennett’s estranged sister), the detailed planning, and the choosing and training of the crew. As a back drop to the exploratory adventure, Sides reminds the reader of the major technical and scientific advances of the day by describing the 1876 Central Exposition in Philadelphia which was attended by the likes of George Eastman, Alexander Graham Bell, George Westinghouse, and Thomas Alva Edison (whose invention, arc lighting would be a total failure during the Jeannette’s voyage). 
The author describes major new inventions and products including Heinz ketchup and Hires Root Beer as people came to observe from all over the world. The US Navy, politicians and the business community all favored the expedition and it became a cause célèbre in the United States. The ship departed San Francisco on July 8, 1879 under the command of George DeLong. Its crew is made up of experts in all nautical fields and is very optimistic on departure. As the USS Jeanette steamed north toward the Bering Strait, scientists and bureaucrats digested new data from ships returning from the Arctic region, and they discerned that Petermann’s ideas that DeLong was basing his path on did not exist. “The portal DeLong was aiming for offered no real gate of entrance into the Arctic Ocean….the North Pacific Ocean, has practically speaking, no northern outlet; Bering Strait is but a cul de sac. (143) By September, 1879, DeLong realized that the “thermometric gateway to the North Pole [was] a delusion and a snare.” (162). at this point on Sides describes how the Jeanette becomes imprisoned in the ice for almost a year, though because of the ice flow the ship does not remain stagnant. The crew will remain in high spirits but ultimately when the ship is released from the ice the ship has to deal with loose chunks of ice and poor weather. DeLong is not certain of his path and sends his chief engineer, George Melville on a dangerous reconnaissance mission when land is located. While Melville is gone lead poison overtakes the crew. By June, 1880 another ship is sent by the US Navy to learn what has happened to the Jeannette. Captained by Calvin Hooper, with the naturalist John Muir as part of the crew, the USS Corwin is unsuccessful in locating the missing vessel. Circumstances become dire for the Jeannette as it is encased in ice for another year and it finally will sink in June 1881. The crew will escape and split up into three boats and make for Siberia to try and survive. Sides is at his best as he describes the perilous journey as weather, unkind geography, and loneliness set in. The author offers a unique and often amazing description of DeLong and his crew as they sail and trek across the ice, slush, and open water as they sought the Siberian land mass. Details of the topography they dealt with, their physical strength and will power all place the reader among the crew as they tried to overcome the hand that Mother Nature had dealt them as each day became a separate battle for survival. DeLong and his men were always up against the Arctic clock, when would the warm weather end? By August 1881, DeLong had to burn the sleds that pulled his boats, making the remainder of their journey that more difficult. By September 19, 1881 they were down to four days worth of provisions. The remainder of the story is one of human will against the elements. The three boats split up, never to be rejoined again. DeLong sends his two best men ahead to try and reach a settlement to find aid. It is when these two men reach Yakutsk, a Siberian village of 5000 people they are reunited with George Melville and a few men from the second boat. It is with the aid of the Russian Governor-General, George Tchernieff who states that unlike today, “that Russia has your back.” Melville launches an expedition to return to the ice to look for DeLong and his men and by March, 1882 he will uncover DeLong’s “ice journals,” maps, and other writings that were last dated in October, 1881. 
Needless to say, shortly thereafter the bodies of these men were found. The conclusion that Sides reaches is that courage and loyalty dominated the mission of the USS Jeannette under the leadership of Captain DeLong. The capacity of George Melville’s commitment and identification with his captain and friend are compelling and explain his dogged determination to rescue DeLong and his crew once he reaches the safety of Yakutsk. Sides goes on to describe Melville’s commitment to DeLong’s widow, Emma and the mission that he would carry to his grave. Sides’ research and documentation is impeccable and there is little to question about his account as he has access to all of DeLong’s papers and other important materials. He presents a work of history that reads like the best adventure fiction that I have read in a long time, and the book should spark interest for all who seek a story about the triumph and loss of the human spirit.
https://docs-books.com/2014/09/04/in-the-kingdom-of-ice-by-hampton-sides/
2017-09-19T17:07:09
CC-MAIN-2017-39
1505818685912.14
[]
docs-books.com
The website is the online manual and documentation for eBECAS the administration software designed for Language and VET RTO’s (Vocational Colleges). eBECAS enables VET, Language and Higher Education Colleges to use the one system. Of special note is support for weekly progression through classes for the enrolment and storage of weekly results. There is separate support for classing and module study for VET classes results and competencies. Overview the operational process workflow using eBECAS Student details address, diary contacts including email and sms, Visa, CoE, Avetmiss and RAPT, Results & Assessments VET results including competencies and details for AVETMISS returns, transcripts, Language assessments and certificates in bulk and individually Admissions admission workflow from offer to acceptance, offer cancellation, enrolment changes – extensions, shorten, change start date and cancellations Study Tour Groups tour group offers and enrolments with one invoice for all students in the group Homestay & Airport Transfers homestay requests, accommodation provider management and categories, finding avialablility by date, bed type and family categories, placements, scheduled payments and airport transfer requests SSVF GTE Streamlined Student Visa Framework and Genuine Temporary Entrant support Financials student invoices, receipts, credits, debits and transfers; calculate revenue by period, course, faculty, agent or country (for each student and fee); schedule payments to agents, homestay providers, airport transfer provider and insurance providers; and calculate revenue using cash or accrual basis MYOB integration export data from eBECAS to load into MYOB Classing allocate Language enrolments to class including day, time, rooms and teachers for any week and allocate classes in the future without effecting current classes, enter absence and assessments by class, students and teachers can see their timetable using the teacher and student portal; for VET student enrolments, outcomes can be entered by class or cohort in bulk updating AVETMISS details. Email eBECAS integrates with services to send a single email for a student, homestay provider or agent with a single merged attachment or in bulk integrating with Mailchimp/Mandrill using html formatted templates with data extracted from eBECAS SMS messages eBECAS integrates with SMS sevices to send SMS messages to phones individually or in bulk using simple SMS templates Document Repository documents, Microsoft Word templates and files can be stored using EIT’s Amazon Document Storage for your College and each document can be filed in eBECAS with a student, accommodation provider, agent and Teacher and Class Student Portal Students can see their timetables, Fees, Result outcomes, attendance, College news and messages on their phone, tablet or computer browser ++ soon to be released is the student iphone and android app ++ Teacher Portal Teachers can see their timetables, classes and students and enter absences, diary comments and results on their phone, tablet or computer browser VET Colleges AVETMISS USI eBECAS supports AVETMISS 7 and 8 reporting and has the facility to generate and verify the USI value live CRICOS / TPS / ESOS Regulations eBECAS continuously keeps abreast of tHe CRICOS and ESOS regulations. eBECAS enables your College to load payments by COE into Prisms and update student address details. 
eBECAS also ensures the invoices generated conform to the ESOS regulations and show the period covered by the tuition payment.
Language Colleges
Language College details are above, and will be separated for easier viewing.
Setup Installing eBECAS on a PC Initial Data Entry Templates AVETMISS Checklist Live USI integration Course Price Books Warning Letters Changes Search screen options and displaying columns EA Annual Report
http://docs.ebecas.com.au/
2017-09-19T16:49:31
CC-MAIN-2017-39
1505818685912.14
[]
docs.ebecas.com.au
(President Franklin Roosevelt circa late 1944) A number of years ago historian, Warren Kimball wrote a book entitled THE JUGGLER which seemed an apt description of Franklin D. Roosevelt’s approach to presidential decision making. As the bibliography of Roosevelt’s presidency has grown exponentially over the years Kimball’s argument has stood the test of time as FDR dealt with domestic and war related issues simultaneously. In his new book HIS FINAL BATTLE: THE LAST MONTHS OF FRANKLIN ROOSEVELT, Joseph Lelyveld concentrates on the period leading up to Roosevelt’s death in April, 1945. The key question for many was whether Roosevelt would seek a fourth term in office at a time when the planning for D-Day was in full swing, questions about the post war world and our relationship with the Soviet Union seemed paramount, and strategy decisions in the Pacific needed to be addressed. Lelyveld’s work is highly readable and well researched and reviews much of the domestic and diplomatic aspects of the period that have been mined by others. At a time when the medical history of candidates for the presidency is front page news, Lelyveld’s work stands out in terms of Roosevelt’s medical history and how his health impacted the political process, war time decision making, and his vision for the post war world. The secrecy and manipulation of information surrounding his health comes across as a conspiracy to keep the American public ignorant of his true condition thereby allowing him, after months of political calculations to seek reelection and defeat New York Governor Thomas Dewey in 1944. Roosevelt’s medical records mysteriously have disappeared, but according to Dr. Marvin Moser of Columbia Medical School he was “a textbook case of untreated hypertension progressing to [likely] organ failure and death from stroke.” The question historians have argued since his death was his decision to seek a fourth term in the best interest of the American people and America’s place in the world. (Roosevelt confidante, Daisy Suckley) Lelyveld does an exceptional job exploring Roosevelt’s personal motivations for the decisions he made, postponed, and the people and events he manipulated. Always known as a pragmatic political animal Roosevelt had the ability to pit advisors and others against each other in his chaotic approach to decision making. Lelyveld does not see Roosevelt as a committed ideologue as was his political mentor Woodrow Wilson, a man who would rather accept defeat based on his perceived principles, than compromise to achieve most of his goals. Lelyveld reviews the Wilson-Roosevelt relationship dating back to World War I and discusses their many similarities, but concentrates on their different approaches in drawing conclusions. For Roosevelt the key for the post war world was an international organization that would maintain the peace through the influence of the “big four,” Russia, England, China, and the United States. This could only be achieved by gaining the trust of Soviet dictator Joseph Stalin and making a series of compromises to win that trust. The author will take the reader through the planning, and decisions made at the Teheran Conference in November, 1943, and Yalta in February, 1945 and the implications of the compromises reached. Lelyveld’s Roosevelt is “the juggler” who would put off decisions, pit people against each other, always keep his options open, and apply his innate political antenna in developing his own viewpoints. 
This approach is best exemplified with his treatment of Poland’s future. In his heart Roosevelt knew there was little he could do to persuade Stalin to support the Polish government in exile, but that did not stop him from sending hopeful signals to the exiled Poles. Roosevelt would ignore the Katyn Forest massacre of 15,000 Polish officers by the Russian NKVD in his quest to gain Stalin’s support, and in so doing he fostered a pragmatic approach to the Polish issue as Roosevelt and Churchill were not willing to go to war with the Soviet Union over Poland. (Yalta Conference, February, 1945) While all of these decisions had to be made Roosevelt was being pressured to decide if he would run for reelection. Lelyveld’s analysis stands out in arguing that the president did not have the time and space to make correct decisions. With his health failing, which he was fully aware of, and so much going on around him, he could not contemplate his own mortality in deciding whether to run or not. The problem in 1944 was that Roosevelt would not tell anyone what he was planning. As he approached 1944 “his pattern of thought had grown no less elusive….and the number of subjects he could entertain at one time and his political appetite for fresh political intelligence had both undergone discernible shrinkage.” By 1944, despite not being not being totally informed of his truth health condition by physician Admiral Ross McIntire, Roosevelt believed he was not well. Lelyveld relies a great deal on the diaries of Daisy Suckley, a distant cousin who he felt comfortable with and spent more time with than almost anyone, to discern Roosevelt’s mindset. Lelyveld raises the curtain on the Roosevelt-Suckley relationship and makes greater use of her diaries than previous historians. She describes his moods as well as his health and had unprecedented access to Roosevelt. In so doing we see a man who was both high minded and devious well into 1944 which is highlighted by his approach to the Holocaust, Palestine, and Poland. Lelyveld spends a great deal of time exploring Roosevelt’s medical condition and the secretiveness that surrounds the president’s health was imposed by Roosevelt himself which are consistent with “his character and methods, his customary slyness, his chronic desire to keep his political options open to the last minute.” He was enabled by Admiral McIntire in this process, but once he is forced to have a cardiologist, Dr. Howard G. Bruenn examine him the diagnosis is clear that he suffered from “acute congestive heart failure.” Bruenn’s medical records disappeared after Roosevelt died and they would not reappear until 1970. Roosevelt work load was reduced by half, he would spend two months in the spring of 1944 convalescing, in addition to other changes to his daily routine as Lelyveld states he would now have the hours of a “bank teller.” Despite all of this Roosevelt, believing that only he could create a safe post war world decided to run for reelection. But, what is abundantly clear from Lelyveld’s research is that by the summer of 1944 his doctors agreed that should he win reelection there was no way he would have remained alive to fulfill his term in office. (First Lady, Eleanor Roosevelt) Since awareness of Roosevelt’s health condition could not be kept totally secret Democratic Party officials were horrified by the prospect that Roosevelt would win reelection and either die or resign his office after the war, making Henry Wallace President. 
Party officials had never been comfortable with the Iowa progressive and former Republican who was seen as too left leaning and was no match for Stalin. Roosevelt entertained similar doubts, but using his double bind messages convinced Wallace to travel to Siberia and Mongolia over fifty-one days that included the Democratic Convention. Lelyveld explores the dynamic between Roosevelt and Wallace and how the president was able to remove his vice president from the ticket; on the one hand hinting strongly he would remain as his running mate, and at the same time exiling him to the Russian tundra! For Roosevelt, Wallace did not measure up as someone who could guide a postwar organization through the treaty process in the Senate, further, it was uncovered in the 1940 campaign that Wallace had certain occult beliefs, he was also hampered by a number of messy interdepartmental feuds over funding and authority, and lastly, Roosevelt never reached out to him for advice during his four years as Vice-President. The choice of Harry Truman, and the implications of that decision also receive a great deal of attention as the Missouri democrat had no idea of Roosevelt’s medical condition. Lelyveld provides intricate details of the 1944 presidential campaign which reflects Roosevelt’s ability to rally himself when the need arose to defeat the arrogant and at times pompous Dewey. Evidence of Roosevelt’s ability to revive his energy level and focus is also seen in his reaction to the disaster that took place at the outset of the Battle of the Bulge, and finally confronting Stalin over Poland. In addition, the author does not shy away from difficulties with Churchill over the future of the British Empire, the Balkans and other areas of disagreement. In Lelyveld portrayal, Roosevelt seems to be involved through the Yalta Conference until his death in April, 1945. Lelyveld is correct in pointing out that Roosevelt’s refusal to accept his own mortality had a number of negative consequences, but he does not explain in sufficient detail how important these consequences were. For example, keeping Vice President Truman in the dark about the atomic bomb, Roosevelt’s performance at Yalta, and a number of others that made the transition for Truman more difficult, especially in confronting the Soviet Union. Overall, Lelyveld’s emphasis on Roosevelt’s medical history adds important information that students of Roosevelt can employ and may impact how we evaluate FDR’s role in history. (President Franklin Roosevelt towards the end of his life)
https://docs-books.com/2016/09/22/his-final-battle-the-last-months-of-franklin-roosevelt-by-joseph-lelyveld/
2017-09-19T16:50:08
CC-MAIN-2017-39
1505818685912.14
[]
docs-books.com
The dictionary-style and attribute-style forms are equivalent:

brian_prefs['codegen.c.compiler'] = 'gcc'
brian_prefs.codegen.c.compiler = 'gcc'

if brian_prefs['codegen.c.compiler'] == 'gcc':
    ...
if brian_prefs.codegen.c.compiler == 'gcc':
    ...

Using the attribute-based form can be particularly useful for interactive work, e.g. in ipython, as it offers autocompletion and documentation.

Code generation preferences

codegen.target = 'numpy'
Default target for code generation. Can be a string, in which case it should be one of:
- 'numpy' by default because this works on all platforms, but may not be maximally efficient.
- 'weave' uses scipy.weave to generate and compile C++ code, should work anywhere where gcc is installed and available at the command line.
Or it can be a CodeObject class.

logging

core
Core Brian preferences

core.default_scalar_dtype = float64
Default dtype for all arrays of scalars (state variables, weights, etc.).

codegen.runtime.numpy
Numpy runtime codegen preferences

codegen.runtime.numpy.discard_units = False
Whether to change the namespace of user-specified functions to remove units.

codegen.runtime.weave
Weave runtime codegen preferences

codegen.runtime.weave.compiler = 'gcc'
Compiler to use for weave.

codegen.runtime.weave.extra_compile_args = ['-w', '-O3', '-ffast-math']
Extra compile arguments to pass to the compiler.

codegen.runtime.weave.include_dirs = []
Include directories to use. The default value is for gcc.
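As a brief illustrative sketch of how these preferences might be used together: the preference names below are taken from the listing above, but the import path is an assumption (this page does not show how brian_prefs is imported), so treat it as a sketch rather than a definitive recipe.

# Illustrative sketch only. Assumes brian_prefs can be imported from the
# brian2 package (an assumption not stated on this page); preference names
# come from the listing above.
from brian2 import brian_prefs

# Dictionary-style and attribute-style access are interchangeable.
brian_prefs['codegen.target'] = 'weave'
brian_prefs.codegen.target = 'weave'

# Read a preference before configuring compiler-related settings.
if brian_prefs.codegen.target == 'weave':
    # These weave preferences only matter when C++ code generation is used.
    brian_prefs['codegen.runtime.weave.compiler'] = 'gcc'
    brian_prefs['codegen.runtime.weave.extra_compile_args'] = ['-w', '-O3', '-ffast-math']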
http://brian2.readthedocs.io/en/2.0a8/developer/preferences.html
2017-09-19T17:12:58
CC-MAIN-2017-39
1505818685912.14
[]
brian2.readthedocs.io
How to contribute

Thanks for your interest in contributing. We're humbled and, to be honest, a bit excited ^_^. There are many ways you can help advance Superpowers!

Preamble

Superpowers adheres to the Contributor Covenant. By participating, you are expected to honor this code. As a new contributor, we want you to feel welcome and comfortable. You're not going to break anything, so feel free to experiment. We appreciate it when people follow the conventions detailed below, but if you're unsure about where to post, pick a place that makes sense to you and someone will point you to the right one if needed.

Superpowers's source code is hosted on GitHub

GitHub is a website for collaborating on projects. Superpowers is split over several repositories:
- superpowers/superpowers — The core (client/server)
- superpowers/superpowers-app — The desktop app
- superpowers/superpowers-game — The Superpowers Game system
- superpowers/superpowers-html5.com — Superpowers's website
Other systems and plugins might be hosted elsewhere by developers unaffiliated with the Superpowers project.

Reporting and triaging bugs

Bugs should be reported on GitHub. When in doubt, feel free to open an issue in the core repository. You can help triage bugs and make them more useful by:
- Asking bug reporters which platform (Windows, OS X or Linux, 32-bit or 64-bit) and which browser/app version they're using.
- Trying to reproduce the bug with the latest development version of Superpowers on any platform possible and reporting your findings.
- If you can't reproduce the bug, it's worth sharing that too, and maybe asking for more details!

New features and suggestions

Check out the Roadmap for an idea of where development is headed. Ideas and suggestions for new features should be posted on the forums. Once a feature proposal gets support and traction in the community, you can create an issue on GitHub to discuss its design in detail before moving on to implementation.

Documentation

The documentation website is written in Markdown. New pages should be treated as new features, following the same proposal and discussion process outlined above. FIXME: It should probably be rewritten as a Superpowers project and published on GitHub Pages?!

Contributing code

See the build instructions to get the development version of Superpowers running on your own computer.

Sending a pull request
- Use the imperative in your commit messages, for instance "Add flux capacitor", not "Added a flux capacitor".
- If you're fixing a bug with an existing issue number on GitHub, mention it in the commit message: "Increase gigawatt output, closes #6".
- Try to stick to the existing coding style. You can use the tslint plugin in Visual Studio Code to help.
- Make your pull request as small as possible. Don't bundle many unrelated changes together.
http://docs.superpowers-html5.com/en/development/how-to-contribute
2017-09-19T16:58:06
CC-MAIN-2017-39
1505818685912.14
[array(['/images/icon.png', None], dtype=object)]
docs.superpowers-html5.com
Network File System (NFS)–Level Users, Groups, and Permissions

After creating a file system, by default, only the root user (UID 0) has read-write-execute permissions. In order for other users to modify the file system, the root user must explicitly grant them access. Amazon EFS file system objects have a Unix-style mode associated with them. This value defines the permissions for performing actions on that object, and users familiar with Unix-style systems can easily understand how Amazon EFS behaves with respect to these permissions.

Additionally, on Unix-style systems, users and groups are mapped to numeric identifiers, which Amazon EFS uses to represent file ownership. File system objects (that is, files, directories, etc.) on Amazon EFS are owned by a single owner and a single group. Amazon EFS uses these numeric IDs to check permissions when a user attempts to access a file system object. This section provides examples of permissions and discusses Amazon EFS–specific NFS permissions considerations.

Example Amazon EFS File System Use Cases and Permissions

After you create an Amazon EFS file system and mount targets for the file system in your VPC, you can mount the remote file system locally on your Amazon EC2 instance. The mount command can mount any directory in the file system. However, when you first create the file system, there is only one root directory at /. The following mount command mounts the root directory of an Amazon EFS file system, identified by the file system DNS name, on the /efs-mount-point local directory.

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 file-system-id.efs.aws-region.amazonaws.com:/ efs-mount-point

Note that the root user and root group own the mounted directory. The initial permissions mode allows:
- read-write-execute permissions to the owner root
- read-execute permissions to the group root
- read-execute permissions to others

Note that only the root user can modify this directory. The root user can also grant other users permissions to write to this directory. For example:

Create writable per-user subdirectories. For step-by-step instructions, see Walkthrough 3: Create Writable Per-User Subdirectories and Configure Automatic Remounting on Reboot.

Allow users to write to the Amazon EFS file system root. A user with root privileges can grant other users access to the file system. To change the Amazon EFS file system ownership to a non-root user and group, use the following:

$ sudo chown user:group /EFSroot

To change the permissions of the file system to something more permissive, use the following:

$ sudo chmod 777 /EFSroot

This command grants read-write-execute privileges to all users on all EC2 instances that have the file system mounted.

User and group ID permissions on files and directories within a file system

Files and directories in an Amazon EFS file system support standard Unix-style read/write/execute permissions based on the user ID and group ID asserted by the mounting NFSv4.1 client. When a user attempts to access files and directories, Amazon EFS checks their user ID and group IDs to verify the user has permission to access the objects. Amazon EFS also uses these IDs as the owner and group owner for new files and directories the user creates. Amazon EFS does not examine user or group names—it only uses the numeric identifiers.

Note: When you create a user on an EC2 instance, you can assign any numeric UID and GID to the user.
The numeric user IDs are set in the /etc/passwd file on Linux systems. The numeric group IDs are in the /etc/group file. These files define the mappings between names and IDs. Outside of the EC2 instance, Amazon EFS does not perform any authentication of these IDs, including the root ID of 0.

If a user accesses an Amazon EFS file system from two different EC2 instances, the behavior depends on whether the UID for the user is the same or different on those instances:
- If the user IDs are the same on both EC2 instances, Amazon EFS considers them to be the same user, regardless of the EC2 instance they use. The user experience when accessing the file system is the same from both EC2 instances.
- If the user IDs are not the same on both EC2 instances, Amazon EFS considers them to be different users, and the user experience will not be the same when accessing the Amazon EFS file system from the two different EC2 instances.

If two different users on different EC2 instances share an ID, Amazon EFS considers them the same user. You might consider managing user ID mappings across EC2 instances consistently. Users can check their numeric ID using the id command, as shown following:

$ id
uid=502(joe) gid=502(joe) groups=502(joe)

Turn Off the ID Mapper

The NFS utilities in the operating system include a daemon called an ID Mapper that manages mapping between user names and IDs. In Amazon Linux, the daemon is called rpc.idmapd, and on Ubuntu it is called idmapd. It translates user and group IDs into names, and vice versa. However, Amazon EFS deals only with numeric IDs. We recommend that you turn this process off on your EC2 instances (on Amazon Linux the mapper is usually disabled; in that case, don't enable the ID mapper), as shown following:

$ service rpcidmapd status
$ sudo service rpcidmapd stop

No Root Squashing

When root squashing is enabled, the root user is converted to a user with limited permissions on the NFS server. Amazon EFS behaves like a Linux NFS server with no_root_squash. If a user or group ID is 0, Amazon EFS treats that user as the root user and bypasses permissions checks (allowing access and modification to all file system objects).

Permissions Caching

Amazon EFS caches file permissions for a small time period. As a result, there may be a brief window during which a user whose access to a file system object was recently revoked can still access that object.

Changing File System Object Ownership

Amazon EFS enforces the POSIX chown_restricted attribute. This means that only the root user can change the owner of a file system object. The root user or the owner can change the owner group of a file system object, but unless the user is root, the group can only be changed to one that the owner is a member of.
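As an illustrative aside (not part of the AWS documentation), the numeric-ID behavior described above can be observed from any process on the EC2 instance. The sketch below uses Python's standard library to read the owner UID/GID and mode bits that Amazon EFS compares during a permission check; the file path is hypothetical.

# Illustrative sketch, not from the AWS documentation. The path below is a
# hypothetical file on a mounted EFS file system.
import os
import stat
import pwd
import grp

path = '/efs-mount-point/some-file'  # hypothetical path under the EFS mount point

info = os.stat(path)
print('owner UID:', info.st_uid)                      # numeric ID EFS checks against the requesting user
print('owner GID:', info.st_gid)
print('mode bits:', oct(stat.S_IMODE(info.st_mode)))  # Unix-style permission bits on the object

# Name lookups happen locally via /etc/passwd and /etc/group on the instance;
# Amazon EFS itself only ever sees the numeric IDs.
print('local owner name:', pwd.getpwuid(info.st_uid).pw_name)
print('local group name:', grp.getgrgid(info.st_gid).gr_name)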
http://docs.aws.amazon.com/efs/latest/ug/accessing-fs-nfs-permissions.html
2017-09-19T17:27:19
CC-MAIN-2017-39
1505818685912.14
[array(['images/nfs-perm-10.png', None], dtype=object)]
docs.aws.amazon.com