content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
UnknownCommandException represents an exception caused by incorrect usage of a console command. protected yii\console\Application $application = null The name of the command that could not be recognized. public string $command = null Construct the exception. Suggest alternative commands for $command based on string similarity. Alternatives are searched using the following steps: $command See also. © 2008–2017 by Yii Software LLC Licensed under the three clause BSD license.
http://docs.w3cub.com/yii~2.0/yii-console-unknowncommandexception/
2018-08-14T13:15:04
CC-MAIN-2018-34
1534221209040.29
[]
docs.w3cub.com
Config Mode¶ When in Config Mode a node will neither participate in the mesh nor connect to the VPN using the WAN port. Instead, it’ll offer a web interface on the LAN port to aid configuration of the node. Whether a node is in Config Mode can be determined by a characteristic blinking sequence of the SYS LED: Activating Config Mode¶ Config Mode is automatically entered at the first boot. You can re-enter Config Mode by pressing and holding the RESET/WPS button for about three seconds. The device should reboot (all LEDs will turn of briefly) and Config Mode will be available. Port Configuration¶ In general, Config Mode will be offered on the LAN ports. However, there are two practical exceptions: - Devices with just one network port will run Config Mode on that port. - Devices with PoE on the WAN port will run Config Mode on the WAN port instead. Accessing Config Mode¶ Config Mode can be accessed at. The node will offer DHCP to clients. Should this fail, you may assign an IP from 192.168.1.0/24 to your computer manually.
http://gluon.readthedocs.io/en/latest/features/configmode.html
2018-08-14T14:10:25
CC-MAIN-2018-34
1534221209040.29
[array(['../_images/node_configmode.gif', '../_images/node_configmode.gif'], dtype=object) ]
gluon.readthedocs.io
$ oadm router --dry-run \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router. cluster, you must have a service account for the router. Starting in OpenShift Enterprise 3.1, a router service account is automatically created during a quick or advanced installation (previously, this required manual creation). This service account has permissions to a security context constraint (SCC) that allows it to specify host ports.. The default router service account, named router, is automatically created during quick and advanced installations. To verify that this account already exists: $ oadm router --dry-run \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router To see what the default router would look like if created: $ oadm router -o yaml \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router To create a router if it does not exist: $ oadm router <router_name> --replicas=<number> \ --credentials='/etc/origin/origin/master/openshift-router.kubeconfig' \ --service-account=router For example: $ oadm router region-west -o yaml --images=myrepo/somerouter:mytag \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ - You can set up a highly-available router on your OpenShift Enterprise cluster using IP failover. You can customize the suffix used as the default routing subdomain for your environment using the master configuration file (the /etc/origin \ --credentials=${ROUTER_KUBECONFIG:-"$KUBECONFIG"} \ --credentials=${ROUTER_KUBECONFIG:-"$KUBECONFIG"} configuration file using the above certificate and key. Make sure to replace servicename my-service with the name of your service. # example-test-route.yaml Finally add the route to OpenShift (and the router) via: # oc create -f example-test-route.yaml router runs inside a Docker \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router \ --host-network=false Internally, this means the router container must publish the 80 and 443 ports in order for the external network to communicate with the router. Using the --metrics-image and --expose-metrics options, you can configure the OpenShift Enterprise Enterprise router: $ oadm router \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router --expose-metrics Or, optionally, use the --metrics-image option to override the HAProxy defaults: $ oadm router \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ - OpenShift!! $ echo " - Once the prometheus server is up, view the OpenShift HAProxy " $ echo " router metrics at: http://<ip>:9090/consoles/haproxy.html "/origin/master/openshift-router.kubeconfig' \(1) --service-account=router If you deployed an HAProxy router, you can learn more about monitoring the router. If you have not yet done so, you can: Configure authentication; by default, authentication is set to Deny All. Deploy an integrated Docker registry.
https://docs.openshift.com/enterprise/3.1/install_config/install/deploy_router.html
2018-08-14T13:15:28
CC-MAIN-2018-34
1534221209040.29
[]
docs.openshift.com
Creating Your Own ZBrush Translation The multi language system in ZBrush allows creation of custom languages by its users. This can range from editing an existing language to add personal modifications, all the way to creating support for a new language from scratch. With the exception of a few items, the process can be done almost totally within ZBrush. This done either by clicking on the buttons to translate or going to a menu that will display all strings or error messages that still need to be translated. All these edits are updated directly in ZBrush, without the need to reload the language or the application. At any point, you can export it as a file which can be shared with other ZBrush users who will then be able to benefit from your translation work. 1.Items to Translate In ZBrush, most any element visible to users is available for translation. This includes all buttons, sliders, error messages, the ZModeler menu, progress bar messages, etc. However, a few elements may remain only available in English. This includes some special error messages, the top bar, and resources (brush names, alphas, strokes, the files found in LightBox, etc.). 2.Plugins and ZScripts: The Translation Exception ZBrush is enhanced with plugins or with ZScripts, providing extra utilities and functionalities. Unfortunately, the core part of the ZScript system (which is used in plugins as well) can’t fully support the translation process. The result is that most buttons and sliders for plugins will be translated. However, any special text like popups, error messages or dynamic text will remain in English. An example would be the ZPlugins >> Scale Master plugin, when clicking the Set Scene Scale popup. Its contents will remain in English. 3.Creating a Custom Language The first step is to create a new custom language. It can be: - An existing translation already provided in ZBrush, which you want to edit. - Using an existing language as the foundation for your new translation. All editing tools will only work with a Custom language. This is why you need to create one, even if you wish to edit an existing one. - First select the language that you want to edit or which will be the foundation for your new translation. If you start an original translation, it is advised to begin with the English language as you will have a blank canvas where English will be used if you don’t translate some items. If you use another language as the foundation, you have a high risk of mixing two different languages and creating confusion for any other people who might use your translation. - Click Preferences >> Language >> Customize >> Create. Now the Custom language is selected and you can start your translation work. Don’t forget to save it on a regular basis as this action is not done automatically. 4.The Edit Window The Preferences >> Language >> Customize >> Edit button will open the complete Translation Edit window. This special interface is the full version of Auto Edit mode, explained in the next section of this document. (It is strongly advised to read this next part.) Here is a description of each element: A. The list of items that need to be translated. Click on one of them to start the translation of that item. B. The categories of the items to be translated: • Buttons are mainly the clickable items in the ZBrush interface and include buttons, modes, sliders, and switches. - Messages are typically error, warning and informational messages, or the text in progress bars - Infos include the extra items which doesn’t fit in the two previous sections. These include the ZModeler menu and very specific progress bar or popup messages. C. The Red, Orange and Green dots indicate the state of translation for each listed item. The change of the status is dynamic.• Red: No translation. - Orange: Partial translation has been done, such as if the item’s Title has been edited but its Info has not. - Green: Translation is complete. D. English (Title/Info/Extra): The original text in English. This text can’t be edited. Notice that it may vary a little bit from the English translation within ZBrush. This is because the original text has been edited through the translation system in order to rename some items or change some abbreviations. E. Localized (Title/Info/Extra): The field where you enter your translation. F. Title/Info/Extra: these switches let you select which part of the item to translate: - Title: The button, slider or switch name found in the user interface, like “Open,” “Divide,” or “DynaMesh.” It can also be the title of a warning message or information window. - Info: This is the tooltip text that is will be displayed when hovering over the button, slider, switch, or any other clickable user interface element. It can be the main (or first) part of an information window. These messages are not the AutoNotes, which are seen when holding the CTRL key while hovering over an interface element. - Extra: Additional text which can be found in some information and error messages, as well as a few miscellaneous items. G. Copy: Copies the original text into the translation field. This can be very helpful for long translations or if you need to reuse special codes such as color codes or breaklines (\Cxxxxxx and \n). H. Shared and Unique: Switches between the two modes. Please refer to the dedicated section below. I. The path and identifier to the item to translate. This field is provided for your information. It is an internal identifier and in most cases can give you an indication of where the item is located within ZBrush. J. Up/Down Arrow: Switches between Edit mode or Auto Edit mode. K. Untranslated: This switch toggles between displaying all items, or only those that still need full or partial translation. When turned off, any element that has been fully translated (Title + Info + Extra) will immediately be removed from the list. This mode is enabled by default. L. Previous, Next: Selects the next or previous element to translate, without cycling between Title, Info and Extra. These functions are affected by the Untranslated mode, meaning that if you want to select the previous item regardless of whether it has been translated or not, you must first disable Untranslated mode. M. Page Up, Page Down and Page Counter: These let you move through the entire catalogue of items that can be translated. Use these when you need to search for a specific item. N. Save: Saves the current custom language. As previously mentioned, do this regularly. O. Close: Closes the Edit window. This will not save the current language. That must be done either by using the Save button within this editing window or by clicking Preferences >> Language >> Customize >> Save. Hitting Enter will also close the window. 5.Auto Edit Mode This mode, found in Preferences >> Language >> Customize is strongly advised when creating your translation. When enabled, pressing Alt+clicking on any user interface element will open a reduced version of the Edit window, with only the clicked item displayed. You will then be able to translate this item the same way you would with the standard Edit window. If you want to see all other translations near this current item, click the arrow at the left of the popup window. The popup will then become the full Edit window. 6.Translation Workflow A typical workflow is: - Be sure to have a customized language loaded and turn Preferences >> Language >> Custom >> Auto-edit mode on. - ALT+click on any interface item to translate. For example, the Tool >> Load Tool button. - The popup version of the Edit window will appear with the Title field selected. - In this example, the English (Title) should be “Load Tool.” In the Localized (Title) field below, enter your translation. In French it would be “Charger Tool.” - Click Info to display that part of the translation. If you have the Tool palette visible, you should immediately see the change to the Title applied. In the “English (info)” section, you should have “Load Tool (Ztl File).” Enter your translation for this text. Continuing with the French example, you might enter “Charger un ZTL (.ZTL) précédement enregistré.”As you may notice, this example translation is longer than the original text. Feel free to provide more information if needed, so long as it is valid information. It is safer to keep the translation simple rather than trying to give more information at the risk of mistakes. - Press Enter. The Auto Edit window will close. - Switch to another item to translate by restarting at step 2. This workflow is in fact really fast, in part because you are directly in the context of the function being translated. Translating a list in a raw text file would make the task more difficult. Don’t forget that you can click the Previous and Next buttons when either Edit window is open. Most of the time, buttons within the same UI cluster will be next to each other in the list. 7. Unique and Shared Modes In ZBrush, there are a number of buttons or sliders which have the same text but slightly different functionality. A good example is the “Import” button. This can be found in the Texture palette to import a texture, in the Alpha palette to import an Alpha, the Document palette and other places. When Shared mode is enabled (by default), changing the Title part of the translation will also apply automatically to all the other identically named buttons within ZBrush. However, the tooltip (found in the Info section of the translation) will remain unchanged. This saves you the time of having to edit the same text repeatedly, yet also allows you to enter a different description for each unique item that shares that name. For example, the tooltips of the “Import (Alpha)” and “Import (Texture)” buttons. This mode is a huge convenience, since some items are duplicated many times across the interface. Be careful, however. Sometimes — depending on the language and context — a shared translation does not work well across all instances of a UI element. Depending on your language, another translation may have a better meaning. Another reason would be if the full text will not fit a button size at some locations and you then have to use an abbreviation. (“Transform” might become “Transf.”) By first setting a translation to Unique, you can tell ZBrush to only change that instance of the text while ignoring all other locations that would otherwise be shared. 8. Translating Icons The text on icons can also be translated with a special workflow: - Enable Auto-edit mode, found in Preferences >> Language >> Customize. - Shift+click on the icon of your choice. A note will tell you that the icon has been exported as a Photoshop file. It will be saved in the custom language source folder on your hard drive. - On Windows: C:\Users\Public\Documents\ZBrushData\ZStartup\CustomLang\icons - On MacOS: /Users/Shared/ZBrushData/ZStartup/CustomLang/icons - Open your favorite .PSD file format editor and load the file. We strongly advise using Adobe Photoshop. - Notice Image >> Mode is set to “Multichannel.” Feel free to switch to Greyscale mode to edit the icon, which gives you the ability to work with layers. - Edit the icon however you wish. This will usually be to change the text but you could also change the graphical portion of the icon. - Save your icon file. You do not need to flatten the layers. - Back in ZBrush, click Preferences >> Language >> Customize >> Reload Icons. The updated icon should now appear. - Repeat these steps for other icons. 9. Translation of the Gizmo 3D Modifiers The Gizmo 3D includes multiple modifiers which are plugin based. You can still translate their names and tooltip information (such as when hovering a cone), but this needs to be done through a different system. To make these changes, you need to edit the XML file corresponding to the Modifier. These files are found in the ZBrush/ZData/ZPlug64/NameOfTheModifierData folder. Look for files ending whe “_zc.xml” and open these with a text editor. There, translate the content found in the trans=”xxxx” sections. Do not change any other portion of the files! These translations will only become visible when ZBrush is restarted or by switching to another language and then reverting to the custom language. 10. Font for Your Language ZBrush is provided with multiple fonts that cover a wide range of characters for most languages. These are the Noto fonts from Google: If you are translating ZBrush in a language which needs a special font, download it from the Noto webpage and copy it to the ZBrush/ZData/ZLang/ZFont folder. Next, edit the ZFontMac.xml and ZFontWin.xml files. Search for the “language = “zc”” section, below which you will see a list of fonts: “sysfont = “NotoSans-Regular.ttf;….”/>”. Add your font filename name before the “NotoSans-Regular.ttf” and make sure to separate them with a “;” in between as it is for the other fonts. Your custom language will now first look at this font to display your language and its characters. 11. Important Information and Advice About Translating ZBrush This section contains valuable information that will help you when translating ZBrush. It includes best practices and how to avoid common mistakes. If you have questions about the translation process or you want to share your translation work with Pixologic, please send an email to [email protected]. Adapt to Your Language but Keep the Meaning This is a key point about localization. Obviously, using something like Google Translation would have no meaning. ZBrush has its own philosophy and only someone who knows the software can perfectly translate the software. With one exception, all translations provided by Pixologic have been made by ZBrush artists/users. For some words or expressions, a direct translation may not mean anything in your language. In that case, adapt to what makes the most sense in your own language. Some functions already have a specific word in your language and should be used even if this translation is far from the original English. What is most important is that it makes sense to the end user. The end user is the key, since the goal of the translation is to make ZBrush accessible to artists who may not understand at all because it is not in their language. At the opposite end of the spectrum, some words may not exist in your language or the English text is so well known that it accomplishes nothing to do a translation. In that case, keep it in English. As an example, the “Picker” menu and its associated function can be translated into the French language but would be meaningless. As a result, the English name has been kept. Keep Famous ZBrush Function Names — or at Least Keep their Pronunciation ZBrush includes numerous features which are well known and are key to what ZBrush is. These include Tool, SubTool, DynaMesh, ZRemesher, ZSphere, ShadowBox, SpotLight, Projection Master, LightBox, ArrayMesh, NanoMesh, ZScript, and FiberMesh, to name a few. It is strongly advised to keep them as they are and not try to translate them. If your language uses a different character set (like Asian characters) you can do a phonetic translation, keeping the pronunciation. If you think a translation is really needed, add it to the tooltip (Info section) but keep the original word/sound on the button itself. Tool and SubTool can be easily translated in most languages, but again these items are fundamental to ZBrush and must remain as is. This is a typical example of where you can add the translation to the tooltip of the function. Be Very Careful of Special Characters In some original text, you will notice characters like \n (or even sometimes \n\n) or \Cxxxxxx where the x can be a hexadecimal number. The first one is a break line character while the second is a color identifier that will change the color of the text that follows. Example: \C333333Z\CFFAA00Brush will output ZBrush. Notice that some characters can be right after the special code, without empty space. Some function names can start with a letter followed by a point which is not visible in ZBrush but does appear in the Editor. Examples are the NanoMesh or ArrayMesh functions which can start with a “p.”, “m.”, “a.”, etc. You need to keep these special characters in your translation since they are used by ZBrush to be recognized as a group of features. Notice the “P.” before the function name. “P.Panel Loops” appears in the UI as simply, “Panel Loops” while the “P.” is used as an internal identifier. Don’t Forget “Hidden” Features The most obvious way to do the translation is to open each palette one by one and do the translation using Auto Edit. When doing so, don’t forget that some palettes may have a popup with buttons, like the Render >> BPR filters >> Filters and Blend modes. There are also special palettes like for ZModeler, which is only available when the ZModeler brush is selected. Other examples are when a ZSphere or 3D Primitive is selected, changing the contents of the Tool palette. Obviously, each 2.5D Tool (like the Simple or Cloner brush) has different settings to translate. The way to be sure that the translation is done is to open the Translation Edit window and have Untranslated turned on. If all of its tabs are empty, you are done! Right to Left Languages Unfortunately, languages that read from right to left (like Hebrew and Arabic) are not supported. Time Commitment Creating a custom language is not a quick task. For your information, it takes approximately a month to translate ZBrush into a new language. This includes time spent searching for functions you don’t know and then figuring out how to best translate them. Even though the translators for the languages provided by Pixologic were well versed in ZBrush before they began, they all learned a lot about ZBrush by translating it!
http://docs.pixologic.com/user-guide/multiple-languages/custom-language/
2018-08-14T13:27:54
CC-MAIN-2018-34
1534221209040.29
[array(['http://docs.pixologic.com/wp-content/uploads/2017/06/4R8_70.jpg', '4R8_70'], dtype=object) ]
docs.pixologic.com
Duplicating an Existing Job Sometimes it is more convenient to make a copy of an existing job and adjust its options instead of creating a new job. ePublisher AutoMap allows you to duplicate an existing job. Then, you can modify the options as needed for your new job. To duplicate an existing job Start ePublisher AutoMap. Select the job you want to copy in the ePublisher AutoMap main window. On the Job menu, click Duplicate. Specify the new job name, and then click OK. Specify the scheduling options as needed, and then click OK. For more information, see “Scheduling Jobs with Windows Scheduler” on page 306. Once the new job exists, you can modify the job for your specific needs. For more information, see “Editing an Existing Job” on page 306.
http://docs.webworks.com/ePublisher/2009.1/Help/03.Preparing_and_Publishing_Content/04.020.Automating_Projects
2018-08-14T13:38:08
CC-MAIN-2018-34
1534221209040.29
[]
docs.webworks.com
Understanding Topics Reports Context-sensitive help topics require that you have TopicAlias markers inserted in your source documents. ePublisher generates context-sensitive help topics based on the topic IDs you specify for each TopicAlias marker you insert in your source documents. Each time ePublisher detects a TopicAlias marker in a source document, ePublisher generates a context-sensitive help topic based on the topic ID. For more information about creating context-sensitive help topics, see “Creating Context-Sensitive Help in FrameMaker” on page 126 and “Creating Context-Sensitive Help in Word” on page 229. You can use the Topics Report to verify that context-sensitive help topics have been created for each topic ID specified in your source document. The Topics Report lists the topic ID and the topic file created for each topic ID. Configure the notifications you want ePublisher to generate for Topics report settings before you generate Topics reports. For more information about configuring Topics report settings, see “Configuring Reports” on page 302. For more information about generating Topics reports, see “Generating Reports” on page 303.
http://docs.webworks.com/ePublisher/2009.2/Help/03.Preparing_and_Publishing_Content/3.062.Producing_Output_Based_on_Stationery
2018-08-14T13:38:48
CC-MAIN-2018-34
1534221209040.29
[]
docs.webworks.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. This operation lists the parts of an archive that have been uploaded in a specific multipart upload. You can make this request at any time during an in-progress multipart upload before you complete the upload (see CompleteMultipartUpload. List Parts returns an error for completed uploads. The list returned in the List Parts response is sorted by part range. The List Parts operation supports pagination. By default, this operation returns up to 50 uploaded parts in the response. You should always check the response for a marker at which to continue the list; if there are no more items the marker is null. To return a list of parts that begins at a specific part, set the marker request parameter to the value you obtained from a previous List Parts request. You can also limit the number of parts Parts in the Amazon Glacier Developer Guide. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to ListPartsAsync. Namespace: Amazon.Glacier Assembly: AWSSDK.Glacier.dll Version: 3.x.y.z Container for the necessary parameters to execute the ListParts service method. The example lists all the parts of a multipart upload. var response = client.ListParts(new ListPartsRequest { AccountId = "-", UploadId = ", VaultName = "examplevault" }); string archiveDescription = response.ArchiveDescription; string creationDate = response.CreationDate; string marker = response.Marker; string multipartUploadId = response.MultipartUploadId; long partSizeInBytes = response.PartSizeInBytes; List parts = response.Parts; string vaultARN = response.VaultARN; .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Glacier/MIGlacierListPartsListPartsRequest.html
2018-08-14T14:40:21
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
Best practices: Assessment for locally managed The Best Practices Guide includes deployment recommendations and real-world examples from the Office 365 Product Group and delivery experts from Microsoft Services. For a list of all the articles, see Best practices. Locally Managed success criteria is to deploy Office 365 ProPlus in a like-for-like fashion, replacing the previous version of Office. Users should get the application set they had before, including Visio and Project, and the existing languages. InfoPath-based LOB applications must continue to work, with minimal user impact and management overhead. Locally Managed only supports 32-bit Office so all users will remain on 32-bit Office. Assessments results Size and distribution 5000 employees 3 supported languages Canada, USA, and Germany < 100 travelling users, some who are offsite for extended periods Sites are a main office in each country, limited small offices throughout each country IT infrastructure Windows 7 64-bit Small group of Mac users, ~25 who users have local administrator rights Mac clients are not managed by client management software All systems have en-us language pack installed with additional languages in CanAm region or pulled by the user Remote Desktop Services and Citrix farm with < 500 users Network bandwidth is sufficient for daily business Users do not have administrative rights on their machines Network is a distributed layout with a small number of high-bandwidth internet interconnection points Application landscape Office 2010 Volume License (MSI) 32-bit Groups of users using Project 2010 and Visio 2010 32-bit All applications other than Access being deployed Due to legacy Line-of-Business (LOB) applications, InfoPath is required Cloud infrastructure Office 365 tenant with Azure AD Connect On-Premises Active Directory Federation Services with Single Sign-on (SSO) Exchange Online deployed in production Self-service including installations are not blocked in the portal
https://docs.microsoft.com/en-us/DeployOffice/best-practices/best-practices-assessment-for-locally-managed
2018-08-14T13:18:43
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
. [!Important] This topic does not apply for Windows 10. The way that default file associations work changed in Windows 10. For more information, see the section on Changes to how Windows 10 handles default apps in this post.. user account control (UAC) feature introduced in Windows Vista. post-installation will be unsuccessful. Instead, defaults must be registered. name. MIMEAssociations.. RegisteredApplications. Full Registration Example:. void NotifySystemOfNewRegistration() { SHChangeNotify(SHCNE_ASSOCCHANGED, SHCNF_DWORD | SHCNF_FLUSH, nullptr, nullptr); Sleep(1000); })
https://docs.microsoft.com/en-us/windows/desktop/shell/default-programs
2018-08-14T13:53:15
CC-MAIN-2018-34
1534221209040.29
[array(['images/defaultprogramsmain.png', 'screen shot of the default programs entry page'], dtype=object) array(['images/setyourdefaultprograms.png', 'screen shot of the set your default programs page'], dtype=object) array(['images/setassociationsforaprogram.png', 'screen shot of the set assocations for a program page'], dtype=object) array(['images/setassociationsforaprogramforlitware.png', 'screen shot of the set associations for a program page for litware'], dtype=object) array(['images/notthedefaultui.png', 'screen shot of an example dialog box'], dtype=object)]
docs.microsoft.com
Packaging Guidelines¶ Downstream packagers often want to package Hypothesis. Here are some guidelines. The primary guideline is this: If you are not prepared to keep up with the Hypothesis release schedule, don’t. You will annoy me and are doing your users a disservice. Hypothesis has a very frequent release schedule. It’s rare that it goes a week without a release, and there are often multiple releases in a given week. If you are prepared to keep up with this schedule, you might find the rest of this document useful. Release tarballs¶¶ Python versions¶ Hypothesis is designed to work with a range of Python versions. Currently supported are: - pypy-2.6.1 (earlier versions of pypy may work) - CPython 2.7.x - CPython 3.4.x - CPython 3.5.x - CPython 3.6.x - CPython 3.7.x If you feel the need to have separate Python 3 and Python 2 packages you can, but Hypothesis works unmodified on either. Other Python libraries¶ Hypothesis has mandatory dependencies on the following libraries: Hypothesis has optional dependencies on the following libraries: - pytz (almost any version should work) - Faker, version 0.7 or later - Django, all supported versions - numpy, 1.10 or later (earlier versions will probably work fine) - pandas, 1.19 or later - pytest (3.0 or greater). This is a mandatory dependency for testing Hypothesis itself but optional for users. The way this works when installing Hypothesis normally is that these features become available if the relevant library is installed. Testing Hypothesis¶.8.0 is strongly encouraged, but it may work with earlier versions (however py.test specific logic is disabled before 2.8.0). Tests are organised into a number of top level subdirectories of the tests/ directory. - cover: This is a small, reasonably fast, collection of tests designed to give 100% coverage of all but a select subset of the files when run under Python 3. - nocover: This is a much slower collection of tests that should not be run under coverage for performance reasons. - py2: Tests that can only be run under Python 2 - py3: Tests that can only be run under Python 3 - datetime: This tests the subset of Hypothesis that depends on pytz - fakefactory: This tests the subset of Hypothesis that depends on fakefactory. - django: This tests the subset of Hypothesis that depends on django An example invocation for running the coverage subset of these tests: pip install -e . pip install pytest # you will probably want to use your own packaging here python -m pytest tests/cover
https://hypothesis.readthedocs.io/en/latest/packaging.html
2018-08-14T13:27:40
CC-MAIN-2018-34
1534221209040.29
[]
hypothesis.readthedocs.io
. In SPlunk version 4.3.3 I can't add UDP 514 port. It reports the following error "Encountered the following error while trying to save: In handler 'udp': Parameter name: UDP port 514 is not available" hi Vleitao, sorry it took so long to get back to you. this is a relatively common problem, and is answered on our Answers site:<br />hope you've been able to sort out your issue.
http://docs.splunk.com/Documentation/Splunk/4.3.1/Data/SyslogUDP
2018-08-14T13:57:23
CC-MAIN-2018-34
1534221209040.29
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Development Environment Setup on macOS¶ This section describes how to set up a macOS development system. After completing these steps, you will be able to compile and run your Zephyr applications on the following macOS is successfully installed, install the following tools using the brew command line. Note Zephyr requires Python 3 in order to be built. Since macOS comes bundled only with Python 2, we will need to install Python 3 with Homebrew. After installing it you should have the macOS-bundled Python 2 in /usr/bin/ and the Homebrew-provided Python 3 in /usr/local/bin. Install tools to build Zephyr binaries: brew install cmake ninja dfu-util doxygen qemu dtc python3 gperf cd ~/zephyr # or to the folder where you cloned the zephyr repo pip3 install --user -r scripts/requirements.txt Note If pip3 does not seem to have been installed correctly use brew reinstall python3 in order to reinstall it. Source zephyr-env.sh wherever you have cloned the Zephyr Git repository: unset ZEPHYR_SDK_INSTALL_DIR cd <zephyr git clone location> source zephyr-env.sh Finally, assuming you are using a 3rd-party toolchain you can try building the Hello World sample to check things out. To build for the ARM-based Nordic nRF52 Development Kit: cd $ZEPHYR_BASE/samples/hello_world mkdir build && cd build # Use cmake to configure a Ninja-based build system: cmake -GNinja -DBOARD=nrf52_pca10040 .. # Now run ninja on the generated build system: ninja Setting Up the Toolchain¶ In case a toolchain is not available for the board you are using, you can build a toolchain from scratch using crosstool-NG. Follow the steps on the crosstool-NG website to prepare your host Follow the Zephyr SDK with Crosstool NG instructions to build the toolchain for various architectures. You will need to clone the sdk-ng repo and run the following command: ./go.sh <arch> Note Currently only i586 and arm builds are verified. Repeat the step for all architectures you want to support in your environment. To use the toolchain with Zephyr, export the following environment variables and use the target location where the toolchain was installed, type: export ZEPHYR_TOOLCHAIN_VARIANT=xtools export XTOOLS_TOOLCHAIN_PATH=/Volumes/CrossToolNGNew/build/output/ To use the same toolchain in new sessions in the future you can set the variables in the file $HOME/.zephyrrc, for example: cat <<EOF > ~/.zephyrrc export XTOOLS_TOOLCHAIN_PATH=/Volumes/CrossToolNGNew/build/output/ export ZEPHYR_TOOLCHAIN_VARIANT=xtools EOF Note In previous releases of Zephyr, the ZEPHYR_TOOLCHAIN_VARIANT variable was called ZEPHYR_GCC_VARIANT.
http://docs.zephyrproject.org/getting_started/installation_mac.html
2018-08-14T14:00:43
CC-MAIN-2018-34
1534221209040.29
[]
docs.zephyrproject.org
PlaneProjection Class Represents a perspective transform (a 3-D-like effect) on an object. Inheritance Hierarchy System.Object System.Windows.DependencyObject System.Windows.Media.Projection System.Windows.Media.PlaneProjection Namespace: System.Windows.Media Assembly: System.Windows (in System.Windows.dll) Syntax 'Declaration Public NotInheritable Class PlaneProjection _ Inherits Projection public sealed class PlaneProjection : Projection <PlaneProjection .../> The PlaneProjection type exposes the following members. Constructors Top Properties Top Methods Top Fields Top Remarks. Examples
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/dd502829(v=vs.95)
2018-08-14T14:15:03
CC-MAIN-2018-34
1534221209040.29
[array(['images/dd502829.image_persptransform%28en-us%2cvs.95%29.png', 'Image with perspective transform applied. Image with perspective transform applied.'], dtype=object) array(['images/dd502829.stackingobjects%28en-us%2cvs.95%29.png', 'Shows stacking objects in 3d effect. Shows stacking objects in 3d effect.'], dtype=object) ]
docs.microsoft.com
Create a part requirement How to create a part requirement. Navigate to Work Management > Work Order > All Work Orders. Open a work order. Open a work order task that is not in the Closed or Cancelled state. Do one of the following: OptionDescription Click Source. All tasks and part requirements are listed on the left. Point to any task or part requirement icon to obtain more information. Right-click a work order task and select Create Part Requirement. This method is useful if you are sourcing multiple parts for a work order task. In the Part Requirements related list, click New. This method is useful if you are sourcing a single part for a work order task. Fill in the fields, as appropriate. Table 1. Part requirements fields Field Description Number Auto-generated number for the part requirement. Work order task Number assigned to the work order task. Model Description of the part model needed to complete the work order task. Required by date Date by which all parts should be delivered. The date is filled in automatically based on the task's expected travel start time. If necessary, change the date manually. Required quantity Total quantity necessary to complete the part requirement. This field becomes read-only when the full number of required parts has been sourced. Reserved quantity Total quantity that has been sourced already. Sourced Indicator for whether the required quantity for this part requirement has been reserved and transfer requested from one stockroom to another. Delivered Indicator for whether the transfer order lines under this part requirement have been delivered or not. Short description Contents of the Short description field from the parent work order. If the work order was created from an incident, problem, or change request, the short description of the part requirement is inherited from that record. If the work order was created automatically from a work order model, the short description is from model template. This field is not visible by default. Click Submit. If the part is out of stock, a message appears at the top of the form naming the part. If someone other than the qualifier will source the part requirement, create transfer order lines, move the part from a stockroom to an agent, and click Qualified. Note: Part requirement record numbers start with an SOPR prefix and the records are stored in the [sm_part_requirement] table in the Service Order Management application. Part requirements created in prior releases start with an WOPR prefix.
https://docs.servicenow.com/bundle/geneva-service-management-for-the-enterprise/page/product/planning_and_policy/task/t_CreateAPartRequirement.html
2018-08-14T13:55:07
CC-MAIN-2018-34
1534221209040.29
[]
docs.servicenow.com
Paragraph Styles in Word Create paragraph styles for items based on function, not based on formatting. This approach allows you to modify formatting over time and the style names continue to apply. It also prepares you for structured writing in the future. Name your paragraph styles starting with naming conventions that group styles by function. For example, group procedure-related styles together by starting the style names with Procedure, such as ProcedureIntro, ProcedureStep, and ProcedureSubstep. Note: Style names should not include a period in their name. The period can cause display issues when ePublisher creates the cascading style sheet entry that defines the appearance of the style. To simplify formatting and save time for future maintenance and customization, set the default paragraph font and spacing for a base style, such as Normal. Then, base other styles on this base style to inherit the default formatting settings. This process allows you to quickly modify fonts and spacing across styles by modifying only the base style. You can customize settings for each style as needed. The customized settings are not affected when you modify those settings in the base style. To simplify maintenance for heading styles, which often use a different font than your content styles, you may want to base all heading styles on the Heading 1 style to define the font for all headings. In ePublisher, you can scan the source documents to list all the paragraph styles. Then, you can organize them in ePublisher to allow property inheritance and to streamline the customization process for your generated output. You may need multiple paragraph styles to define functions that support pagination settings, such as a BodyListIntro format that has Keep with next set. To reduce the number of paragraph styles, you can customize paragraphs to add the Keep with next setting as needed. Customizing this setting on a paragraph does not affect the ability for the paragraph to receive the other formatting settings from the style definition. To automate and simplify template use, define the paragraph style that follows each paragraph style. This process allows the writer to press Enter after writing a paragraph and the template creates the next paragraph with the style most commonly used next. For example, after a Heading style, the writer most often writes a body paragraph of content. Common paragraph styles include: Figure paragraphs. You may need multiple indents, such as Figure, FigureInList, and FigureInList2. Body paragraphs. You may need multiple indents, such as Body, BodyInList, and BodyInList2. To reduce training needs, you can use the default style names, such as Body Text, Body Text Indent, and Body Text Indent 2. Headings, such as Heading 1, Heading 2, Heading 3, and Heading 4. You may also need specialized headings, such as Title, Subtitle, FrontMatterHeading1, FrontMatterHeading2, and FrontMatterHeading3. The cross-reference feature in Microsoft Word allows you to create cross references to headings that use the default Heading styles named Heading X, where X is a number. To create cross references to other styles, use bookmarks. Do not paste content at the beginning of a heading. Existing cross references to that heading may include the pasted content when the cross references are updated. Word” on page 97. Numbered lists. You may need multiple levels, such as ProcedureStep that uses numbers and ProcedureSubstep that uses lowercase letters. You may also need numbered list items in tables, such as CellStep. Be sure to consider related supporting formats, such as ProcedureIntro. For more information, see “Bulleted and Numbered Lists in Word” on page 97. Examples, such as code or command syntax statements, usually in a fixed font. To keep the lines of a code example together, you can set the Keep with next setting for the Example style and use an ExampleLast style to identify the end of the example. You may also need multiple example levels, such as ExampleInList and ExampleInListLast. Paragraphs in tables, such as CellHeading, CellBody, CellBody2, CellStep, and CellBullet. Legal notice and copyright or trademark styles for inside the cover page. Table of contents and Index styles. styles to control formatting. Notes, cautions, tips, and warnings. ePublisher projects use custom field code markers, paragraph styles, and character styles to define online features. You need to give the list of markers and styles to the writers so they know how to implement each online feature. The writers use the markers and styles you create to define online features. The Stationery defines the custom markers and styles. To reduce complexity, you can use the style names defined in the documentation, or you can define the online feature to a different style. The following list identifies additional paragraph styles you may need to support ePublisher online content features: Paragraph or character styles to support multiple languages, such as bidirectional languages and text. Dropdown paragraph style that identifies the start of an expand/collapse section. You can end the section with a paragraph style defined to end the section, or with a DropDownEnd marker. Popup paragraph styles that define several aspects of popup window content: Popup paragraph style identifies the content to display in a popup window and in a standard help topic. This style is applied to the first paragraph of popup content. Popup Append paragraph style identifies the content to display in a popup window and in a standard help topic. This style is applied to additional popup paragraphs when you have more than one paragraph of content to include in a popup window. Popup Only paragraph style identifies the content to display only in a popup window. This style is applied to the first paragraph of popup content. Popup Only Append paragraph style identifies the content to display only in a popup window. This style is applied to additional popup paragraphs when you have more than one paragraph of content to include in a popup window. Related topics paragraph style that identifies a link to a related topic, such as a concept topic related to a task or a task related to a concept. See Also paragraph style that identifies the text you want to include in an inline See Also link. For more information about enabling a specific online feature, see “Designing, Deploying, and Managing Stationery” on page 111.
http://docs.webworks.com/ePublisher/2009.3/Help/02.Designing_Templates_and_Stationery/2.20.Designing_Input_Format_Standards
2018-08-14T13:40:09
CC-MAIN-2018-34
1534221209040.29
[]
docs.webworks.com
4 Security In any communication protocol—especially one carrying sensitive, private data—security is a major concern. From RFC6455, we can see that the WebSocket specification itself does not provide much guidance:. In general, the approach for ETP has been similar, in that it does not define any new security protocols. Rather, it relies on the available security mechanisms in its underlying protocols (HTTP, WebSocket, TLS, TCP, etc.). As in WITSML, ETP specifies authentication methods which MUST be implemented by all servers for interoperability, but allows for additional methods (such as client certificates, for example) to be used by agreement between specific parties. This part of the ETP specification is primarily concerned with the exact mechanism for authenticating the WebSocket upgrade. It is does not attempt to define all of the methods that may be used to acquire credentials, federate identity, etc. ETP provides two well-known authentication mechanisms: - JSON Web Tokes (JWT) token-based authentication (MUST be supported) - Basic authentication All ETP servers MUST implement JWT. Implementation guidelines for specific domains (such as WITSML v2.0) may provide more strict language that requires one or both of these methods to be supported. Specific vendors, service companies, and operators MAY also implement any other appropriate security mechanisms (such as SAML tokens), but they are not required by the specification and may lead to interoperability issues. In all cases, the client SHOULD use the Authorization request header defined by HTTP/1.1. Servers MUST support this method. For browser-based clients, it is not possible to add request headers to the upgrade request. This is because the HTML5 WebSocket API definition does not allow access to the request headers. For this reason, the client MAY provide the information in the query string of the upgrade request. Servers MUST also support this method. If the information is provided in both the request headers and in the query string, the server MUST use the request header. When provided on the query string, the authentication information MUST be URL-encoded. The query variable is Authorization and the: character is replaced with the equal sign (=). So the following authorization header: Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ Would appear as follows in the query string: Authorization=Basic%20QWxhZGRpbjpvcGVuIHNlc2FtZQ
http://docs.energistics.org/ETP/ETP_TOPICS/ETP-000-170-0-C-sv1100.html
2021-11-27T12:32:58
CC-MAIN-2021-49
1637964358180.42
[]
docs.energistics.org
Adding a Role Instance Minimum Required Role: Configurator (also provided by Cluster Administrator, Limited Cluster Administrator , Cloudera Runtime. dfs_hosts_allow.txtfile. The new role instance is configured with the default role group for its role type, even if there are multiple role groups for the role type. If you want to use a different role group, follow the instructions in the topic Managing Role Groups for moving role instances to a different role group
https://docs.cloudera.com/cloudera-manager/7.5.2/managing-clusters/topics/cm-role-instance-add.html
2021-11-27T12:09:19
CC-MAIN-2021-49
1637964358180.42
[]
docs.cloudera.com
Testing HPX¶ To ensure correctness of HPX, we ship a large variety of unit and regression tests. The tests are driven by the CTest tool and are executed automatically on each commit to the HPX Github repository. In addition, it is encouraged to run the test suite manually to ensure proper operation on your target system. If a test fails for your platform, we highly recommend submitting an issue on our HPX Issues tracker with detailed information about the target system. Running tests manually¶ Running the tests manually is as easy as typing make tests && make test. This will build all tests and run them once the tests are built successfully. After the tests have been built, you can invoke separate tests with the help of the ctest command. You can list all available test targets using make help | grep tests. Please see the CTest Documentation for further details. Running performance tests¶ We run performance tests on Piz Daint for each pull request using Jenkins. To run those performance tests locally or on Piz Daint, a script is provided under tools/perftests_ci/local_run.sh (to be run in the build directory specifying the HPX source directory as the argument to the script, default is $HOME/projects/hpx_perftests_ci. Adding new performance tests¶ To add a new performance test, you need to wrap the portion of code to benchmark with hpx::util::perftests_report, passing the test name, the executor name and the function to time (can be a lambda). This facility is used to output the time results in a json format (format needed to compare the results and plot them). To effectively print them at the end of your test, call hpx::util::perftests_print_times. To see an example of use, see future_overhead_report.cpp. Finally, you can add the test to the CI report editing the hpx_targets for the executable name and hpx_test_options for the corresponding options to use for the run. Issue tracker¶ If you stumble over a bug or missing feature in HPX, please submit an issue to our HPX Issues page. For more information on how to submit support requests or other means of getting in contact with the developers, please see the Support Website page. Continuous testing¶ In addition to manual testing, we run automated tests on various platforms. We also run tests on all pull requests using both CircleCI and a combination of CDash and pycicle. You can see the dashboards here: CircleCI HPX dashboard and CDash HPX dashboard .
https://hpx-docs.stellar-group.org/branches/master/html/contributing/testing_hpx.html
2021-11-27T11:44:25
CC-MAIN-2021-49
1637964358180.42
[]
hpx-docs.stellar-group.org
2.4.2.1 Methods for Interface IStoreStore Topic Version1Published10/31/2016Topic Change HistoryFor StandardETP v1.1 Method Description Type Parameter Summary onGetObject A request for the store to return a data object, identified by a URI. event (in) eventData: GetObject onDeleteObject A request to delete a data object from the store. event (in) eventData: DeleteObject onPutObject A request to insert or update a data object in the store. event (in) eventData: PutObject Object Return a data object from a GetObject request. (in) eventData: Object
http://docs.energistics.org/ETP/ETP_TOPICS/ETP-000-050-0-C-sv1100.html
2021-11-27T12:05:23
CC-MAIN-2021-49
1637964358180.42
[]
docs.energistics.org
Python Making sudo pip Safe (Again) The location where sudo pip3 installs modules has been changed to /usr/local/lib/pythonX.Y/site-packages, and sudo pip3 is henceforth safer to use. No other changes in user experience are expected. Sudo pip3 is not considered a standard way to install Python packages. Virtual environment and pip3 install --user should still be the prefered options. Additionally, Fedora will increase it’s compliance with the Filesystem Hierarchy Standard, as user-installed host-specific Python modules will now be correctly located under /usr/local.
https://docs.fedoraproject.org/pl/fedora/f27/release-notes/developers/Development_Python/
2021-11-27T12:32:45
CC-MAIN-2021-49
1637964358180.42
[]
docs.fedoraproject.org
Testing System Environment To perform a simple system testing you should use the space-instance.bat script on windows or space-instance.sh script on linux. To setup a production environment see the Moving into Production Checklist. Verifying Local Installation - Run a single GigaSpaces space instance by moving to the <XAP Root>\bindirectory and running the space-instance.bat/shcommand. You should see such an output: D:\gigaspaces-xap-premium-8.0.1-ga\bin>space-instance.bat Starting a Space Instance Setting space url to "/./mySpace?schema=default&properties=gs&groups="gigaspaces-8.0.1-XAPPremium-ga"" java version "1.6.0_24" Java(TM) SE Runtime Environment (build 1.6.0_24-b07) Java HotSpot(TM) 64-Bit Server VM (build 19.5-b02, mixed mode) Log file: D:\gigaspaces-xap-premium-8.0.1-ga\bin\..\logs\2011-06-19~01.21-gigaspaces-service-192.168.1.101-12840.log 2011-06-19 01:21:03,458 INFO [com.gigaspaces.common.resourceloader] - Loading properties file from: file:/D:/gigaspaces-xap-premium-8.0.1-ga/config/gs.properties 2011-06-19 01:21:03,896 INFO [com.gigaspaces.container] - System Environment: System: OS Name: Windows 7 OS Version: 6.1 Architecture: amd64 Number Of Processors: 4 JVM Details: Java Version: 1.6.0_24, Sun Microsystems Inc. Java Runtime: Java(TM) SE Runtime Environment (build 1.6.0_24-b07) Java VM: Java HotSpot(TM) 64-Bit Server VM 19.5-b02, Sun Microsystems Inc. Java Home: d:\java\jdk1.6.0_24\jre JVM Memory: Max Heap Size (KB): 466048 Current Allocated Heap Size (KB): 116451 Network Interfaces Information: Host Name: [my-PC] Network Interface Name: lo / Software Loopback Interface 1 IP Address: 0:0:0:0:0:0:0:1 IP Address: 192.168.1.101 Zones: N/A Process Id: 12840 GigaSpaces Platform: Edition: XAP Premium 8.0.1 GA Build: 5200 Home: D:\gigaspaces-xap-premium-8.0.1-ga\bin ..\ 2011-06-19 01:21:04,231 INFO [com.gigaspaces.container] - Created RMIRegistry on: < 192.168.1.101:10098 > 2011-06-19 01:21:04,244 INFO [com.gigaspaces.container] - Webster HTTP server started successfully serving the following roots: D:\gigaspaces-xap-premium-8.0.1-ga\bin ..\/lib;D:\gigaspaces-xap-premium-8.0.1-ga\bin ..\/lib/jini Webster serving on: 192.168.1.101:9813 2011-06-19 01:21:04,741 INFO [com.sun.jini.reggie] - started Reggie: ab0f28be-5829-4566-861e-4ccad1da1c50, [gigaspaces-8.0.1-XAPPremium-ga], jini://192.168.1.101:4166/ 2011-06-19 01:21:05,046 INFO [com.gigaspaces.core.common] - Starting Space [mySpace_container:mySpace] with url [/./mySpace?schema=default&properties=gs&groups=gigaspaces-8.0.1-XAPPremium-ga&state=started] ... 2011-06-19 01:21:05,515 INFO [com.gigaspaces.cache] - Cache manager created with policy [ALL IN CACHE], persistency mode [memory] 2011-06-19 01:21:05,703 INFO [com.sun.jini.mahalo.startup] - Mahalo started: com.sun.jini.mahalo.TransientMahaloImpl@31bd669d 2011-06-19 01:21:05,712 INFO [com.gigaspaces.core.common] - Space [mySpace_container:mySpace] with url [/./mySpace?schema=default&properties=gs&groups=gigaspaces-8.0.1-XAPPremium-ga&state=started] started successfully - Make sure it loads without errors/exceptions. If you have error/exceptions check the following: JAVA_HOMEenvironment variable - Make sure it points to a valid JDK folder. - Network setup - Make sure the machine has a valid network interface installed with a valid IP. - hosts file - Make sure it includes entry for localhostor other machines you are accessing. - Multiple NICs - If your machine running multiple network interfaces, make sure you have the XAP_NIC_ADDRESSenvironment variable set to a valid IP of the machine. This should be done on every machine running GigaSpaces. - User permissions - Make sure you run the gsInstnce.shscript with a linux user that has permissions to write into the <XAP root>/logsfolder. CLASSPATHenvironment variable - Make sure the CLASSPATHenvironment variable is not specified. You might have some libraries specified as part of the CLASSPATHthat cause GigaSpaces to fail. XAP_HOME- environment variable - Make sure the XAP_HOMEenvironment variable is not specified or pointing to a different GigaSpaces release folder. It might be you have some other GigaSpaces release installed on the same machine with XAP_HOMEvariable pointing to this release folder. - Ping the space by running the <XAP Root>\bin\gs.bat/shutility: gs space ping -url jini://*/*/mySpace The following using multicast lookup discovery protocol. The first * of the URL means search for the lookup service across the network using multicast. The following result output should appear on the console: D:\gigaspaces-xap-premium-8.0.1-ga\bin>gs space ping -url jini://*/*/mySpace total 1 ping from <mySpace> space with: Finder URL: jini://*/*/mySpace?timeout=5000 Lease Timeout: 10 seconds LookupFinder timeout: 5000 milliseconds Buffer Size: 100 Iterations: 5 Average Time = 135 milliseconds Subsequent space ping calls will have much faster response time. The fist space ping call introduce some meta data to the space. This happens only once. If the space ping call fails with the following: total 0 Service is not found using the URL: jini://*/*/mySpace?timeout=5000 You have the following options: - Make sure your network and machines running GigaSpaces are configured to have multicast enabled. See the How to Configure Multicast section for details on how to enable multicast. - Perform a space ping using unicast lookup discovery protocol: gs space ping -url jini://localhost/*/mySpace The following result should appear on the console: D:\gigaspaces-xap-premium-8.0.1-ga\bin>gs space ping -url jini://localhost/*/mySpace total 1 ping from <mySpace> space with: Finder URL: jini://localhost/*/mySpace?timeout=5000 Lease Timeout: 10 seconds LookupFinder timeout: 5000 milliseconds Buffer Size: 100 Iterations: 5 Average Time = 135 milliseconds When the ping client running on a remote machine (other than the machine running the space), localhost should be replaced with the machine hostname or IP running the space instance. Verifying Remote Installation Repeat the above steps #3, where the ping command is called from a machine different than the one running the space. You will have to install GigaSpaces both on the client and server machines.
https://docs.gigaspaces.com/xap/12.0/admin/troubleshooting-testing-system-environment.html
2021-11-27T11:26:43
CC-MAIN-2021-49
1637964358180.42
[]
docs.gigaspaces.com
Activating card readers To activate a USB card reader: Register it on the printing device’s touch panel Add the card reader's Vendor ID and Product ID to the USB device list on the printing device’s Web UI. Registering the card reader on the printing device’s touch panel On the initial screen of the MyQ Embedded application, tap MyQ to open the admin login screen. On the admin login screen, enter the admin password (the default password is 1087), and then tap OK. The Admin menu opens. Tap System settings in the menu on the right side of the screen. The device’s panel screen opens. On the screen, tap User Tools. The User Tools screen opens. Tap Screen Features. The Screen Features screen opens. Tap Screen Device Settings. The Screen Device Settings screen opens. On the Screen device Settings screen, scroll down and tap IC Card/Bluetooth Software Settings. The IC Card/Bluetooth Software Settings screen opens. Tap Select IC Card Reader. The Select IC Card Reader dialog box appears. Tap Proximity Card Reader. The dialog box is closed. Back on the IC Card/Bluetooth Software Settings screen, tap Proximity Card Reader Settings. The Proximity Card Reader Settings screen opens. Plug in the card Reader to the Android panel via Mini USB — USB (female) reduction. The Proximity Card Reader Settings message box informs you that a USB card reader was registered. Tap OK to close the Proximity Card Reader Settings message box. Back on the Proximity Card Reader Settings screen, remember (write down) the VENDOR_ ID number and the PRODUCT_ ID number displayed under Card Reader Info. These numbers have to be added to the printing device USB device list. Exit the system settings.
https://docs.myq-solution.com/ricoh-emb/7.5/Activating-card-readers.318210268.html
2021-11-27T12:20:25
CC-MAIN-2021-49
1637964358180.42
[]
docs.myq-solution.com
- op: add path: /spec/clusterNetwork/- value: (1) cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 (2) As a cluster administrator, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. After converting to dual-stack, all newly created pods are dual-stack enabled. As a cluster administrator, you can convert your single-stack cluster network to a dual-stack cluster network. You installed the OpenShift CLI ( oc). You are logged in to the cluster with a user with cluster-admin privileges. Your cluster uses the OVN-Kubernetes cluster network provider. To specify IPv6 address blocks for the cluster and service networks, create a file containing the following YAML: - op: add path: /spec/clusterNetwork/- value: (1) cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 (2) To patch the cluster network configuration, enter the following command: $ oc patch network.config.openshift.io cluster \ --type='json' --patch-file <file>.yaml where: file Specifies the name of the file you created in the previous step. network.config.openshift.io/cluster patched Complete the following step to verify that the cluster network recognizes the IPv6 address blocks that you specified in the previous procedure. Display the network configuration: $ oc describe network Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112
https://docs.openshift.com/container-platform/4.9/networking/ovn_kubernetes_network_provider/converting-to-dual-stack.html
2021-11-27T11:36:38
CC-MAIN-2021-49
1637964358180.42
[]
docs.openshift.com
Parrot Bebop The Parrot Bebop 2 is a popular flying camera. Basic support for PX4 has been added so that it can be used for research and testing. Bebop support is at an early stage. To use PX4 with Bebop you will need to build the code using the developer toolchain. Video Advanced Topics screen /dev/ttyUSB0 115200 to connect to the Bebop. Resources Instructions on how to build the code and use PX4 with Bebop 2 available in the Developer Guide here:
https://docs.px4.io/v1.10/en/complete_vehicles/bebop.html
2021-11-27T11:44:57
CC-MAIN-2021-49
1637964358180.42
[]
docs.px4.io
Another option for manipulating attributes is to apply functions to them. Functions are applied using the familiar parenthesis syntax: SEARCH Host SHOW len(name) The following topics are covered in this section: Value manipulation The following functions operate on normal attribute values. - abs(value) - bin(value, bins [, legend]) - boolToString(value) - booleanLabel(value, true_label, false_label, other_label) - duration(start_time, end_time) - currentTime() - defaultNumber(value, default_value) - extract(value, pattern [, substitution ]) - fmt(format, ...) - fmt_map(format, items) - formatNumber(value, singular, plural, other) - formatQuantity(value, format) - formatTime(time_value, format) - formatUTCTime(time_value, format) - friendlyTime(time_value) - friendlyUTCTime(time_value) - friendly_duration(duration) - get(item, attribute [, default ]) - hash(value) - int(value [, default]) - join(value, separator) - leftStrip(value) - len(value) - lower(string_value) - parseTime(time_string) - parseUTCTime(time_string) - parseLocalTime(time_string) - replace(value, old, new) - recurrenceDescription(recurrence) - rightStrip(value) - single(value) - size(value) - sorted(values) - split(value [, separator ]) - str(value) - strip(value) - sumValues(values) - time(time_val) - toNumber(value [, base ]) - toText(value [, base [, width ] ]) - unique(values) - upper(string_value) - value(item) - whenWasThat() abs(value) Returns the absolute value of an integer. bin(value, bins [, legend]) Separates numeric values into 'bins', based on a list of values providing the bin boundaries, for example, a search like this: SEARCH Host SHOW name, ram, bin(ram, [64, 512, 1024, 4096]) gives results like The optional legend parameter allows the bins to be named. The list must have one more item than the bins list. For example: SEARCH Host SHOW name, ram, bin(ram, [64, 512, 1024, 4096], ["Less than 64M", "64M to 512M", "512M to 1G", "1G to 4G", "More than 4G"]) boolToString(value) Interprets its argument as a Boolean and returns "Yes" or "No". booleanLabel(value, true_label, false_label, other_label) A more advanced version of boolToString, which lets you choose which label to use for the True, False, and None cases. duration(start_time, end_time) Calculates the amount of time elapsed between the two dates and returns the result as a string of the form 'days.hours:minutes:seconds'. currentTime() Returns the number of 100 nanosecond intervals since 15 October 1582. This example query returns the hosts created in the last 7 days: search Host where creationTime(#) > (currentTime() - 7*24*3600*10000000) show name, time(creationTime(#)) as CreateTime defaultNumber(value, default_value) Attempts to convert the value into a number and if it is not possible returns the default value. extract(value, pattern [, substitution ]) Performs a regular expression extraction. The value is matched against the pattern. If no substitution is given, returns the part of the value that matched the pattern. If substitution is given, it specifies a string including group references of the form \1, \2, and so on, that are filled in with the corresponding groups in the pattern. If the value does not match the pattern, returns an empty string. Strings containing backslashes must be specified with a regex qualifier, for example, regex "\1" . fmt(format, ...) The first argument is a Python format string, the remaining arguments are values to interpolate into it. The result is equivalent to a key expression of the form #"format"(...), except that the arguments to the fmt() function can be the results of other functions. That is, these two expressions are equivalent: SEARCH Host SHOW #"%s: %s"(name,ram) SEARCH Host SHOW fmt("%s: %s",name,ram) Whereas the following expression can only be done with fmt() because it calls the len() function: SEARCH Host SHOW fmt("%s: %d",name,len(name)) The following example shows the fmt() function used to show the results of floating point arithmetic: fmt( "%.4f", float(size)/1048576) as 'Size GB' fmt_map(format, items) Applies a format string to all items in a list. The format argument must be a python format string with only one variable. items is the list of items to interpolate into the string. Returns a list of strings as a result. formatNumber(value, singular, plural, other) If value is 1, returns singular, if value is a number other than one, return plural with the value inserted at the mandatory %d inside it, and if value is not a number return other. e.g. SEARCH Host SHOW name, patch_count, formatNumber(patch_count, "One patch", "%d patches", "Unknown patches") formatQuantity(value, format) Takes a value and applies user friendly formatting, putting the values into bits, bytes, KiB, Mb, and so on. The value, an int or a float is formatted according to a number of fixed format parameters. Format parameters are all literal strings. If non-numeric values such as strings or None are passed then they are returned unmodified. The format parameters and examples are shown in the following table: The following example shows applying friendly formatting the raw capacity of a StoragePool: SEARCH StoragePool SHOW name, total_raw_capacity, formatQuantity( total_raw_capacity, "1000") as 'Size 1000', formatQuantity( total_raw_capacity, "B1024") as 'Size B1024' formatTime(time_value, format) Converts the internal time format to a string, based on the format specification, and converting into the appliance's time zone. The format is specified using Python's strftime format. For example, a search like this: SEARCH Host SHOW name, formatTime(last_update_success, "%d %B %Y") Gives results: formatUTCTime(time_value, format) Identical to formatTime, except that it does not perform timezone conversion. friendlyTime(time_value) Converts the internal time format into a human readable string, taking into account time zones and daylight saving times, based on the time zone of the appliance. friendlyUTCTime(time_value) Converts the internal time format into a human readable string, without converting the time to account for time zones and daylight saving times. friendly_duration(duration) Takes a duration (that is, one time minus another) and returns a human readable string of the result, such as '3 days' or '1 month' or '30 seconds'. The result is not intended to be precise, but to be quickly understood by a person. get(item, attribute [, default ]) Retrieve attribute from item. If the item does not have the specified attribute, returns default if it is specified, or None if not. hash(value) Returns the MD5 hash of the specified value. int(value [, default]) Converts a string form of an integer to an integer. Works on lists. Optionally supports a second argument, which if present will be used if the string cannot be converted. join(value, separator) Build a string out of a list by concatenating all the list elements with the provided separator between them. leftStrip(value) Returns the value with white space stripped from the start. len(value) Returns the length of a string or list. lower(string_value) Returns a lower-case version of a string. parseTime(time_string) Converts a date/time string into the internal format, without time zone conversion. parseUTCTime(time_string) Converts a date/time string into the internal format. Identical to parseTime. parseLocalTime(time_string) Converts a date/time string into the internal format, taking into account time zones and daylight saving times, based on the time zone of the appliance. replace(value, old, new) Modifies value, replacing all non-overlapping instances of the string old with new. recurrenceDescription(recurrence) Converts a recurrence object to a human readable string. rightStrip(value) Returns the value with white space stripped from the end. single(value) If value is a list, return just the first item of it; otherwise return the value unchanged. This is useful when following key expressions that usually return a single item, but occasionally return multiple. e.g. search Host show name, single(#InferredElement:Inference:Primary:HostInfo.uptimeSeconds) size(value) Returns the size of a list or string. A synonym for len(). sorted(values) Returns the sorted form of the given list. split(value [, separator ]) Split a string into a list of strings. Splits on white space by default, or uses the specified separator. str(value) Converts its argument to a string. strip(value) Removes white space at the start and end of the value. sumValues(values) Sums a list of values. For example, to total the count attributes of the Software Instances related to each Host: search Host show name, sumValues(#Host:HostedSoftware:RunningSoftware:SoftwareInstance.count) time(time_val) Marks a number to indicate that it is a time. The values returned by functions such as currentTime and parseTime are large numbers (representing the number of 100 nanosecond intervals since 15 October 1582), which can be manipulated by other functions and compared to each other. To return them in results in a way that the UI knows that they are times, they must be annotated as times using the time function. toNumber(value [, base ]) Converts a string into a number. If base is given, uses the specified base for the conversion, instead of the default base 10. toText(value [, base [, width ] ]) Converts a number to a string. If base is given, the conversion uses the specified base. Only bases 8, 10 and 16 are valid. If width is given, the string is padded so that it contains at least width characters, padding with spaces on the left. unique(values) Returns a list containing the unique values from the provided list. upper(string_value) Returns an upper-case version of a string. value(item) Returns item unchanged. This is only useful to bind a non-function result to a name, as described in Name binding. whenWasThat() Converts the internal time format to something easily readable, like '1 hour ago', '2 weeks ago', and so on. Node manipulation These functions must be passed nodes with key expressions, often just a single # to represent the current node: SEARCH Host SHOW name, keys(#) // Use the time() function to tell the UI that the result is a time. SEARCH Host SHOW name, time(modified(#)) destroyed(node) Returns True if the node has been destroyed, False if not. Returns [invalid node] if the argument is not a node. Works on lists of nodes as well, returning a list of boolean values. (See the section on Search Flags that permit searching destroyed nodes.) hasRelationship(node, spec) Takes a node and a traversal specification. Returns True if the node has at least one relationship matching the specification; False if not. Works on lists of nodes as well. id(node) DEPRECATED function to return a node id in string form. Use #id to return a node's id. keys(node) Returns a list of the keys set on the node. Returns [invalid node] if the argument is not a node. Works on lists of nodes as well, returning a list of lists of keys. kind(node) Returns the kind of the node. Returns [invalid node] if the argument is not a node. Works on lists of nodes as well, returning a list of kinds. label(node) Returns the node's label, as defined in the taxonomy. Works on lists of nodes as well, returning a list of labels. modified(node) Returns the node's last modified time in the internal numeric format, this includes any modification, including relationships to the node. The modified() function works on lists of nodes as well, returning a list of times. Modification times and host nodes At the end of each discovery run, the automatic grouping feature considers all the Hosts, and builds a new set of automatic grouping relationships. It commits one big transaction that adjusts all the relationships to all Hosts, so every Host node usually has the same modification time. provenance(node, attribute [, show_attribute]) Follows provenance relationships from the node, finding the evidence node that provided the specified attribute. If the show_attribute is given, returns the specified attribute of the evidence node; if not, returns a handle to the node itself. NODECOUNT and NODES In addition to the functions described in the previous section, the NODECOUNT and NODES keywords, defined in Traversals, behave like functions in some respects. History functions The following history-related functions are currently available. creationTime(node) Returns the number of 100 nanosecond intervals between 15 October 1582 and the time the node was created. Also works on lists of nodes. createdDuring(node, start, end) Returns true if the node was created during the time range specified with start and end. For example, to find all the application instances created between 1st July and 10th July 2008. SEARCH BusinessApplicationInstance WHERE createdDuring(#, parseTime("2008-07-01"), parseTime("2008-07-10")) destroyedDuring(node, start, end) Returns true if the node was destroyed during the time range specified with start and end. To find all the application instances destroyed between 1st July and 10th July 2008: SEARCH FLAGS (include_destroyed, exclude_current) BusinessApplicationInstance WHERE destroyedDuring(#, parseTime("2008-07-01"), parseTime("2008-07-10")) destructionTime(node) Returns the time the node was destroyed in the internal time format. If the node is not destroyed, returns 0. Works on lists of nodes as well, returning a list of destruction times. eventOccurred(node, start, end) Takes a node and two times in the internal format. Returns True if the node was modified between the specified times; False if not. Works on lists of nodes as well. Returns [invalid time] if the times are invalid. Specialist history functions The following history functions can be used for specialist purposes: newInAttr(node, attr, timeA, timeB) Retrieves the node's specified attribute at the two times. The attribute is expected to contain a list. Returns a list containing all items that were present in the list at timeB that were not present at timeA. attrSpread(node, attr, timeA, timeB) Returns a list containing all unique values that the attribute has had between the two times. newInAttrSpread(node, attr, timeA, timeB, timeC) A cross between newInAttr and attrSpread. Returns a list of values for the attribute that existed at any time between timeB and timeC, but which did not exist at any time between timeA and timeB. historySubset(nh, timeA, timeB, attrs, rels) Reports on a subset of the node history between the two times. attrs is a list of attribute names to report; rels is a list of colon-separated relationship specifications to report. For example, the following query will show changes to os_type, os_version, and hosted SoftwareInstances for a collections of Hosts: SEARCH Host SHOW name, historySubset(#, 0, currentTime(), ["os_version", "os_type"], ["Host:HostedSoftware:RunningSoftware:SoftwareInstance"]) See also the post-processing function displayHistory in Results after processing. System interaction These functions allow access to other aspects of the BMC Discovery system. fullFoundationName(username) Returns the full name of the user with the given user name or None if no such user exists. getOption(key) Returns the value of the system option key. Link functions When search results are shown in the UI, each cell in the result table is usually a link to the node corresponding to the result row. These functions allow other links to be specified: nodeLink(link, value) link is a node id or node reference, for example the result of a key expression; value is the value to display in the UI and to be used in exports. For example, to create a table listing Software Instances and their Hosts, with links from the host names to the Host nodes: SEARCH SoftwareInstance SHOW name AS "Software Instance", nodeLink(#RunningSoftware:HostedSoftware:Host:Host, #RunningSoftware:HostedSoftware:Host:Host.name) AS "Host" queryLink(link, value) link is a search query to execute; value is the value to display in the UI and to be used in exports.
https://docs.bmc.com/docs/display/DISCO113/Query+Language+Functions
2019-03-19T00:55:47
CC-MAIN-2019-13
1552912201812.2
[]
docs.bmc.com
Bindable Object Bindable Object Class Definition Provides a mechanism by which application developers can propagate changes that are made to data in one object to another, by enabling validation, type coercion, and an event system. BindableProperty. public abstract class BindableObject : System.ComponentModel.INotifyPropertyChanged, Xamarin.Forms.Internals.IDynamicResourceHandler type BindableObject = class interface INotifyPropertyChanged interface IDynamicResourceHandler - Inheritance - - Derived - Xamarin.Forms.Internals.TemplatedItemsList<TView,TItem>Xamarin.Forms.Internals.TemplatedItemsList<TView,TItem> - Implements - SetValue(BindableProperty, Object) and GetValue(BindableProperty) to get and set the value on the bound property; The property of the desired type provides the interface that the target of the bound property will implement. OnPropertyChanged(String) method. Additionally, the Name property in the example below merely wraps the name field. In practice, the application developer may choose a different model in which to store application data. (); } Constructors Fields Properties Methods Events Explicit Interface Implementations Extension Methods Applies to Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-us/dotnet/api/xamarin.forms.bindableobject?view=xamarin-forms
2019-03-19T00:02:13
CC-MAIN-2019-13
1552912201812.2
[]
docs.microsoft.com
Serialization¶ Serializers¶ By default every message is encoded using JSON, so sending Python data structures like dictionaries and lists works. YAML, msgpack and Python’s built-in pickle module is also supported, and if needed you can register any custom serialization scheme you want to use. By default Kombu will only load JSON messages, so if you want to use other serialization format you must explicitly enable them in your consumer by using the accept argument: Consumer(conn, [queue], accept=['json', 'pickle', 'msgpack']) The accept argument can also include MIME-types.. Pickle and Security The pickle format is very convenient as it can serialize and deserialize almost any object, but this is also a concern for security. Carefully crafted pickle payloads can do almost anything a regular Python program can do, so if you let your consumer automatically decode pickled objects you must make sure to limit access to the broker so that untrusted parties do not have the ability to send messages! By default Kombu uses pickle protocol 2, but this can be changed using the PICKLE_PROTOCOLenvironment variable or by changing the global kombu.serialization.pickle_protocolflag. - Kombu to use an alternate serialization method, use one of the following options. - Set the serialization option on a per-producer basis:>>> producer = Producer(channel, ... exchange=exchange, ... serializer='yaml') - Set the serialization option per message:>>> producer.publish(message, routing_key=rkey, ... serializer='pickle') Note that a Consumer do not need the serialization method specified. They can auto-detect the serialization method as the content-type is sent as a message header. Sending raw data without Serialization¶ In some cases, you don’t need your message data to be serialized. If you pass in a plain string or Unicode object as your message and a custom content_type, then Kombu will not waste cycles serializing/deserializing the data. You can optionally specify a content_encoding for the raw data: >>> with open('~/my_picture.jpg', 'rb') as fh: ... producer.publish(fh.read(), content_type='image/jpeg', content_encoding='binary', routing_key=rkey) The Message object returned by the Consumer class will have a content_type and content_encoding attribute. Creating extensions using Setuptools entry-points¶ A package can also register new serializers using Setuptools entry-points. The entry-point must provide the name of the serializer along with the path to a tuple providing the rest of the args: encoder_function, decoder_function, content_type, content_encoding. An example entrypoint could be: from setuptools import setup setup( entry_points={ 'kombu.serializers': [ 'my_serializer = my_module.serializer:register_args' ] } ) Then the module my_module.serializer would look like: register_args = (my_encoder, my_decoder, 'application/x-mimetype', 'utf-8') When this package is installed the new ‘my_serializer’ serializer will be supported by Kombu. Buffer Objects The decoder function of custom serializer must support both strings and Python’s old-style buffer objects. Python pickle and json modules usually don’t do this via its loads function, but you can easily add support by making a wrapper around the load function that takes file objects instead of strings. Here’s an example wrapping pickle.loads() in such a way: import pickle from io import BytesIO from kombu import serialization def loads(s): return pickle.load(BytesIO(s)) serialization.register( 'my_pickle', pickle.dumps, loads, content_type='application/x-pickle2', content_encoding='binary', )
https://kombu.readthedocs.io/en/latest/userguide/serialization.html
2019-03-18T23:25:15
CC-MAIN-2019-13
1552912201812.2
[]
kombu.readthedocs.io
Table of Contents DHIS2 comes with an advanced solution for fine-grained management of users and user roles. The system is completely flexible in terms of the number and type of users and roles allowed. A user in the DHIS2 context is a human who is utilizing the software. Each user in DHIS2 has a user account which is identified by a username. A user account allows the user to authenticate to system services and be granted authorization to access them. To log in (authenticate) the user is required to enter a valid combination of username and password. If that combination matches a username and password registered in the database, the user is allowed to enter. In addition, a user should be given a first name, surname, email and phone number. This information is important to get correct when creating new users since certain functions in DHIS2 relies on sending emails to notify users about important events. It is also useful to be able to communicate to users directly over email and telephone to discuss data management issues or to sort out potential problems with the system. A user in DHIS2 is associated with an organisation unit. This implies that when creating a new user account that account should be associated to the location where user works. For instance when creating a user account for a district record officer that user account should be linked with the particular district where she works. The link between user account and organisation unit has several implications for the operation of the system: In the data entry module, a user can only be entering data for the organisation unit she is associated with and the organisation units below that in the hierarchy. For instance, a district records officer will be able to register data for her district and the facilities below that district only. In the user module, a user can only create new users for the organisation unit she is associated with in addition to the organisation units below that in the hierarchy. In the reports module, a user can only view reports her organisation units and those below. (This is something we consider to open up to allow for comparison reports.) A user role in DHIS2 is also associated with a set of user roles. Such user roles are discussed in the following section.
https://docs.dhis2.org/2.25/en/implementer/html/ch15.html
2019-03-18T23:47:19
CC-MAIN-2019-13
1552912201812.2
[]
docs.dhis2.org
¶ - 0.5.9 - 0.5.8 - 0.5.7 - 0.5.6 - 0.5.5 - 0.5.4p2 - 0.5.4p1 - 0.5.4 - 0.5.3 - 0.5.2 - 0.5.1 - 0.5.0 - 0.5.0rc4 - 0.5.0rc3 - 0.5.0rc2 - 0.5.0rc1 - 0.5.0beta3 - 0.5.0beta2 - 0.5.0beta] The extract() function, which was slightly improved in 0.5.7, needed a lot more work to generate the correct typecast (the typecasts appear to be necessary in PG’s EXTRACT quite a lot of the time). The typecast is now generated using a rule dictionary based on PG’s documentation for date/time/interval arithmetic. It also accepts text() constructs again, which was broken in 0.5.7.¶ 0.5.7¶Released: Sat Dec 26 2009 orm¶ [orm] contains_eager() now works with the automatically generated subquery that results when you say “query(Parent).join(Parent.somejoinedsubclass)”, i.e. when Parent joins to a joined-table-inheritance subclass. Previously contains_eager() would erroneously add the subclass table to the query separately producing a cartesian product. An example is in the ticket description] The “use get” behavior of many-to-one relations, i.e. that a lazy load will fallback to the possibly cached query.get() value, now works across join conditions where the two compared types are not exactly the same class, but share the same “affinity” - i.e. Integer and SmallInteger. Also allows combinations of reflected and non-reflected types to work with 0.5 style type reflection, such as PGText/Text (note 0.6 reflects types as their generic versions)] A column can be added to a joined-table declarative superclass after the class has been constructed (i.e. via class-level attribute assignment), and the column will be propagated down to subclasses. This is the reverse situation as that of, fixed in 0.5.6] relations() now have greater ability to be “overridden”, meaning a subclass that explicitly specifies a relation() overriding that of the parent class will be honored during a flush. This is currently to support many-to-many relations from concrete inheritance setups. Outside of that use case, YMMV.¶ [orm] Squeezed a few more unnecessary “lazy loads” out of relation(). When a collection is mutated, many-to-one backrefs on the other side will not fire off to load the “old” value, unless “single_parent=True” is set. A direct assignment of a many-to-one still loads the “old” value in order to update backref collections on that value, which may be present in the session already, thus maintaining the 0.5 behavioral contract] A column can be added to a joined-table subclass after the class has been constructed (i.e. via class-level attribute assignment). The column is added to the underlying Table as always, but now the mapper will rebuild its “join” to include the new column, instead of raising an error about “no such column, use column_property() instead”.¶ [test] Added examples into the test suite so they get exercised regularly and cleaned up a couple deprecation warnings.¶ 0.5.5¶Released: Mon Jul 13 2009 general¶ orm¶ [orm] The “foreign_keys” argument of relation() will now propagate automatically to the backref in the same way that primaryjoin and secondaryjoin do. For the extremely rare use case where the backref of a relation() has intentionally different “foreign_keys” configured, both sides now need to be configured explicitly (if they do in fact require this setting, see the next note…).¶ [orm] …the only known (and really, really rare) use case where a different foreign_keys setting was used on the forwards/backwards side, a composite foreign key that partially points to its own columns, has been enhanced such that the fk->itself aspect of the relation won’t be used to determine relation direction.bar).join(<anything>). In most cases, an error “Could not find a FROM clause to join from” would be raised. In a few others, the result would be returned in terms of the base class rather than the subclass - so applications which relied on this erroneous result need to be adjusted] Removed an obscure feature of execute() (including connection, engine, Session) whereby a bindparam() construct can be sent as a key to the params dictionary. This usage is undocumented and is at the core of an issue whereby the bindparam() object created implicitly by a text() construct may have the same hash value as a string placed in the params dictionary and may result in an inappropriate match when computing the final bind parameters. Internal checks for this condition would add significant latency to the critical task of parameter rendering, so the behavior is removed. This is a backwards incompatible change for any application that may have been using this feature, however the feature has never been documented.¶ 0.5.4p2¶Released: Tue May 26 2009 sql¶ postgresql¶ [postgresql] Deprecated the hardcoded TIMESTAMP function, which when used as func.TIMESTAMP(value) would render “TIMESTAMP value”. This breaks on some platforms as PostgreSQL doesn’t allow bind parameters to be used in this context. The hard-coded uppercase is also inappropriate and there’s lots of other PG casts that we’d need to support. So instead, use text constructs i.e. select([“timestamp ‘12/05/09’”]).¶ 0.5.4p1¶Released: Mon May 18 2009 0.5.4¶Released: Sun May 17 2009 orm¶ [orm] Significant performance enhancements regarding Sessions/flush() in conjunction with large mapper graphs, large numbers of objects: - Removed all* O(N) scanning behavior from the flush() process, i.e. operations that were scanning the full session, including an extremely expensive one that was erroneously assuming primary key values were changing when this was not the case. - one edge case remains which may invoke a full scan, if an existing primary key attribute is modified to a new value. - The Session’s “weak referencing” behavior is now full - no strong references whatsoever are made to a mapped object or related items/collections in its __dict__. Backrefs and other cycles in objects no longer affect the Session’s ability to lose all references to unmodified objects. Objects with pending changes still are maintained strongly until flush. The implementation also improves performance by moving the “resurrection” process of garbage collected items to only be relevant for mappings that map “mutable” attributes (i.e. PickleType, composite attrs). This removes overhead from the gc process and simplifies internal behavior. If a “mutable” attribute change is the sole change on an object which is then dereferenced, the mapper will not have access to other attribute state when the UPDATE is issued. This may present itself differently to some MapperExtensions. The change also affects the internal attribute API, but not the AttributeExtension interface nor any of the publicly documented attribute functions. - The unit of work no longer genererates a graph of “dependency” processors for the full graph of mappers during flush(), instead creating such processors only for those mappers which represent objects with pending changes. This saves a tremendous number of method calls in the context of a large interconnected graph of mappers. - Cached a wasteful “table sort” operation that previously occurred multiple times per flush, also removing significant method call count from flush(). - Other redundant behaviors have been simplified in mapper._save_obj(). ] MapperOptions and other state associated with query.options() is no longer bundled within callables associated with each lazy/deferred-loading attribute during a load. The options are now associated with the instance’s state object just once when it’s populated. This removes the need in most cases for per-instance/attribute loader objects, improving load speed and memory overhead for individual instances] Back-ported the “compiler” extension from SQLA 0.6. This is a standardized interface which allows the creation of custom ClauseElement subclasses and compilers. In particular it’s handy as an alternative to text() when you’d like to build a construct that has database-specific compilations. See the extension docs for details] Fixed bugs in Query regarding simultaneous selection of multiple joined-table inheritance entities with common base classes: - previously the adaption applied to “B” on “A JOIN B” would be erroneously partially applied to “A”. - comparisons on relations (i.e. A.related==someb) were not getting adapted when they should. - Other filterings, like query(A).join(A.bs).filter(B.foo==’bar’), were erroneously adapting “B.foo” as though it were an “A”. [orm] Fixed adaptation of EXISTS clauses via any(), has(), etc. in conjunction with an aliased object on the left and of_type() on the right.¶ [orm] Added an attribute helper method set_committed_valuein sqlalchemy.orm.attributes. Given an object, attribute name, and value, will set the value on the object as part of its “committed” state, i.e. state that is understood to have been loaded from the database. Helps with the creation of homegrown collection loaders and such] Further refined 0.5.1’s warning about delete-orphan cascade placed on a many-to-many relation. First, the bad news: the warning will apply to both many-to-many as well as many-to-one relations. This is necessary since in both cases, SQLA does not scan the full set of potential parents when determining “orphan” status - for a persistent object it only detects an in-python de-association event to establish the object as an “orphan”. Next, the good news: to support one-to-one via a foreign key or association table, or to support one-to-many via an association table, a new flag single_parent=True may be set which indicates objects linked to the relation are only meant to have a single parent. The relation will raise an error if multiple parent-association events occur within Python] Mappers now instrument class attributes upon construction with the final InstrumentedAttribute object which remains persistent. The _CompileOnAttr/__getattribute__() methodology has been removed. The net effect is that Column-based mapped class attributes can now be used fully at the class level without invoking a mapper compilation operation, greatly simplifying typical usage patterns within declarative.¶ [orm] ColumnProperty (and front-end helpers such as deferred) no longer ignores unknown **keyword arguments.¶ [orm] Fixed a bug with the unitofwork’s “row switch” mechanism, i.e. the conversion of INSERT/DELETE into an UPDATE, when combined with joined-table inheritance and an object which contained no defined values for the child table where an UPDATE with no SET clause would be rendered] For both joined and single inheriting subclasses, the subclass will only map those columns which are already mapped on the superclass and those explicit on the subclass. Other columns that are present on the Table will be excluded from the mapping by default, which can be disabled by passing a blank exclude_properties collection to the __mapper_args__. This is so that single-inheriting classes which define their own columns are the only classes to map those columns. The effect is actually a more organized mapping than you’d normally get with explicit mapper() calls unless you set up the exclude_properties arguments explicitly.¶ [declarative] It’s an error to add new Column objects to a declarative class that specified an existing table using __table__.¶ 0.5.0¶Released: Tue Jan 06 2009 general¶ [general] Documentation has been converted to Sphinx. In particular, the generated API documentation has been constructed into a full blown “API Reference” section which organizes editorial documentation combined with generated docstrings. Cross linking between sections and API docs are vastly improved, a javascript-powered search feature is provided, and a full index of all classes, functions and members is provided] Query.with_polymorphic() now accepts a third argument “discriminator” which will replace the value of mapper.polymorphic_on for that query. Mappers themselves no longer require polymorphic_on to be set, even if the mapper has a polymorphic_identity. When not set, the mapper will load non-polymorphically by default. Together, these two features allow a non-polymorphic concrete inheritance setup to use polymorphic loading on a per-query basis, since concrete setups are prone to many issues when used polymorphically in all cases] Exceptions raised during compile_mappers() are now preserved to provide “sticky behavior” - if a hasattr() call on a pre-compiled mapped attribute triggers a failing compile and suppresses the exception, subsequent compilation is blocked and the exception will be reiterated on the next compile() call. This issue occurs frequently when using declarative] Custom comparator classes used in conjunction with column_property(), relation() etc. can define new comparison methods on the Comparator, which will become available via __getattr__() on the InstrumentedAttribute. In the case of synonym() or comparable_property(), attributes are resolved first on the user-defined descriptor, then on the user-defined comparator] Duplicate items in a list-based collection will be maintained when issuing INSERTs to a “secondary” table in a many-to-many relation. Assuming the m2m table has a unique or primary key constraint on it, this will raise the expected constraint violation instead of silently dropping the duplicate entries. Note that the old behavior remains for a one-to-many relation since collection entries in that case don’t result in INSERT statements and SQLA doesn’t manually police collections] Two fixes to help prevent out-of-band columns from being rendered in polymorphic_union inheritance scenarios (which then causes extra tables to be rendered in the FROM clause causing cartesian products): - improvements to “column adaption” for a->b->c inheritance situations to better locate columns that are related to one another via multiple levels of indirection, rather than rendering the non-adapted column. - the “polymorphic discriminator” column is only rendered for the actual mapper being queried against. The column won’t be “pulled in” from a subclass or superclass mapper since it’s not needed. ] Reflected foreign keys will properly locate their referenced column, even if the column was given a “key” attribute different from the reflected name. This is achieved via a new flag on ForeignKey/ForeignKeyConstraint called “link_to_name”, if True means the given name is the referred-to column’s name, not its assigned key] Support for three levels of column nullability: NULL, NOT NULL, and the database’s configured default. The default Column configuration (nullable=True) will now generate NULL in the DDL. Previously no specification was emitted and the database default would take effect (usually NULL, but not always). To explicitly request the database default, configure columns with nullable=None and no specification will be emitted in DDL. This is backwards incompatible behavior.¶ oracle¶ [oracle] Adjusted the format of create_xid() to repair two-phase commit. We now have field reports of Oracle two-phase commit working properly with this change.¶ [oracle] Added OracleNVarchar type, produces NVARCHAR2, and also subclasses Unicode so that convert_unicode=True by default. NVARCHAR2 reflects into this type automatically so these columns pass unicode on a reflected table with no explicit convert_unicode=True flags] Query.count() has been enhanced to do the “right thing” in a wider variety of cases. It can now count multiple-entity queries, as well as column-based queries. Note that this means if you say query(A, B).count() without any joining criterion, it’s going to count the cartesian product of A*B. Any query which is against column-based entities will automatically issue “SELECT count(1) FROM (SELECT…)” so that the real rowcount is returned, meaning a query such as query(func.count(A.name)).count() will return a value of one, since that query would return one row] Dialects can now generate label names of adjustable length. Pass in the argument “label_length=<value>” to create_engine() to adjust how many characters max will be present in dynamically generated column labels, i.e. “somecolumn AS somelabel”. Any value less than 6 will result in a label of minimal size, consisting of an underscore and a numeric counter. The compiler uses the value of dialect.max_identifier_length as a default] Added a new extension sqlalchemy.ext.serializer. Provides Serializer/Deserializer “classes” which mirror Pickle/Unpickle, as well as dumps() and loads(). This serializer implements an “external object” pickler which keeps key context-sensitive objects, including engines, sessions, metadata, Tables/Columns, and mappers, outside of the pickle stream, and can later restore the pickle using any engine/metadata/session provider. This is used not for pickling regular object instances, which are pickleable without any special logic, but for pickling expression objects and full Query objects, such that all mapper/engine/session dependencies can be restored at unpickle time] The RowTuple object returned by Query(*cols) now features keynames which prefer mapped attribute names over column keys, column keys over column names, i.e. Query(Class.foo, Class.bar) will have names “foo” and “bar” even if those are not the names of the underlying Column objects. Direct Column objects such as Query(table.c.col) will return the “key” attribute of the Column] Joins along a relation() from a mapped class to a mapped subclass, where the mapped subclass is configured with single table inheritance, will include an IN clause which limits the subtypes of the joined class to those requested, within the ON clause of the join. This takes effect for eager load joins as well as query.join(). Note that in some scenarios the IN clause will appear in the WHERE clause of the query as well since this discrimination has multiple trigger points.¶ [orm] AttributeExtension has been refined such that the event is fired before the mutation actually occurs. Additionally, the append() and set() methods must now return the given value, which is used as the value to be used in the mutation operation. This allows creation of validating AttributeListeners which raise before the action actually occurs, and which can change the given value into something else before its used] The composite() property type now supports a __set_composite_values__() method on the composite class which is required if the class represents state using attribute names other than the column’s keynames; default-generated values now get populated properly upon flush. Also, composites with attributes set to None compare correctly] Added “sorted_tables” accessor to MetaData, which returns Table objects sorted in order of dependency as a list. This deprecates the MetaData.table_iterator() method. The “reverse=False” keyword argument has also been removed from util.sort_tables(); use the Python ‘reversed’ function to reverse the results] user-defined @properties on a class are detected and left in place during mapper initialization. This means that a table-bound column of the same name will not be mapped at all if a @property is in the way (and the column is not remapped to a different name), nor will an instrumented attribute from an inherited class be applied. The same rules apply for names excluded using the include_properties/exclude_properties collections.¶ [orm] Added a new SessionExtension hook called after_attach(). This is called at the point of attachment for objects via add(), add_all(), delete(), and merge().¶ [orm] A mapper which inherits from another, when inheriting the columns of its inherited mapper, will use any reassigned property names specified in that inheriting mapper. Previously, if “Base” had reassigned “base_id” to the name “id”, “SubBase(Base)” would still get an attribute called “base_id”. This could be worked around by explicitly stating the column in each submapper as well but this is fairly unworkable and also impossible when using declarative] simple label names in ORDER BY expressions render as themselves, and not as a re-statement of their corresponding expression. This feature is currently enabled only for SQLite, MySQL, and PostgreSQL. It can be enabled on other dialects as each is shown to support this behavior] declarative initialization of Columns adjusted so that non-renamed columns initialize in the same way as a non declarative mapper. This allows an inheriting mapper to set up its same-named “id” columns in particular such that the parent “id” column is favored over the child column, reducing database round trips when this value is requested] Modified SQLite’s representation of “microseconds” to match the output of str(somedatetime), i.e. in that the microseconds are represented as fractional seconds in string format. This makes SQLA’s SQLite date type compatible with datetimes that were saved directly using Pysqlite (which just calls str()). Note that this is incompatible with the existing microseconds values in a SQLA 0.4 generated SQLite database file. To get the old behavior globally:from sqlalchemy.databases.sqlite import DateTimeMixin DateTimeMixin.__legacy_microseconds__ = True To get the behavior on individual DateTime types:t = sqlite.SLDateTime() t.__legacy_microseconds__ = True Then use “t” as the type on the Column.¶ [sqlite] SQLite Date, DateTime, and Time types only accept Python datetime objects now, not strings. If you’d like to format dates as strings yourself with SQLite, use a String type. If you’d like them to return datetime objects anyway despite their accepting strings as input, make a TypeDecorator around String - SQLA doesn’t encourage this pattern The behavior of MapperExtensions attached to multiple, entity_name= primary mappers for a single class has been altered. The first mapper() defined for a class is the only mapper eligible for the MapperExtension ‘instrument_class’, ‘init_instance’ and ‘init_failed’ events. This is backwards incompatible; previously the extensions of last mapper defined would receive these events.¶ [postgres] Added Index reflection support to Postgres, using a great patch we long neglected, submitted by Ken Kuhlman.¶
https://docs.sqlalchemy.org/en/rel_1_1/changelog/changelog_05.html
2019-03-18T23:30:14
CC-MAIN-2019-13
1552912201812.2
[]
docs.sqlalchemy.org
17.1. CVE-2010-0009: Apache CouchDB Timing Attack Vulnerability¶ 17.1.1. Description¶ Apache CouchDB versions prior to version 0.11.0 are vulnerable to timing attacks, also known as side-channel information leakage, due to using simple break-on-inequality string comparisons when verifying hashes and passwords. 17.1.2. Mitigation¶ All users should upgrade to CouchDB 0.11.0. Upgrades from the 0.10.x series should be seamless. Users on earlier versions should consult with upgrade notes. 17.1.3. Example¶ A canonical description of the attack can be found in
http://docs.couchdb.org/en/2.1.2/cve/2010-0009.html
2019-03-19T00:20:39
CC-MAIN-2019-13
1552912201812.2
[]
docs.couchdb.org
Contents Now Platform Administration Previous Topic Next Topic Configuration logging Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Configuration logging Certain configuration changes are tracked in the system logs.
https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/time/concept/c_ConfigurationLogging.html
2019-03-19T00:34:09
CC-MAIN-2019-13
1552912201812.2
[]
docs.servicenow.com
You might need to uninstall and reinstall Horizon Client if repairing Horizon Client does not solve the problem. About this task This procedures shows you how to uninstall Horizon Client if you have the Horizon Client installer. If you do not have the Horizon Client installer, you can uninstall Horizon Client in the same way that you uninstall other applications on your Windows system. For example, you can use the Windows operating system Add or Remove Programs feature to uninstall Horizon Client. Prerequisites Verify that you can log in as an administrator on the client system. Procedure - To uninstall Horizon Client interactively, double-click the Horizon Client installer, or run the Horizon Client installer with the /uninstall installation command from the command line, and click Remove. - To uninstall Horizon Client silently, run the Horizon Client installer with the /silent and /uninstall installation commands from the command line. For example: VMware-Horizon-Client-y.y.y-xxxxxx.exe /silent /uninstall What to do next Reinstall Horizon Client. For installation instructions, see the Using VMware Horizon Client for Windows document.
https://docs.vmware.com/en/VMware-Horizon-Client-for-Windows/4.5/com.vmware.horizon.windows-client-45-help.doc/GUID-11FE1E85-188A-4A38-A650-5C1D46C1F84F.html
2019-03-18T23:36:53
CC-MAIN-2019-13
1552912201812.2
[]
docs.vmware.com
The User management allows to manage all users that have access to the organisation and their role. Inviting users A new user can be invited by adding his email and desired role. An invitation email will be sent, where the user can complete the user account creation. User roles A user can have the following roles: - Admin - can manage all aspects of the account plus all operations that an Analyst can do - Analyst - can create dashboards and reports, publish dashboards and manage datasources - Viewer - can access all published dashboards and interact with them (change filters, download report CSV data) The current subscription determines the maximum number of Admins/Analysts that can exist within the organisation. If you run out of available analysts, you can either purchase more in the Subscription admin section, or change the role of any existing analysts or admins to free up the slots. Removing users A user can be easily deactivated, which disables the login for that user. Any content created by that user remains available. A list of deactivated users can be accessed in the filters on top - a user can be easily reactivated from here (assuming there is enough analyst slots available to re-activate an analyst/admin account). Managing invitations All unaccepted invitations are listed at the bottom of the user list. If an invitation is deleted, the user won't be able to join with that link anymore.
https://docs.cluvio.com/hc/en-us/articles/115001316409-Users
2017-08-16T21:40:32
CC-MAIN-2017-34
1502886102663.36
[]
docs.cluvio.com
a number of properties.. -,. - repository - The repository of the sourcestamp for this build - project - The project of the sourcestamp for this build - workdir - The absolute path of the base working directory on the slave, of the current builder. paramaters. Please file bugs for any parameters which do not accept properties. Property¶ The simplest form of annotation is to wrap the property name with Property: from buildbot.steps.shell import ShellCommand form) ]) WithProperties¶ Property can only be used to replace an entire argument: in the example above, it replaces an argument to echo. Often, properties need to be interpolated into strings, instead. The tool for that job is WithProperties.").
http://docs.buildbot.net/0.8.5/manual/cfg-properties.html
2017-08-16T21:43:33
CC-MAIN-2017-34
1502886102663.36
[]
docs.buildbot.net
Help Center Local Navigation Media shortcuts Audio and video files Depending on the typing input language that you are using, some shortcuts might not be available. - To pause an audio or video file, press the Play/Pause/Mute key on the top of your BlackBerry® device. To resume playing an audio or video file, press the Play/Pause/Mute key again. - If you are using a headset, to turn on the audio boost feature to amplify the volume for songs, ring tones, and videos, press and hold the Volume Up key on the right side of your device. To pan a picture, you must first be zoomed in. - To zoom in to a picture, press 3. To zoom to the original picture size, press 7. - To zoom out from a picture, press 9. To zoom to the original picture size, press 7. - To pan up in a picture, press 2. - To pan down in a picture, press 8. - To pan right in a picture, press 6. - To pan left in a picture, press 4. - To return to the center of a picture, press 5. - To rotate a picture, press L. - To fit a picture to the screen size, press 1. Camera and video camera - To zoom in to a subject before taking a picture, press the Volume Up key. - To zoom out from a subject before taking a picture, press the Volume Down key. - To take a picture, press the Right Convenience key. - To change the flash mode for a picture, if available, or to turn on low-light mode for a video, press the Space key. Next topic: Browser shortcuts Previous topic: File and attachment shortcuts Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/14928/Media_shortcuts_full_keyboard_50_764706_11.jsp
2015-06-30T08:25:40
CC-MAIN-2015-27
1435375091925.14
[]
docs.blackberry.com
Difference between revisions of "What is a vulnerable extension?" From Joomla! Documentation Revision as of 05:50, 27 February 2010 A vulnerable extension list. - Check the Vulnerable Extension List on a regular basis and remove or update any extension version found to be vulnerable. A RSS feed is also available
https://docs.joomla.org/index.php?title=What_is_a_vulnerable_extension%3F&diff=21924&oldid=20877
2015-06-30T09:36:49
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
Help Center Local Navigation Applications with desktop synchronization You can use the BlackBerry® APIs to build applications with desktop synchronization capabilities, such as reference guides and organizer applications. The user connects the BlackBerry device to a computer to manage and synchronize data that is located on the computer. Research In Motion® does not provide HotSync® conduits or any other direct database synchronization module. You must build the synchronization code, and the BlackBerry device user must initiate the data synchronization process manually. After the application is installed on the BlackBerry device, the BlackBerry device user must synchronize information manually by connecting their BlackBerry device to the computer with a serial connection, a USB connection, or a Bluetooth® connection. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/developers/deliverables/5827/Applications_with_desktop_synchronization_446977_11.jsp
2015-06-30T08:32:41
CC-MAIN-2015-27
1435375091925.14
[]
docs.blackberry.com
9.9, “. 9. Create LVM Logical Volume Refer to Section 9 9.13. Encrypt Partitions Up 9.14.2. Adding Partitions
http://docs.fedoraproject.org/en-US/Fedora/17/html/Installation_Guide/s1-diskpartitioning-x86.html
2015-06-30T08:17:53
CC-MAIN-2015-27
1435375091925.14
[]
docs.fedoraproject.org
Revision history of "JDatabaseSQLSrv::queryBatch" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 11:49, 20 June 2013 JoomlaWikiBot (Talk | contribs) deleted page JDatabaseSQLSrv::queryBatch (cleaning up content namespace and removing duplicated API references)
https://docs.joomla.org/index.php?title=JDatabaseSQLSrv::queryBatch&action=history
2015-06-30T08:27:37
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
FAQ: Using Django¶ Why do I get an error about importing DJANGO_SETTINGS_MODULE?¶ Make sure that: - The environment variable DJANGO_SETTINGS_MODULE is set to a fully-qualified Python module (i.e. “mysite.settings”). - Said module is on sys.path (import mysite.settings should work). - The module doesn’t contain syntax errors (of course). I can’t stand your template language. Do I have to use it?¶?¶?¶ Using a FileField or an ImageField }}..
https://docs.djangoproject.com/en/1.5/faq/usage/
2015-06-30T08:21:58
CC-MAIN-2015-27
1435375091925.14
[]
docs.djangoproject.com
Information for "Extensions Module Manager Search" Basic information Display titleHelp16:Extensions Module Manager Search Default sort keyExtensions Module Manager Search Page length (in bytes)1,808 Page ID10466:35, 11 July 2010 Latest editorToretto (Talk | contribs) Date of latest edit13:59, 13 May 2011 Total number of edits5 Total number of distinct authors’
https://docs.joomla.org/index.php?title=Help16:Extensions_Module_Manager_Search&action=info
2015-06-30T09:35:13
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
Information for "Module/es" Basic information Display titleMódulo Default sort keyModule/es Page length (in bytes)5,313 Page ID32257 Page content languageSpanish (es)armyman (Talk | contribs) Date of page creation22:16, 3 March 2014 Latest editorFuzzyBot (Talk | contribs) Date of latest edit09:55, 7 May 2015 Total number of edits46 Total number of distinct authors/es (view source) Chunk:Module/es (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Module/es&action=info
2015-06-30T09:33:13
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
Difference between revisions of "Joomla LESS" From Joomla! Documentation Revision as of 08:43, 19 December 2012 Joomla 3.0 is shipped with a LESS files located in media/jui/less/ scripts as a CLI application.
https://docs.joomla.org/index.php?title=Joomla_LESS&diff=79185&oldid=79180
2015-06-30T09:47:41
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
Difference between revisions of "JSimpleXML::loadFile" From Joomla! Documentation Revision as of 20::loadFile Description Interprets an XML file into an object. Description:JSimpleXML::loadFile [Edit Descripton] public function loadFile ( $path $classname=null ) - Returns - Defined on line 164 of libraries/joomla/utilities/simplexml.php See also JSimpleXML::loadFile source code on BitBucket Class JSimpleXML Subpackage Utilities - Other versions of JSimpleXML::loadFile SeeAlso:JSimpleXML::loadFile [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API17:JSimpleXML::loadFile&diff=next&oldid=57710
2015-06-30T08:24:03
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
Information for "Module Class Suffix/es" Basic information Display titleSufijo de clase de módulo Default sort keyModule Class Suffix/es Page length (in bytes)929 Page ID32140 Page content languageSpanish (es) Page content modelwikitext Indexing by robotsAllowed Number of redirects to this page0 Number of subpages of this page0 (0 redirects; 0 non-redirects) Page protection EditAllow all users MoveAllow all users Edit history Page creatorIsidrobaq (Talk | contribs) Date of page creation05:55, 3 March 2014 Latest editorIsidrobaq (Talk | contribs) Date of latest edit06:14, 3 March 2014 Total number of edits8 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (3)Templates used on this page: Template:JVer (view source) (semi-protected)Template:Version (view source) Chunk:Module Class Suffix/es (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Module_Class_Suffix/es&action=info
2015-06-30T09:41:56
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
Changes related to "Pre-upgrade Check Version Specification Debate" ← Pre-upgrade Check Version Specification Debate This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&limit=50&target=Pre-upgrade_Check_Version_Specification_Debate
2015-06-30T09:26:52
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
Information for "Template/nl" Basic information Display titleTemplate Default sort keyTemplate/nl Page length (in bytes)1,646 Page ID31357 Page content languageDutch (nl) Page content modelwikitext Indexing by robots11:42, 25 February 2014 Latest editorMartijnM (Talk | contribs) Date of latest edit05:00, 11 March 2014 Total number of edits18 Total number of distinct authors3 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded template (1)Template used on this page: Chunk:Template/nl (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Template/nl&action=info
2015-06-30T08:27:50
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
What is the preferred webserver for media delivery?¶ Table of Contents As the different webservers expose different APIs as well as use different approaches internally in terms of processing or resource management, the following provides an overview including the consequences of choices made by the various webservers. Resource Management¶ Apache¶ Apache structures resource use using so-called "buckets" and "brigades". From Apache's documentation: A bucket is a container for data. Buckets can contain any type of data. Although the most common case is a block of memory, a bucket may instead contain a file on disc, or even be fed a data stream from a dynamic source such as a separate program. Different bucket types exist to hold different kinds of data and the methods for handling it. In OOP terms, the apr_bucket is an abstract base class from which actual bucket types are derived. [1] The bucket is contained in a so-called "brigade": Buckets not only allow for efficient handling of data, in the sense that they can be copied, split and deleted in effect by copying pointers but also that the bucket API can be extended. For instance, libfmp4 (the Unified Streaming core library) extends Apache's buckets with the following two types: Access to the underlying data is handled different in each bucket_type, so the HTTP bucket implements a different read than a FILE, or an XFRM bucket. This means that for instance when encrypting output the webserver does not have to load the whole file into memory but can delay the encryption to when it's actually required: just before sending the data out. In short, when producing output libfmp4 creates the buckets and sets the type. Apache then traverses the bucket list and either sends the data straight out (as libfmp4 implements the bucket API, Apache can simply walk over it) or process it (automatically) when calling the "read" function (for instance dynamically encrypting when the type is an XFRM bucket). The bucket list is created completely when processing output for a client request, but actual IO is deferred to when it is actually needed: when Apache walks over the bucket list just before data should be offloaded to the network. In effect, Apache takes control of the libfmp4 generated bucket list and uses what can be understood as lazy-evaluation on the bucket list. By implementing specialized buckets through the bucket API Download to own for HTTP Live Streaming (HLS) or Progressive download become possible in Apache, without having to load all in memory and keeping resource use low and flexible. Nginx¶ In many ways, Nginx can be seen as 'Apache Light': much of Apache's structure and ideas have been copied and then changed towards Nginx's own purpose. Nginx unfortunately lacks key Apache features: - there is no (extensible) bucket API - only FILE and HEAP buckets are implemented in Nginx The lack of the bucket API in Nginx limits certain uses-cases, notably progressive download and download to own: data must be copied into memory using the HEAP bucket as there is no XFRM (transform/encrypt) bucket. Serving many of such requests would bring down the webserver as it would need to load the requested file into memory completely, therefore this functionality is not available. Process Management¶ Regarding request handling, webservers have basically two choices: - thread/process based request handling - event based request handling Both have a number of pros and contras, a good overview can be found in 'Concurrent Programming for Scalable Web Architectures'. [2] An important difference is how state is handled: This basically implies that in an event based architecture a reversal of the control flow is required where the callback chain in effect forms a state machine. Apache allows for both models as it supports both a thread/process back end as well as an event based back end. Nginx however, supports only an event based model. Unfortunately this has a huge impact on synchronous I/O used in libfmp4, for instance when fetching samples over HTTP in an object storage (e.g. Amazon S3) or making upstream calls in the Remix workflow. In fact the upstream call will block the Nginx event handler until it returns, rendering the Nginx process unresponsive for anything else. Interestingly, Nginx internally uses synchronous I/O in certain cases ("stat" for instance), which can become a bottleneck. Nginx has recognised this and since version 1.7.11 (mid 2015) introduced the concept of a threadpool as 'it's not possible to avoid blocking operations in every case'. [4] The threadpool API is used to boost the performance of three of Nginx's internal I/O related system calls. [4] However, (fast) CGI is supported as well by Nginx and might prove a better, more modular approach to (blocking) I/O than using a threadpool with all it's inherent complexity. Media Origin¶ Looking at benchmarks between Apache and Nginx it becomes clear quite quickly that Nginx performs better when it comes to static file serving. [7] However, a Media Origin serving video content in many different formats, adding subtitles and applying DRM dynamically is not a static file serving setup. In fact, recent tests show that Nginx and Apache perform the same when it comes to dynamic content. [7] [8] [9] When performance is not the key factor, then other factors become decisive: Apache has richer APIs and better internal documentation as well as a choice in processing models when it comes to an essentially I/O bound process like a (dynamic) Media Origin. As far as deployment there is no difference either: Apache can easily be deployed as virtual machine, on hardware or as containerized micro service from Dockerhub serving particular use-cases - allowing to scale horizontally. However, Nginx shines when it comes to reverse proxy and caching so it is highly recommended to use both. Whenever possible media fragments should be served from cache, either by the CDN or the upstream origin first from an Nginx cache and only on full cache MISS the origin should be contacted. See Origin shield cache for further details. Media Edge¶ It is possible to move just-in-time content generation to the edge of the network using a combination of Nginx and Apache. Details of this setup can be found in the late transmuxing whitepaper.
https://beta.docs.unified-streaming.com/faqs/webserver/webserver-details.html
2022-06-25T13:46:12
CC-MAIN-2022-27
1656103035636.10
[]
beta.docs.unified-streaming.com
. toHTML("Hello <br> World") returns Hello <br> World On This Page
https://docs.appian.com/suite/help/20.4/fnc_text_tohtml.html
2022-06-25T14:25:49
CC-MAIN-2022-27
1656103035636.10
[]
docs.appian.com
Ensure credentials unused for 90 days or greater are disabled Error: Credentials unused for 90 days or greater are not disabled Bridgecrew Policy ID: BC_AWS_IAM_3 Severity: HIGH Credentials unused for 90 days or greater are not disabled Description AWS IAM users access AWS resources using different types of credentials, such as passwords or access keys. We recommend that all credentials that have been unused for 90 or greater days be removed or deactivated. Disabling or removing unnecessary password access to an account reduces the risk of credentials being misused. Fix - Runtime AWS Console To manually remove or deactivate credentials: - Log in to the AWS Management Console as an IAM user at. - Navigate to IAM Services. - Select Users. - Select Security Credentials. - Select Manage Console Password, then select Disable. - Click Apply. - If there is an unused access key, disable or delete the key. Updated 12 months ago Did this page help you?
https://docs.bridgecrew.io/docs/iam_3
2022-06-25T13:46:38
CC-MAIN-2022-27
1656103035636.10
[]
docs.bridgecrew.io
Environment Variables The XAP environment configuration is maintained by a configuration script file. This script is located in the <XAP Root>\bin directory: - Windows: setenv.bat - UNIX: Setenv.sh Below is a list of some of a selected list of commonly used variables that are defined in this script: The default value of the LOOKUPGROUPSvariable is the GigaSpaces version number, preceded by XAP. For example, in GigaSpaces XAP 6.0, the default lookup group is gigaspaces-6.0XAP. This is the lookup group which the space and Jini Transaction Manager register with, and which clients use by default to connect to the space. Using the setenv Utility It is recommended to use the setenv utility to derive the commonly used GigaSpaces libraries and setup environment. To use this utility, you simply need to call it from your script file.
https://docs.gigaspaces.com/xap/10.2/dev-java/common-environment-variables.html
2022-06-25T13:28:42
CC-MAIN-2022-27
1656103035636.10
[]
docs.gigaspaces.com
Go to the docs for the latest release. Adding NetApp Support Site accounts to Cloud Manager Adding your NetApp Support Site account to Cloud Manager is required to deploy a BYOL system. It’s also required to register pay-as-you-go systems and to upgrade ONTAP software. Watch the following video to learn how to add NetApp Support Site accounts to Cloud Manager. Or scroll down to read the steps. If you don’t have a NetApp Support Site account yet, register for one. In the upper right of the Cloud Manager console, click the Settings icon, and select Cloud Provider & Support Accounts. Click Add New Account and select NetApp Support Site. Specify a name for the account and then enter the user name and password. The account must be a customer-level account (not a guest or temp account). If you plan to deploy BYOL systems: The account must be authorized to access the serial numbers of the BYOL systems. If you purchased a secure BYOL subscription, then a secure NSS account is required. Click Create Account. Users can now select the account when creating new Cloud Volumes ONTAP systems and when registering existing systems.
https://docs.netapp.com/us-en/occm37/task_adding_nss_accounts.html
2022-06-25T14:11:39
CC-MAIN-2022-27
1656103035636.10
[]
docs.netapp.com
Manage Unzer Prepayment payments Manage Unzer prepayment transactions After the successful charge transaction, you can perform additional operations on the payment resource. Below you can see the most important cases for the Prepayment payment type. For a full description of the charge transaction, refer to the relevant server-side integration documentation page: Charge a payment (direct API calls), Charge a payment (PHP SDK), Charge a payment (Java SDK). Cancel before money receipt (reversal) If the customer wants to cancel an order, before the money has been received, you have to make a cancel call on the initial Charge transaction. $cancellation = $unzer->cancelChargeById( 's-pay-xxxxxxxxx, 's-chg-1' ); Cancel cancel = unzer.cancelCharge( "s-pay-xxxxxxxxx", "s-chg-1" ); Cancel after payment (refund) In case you received a payment for an order the customer wants to cancel, you can refund it up to the amount of the received payments. To do this you have to make a Cancel transaction on Charge(s) received via notification. $cancellation = $unzer->cancelChargeById( 's-pay-xxxxxxxxx, 's-chg-2' ); Cancel cancel = unzer.cancelCharge( "s-pay-xxxxxxxxx", "s-chg-2" ); The response will look similar ot te following example: { "id": "s-cnl-1", "isSuccess": true, "isPending": false, "isError": false, "message": { "code": "COR.000.100.112", "merchant": "Request successfully processed in 'Merchant in Connector Test Mode'", "customer": "Your payments have been successfully processed in sandbox mode." }, "amount": "12.9900", "currency": "EUR", "date": "2021-06-09 10:09:47", "resources": { "customerId": "s-cst-a0b2dc42b4f9", "paymentId": "s-pay-2127", "basketId": "", "metadataId": "", "payPageId": "", "traceId": "a902d887e266e24c4a36d0f80ed0555f", "typeId": "s-ppy-eh15lfuhrfus" }, "orderId": "o191564001623233177", "paymentReference": "", "processing": { "uniqueId": "31HA07BC8159F6CC0D414B67FABD1444", "shortId": "4871.5978.7781", "traceId": "a902d887e266e24c4a36d0f80ed0555f" } } chargesreceived after the initial chargeare actual payments and can be refunded by canceling them.
https://docs.unzer.com/payment-methods/unzer-prepayment/manage-unzer-prepayment/
2022-06-25T13:12:23
CC-MAIN-2022-27
1656103035636.10
[]
docs.unzer.com
libvirt Installation Compiling a release tarball ¶ libvirt uses the standard configure/make/install steps: $ xz -c libvirt-x.x.x.tar.xz | tar xvf - $ cd libvirt-x.x.x $ ./configure The configure script can be given options to change its default behaviour. To get the complete list of the options it can take, pass it the --help option like this: $ ./configure --help When you have determined which options you want to use (if any), continue the process. Note the use of sudo with the make install command below. Using sudo is only required when installing to a location your user does not have write access to. Installing to a system location is a good example of this. If you are installing to a location that your user does have write access to, then you can instead run the make install command without putting sudo before it. $ ./configure [possible options] $ make $ sudo make install At this point you may have to run ldconfig or a similar utility to update your list of installed shared libs. Building from a GIT checkout ¶ The libvirt build process uses GNU autotools, so after obtaining a checkout it is necessary to generate the configure script and Makefile.in templates using the autogen.sh command. By default when the configure script is run from within a GIT checkout, it will turn on -Werror for builds. This can be disabled with --disable-werror, but this is not recommended. Libvirt takes advantage of the gnulib project to provide portability to a number of platforms. This is normally done dynamically via a git submodule in the .gnulib subdirectory, which is auto-updated as needed when you do incremental builds. Setting the environment variable GNULIB_SRCDIR to a local directory containing a git checkout of gnulib will let you reduce local disk space requirements and network download time, regardless of which actual commit you have in that reference directory. However, if you are developing on a platform where git is not available, or are behind a firewall that does not allow for git to easily obtain the gnulib submodule, it is possible to instead use a static mode of operation where you are then responsible for updating the git submodule yourself. In this mode, you must track the exact gnulib commit needed by libvirt (usually not the latest gnulib.git) via alternative means, such as a shared NFS drive or manual download, and run this any time libvirt.git updates the commit stored in the .gnulib submodule: $ GNULIB_SRCDIR=/path/to/gnulib ./autogen.sh --no-git To build & install libvirt to your home directory the following commands can be run: $ ./autogen.sh --prefix=$HOME/usr $ make $ sudo make install Be aware though, that binaries built with a custom prefix will not interoperate with OS vendor provided binaries, since the UNIX socket paths will all be different. To produce a build that is compatible with normal OS vendor prefixes, use $ ./autogen.sh --system $ make When doing this for day-to-day development purposes, it is recommended not to install over the OS vendor provided binaries. Instead simply run libvirt directly from the source tree. For example to run a privileged libvirtd instance $ su - # service libvirtd stop (or systemctl stop libvirtd.service) # /home/to/your/checkout/src/libvirtd It is also possible to run virsh directly from the source tree using the ./run script (which sets some environment variables): $ ./run ./tools/virsh ....
https://docs.virtuozzo.com/libvirt-docs-5.6.0/html/compiling.html
2022-06-25T14:40:57
CC-MAIN-2022-27
1656103035636.10
[]
docs.virtuozzo.com
great_expectations.cli.checkpoint_script_template¶ This is a basic generated Great Expectations script that runs a Checkpoint. Checkpoints are the primary method for validating batches of data in production and triggering any followup actions. A Checkpoint facilitates running a validation as well as configurable Actions such as updating Data Docs, sending a notification to team members about validation results, or storing a result in a shared cloud storage. See also <cyan></cyan> for more information about the Checkpoints and how to configure them in your Great Expectations environment. Checkpoints can be run directly without this script using the great_expectations checkpoint run command. This script is provided for those who wish to run Checkpoints in python. Usage: - Run this file: python great_expectations/uncommitted/run_{0:s}.py. - This can be run manually or via a scheduler such, as cron. - If your pipeline runner supports python snippets, then you can paste this into your pipeline.
https://legacy.docs.greatexpectations.io/en/stable/autoapi/great_expectations/cli/checkpoint_script_template/index.html
2022-06-25T13:53:54
CC-MAIN-2022-27
1656103035636.10
[]
legacy.docs.greatexpectations.io
community.docker.docker_swarm inventory – Ansible dynamic inventory plugin for Docker swarm nodes.. Synopsis Reads inventories from the Docker swarm API. Uses a YAML configuration file docker_swarm.[yml|yaml]. The plugin returns following groups of swarm nodes: all - all hosts; workers - all worker nodes; managers - all manager nodes; leader - the swarm leader node; nonleaders - all nodes except the swarm leader. Requirements The below requirements are needed on the local controller node that executes this inventory. python >= 2.7 Docker SDK for Python >= 1.10.0 Parameters Examples # Minimal example using local docker plugin: community.docker.docker_swarm docker_host: unix://var/run/docker.sock # Minimal example using remote docker plugin: community.docker.docker_swarm docker_host: tcp://my-docker-host:2375 # Example using remote docker with unverified TLS plugin: community.docker.docker_swarm docker_host: tcp://my-docker-host:2376 tls: yes # Example using remote docker with verified TLS and client certificate verification plugin: community.docker.docker_swarm docker_host: tcp://my-docker-host:2376 validate_certs: yes ca_cert: /somewhere/ca.pem client_key: /somewhere/key.pem client_cert: /somewhere/cert.pem # Example using constructed features to create groups and set ansible_host plugin: community.docker.docker_swarm docker_host: tcp://my-docker-host:2375 strict: False keyed_groups: # add for example x86_64 hosts to an arch_x86_64 group - prefix: arch key: 'Description.Platform.Architecture' # add for example linux hosts to an os_linux group - prefix: os key: 'Description.Platform.OS' # create a group per node label # for exomple a node labeled w/ "production" ends up in group "label_production" # hint: labels containing special characters will be converted to safe names - key: 'Spec.Labels' prefix: label Collection links Issue Tracker Repository (Sources) Submit a bug report Request a feature Communication
https://docs.ansible.com/ansible/latest/collections/community/docker/docker_swarm_inventory.html
2022-06-25T14:36:08
CC-MAIN-2022-27
1656103035636.10
[]
docs.ansible.com
This guide describes the Web Services interface within ExtraView. The intended audience is experienced developers who wish to use the Web Services interface to integrate ExtraView to remote external applications that also support a Web Services interface. ExtraView provides a standardized set of methods to integrate with other applications using a service-orientated architecture (SOA). This complements the web-orientated architecture interface (WOA) that is also implemented within ExtraView and described in the complementary ExtraView Command Line Interface and Application Programming Interface guide. The interface is cross platform and can be accessed from any development platform, including Java and .Net. A service-oriented architecture is defined as a group of services, which communicate with each other. The process of communication involves either simple data passing or it involves two or more services coordinating some activity. Some means of connecting services to each other is needed. SOA applications are built out of software services. Services are intrinsically unassociated units of functionality, which have no calls to each other embedded in them. Within ExtraView they map to atomic functions to perform specific actions. Broadly, SOAs implement functionalities most humans would recognize as a service, such as filling out an online application for an account, viewing an online bank statement, inserting an item with a database or running a report. Instead of services embedding calls to each other in their source code, protocols are defined which describe how one or more services can talk to each other. This architecture then relies on your business process expert to link and sequence services to meet your business system requirement. ExtraView’s implementation of web services utilizes a service-orientated architecture to provide a full set of integration points between ExtraView and the consumer of its web services. The Web Services Interface Guide is downloadable as a single PDF by clicking here. You will need the Adobe Acrobat Reader to view this.
https://docs.extraview.com/site/guides-unsupported-versions/extraview-80/web-services-interface
2022-06-25T14:16:06
CC-MAIN-2022-27
1656103035636.10
[]
docs.extraview.com
Third-Party Repository Policy A third-party repository is any software repository that the Fedora Project does not officially maintain, including Copr repositories, as well as repositories that are hosted outside of the Fedora Project. This policy sets out the conditions under which Fedora editions and spins can include repository definitions that make the contents from those third party repositories available to users. It applies to repository definitions integrated with the usual package installation mechanisms like dnf or GNOME Software. Unless such integration exists, this policy does not cover the packaging of: Language-specific tools ( pip, maven, cargo, go, …). Tools that primarily exist to access external software packaging ecosystems ( snap, apt, pacman, …). Tools that provide images of other systems ( docker, podman, machinectl, …). These tools can use third-party repositories without the restrictions described below. This policy is intended to ensure quality and legal protections for the most critical and visible software mechanisms used by Fedora, while allowing special-purpose software management tools to function as expected. The policy also aims to ensure that software provided under its terms is clearly labeled, so users are fully informed about the origin of the software they are installing. Software from third-party repositories cannot be used when creating Fedora images. Third-party repository distribution Third-party repositories should be distributed in descriptively named rpm packages. Each third-party repository should be defined once through a separate (binary) package. Traditionally, definitions for multiple repositories were combined into one package (for example, Fedora Workstation edition installs a package called fedora-workstation-repositories), but this is discouraged and should not be done in new cases. Repositories can be configured with either enabled_metadata=0 or enabled_metadata=1 (or equivalent), at the discretion of the relevant working group or SIG. If they fulfill the requirements set out in this policy, a Fedora edition or spin install media can include third party repository definitions. The third-party nature of the repository must be apparent to the user when they enable it, as should the non-free status of its content, if such. To ensure this, repository files must initially include the enabled=0 (or equivalent) setting, and the user must explicitly enable third-party repositories to install from them. FESCo may grant an exception to waive this requirement. Reuse of repository definitions among editions or spins is encouraged. Key requirements for third-party repositories. Changes made by one Edition or spin should not impact other Fedora editions or. Software labeling and metadata Third-party and non-free software should be identifiable to users through software management tools before installation. In general, this requirement applies to the primary software management tools used in a given edition or spin. For Fedora Workstation, this is GNOME Software, the primary software installer for the desktop. Third-party software requirements Software included in each third-party repository must conform to the following requirements. Software packaged as RPMs Requirements for software packaged as RPMs: Applications that ship as RPMs should conform with Fedora’s RPM guidelines. However, while this is the best practice, it is not a hard requirement. (This more relaxed approach to RPM packaging allows the inclusion of software for which it is difficult to conform to Fedora’s packaging guidelines.) Software must be included in an RPM repository as described in the Fedora System Administrators Guide. RPM packages in a third-party repository must not replace packages provided by official Fedora repositories, nor break dependencies between those packages. Duplicates and replacements Third-party repositories can supplement official Fedora software. In limited cases, they can be used to replace software included in the official Fedora repositories. Such situations require FESCo approval. Maintaining a third-party repository Those responsible for a repository included as a third party repository should notify the Fedora project if: repository maintenance ends or will end in the future the contents of the repository changes, either in terms of the software included or its licensing Fedora working groups or FESCo may also define agreements with third-party maintainers.
https://docs.fedoraproject.org/ro/fesco/Third_Party_Repository_Policy/
2022-06-25T14:51:54
CC-MAIN-2022-27
1656103035636.10
[]
docs.fedoraproject.org
Failure Detection Failure detection is the time it takes for the space and the client to detect that failure has occurred. Failure detection consists of two main phases: - The backup space detects that the primary space is down, and takes over as primary. - The client detects that the machine running the primary space is down. In case it is running against a clustered space, it routs its requests to the new primary space (the backup space that has just taken over as primary). One of two main failure scenarios might occur: - Process failure or machine crash - Network cable disconnection It takes XAP a few seconds to recover from process failure or a machine crash. In case of network cable disconnection, the client first has to detect that it has been disconnected from the machine running the space. Therefore, recovery time in this case is longer. Reducing Failure Detection Time Configuring failure detection time can help you handle extreme failure scenarios more effectively. For example, in extreme cases of network disconnection, you might want the failover process to take 2-3 seconds. Here is a good combination for the space settings you may use to reduce the failover time - these should be used with a fast network: cluster-config.groups.group.fail-over-policy.active-election.yield-time=300 cluster-config.groups.group.fail-over-policy.active-election.fault-detector.invocation-delay=300 cluster-config.groups.group.fail-over-policy.active-election.fault-detector.retry-count=2 The following should be specified as system properties: -Dcom.gs.transport_protocol.lrmi.connect_timeout=3 -Dcom.gs.transport_protocol.lrmi.request_timeout=3 -Dcom.gs.jini.config.maxLeaseDuration=2000 By default, the maximum time it takes for a backup space to switch into a primary mode takes ~6-8 seconds. If you would like to reduce the failover time , you should use the following formula: 100 [ms] + (yield-time * 7) + invocation-delay + (retry-count * retry-timeout) = failover time - The 100 ms above refers a constant that is related to the network latency. You should reduce this number for a fast network. Change the default settings only if you have a special need to reduce the failover duration. (~1 second). In this case: - the yield-timeminimum value should be 200 ms. - Reducing the invocation-delayand retry-timeoutvalues, and increasing the retry-countvalues might increase the chatting overhead between the spaces and the lookup service. For additional tuning options please contact the GigaSpaces Support Team. Failure Detection Parameters Space Side Parameters Active Election Parameters The following parameters in the cluster schema active election block regard failure detection and recovery: - Prefix the property with: cluster-config.groups.group.fail-over-policy.active-election. Client Side Parameters Proxy Connectivity The Proxy Connectivity defines the settings in which the system checks the liveness of space members. Watchdog Parameters Service Grid Parameters The Service Grid uses two complementary mechanisms for service detections – the Lookup Service and fault-detection handlers. GSMFaultDetectionHandler– used by GSMs to monitor each other. GSCFaultDetectionHandler– used by the GigaSpaces Management Center to monitor GSCs. PUFaultDetectionHandler– Used by GSMs to monitor Processing Units deployed on GSCs. The fault-detection handlers check periodically if a service is alive, and in case of failure, how many times to retry and how often. The GSM and GSC fault-detection handler settings are controlled via the relevant properties. The PUFaultDetectionHandler is configurable using the SLA - member alive indicator. For logging information, it is advised to monitor service failure by setting the logging level to Level.FINE. # ServiceGrid FaultDetectionHandler logging com.gigaspaces.grid.gsc.GSCFaultDetectionHandler.level = INFO com.gigaspaces.grid.gsm.GSMFaultDetectionHandler.level = INFO org.openspaces.pu.container.servicegrid.PUFaultDetectionHandler.level = INFO Jini Lookup Service Parameters The LeaseRenewalManager in the advanced-space.config file is also related to failure detection and recovery: Lookup Service Unicast discovery parameters When a Jini Lookup Service fails and is brought back online, a client (such as a GSC, space or a client with a space proxy) needs to re-discover it. It uses Jini unicast discovery retrying to connect to the failed downside is that it may delay the discovery of services if these are not brought up quickly. A discovery can be delayed us much as 15 minutes. If you have two GSMs and one fails, but it will be brought back up only in the next hour, then it’s discovery will take ~15 minutes after it has loaded. These settings can be configured - see How to Configure Unicast Discovery.
https://docs.gigaspaces.com/xap/10.2/admin/troubleshooting-failure-detection.html
2022-06-25T14:16:08
CC-MAIN-2022-27
1656103035636.10
[]
docs.gigaspaces.com
11.1 Linear Optimization¶ MOSEK accepts linear optimization problems of the form. A primal solution \((x)\) is (primal) feasible if it satisfies all constraints in (11.1). If (11.1) has at least one primal feasible solution, then (11.1) is said to be (primal) feasible. In case (11.1) does not have a feasible solution, the problem is said to be (primal) infeasible 11.1.1 Duality for Linear Optimization¶ Corresponding to the primal problem (11.1), there is a dual.2). If (11.2) has at least one feasible solution, then (11.2) is (dual) feasible, otherwise the problem is (dual) infeasible. A solution is denoted a primal-dual feasible solution, if \((x^*)\) is a solution to the primal problem (11.1) and \((y^*,(s_l^c)^*,(s_u^c)^*,(s_l^x)^*,(s_u^x)^*)\) is a solution to the corresponding dual problem (11.2). a linear optimization problem has an optimal solution if and only if there exist feasible primal-dual solution so that the duality gap is zero, or, equivalently, that the complementarity conditions are satisfied. If (11.1) has an optimal solution and MOSEK solves the problem successfully, both the primal and dual solution are reported, including a status indicating the exact state of the solution. 11.1.2 Infeasibility for Linear Optimization¶ 11.1.2.1 Primal Infeasible Problems¶ If the problem (11.5) is unbounded, and that (11.1) is infeasible. 11.1.2.2 Dual Infeasible Problems¶ If the problem (11.6) is unbounded, and that (11.2) is infeasible. In case that both the primal problem (11.1) and the dual problem (11.2) are infeasible, MOSEK will report only one of the two possible certificates — which one is not defined (MOSEK returns the first certificate found). 11.1.3 Minimalization vs. Maximalization¶ When the objective sense of problem (11.1) is maximization, i.e. the objective sense of the dual problem changes to minimization, and the domain of all dual variables changes sign in comparison to (11.2). The dual problem thus takes the form This means that the duality gap, defined in (11.6) such that \(c^Tx>0\).
https://docs.mosek.com/latest/rmosek/prob-def-linear.html
2022-06-25T13:56:14
CC-MAIN-2022-27
1656103035636.10
[]
docs.mosek.com
🔔️ NotificationsKit Notify your players of important, retention-driving, events that occur in your game. What is NotificationsKit? NotificationsKit allows you to send notifications to your players, informing them of different events in your game. It functions very similarly to how mobile notifications work, albeit in the browser. Unlike on mobile, notifications on Trail are not opt-in by default. A player has to opt-in to get notifications. This can be done through an in-game request that you show to the player or through their profile settings. User Experience Notifications is a tricky feature that can be misused. We, at Trail, care a lot about the user experience, which is why using notifications requires activation before you can use it. Reach out to us on Discord to discuss your use case. A sample of a notification displayed to players. Screenshot showing where a player may opt-out or in to get notifications for a certain game. Using NotificationsKit First, you'll want to ask the player to subscribe for notifications: // Remember to initialize the SDK first private void CheckNotificationsPermissions() { // Request the permissions status from Trail. We add a callback to check its result SDK.NotificationsKit.GetPermissionStatus(CheckPermissionStatus); } // Callback for checking the permission status. private void CheckPermissionStatus(Result result, bool allowed) { // If the result of the process is not an error. if(result.IsOK()) { // If the player accepts subscribing to notifications. if(allowed) { // We create a KeyValueList to identify which notifications we want to add them to KeyValueList notificationsValues = new KeyValueList(); notificationsValues.Add(new KeyValue("MatchAvailable", "true")); // We then send to Trail the request permission to display that for the user and add a callback to see the result of the request SDK.NotificationsKit.RequestPermission(notificationsValues, CheckRequestStatus); } } else { Debug.Log("An error occurred while getting permission: " + result); } } private void CheckRequestStatus(Result result, bool allowed) { if(result.IsOk()) { if(allowed) { // Send to our backend the user information and them to the list of players to notify whenever we need to. MyBackend.Add(user, userinfo); MyBackend.PushNotification(user, myNotificationType); } } else { Debug.Log("An error occurred while requesting permission: " + result); } } Note that notifications are currently locked to trigger every 180 minutes to prevent spamming the user. In your backend, you can send an HTTP request using cURL with the below content: curl -X POST \ \ -H 'Authorization: Bearer your-api-token' \ -H 'Content-Type: application/json' \ -d '{ "game_id": "0000-0000-0000-0000", "type": "match_available", "target": { "region": "eu", "has_purchased: "true", } }' Notifications Type If you plan to use NotificationsKit, we recommend that you use the event type match_availablefor your tests. However, if you require a custom event type (say for a seasonal event) that you reach to us on Discord Updated about 1 month ago
https://docs.trail.gg/docs/notificationskit
2022-06-25T14:07:32
CC-MAIN-2022-27
1656103035636.10
[array(['https://files.readme.io/7c9e811-9e7dfa3-notification-ex.jpg', '9e7dfa3-notification-ex.jpg A sample of a notification displayed to players.'], dtype=object) array(['https://files.readme.io/7c9e811-9e7dfa3-notification-ex.jpg', 'Click to close... A sample of a notification displayed to players.'], dtype=object) array(['https://files.readme.io/32063fa-Screenshot_2021-04-13_061231.png', 'Screenshot 2021-04-13 061231.png Screenshot showing where a player may opt-out or in to get notifications for a certain game.'], dtype=object) array(['https://files.readme.io/32063fa-Screenshot_2021-04-13_061231.png', 'Click to close... Screenshot showing where a player may opt-out or in to get notifications for a certain game.'], dtype=object) ]
docs.trail.gg
Breaking: #88646 - Removed inheritance of AbstractService from AbstractAuthenticationService¶ See Issue #88646 Description¶ The PHP TYPO3\CMS\Core\Authentication\AbstractAuthenticationService class is used for any kind of Authentication or Authorization towards Backend Users and Frontend Users. It was previously based on TYPO3\CMS\Core\Service\AbstractService for any kind of Service API, which also includes manipulating files and execution of external applications, which is there for legacy reasons since TYPO3 3.x, where the Service API via GeneralUtility::makeInstanceService was added. In order to refactor the Authentication API, the TYPO3\CMS\Core\Authentication\AbstractAuthenticationService class does not inherit from TYPO3\CMS\Core\Service\AbstractService anymore. Instead, the most required methods for executing a service is added to the Abstract class directly. Impact¶ Any calls or checks on the TYPO3\CMS\Core\Authentication\AbstractAuthenticationService class or methods, properties or constants that reside within TYPO3\CMS\Core\Service\AbstractService will result in PHP E_ERROR or E_WARNING. Since TYPO3\CMS\Core\Authentication\AbstractAuthenticationService is used for most custom Authentication APIs, this could affect some of the hooks or custom authentication providers available. Affected Installations¶ TYPO3 installations that have custom Authentication providers for frontend or backend users / groups - e.g. LDAP or Two-Factor-Authentication. Migration¶ If your custom Authentication Service extends from TYPO3\CMS\Core\Authentication\AbstractAuthenticationService but requires methods or properties from TYPO3\CMS\Core\Service\AbstractService, ensure to copy over the necessary methods/properties/constants into your custom Authentication provider.
https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/10.0/Breaking-88646-RemovedInheritanceOfAbstractServiceFromAbstractAuthenticationService.html
2022-06-25T13:59:11
CC-MAIN-2022-27
1656103035636.10
[]
docs.typo3.org
Install the PostgreSQL Client - Log in to sios20lkclient as SIOS20\lkadmin. - Download the PostgreSQL (Windows x86-64) installation image from the following site, place it anywhere, and double-click it to open it. Version 11.7 is used here. This installation image is identical to the one for the PostgreSQL server. - Select only the Command Line Tools for installation. Then follow the prompts to install by default. Verifying PostgreSQL Client Connectivity - Log in to sios20lknode01 or sios20lknode02 as sios20\lkadmin - Launch the LifeKeeper GUI and verify that the resource hierarchy is active on sios20lknode01. - Log in to sios20lkclient as sios20\lkadmin. - Open the command prompt and run the following command: - “\Program Files\PostgreSQL\11\bin\psql” -h 10.20.1.200 -p 5432 -U postgres -l If you enter your password and see the list of databases, this means everything is working correctly. - Then run the following command: - “\Program Files\PostgreSQL\11\bin\psql” -h 10.20.1.200 -p 5432 -U postgres Enter the password and run the following internal command: SELECT inet_server_addr(); The IP address of the server on which PostgreSQL is running will be displayed. It returns 10.20.1.200. - Using the LifeKeeper GUI, configure the PostgreSQL resources (as seen below) so that sios20lknode02 is active. - Run the command again on sios20lkclient. The IP address shown at the end is 10.20.1.200. This completes the connection verification. Feedback Thanks for your feedback. Post your comment on this topic.
https://docs.us.sios.com/sps/8.8.2/en/topic/postgresql-connection-confirmation-client
2022-06-25T13:13:31
CC-MAIN-2022-27
1656103035636.10
[]
docs.us.sios.com
Embedded software simulation tests a software design that targets only the PS. It is based on the Quick Emulator (QEMU), which emulates the behavior of the dual-core Arm® Cortex®-A72 integrated in the Versal ACAP. This simulation enables a fast, compact functional validation of the platform OS. This flow includes a SystemC transaction-level model of the system, which allows for early system exploration and verification. Embedded software simulation is available through the Vitis unified software platform. For more information, see this link in the Versal ACAP System Software Developers Guide (UG1304) and the Xilinx Wiki: QEMU User Documentation.
https://docs.xilinx.com/r/en-US/ug1273-versal-acap-design/Embedded-Software-Simulation
2022-06-25T14:02:42
CC-MAIN-2022-27
1656103035636.10
[]
docs.xilinx.com
Kea Flow Diagrams¶ These flow diagrams describe the Kea DHCPv4 server implementation. They may be useful for system administrators to understand for several reasons. In order to design a configuration that will result in clients getting the intended addresses and options it is important to understand the sequence of request processing steps. For example, Kea will iterate looking for a suitable address, and will conditionally accept the first available address, so the order in which addresses are evaluated matters. It is also useful to understand Kea’s processing logic because there are configuration choices which can make the process far more efficient. Kea is very flexible so that whole step of checking for host reservations. These diagrams are focused on those aspects of Kea processing that will be most useful to operators. The diagrams illustrate DHCPv4 request processing, but most of the logic applies to DHCPv6. Following the title of each diagram is a Kea version number. Kea behavior has evolved over time, and the diagrams document the behavior as of the Kea version indicated. The diagrams are provided in the Kea source tree in UML (source), PNG and SVG formats. DHCPv4 Packet Processing¶ Next is the DHCPv4 packet processing, where we determine what sort of DHCP message this is, Discover, Request, Release, Decline or Inform. This diagram shows the general, high level flow for processing an inbound client DHCP packet (e.g. Discover, Request, Release, etc) from receipt to the server’s response.. DHCPv4 packet processing DHCP Request Processing¶ The following diagrams focus on DHCPREQUEST processing. This chart gives an overview of the process, beginning with subnet selection, proceeding to checking for Host Reservations and evaluating client classes. Finally, before acknowledging the lease, the options are evaluated and added to the message. DHCPREQUEST processing . DHCPv4 Subnet Selection¶ Subnet selection is the process of choosing a subnet that is topologically appropriate for the client. Note that when the selected subnet is a member of a shared network the whole shared network is selected. During subnet selection the client class may be checked more than once while iterating through subnets, to determine if it is permitted in the selected subnet. DHCPv4 subnet selection DHCPv4 Special Case of Double-booting¶ After the subnet selection and before the lease allocation the DHCPv4 server handles the special case of clients restarting with an image provided by PXE boot or bootp. Note that the Lease Request box is expanded below. DHCPv4 Assign Lease DHCPv4 Allocate Lease¶ This diagram provides the details of the processing the client request, showing renewing an existing lease, assigning a reserved lease and allocating an unreserved lease. The next diagram after this one shows the algorithm in more detail. Allocate a lease for DHCPREQUEST This diagram shows the algorithm used to validate a requested lease or select a new address to offer. The far right side of the diagram shows how a new address is selected when a new lease is required and the client has neither a requested address nor a reservation. Note that when a new lease is required and Kea iterates over pools and subnets, it starts with the subnet selected above in the subnet selection process. requestLease4 algorithm Note Declined addresses are included in the statistic for assigned addresses so the \(assigned + free = total\) equation is true. Lease States¶ This diagram illustrates the different lease states including the free one where no lease object exists. lease states Checking for Host Reservations¶ will also be evaluated against the selected subnet in a further check (added in Kea 1.7.10). Kea includes several options to skip checking for host reservations, which can make this process much more efficient if you are not using reservations. Note To find a free lease the allocation engine begins with evaluating the most recently used subnet. The current subnet depends on the history of prior queries. currentHost Building the Options List¶ - Before sending a response, options are added: - evaluate required client classes - build the configured option list - append requested options - append requested vendor options - append basic options buildCfgOptionList (build configured option list) algorithm appendRequestedOptions (append requested options) algorithm appendRequestedVendorOptions (append vendor requested options) algorithm
https://kea.readthedocs.io/en/kea-1.9.7/umls.html
2022-06-25T14:42:00
CC-MAIN-2022-27
1656103035636.10
[]
kea.readthedocs.io
Everybody cleaned their plates! The unexpected finale of a refugee odyssey: Madina, after escaping from Chechnya and an ordeal at the Brest train station, lives in the house of a famous Polish actor. Madina escaped from Chechnya. She ran away from the husband who beat her, from poverty, the lack of a future, and an oppressive society. The story of her journey through Russia, Belarus and the months of torment at the Brest railway station is material for a separate film. When she finally managed to get across the border, in Poland, Madina turned from a crushed victim of domestic violence into a strong, independent woman. Thanks to a fortunate coincidence, she met Katarzyna Błażejewska, the wife of actor Maciej Stuhr, and moved in with the Stuhr family. Madina fulfills herself as a theater actress, but her great passion is cooking. Although she would not return to Chechnya for anything, her dishes make for culinary journeys to the Caucasus. Kamil Witkowski’s cheerful film portrays, with great delicacy, Madina and all the people of good will she was fortunate to meet in Poland. Konrad Wirkowski
https://watchdocs.pl/en/watch-docs/2021/films/wszyscy-wszystko-zjedli,449665076
2022-06-25T14:18:24
CC-MAIN-2022-27
1656103035636.10
[array(['/upload/thumb/2021/11/wszyscy-wszystko-zjedli-fot-01_auto_800x900.jpg', 'Everybody cleaned their plates!'], dtype=object) ]
watchdocs.pl
Cloud Storage¶ New in version 1.4.33. Table of Contents It is possible to host the manifest in various ways: - local manifest and local content - local manifest and remote content - remote manifest and remote content The following sections outline how to setup the second and third options. (As the first option is outlined in Block Storage). You will also see that the 'data reference' (dref) mp4 used, for example, for progressive download can be hosted similarly. The following examples use the EC2/S3 combination, but any storage supporting HTTP range request can be used. Below the use of Amazon S3, Azure Storage, GCE, Scality or HCP is outlined as well as an outline on how to optimise performance. Local manifests¶ Store your video files on any storage server that supports HTTP range requests. The USP webserver module requests only the necessary data from the storage server to serve the request. This makes it possible to use for example an EC2/S3 combination or access your storage server over HTTP instead of using mount points. The request flow: [origin] <-- client request [storage] <-- video.ism video1.cmfv video1.cmfv audio1.cmfa The information of where the audio and video files are stored is specified in the server manifest file. Using MP4Split you can generate the server manifest file with the files stored on e.g. an S3 storage account. Using our S3 account 'unified-streaming' as an example we have the following three files: The previous URLs are passed to MP4Split as input to generate the server manifest file. #!/bin/bash mp4split -o tears-of-steel.ism \ \ \ \ If you open the server manifest file, you'll see that the audio and video sources now point to the files at S3: <audio src="" systemBitrate="64000" systemLanguage="eng"> Local mp4¶ Store the .cmfv file on the HTTP storage (e.g. S3) and generate the dref mp4 locally: #!/bin/bash mp4split -o tears-of-steel.mp4 --use_dref \ Request the MP4 video from the webserver with: Note that there is no need for a server manifest file in this case, nor setting up any additional proxying ( ProxyPass) statements. It is possible to store 'full URLs' as a reference in the MP4, where as before it could only reference files relative to the stored MP4. Remote manifests¶ Alternatively, you can store everyting in the S3 bucket. For an complete example configuration file, please see Example configuration. To do so you will need to set up IsmProxyPass and you can have better performance with Apache by enabling subrequests with the UspEnableSubreq directive . GET [storage] <-- [origin] <-- client request video.ism video1.cmfv video1.cmfv audio1.cmfa This configuration will tell the webserver that the content should be read from S3 instead of from local disk. Both fragment mp4 or mp4 can be used as source. An example below. The first step then is to create a server manifest: #!/bin/bash mp4split -o tears-of-steel.ism \ tears-of-steel-avc1-400k.cmfv \ tears-of-steel-avc1-750k.cmfv \ tears-of-steel-aac-64k.cmfa Then upload the files to your S3 bucket. To stream this you need to setup and enable UspEnableSubreq in the virtualhost in a <Location> and add IsmProxyPass in a <Directory>: <Location "/"> UspHandleIsm on UspEnableSubreq on </Location> <Location "/your-bucket"> IsmProxyPass </Location> Note Here we are using two locations, but if you are not using different buckets you may also merge the two as shown in Adding UspEnableSubreq directives. We recommend adding <Proxy> sections for target URLs to optimize performance (on requesting media from the remote object storage), an example with AWS S3: <Proxy ""> ProxySet connectiontimeout=5 enablereuse=on keepalive=on retry=0 timeout=30 ttl=300 </Proxy> With this setting you can stream the S3 based content. An example: The origin will make the mapping from request to S3 via the virtual path, your-bucket in the above example (but other names could equally be chosen). The S3 bucket will contain tears-of-steel/tears-of-steel.ism which the origin then uses to create the DASH manifest, the MPD. Remote MP4¶ Using IsmProxyPass it is possible to store the dref mp4 in your HTTP storage (for instance S3). Generate the dref mp4 locally and then store it in the 'your-bucket' bucket in S3: #!/bin/bash mp4split -o tears-of-steel.mp4 --use_dref \ tears-of-steel-avc1-400k.cmfv # copy tears-of-steel.mp4 to To stream the MP4 you need to setup and enable IsmProxyPass in the virtual host. You can have better performance with Apache by enabling subrequests with the UspEnableSubreq directive . <Location "/"> UspHandleIsm on UspEnableSubreq on </Location> <Location "/your-bucket"> IsmProxyPass </Location> We recommend adding <Proxy> sections for target URLs to optimize performance: <Proxy ""> ProxySet connectiontimeout=5 enablereuse=on keepalive=on retry=0 timeout=30 ttl=300 </Proxy> Request the MP4 video from the webserver with:
https://beta.docs.unified-streaming.com/documentation/vod/object_storage_use_cases.html
2022-06-25T13:21:14
CC-MAIN-2022-27
1656103035636.10
[]
beta.docs.unified-streaming.com
- URL: - Enter your username and password. The username is your email address. Initial registration When you log in for the first time, you will be asked to assign a new password. Please enter the password you have received and then set a new one: The new password must comply with the following guidelines: - At least 8 characters - At least 1 number(s) - At least 1 capital letter(s) - At least 1 lower case letter(s) Change password): Forgotten password. Upload new document manually - Search and open the required dossier - Open the "Documents" tab - Select the "Add" function, then use the "Upload file" option in the selection list - Select document code - Enter details for the document. At least the document title must be specified. Example: Add a new document using drag & drop Documents can be dragged directly from the file repository into objects in the portal. Depending on the context, different options are available.
https://docs.abf.ch/pages/viewpage.action?pageId=9896116
2022-06-25T14:26:52
CC-MAIN-2022-27
1656103035636.10
[]
docs.abf.ch
Integrate with GitHub Overview Integrating Github.com allows Bridgecrew Cloud to: - Include your Infrastructure-as-Code files in daily scans - Scan changed resources in Infrastructure-as-Code files for every new build generated, (before it is merged to the main branch) and provide an actionable view of the results via GitHub checks - see Code Review. - Display compliance badges for your repositories - see Code Repository Badges - Open Pull Requests when you Remediate buildtime Incidents in your main branch - see Remediate. Note on Scans - Daily Scans - These scans analyze the entire contents of your Infrastructure-as-Code files in your main branch. - Scans triggered by Infrastructure-as-Code files changes in other branches- These scans inspect only the resources changed in the latest build of the branch. How to Integrate For details on integrating Bridgecrew with Github Enterprise Server, see here. Part 1 - In Bridgecrew - From Integrations Catalogue, under Code Integrations, select GitHub . - Under the Configure Account tab, select Authorize. A GitHub Settings page will open. Part 2 - In GitHub: - Select Authorize Bridgecew. - Select All Repositories or select specific repositories, then select Install & Authorize. Part 3 - In BridgeCrew - Select one of the following options then select Next: - Permit all existing repositories - Permit all existing and future repositories - Choose from the repository list. If choosing from the repository list, select the relevant repositories. Note: selecting Previous will bring you back to the Configure Account tab. Use this to configure another account, if necessary. - When the message "New account successfully configured" appears, select Done. Note: after the next GitHub scan, the scanned repository will appear in the Integrations grid; for further details, see here. Example The image below shows an example of a Bridgecrew comment on a violation found in IaC resources modified in the PR that triggered the scan. The comment includes violation details and a link to a documentation page that explains the related Policy. Authorizing a repository via the GitHub API If you choose to select individual repositories in step 4 above, it can become challenging to manage a large list of repositories in a microservices-based or similar dynamic environment. Unfortunately, this is a limitation on the GitHub UI and is not controllable by Bridgecrew. However, you can perform the following steps to automatically add a repo. The steps below work for a personal repository, but the process is similar if you have GitHub administrator for your organization. - In GitHub, go to installed apps and click "configure" for the "Bridgecrew" app. - Note the installation ID in the URL: - Go here and create a personal API access token. For simplicity, enable all scopes. - Fetch the repo ID for a repository you want to add: curl -u GITHUB_USERNAME:GITHUB_API_TOKEN -H "Accept: application/vnd.github.v3+json" | jq '.[] | select(.name == "REPO_NAME") | .id' - Add the repo to the list of authorized apps: curl -u GITHUB_USERNAME:GITHUB_API_TOKEN -H "Accept: application/vnd.github.v3+json" -X PUT - If you go to the GitHub integrations page in Bridgecrew, you should see the new repository available to be selected. You can use the "Select all" button to select all the repos you authorized. Updated 3 months ago
https://docs.bridgecrew.io/docs/integrate-with-githubcom
2022-06-25T14:26:14
CC-MAIN-2022-27
1656103035636.10
[array(['https://files.readme.io/3707d12-integrate.png', 'integrate.png'], dtype=object) array(['https://files.readme.io/3707d12-integrate.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/e6286d2-authorize.png', 'authorize.png'], dtype=object) array(['https://files.readme.io/e6286d2-authorize.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/805c457-integrate_github.png', 'integrate_github.png'], dtype=object) array(['https://files.readme.io/805c457-integrate_github.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/af26d3e-Done.png', 'Done.png'], dtype=object) array(['https://files.readme.io/af26d3e-Done.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/78cc2ac-gh_comment_example.png', 'gh comment example.png'], dtype=object) array(['https://files.readme.io/78cc2ac-gh_comment_example.png', 'Click to close...'], dtype=object) ]
docs.bridgecrew.io
Data Flow: Shopify Order to NetSuite Order Add Error Code: invalid_key_or_ref Error Message: 'Invalid item reference key 1009 for subsidiary 1.' Reason: Subsidiary of the Item does not match the subsidiary of the Customer. Resolution: Ensure that both the Item and Customer belong to the same Subsidiary - In NetSuite, Navigate to Lists->Accounting->Items-. Find the Item with internal id "1009". - Edit the Item Record, set the Subsidiary with internal id "1" and Save the Item Record. Note: In order to find the Subsidiary with internal id "1", navigate to NetSuite -> Setup-> Company->Subsidiaries. Good morning Ken, Ref: Hi Steve, As a reminder, we are awaiting a response in order to proceed with this ticket. Our latest reply is included below: I received this reminder - assume it is an automated message as we have a meeting scheduled for later today. I had a meeting with LastPass yesterday and still do not have an active account kind regards SP Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/115000306772-Shopify-Error-Code-invalid-key-or-ref-Error-Message-Invalid-item-reference-key-1009-for-subsidiary-1-?sort_by=votes
2022-06-25T14:20:33
CC-MAIN-2022-27
1656103035636.10
[]
docs.celigo.com
View source for JDatabaseQuery/ toString ← API16:JDatabaseQuery/ toStringDatabaseQuery/ toString.
https://docs.joomla.org/index.php?title=API16:JDatabaseQuery/_toString&action=edit&section=2
2022-06-25T13:41:58
CC-MAIN-2022-27
1656103035636.10
[]
docs.joomla.org
A Silicon Labs Si5341B programmable clock generator provides nine independent LVDS clock pairs to the FPGA. Two are connected to fabric for general reference, one is connected for DDR4 reference, and six are connected to transceiver reference clock inputs. The input reference for the Si5341B is a fixed-frequency 50-MHz crystal. A footprint for a crystal oscillator provides the option for an additional clock reference input. The output frequency of each channel has a range of 0.0001-350 MHz. See the Si5341B data sheet for more information on configuring this part. The default frequencies used in the set-clock-ec.
https://docs.opalkelly.com/ecm1900/clock-generator/
2022-06-25T14:41:22
CC-MAIN-2022-27
1656103035636.10
[]
docs.opalkelly.com
Developer¶ - Downloading the Source - Autotest’s Directory Structure - Autotest Code Submission Check List - How to use git to contribute patches to autotest - Life cycle of an idea in autotest - Workflow Details - Topic Issues - Topic Issue States - Pull Requests - Pull Request Updates - Mail List Publishing - Autotest Test API - Submission common problems - Autotest requirements - Autotest Design Goals - Autotest Maintenance Docs - Global Configuration - Adding site-specific extensions - Autotest status file specification - Autotest job results specification - Documentation - Autotest Unittest suite - Web Frontend Development - Using the Autotest Mock Library for unit testing - Setting up to use the code - Stubbing out attributes - Stubbing methods on classes - Verifying external interactions of code under test - Constructing mock class instances - Isolating a method from other methods on the same instance - Verifying class creation within code under test - Convenient shortcuts for stubbing - Stubbing out builtins
https://autotest.readthedocs.io/en/stable/main/developer/index.html
2022-06-25T14:55:34
CC-MAIN-2022-27
1656103035636.10
[]
autotest.readthedocs.io
Ensure AWS IAM password policy has a lowercase character Error: AWS IAM password policy does not have a lowercase character Bridgecrew Policy ID: BC_AWS_IAM_6 Checkov Check ID: CKV_AWS_11 Severity: MEDIUM AWS IAM password policy does not have a lowercase character Description Password policies are used to enforce the creation and use of password complexity. Your IAM password policy should be set for passwords to require the inclusion of different character types. The password policy should enforce passwords contain at least one lowercase letter,. - Select Requires at least one lowercase letter. - Click Apply password policy. CLI Command To change the password policy, use the following command: aws iam update-account-password-policy --require-lowercase-characters Note All commands starting with aws iam update-account-password-policy can be combined into a single command. Updated 12 months ago
https://docs.bridgecrew.io/docs/iam_6
2022-06-25T14:19:42
CC-MAIN-2022-27
1656103035636.10
[]
docs.bridgecrew.io
Setup Now we should initialize the servers as configurations that we made. note The command examples used in this section uses kupboard bash script. Check out Bash script for running kupboard for more information. caution The private key used to create server instances must be in data/certs and its name should be ssh.pem. #Generate SSH Keys First, we need to generate a private key and public key for ssh connection to all servers. You can create new keys with the kupboard command setup --keygen. If the command works properly, you can find ssh-kupboard.pem and ssh-kupboard.pub in data/certs. #Initialize User setup --init-user will copy the public key to all servers so that kupboard can access them through ssh connection with the public key. Once this step is completed properly, you should access admin-node1 using kupboard ssh admin-node1. note You can use a specific username when you need to use a specific username to initialize servers. For example, we should use a username used to register a public key when setting up servers on GCP. #Initialize Cluster To initialize all servers with basic setting, setup --init-cluster is used. (init step 1) #Initialize Kubernetes To initialize all servers with, setup --init-k8s is used. (init step 2) #Finish Initialization To finish initialization, setup --init-finish is used. (init step 3)
https://docs.kupboard.io/docs/getstarted/setup/
2022-06-25T14:46:43
CC-MAIN-2022-27
1656103035636.10
[]
docs.kupboard.io
includes the following: - Time the API activity occurred - Source of the activity - Target of the activity - Type of action - Type of response Each log event includes a header ID, target resources, timestamp of the recorded event, request parameters, and response parameters. You can view events logged by the Audit service by using the Console, API, or the SDK for Java. Data from events can be used to perform diagnostics, track resource usage, monitor compliance, and collect security-related events. Version 2 Audit Log Schema On October 8, 2019, Oracle introduced the Audit version 2 schema, which provides the following benefits: - Captures state changes of resources - Better tracking of long running APIs - Provides troubleshooting information in logs The new schema is being implemented over time. Oracle continues to provide Audit logs in the version 1 format, but you cannot access version 1 format logs from the Console. The Console displays only the version 2 format logs. However, not all resources are emitting logs using the version 2 schema. For those services that are not emitting in the version 2 format, Oracle converts version 1 logs to version 2 logs, leaving fields blank if information for the version 2 schema cannot be determined. an example of policy that gives groups access to audit logs, see Required IAM Policy. To modify the Audit log retention period, you must be a member of the Administrators group. See The Administrators Group and Policy.
https://docs.cloud.oracle.com/en-us/iaas/Content/Audit/Concepts/auditoverview.htm
2020-02-17T06:54:46
CC-MAIN-2020-10
1581875141749.3
[]
docs.cloud.oracle.com
Data Attribute. Storage Property Definition Gets or sets a private storage field to hold the value from a column. public: property System::String ^ Storage { System::String ^ get(); void set(System::String ^ value); }; public string Storage { get; set; } member this.Storage : string with get, set Public Property Storage As String Property Value Examples <Column(Storage:=_CustomerID)> _ Public CustomerID As String [Column(Storage="_CustomerID")] public string CustomerID { } Remarks.
https://docs.microsoft.com/en-us/dotnet/api/system.data.linq.mapping.dataattribute.storage?view=netframework-4.8
2020-02-17T07:04:38
CC-MAIN-2020-10
1581875141749.3
[]
docs.microsoft.com
Web Http Binding Class Definition A binding used to configure endpoints for Windows Communication Foundation (WCF) Web services that are exposed through HTTP requests instead of SOAP messages. public ref class WebHttpBinding : System::ServiceModel::Channels::Binding, System::ServiceModel::Channels::IBindingRuntimePreferences public class WebHttpBinding : System.ServiceModel.Channels.Binding, System.ServiceModel.Channels.IBindingRuntimePreferences type WebHttpBinding = class inherit Binding interface IBindingRuntimePreferences Public Class WebHttpBinding Inherits Binding Implements IBindingRuntimePreferences - Inheritance - - Implements - Remarks The WCF Web Programming Model allows developers to expose WCF WCF for syndication and ASP.AJAX integration are both built on top of the WCF Web Programming Model.
https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.webhttpbinding?redirectedfrom=MSDN&view=netframework-4.8
2020-02-17T07:36:12
CC-MAIN-2020-10
1581875141749.3
[]
docs.microsoft.com
Ticket presets are the types of repairs you offer. You can now define attributes of a repair, including the parts you need, paperwork in terms of intake outtake forms, legal documents along with a how-to guide for your technicians, infect “Ticket Presets” are what types of repairs you offer to your customers. Your staff can create tickets or estimates directly from the “Ticket Presets” and ticket interface or the estimate interface will know what you need in terms of paperwork, products (items) and how many etc. If the repair was created from “Preset” then when your technicians are doing the repair they can click on “How To Guide” for more information. Let’s start creating “Ticket Presets“. Please go to the “Ticket Preset Manager“. On the manager, you can see the list of all the already created presets which are easily searchable. From the action column, you can start creating the ticket or an estimate. You can see the howto guide if you click on the “How-to guide” icon. To create a new preset please click on the “Add Ticket Preset” button. Add the relevant information and don’t forget to add the items you need to fix this type of repair. Once finished just click on the “Add Ticket Preset” button. You can modify any existing presets by click on the edit icon. Now let’s create a ticket from the preset interface when I clicked on the create ticket icon on the next screen “Model” and “Issue Type” was already selected as shown in the following screenshot. Click on “Add New Ticket” please and on the next screen all the items were already added for you as shown in the following screenshot. When you click on the “Ticket Actions” you will see a new option (If there is a how-to guide) called “How-to Guide“. Click on “How-to Guide” will show your staff the process of fixing the relevant repair. If you want to integrate presets with your website, please use the Public API as we have added new endpoints for the presets.
https://docs.mygadgetrepairs.com/tickets/tickets-presets-types-of-repairs/
2020-02-17T07:15:16
CC-MAIN-2020-10
1581875141749.3
[array(['https://docs.mygadgetrepairs.com/wp-content/uploads/2019/07/ticket-presets.jpg', None], dtype=object) array(['https://docs.mygadgetrepairs.com/wp-content/uploads/2019/07/ticket-presets-1.jpg', None], dtype=object) array(['https://docs.mygadgetrepairs.com/wp-content/uploads/2019/07/ticket-presets-2.jpg', None], dtype=object) array(['https://docs.mygadgetrepairs.com/wp-content/uploads/2019/07/ticket-presets-3.jpg', None], dtype=object) array(['https://docs.mygadgetrepairs.com/wp-content/uploads/2019/07/ticket-presets-4.jpg', None], dtype=object) array(['https://docs.mygadgetrepairs.com/wp-content/uploads/2019/07/ticket-presets-5.jpg', None], dtype=object) array(['https://docs.mygadgetrepairs.com/wp-content/uploads/2019/07/ticket-presets-6.jpg', None], dtype=object) ]
docs.mygadgetrepairs.com
Style specifies what happens when the collection is accessed with a key that does not exist Member of Keyed Collection (PRIM_KCOL) Data Type - Enumeration The Style property specifies how the collection will handle references to objects that do not exist. By default, Keyed Collections will create object instances if the key is unknown. This is very useful for simple collection use, but it also means that objects cannot effectively be destroyed. The Style property specifies what happens when the collection is accessed with a key that does not exist in the collection. This property can have one of two values: In this example, the forms collection uses the default Style(Factory). The moment the form is referenced, it will be created and shown. Define_Com Class(#Prim_kcol<#Prim_form #Std_num>) Name(#Forms) #Forms<123>.Showform Here the collection uses Style(Collection). This requires that the instances be created by the user. If the 123 form cannot be found, a runtime error will be produced. Define_Com Class(#Prim_kcol<#Prim_form #Std_num>) Name(#Forms) Style(Collection) #Forms<123>.Showform All Component Classes Technical Reference Febuary 18 V14SP2
https://docs.lansa.com/14/en/lansa016/prim_kcol_style.htm
2020-02-17T06:43:14
CC-MAIN-2020-10
1581875141749.3
[]
docs.lansa.com
Coding the Excel Services Windows 7 Gadget – Part 5 – Next steps to sniff it and display it the correct way like that. 2. Need to go over the code – I think there are cases where errors are not properly handled. I also think that there are cases where there are circular references which may cause issues with memory. New features: 1. I want to add links at the top of the fly-out, to allow people to actually open a browser to the content. 2. Add more link types that can be in the fly-out – maybe detect any image and allow it to be displayed in the fly-out, detect youtube links and embed the movie in the fly-out etc. 3. Allow users to have “macros” for links. I can imagine having[…] resolving to the server the workbook comes from. Similarly the file and maybe the document library. 4. Have a mechanism in the settings window that allows you to enter arbitrary links instead of a rigid selection mechanism. 5. Use the new SharePoint list APIs to show what files are available (so users don’t have to know the full URL by heart). If you have other ideas, do not hesitate to suggest them in comments!
https://docs.microsoft.com/en-us/archive/blogs/cumgranosalis/coding-the-excel-services-windows-7-gadget-part-5-next-steps
2020-02-17T08:01:46
CC-MAIN-2020-10
1581875141749.3
[]
docs.microsoft.com
Using the DirectShow EVR Filter To create the enhanced video renderer (EVR) filter, call CoCreateInstance. The CLSID is CLSID_EnhancedVideoRenderer, defined in uuids.h. You do not have to call MFStartup or MFShutdown to use the EVR filter. For more information about using the EVR filter in a DirectShow application, see Audio/Video Playback in DirectShow. The EVR filter starts with one input pin, which corresponds to the reference stream. To add pins for substreams, query the filter for the IEVRFilterConfig interface and call IEVRFilterConfig::SetNumberOfStreams. Call this method before connecting any input pins. Pin 0 is always the reference stream. Connect this pin before any other pins, because the format of the reference stream might limit which substream formats are available. Before starting the graph, set the video clipping window and the destination rectangle. For more information, see Using the Video Display Controls. Unlike the Video Mixing Renderer (VMR), the EVR does not have operational modes (windowed, windowless, and so forth). In particular: - The EVR does not support windowed mode. The application must provide the clipping window. - The EVR does not have a renderless mode. To replace the default presenter, call IMFVideoRenderer::InitializeRenderer. - The EVR does not have a mixing mode. The EVR always creates the mixer. If you have one input stream, it is not necessary to call SetNumberOfStreams to force the EVR to use the mixer. Filter Interfaces The EVR filter exposes the following interfaces. Some of these interfaces are documented in the DirectShow SDK. Use QueryInterface to retrieve pointers to these interfaces: - IAMCertifiedOutputProtection (DirectShow) - IAMFilterMiscFlags (DirectShow) - IBaseFilter (DirectShow) - IEVRFilterConfig - IKsPropertySet (DirectShow) - IMediaEventSink (DirectShow) - IMFGetService - IMFVideoPositionMapper - IMFVideoRenderer - IPersistStream - IQualityControl (DirectShow) - IQualProp (DirectShow) - ISpecifyPropertyPages Input Pin Interfaces The input pins on the EVR filter expose the following interfaces. Use QueryInterface to retrieve pointers to these interfaces: - IEVRVideoStreamControl - IMemInputPin (DirectShow) - IMFGetService - IPin (DirectShow) - IQualityControl (DirectShow) In addition, you can use the IMFGetService interface to retrieve the following interface: Related topics
https://docs.microsoft.com/en-us/windows/win32/medfound/using-the-directshow-evr-filter?redirectedfrom=MSDN
2020-02-17T08:02:25
CC-MAIN-2020-10
1581875141749.3
[]
docs.microsoft.com
API Overview¶('lcd /tmp') child.expect('ftp> ') child.sendline('cd pub') child.expect('ftp> ') child.sendline('get README') string. object.¶ There are two special patterns to match the End Of File (EOF) or a Timeout condition (TIMEOUT).. In this case everything the child has output will be available in the before property.¶ Pexpect matches regular expressions a little differently than what you might be used to. The $ pattern for end of line match is useless. any stream. Note Pexpect does have an internal buffer, so reads are faster than one character at a time, but from the user’s perspective the regex patterns signify the end of line. Pexpect uses a Pseudo-TTY device to talk to the child application, so when the child app prints "\n" you actually see "\r\n". UNIX uses just linefeeds to end lines of text, but not when it comes to TTY devices! TTY devices are more like the Windows world. Each line of text ends. Pexpect compiles all regular expressions with the re.DOTALL flag. With the DOTALL flag, a "." will match a newline. Beware of + and * at the end of patterns¶ the end of your pattern be \D+ instead of \D*. Number digits alone would not satisfy the (\d+)\D+ pattern. You would need some numbers and at least one non-number at the end. Debugging¶¶. If you wish to read up to the end of the child’s output without generating an EOF exception then use the expect(pexpect.EOF) method.)
https://pexpect.readthedocs.io/en/3.x/overview.html
2020-02-17T06:09:21
CC-MAIN-2020-10
1581875141749.3
[]
pexpect.readthedocs.io
Conversion tools¶ These tools convert data between a legacy genomic file format and using ADAM’s schemas to store data in Parquet._alignments: Treats the input as an alignment file (uses loadAlignmentsinstead of loadFragments), which behaves differently for unpaired FASTQ. -save_as_alignments: Saves the output as a Parquet file of Alignments, as SAM/BAM/CRAM, or as FASTQ, depending on the output file extension. If this option is specified, the output can also be sorted: -sort_by_read_name: Sorts alignments by read name. -sort_by_reference_position: Sorts alignments by the location where the reads are aligned. Unaligned reads are put at the end and sorted by read name. References are ordered lexicographically. -sort_by_reference_position_and_index: Sorts alignments by the location where the reads are aligned. Unaligned reads are put at the end and sorted by read name. References are ordered by index that they are ordered in the SequenceDictionary.
https://adam.readthedocs.io/en/latest/cli/conversions/
2020-02-17T06:37:13
CC-MAIN-2020-10
1581875141749.3
[]
adam.readthedocs.io
The System Settings provides the user with a centralized and convenient way to configure all of the settings for your desktop. System Settings is made up of multiple modules. Each module is a separate application, however the System Settings organizes all of these applications into a single location. Tip Each System Settings module can be executed individually See section entitled Running individual System Settings modules for more information. System Settings groups all of the configuration modules into several categories: The modules that make up System Settings fall under one of the above categories, making it easier to locate the correct configuration module.
https://docs.kde.org/trunk5/en/kde-workspace/systemsettings/introduction.html
2020-02-17T07:21:46
CC-MAIN-2020-10
1581875141749.3
[array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)]
docs.kde.org
How to: Determine How ASP.NET Web Forms Were Invoked. Note Be sure you test the IsCrossPagePostBack property of the page that is referenced in PreviousPage. The IsCrossPagePostBack property of the current page always returns false. See Also Concepts Redirecting Users to Another Web Forms Page Cross-Page Posting in ASP.NET Web Forms Implementing Client Callbacks Programmatically Without Postbacks in ASP.NET Web Pages
https://docs.microsoft.com/en-us/previous-versions/ms178141(v=vs.140)?redirectedfrom=MSDN
2020-02-17T07:38:11
CC-MAIN-2020-10
1581875141749.3
[]
docs.microsoft.com
This tutorial will guide you through building a simple pipeline which you can publish and test using a cURL command in a terminal. The following shows the completed pipeline. This tutorial uses the same data sources used in the Quickstart. If you haven't already onboarded these data sources, click this link to download the OpenAPI 2.0 file: vehicle_demo_swagger.yaml.
https://docs.xapix.io/functional-dashboards/pipeline-tutorial
2020-02-17T06:38:02
CC-MAIN-2020-10
1581875141749.3
[]
docs.xapix.io
Using EpiGraphDB API The EpiGraphDB API is a RESTful web API service that offers a standardardised way for users to query with the EpiGraphDB graph database. Here we offer a brief discussion on the major functionalities of the API, and the common methods for users to get data from the API service. OpenAPI / Swagger interface The Swagger interface of the API service looks like this: As an example, by clicking on the /mr endpoint will display a detail view regarding the endpoint specification in required and optional query parameters: You will be able to see the expected schema / data model of the returned query as specified in the endpoint: The Swagger web interface also offers the functionality for users to try out example queries, and the recommended curl query snippet: Web application You will be able to see the underlying API call from the web application to also assist you in creating your own queries to the EpiGraphDB database. Below is an example from the "Mendelian randomization" view: Example usage You can use any of the commonly used tools to query the API service, for example: Finally, for R users we offer an official package to provide easy access to EpiGraphDB:
http://docs.epigraphdb.org/web-api/
2020-02-17T07:34:49
CC-MAIN-2020-10
1581875141749.3
[array(['../img/api/mr-params.png', None], dtype=object) array(['../img/api/response-models.png', None], dtype=object) array(['../img/api/mr-results.png', None], dtype=object) array(['../img/web-app/mr-query.png', None], dtype=object)]
docs.epigraphdb.org
panda3d.core.RenderAttribRegistry¶ - class RenderAttribRegistry¶ This class is used to associate each RenderAttrib with a different slot index at runtime, so we can store a list of RenderAttribs in the RenderState object, and very quickly look them up by type. Inheritance diagram getSlot(type_handle: TypeHandle) → int¶ Returns the slot number assigned to the indicated TypeHandle, or 0 if no slot number has been assigned. getNumSlots() → int¶ Returns the number of RenderAttrib slots that have been allocated. This is one more than the highest slot number in use. getSlotDefault(slot: int) → RenderAttrib¶ Returns the default RenderAttrib object associated with slot n. This is the attrib that should be applied in the absence of any other attrib of this type. - Return type - getSortedSlot(n: int) → int¶ Returns the nth slot in sorted order. By traversing this list, you will retrieve all the slot numbers in order according to their registered sort value.
https://docs.panda3d.org/1.10/cpp/reference/panda3d.core.RenderAttribRegistry
2020-02-17T08:08:53
CC-MAIN-2020-10
1581875141749.3
[]
docs.panda3d.org
IP: 10.0.0.1 | Login: pi | Password: raspberry Download and install Putty: - it will allow you to establish SSH connection with the Rover and open command console. Connect to the Rover Wifi. Don't know how? Find here. Open Putty, type '10.0.0.1' as IP address and press 'open'. Login: pi ; password: raspberry You'll see something like this. You're in! [Based on:] Open 'run' command line (⊞ Win + X and click 'Run' or ⊞ Win + R) Type cmd and then in the command window: ssh [email protected] password: raspberry Access command console and enter: ssh [email protected] password: raspberry
https://docs.turtlerover.com/manuals/connect-to-the-rover-console-ssh
2020-02-17T06:00:01
CC-MAIN-2020-10
1581875141749.3
[]
docs.turtlerover.com
Under "Catalog" and the "Inventory" tab, there is a new drop down menu under the "Remove Products from Available Inventory" option. You can now choose to have items removed from inventory at Checkout, instead of the Miva default of "when added to basket." Normally when you add a product to the cart, it gets removed from inventory until you purchase the item, or your basket expires (typically 60 minutes) and expired baskets are deleted. The new Inventory At Checkout settings allow you to not reduce inventory until the item is actually purchased. When you select "At Checkout," what happens is, when a customer adds an item to the cart, it isn't removed from inventory until they actually check out and purchase the item, essentially, at the invoice page. This is a nice feature for stores who have low stock or one of a kind items. It allows multiple customers to add an item to their cart without affecting inventory counts. There is still a check to validate whether or not there is inventory available for the customers purchase during the checkout process. This check happens when the customer submits their customer information (Bill To / Ship To). If there isn't enough items to fulfill the order it will redirect you back to a specific page. You can select which page you want them to be redirected to by clicking the drop down menu under "When Inventory is Unavailable at Checkout." Here is what the error message the customer will see when their is not enough inventory to cover what is in their basket.
https://docs.miva.com/how-to-guides/inventory-at-checkout
2018-01-16T17:05:45
CC-MAIN-2018-05
1516084886476.31
[]
docs.miva.com
Record entry points in Dynamics AX Enterprise Portal You can record business process flows in Enterprise Portal for Microsoft Dynamics AX by using event traces. You can then view the business process flows in the Security Development Tool. Collect event traces for Enterprise Portal entry points by using Windows Performance Monitor - On a computer that is running the instance of Enterprise Portal that you want to collect event traces from, open Windows Performance Monitor. In the navigation pane, under Data Collector Sets, right-click User Defined, select New, and then click Data Collector Set. - Enter a unique name for the new data collector set. Select Create from a template, and then click Next. - Click Browse, and then select the EP EntryPointTracingTemplate.xml template that was installed together with the Security Development Tool. Click Next. - Enter the address of the root directory where you want to save the data, or click Browse to search for a directory. Click Next. - Click Finish. - Select the new data collector set, and then click Start. - Navigate to the Enterprise Portal site, and execute your business scenario. - Stop the data collector set. - Convert the trace log to XML format. - Open Windows Event Viewer. - Right-click the Applications and Service Logs node, and then select Open Saved Log. - Select the output file that you created in step 4. Output files have the .etl extension. - When you are prompted, click Yes to create a new copy of the event log. - Enter a unique name for the log, and then click OK. The log is displayed in the Saved Logs node. - Right-click the saved log, and then select Save All Events As. - In the Save as type field, select Xml (XML File) (*.xml), and then enter a unique name for the file. You do not have to include display information. Load the trace file - On the ribbon of the main form, click Load trace file. Select the .xml file that was created by using the Enterprise Portal event traces. A new window opens that displays all entry points that have been recorded. To update the level of access for the recorded entry points, select multiple rows in the list, and then click Mark as recorded. You are returned to the main form, and the entry points are marked as recorded. From the main form, you can use the Set entry point permissions function to update the access level. When you have finished, you can switch back to the window that displays the trace data for the Enterprise Portal entry point by clicking Go to the Enterprise Portal trace data window.
https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/lifecycle-services/ax-2012/record-entry-points-enterprise-portal
2018-01-16T17:44:24
CC-MAIN-2018-05
1516084886476.31
[]
docs.microsoft.com
Where do I see candidate matches in the backend? When you're in your account, you can see the Job Candidates option in the top right menu: If you don't see that, it's because your account isn't configured yet with any candidate matches. Just get in touch if that's the case. The matches list will look something like this: At first, the hero's are anonymized, but once they indicate interest in the hiring request, you will receive an email notification and their full contact details will populate.
http://docs.commercehero.io/article/143-where-do-i-see-candidate-matches-in-the-backend
2018-01-16T17:26:35
CC-MAIN-2018-05
1516084886476.31
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57eea01e9033602e61d4a311/images/5936cc6804286305c68cdb4c/file-0NvbEQzLx7.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57eea01e9033602e61d4a311/images/5936ccc004286305c68cdb53/file-UuSKU3KgdW.png', None], dtype=object) ]
docs.commercehero.io
Dictionary v0.12e August 2015, Current for Release 20 September 25, 2015 Updated to dictionary version 0.12e. Change between 0.12d and 0.12e: - Removed validation check requiring "donor_age_at_last_followup" to be less than or equal to 90. September 22, 2015 Updated to dictionary version 0.12d. Change between 0.12c and 0.12d: - Removed validation check requiring "donor_interval_of_last_followup" to be greater than or equal to "donor_survival_time" when donor is deceased. Reasoning: In some cases, information on a donor's vital status after they died was obtained via phone calls from/to families or from obituaries rather than a full clinical followup. A full followup was done prior while the donor was alive, and this followup interval is more informative rather than the followup phone call done after the donor died. The removal of this validation check will allow "donor_interval_of_last_followup" to be less than "donor_survival_time" if the donor is deceased. September 18, 2015 - Added new term to stsm_p.0.evidence.v1 controlled vocabulary: "Partner breakpoint found, and paired sequence either side of breakpoint". September 9, 2015 Updated to dictionary version 0.12c. Change between 0.12b and 0.12c: - Updated regex for "raw_data_accession" to accept EGA Experiment (EGAX) and EGA Run (EGAR) IDs September 8, 2015 - Added new term to GLOBAL.0.platform.v1: "Illumina HiSeq X Ten". September 2, 2015 - Added new term to GLOBAL.0.platform.v1: "Ion Torrent Proton". August 12, 2015 Change between 0.12a and 0.12b - Added cross-field validation check for "donor_primary_treatment_interval" Changes to Specifications Since Version 0.11c (April, 2015) Revisions to Data Elements: Changes Donor Clinical File
http://docs.icgc.org/dictionary/release-20/
2018-01-16T16:55:30
CC-MAIN-2018-05
1516084886476.31
[]
docs.icgc.org
webhelpers.html.grid¶ A helper to make an HTML table from a list of dicts, objects, or sequences. A set of CSS styles complementing this helper is in “webhelpers/html/public/stylesheets/grid.css”. To use them, include the stylesheet in your applcation and set your <table> class to “stylized”. The documentation below is not very clear. This is a known bug. We need a native English speaker who uses the module to volunteer to rewrite it. This module is written and maintained by Ergo^. A demo is available. Run the following command to produce some HTML tables: python -m webhelpers.html.grid_demo OUTPUT_DIR A subclass specialized for Pylons is in webhelpers.pylonslib.grid. Grid class¶ - class webhelpers.html.grid. Grid(itemlist, columns, column_labels=None, column_formats=None, start_number=1, order_column=None, order_direction=None, request=None, url=None, **kw)¶ This class is designed to aid programmer in the task of creation of tables/grids - structures that are mostly built from datasets. To create a grid at minimum one one needs to pass a dataset, like a list of dictionaries, or sqlalchemy proxy or query object: grid = Grid(itemlist, ['_numbered','c1', 'c2','c4']) where itemlist in this simple scenario is a list of dicts:[{‘c1’:1,’c2’...}, {‘c1’...}, ...] This helper also received the list that defines order in which columns will be rendered - also keep note of special column name that can be passed in list that defines order - _numbered- this adds additional column that shows the number of item. For paging sql data there one can pass start_numberargument to the grid to define where to start counting. Descendant sorting on _numberedcolumn decrements the value, you can change how numberign function behaves by overloading calc_row_noproperty. Converting the grid to a string renders the table rows. That’s just the <tr> tags, not the <table> around them. The part outside the <tr>s have too many variations for us to render it. In many template systems, you can simply assign the grid to a template variable and it will be automatically converted to a string. Example using a Mako template: <table class="stylized"> <caption>My Lovely Grid</caption> <col class="c1" /> ${my_grid} </table> The names of the columns will get automatically converted for humans ie. foo_bar becomes Foo Bar. If you want the title to be something else you can change the grid.labels dict. If you want the column part_noto become Catalogue Numberjust do: grid.labels[``part_no``] = u'Catalogue Number' It may be desired to exclude some or all columns from generation sorting urls (used by subclasses that are sorting aware). You can use grids exclude_ordering property to pass list of columns that should not support sorting. By default sorting is disabled - this exclude_orderingcontains every column name. Since various programmers have different needs, Grid is highly customizable. By default grid attempts to read the value from dict directly by key. For every column it will try to output value of current_row[‘colname’]. Since very often this behavior needs to be overridden like we need date formatted, use conditionals or generate a link one can use the column_formatsdict and pass a rendering function/lambda to it. For example we want to apppend footo part number: def custem_part_no_td(col_num, i, item): return HTML.td(`Foo %s` % item[``part_no``]) grid.column_formats[``part_no``] = custem_part_no_td You can customize the grids look and behavior by overloading grids instance render functions: grid.default_column_format(self, column_number, i, record, column_name) by default generates markup like: <td class="cNO">VALUE</td> grid.default_header_column_format(self, column_number, column_name, header_label) by default generates markup like: <td class="cNO COLUMN_NAME">VALUE</td> grid.default_header_ordered_column_format(self, column_number, order, column_name, header_label) Used by grids that support ordering of columns in the grid like, webhelpers.pylonslib.grid.GridPylons. by default generates markup like: <td class="cNO ordering ORDER_DIRECTION COLUMN_NAME">LABEL</td> grid.default_header_record_format(self, headers) by default generates markup like: <tr class="header">HEADERS_MARKUP</tr> grid.default_record_format(self, i, record, columns) Make an HTML table from a list of objects, and soon a list of sequences, a list of dicts, and a single dict. <tr class="ODD_OR_EVEN">RECORD_MARKUP</tr> grid.generate_header_link(self, column_number, column, label_text) by default just sets the order direction and column properties for grid. Actual link generation is handled by sublasses of Grid. grid.numbered_column_format(self, column_number, i, record) by default generates markup like: <td class="cNO">RECORD_NO</td> generate_header_link(column_number, column, label_text)¶ This handles generation of link and then decides to call self.default_header_ordered_column_formator self.default_header_column_formatbased on whether current column is the one that is used for sorting. you need to extend Grid class and overload this method implementing ordering here, whole operation consists of setting self.order_column and self.order_dir to their CURRENT values, and generating new urls for state that header should set set after its clicked (additional kw are passed to url gen. - like for webhelpers.paginate) example URL generation code below: GET = dict(self.request.copy().GET) # needs dict() for py2.5 compat self.order_column = GET.pop("order_col", None) self.order_dir = GET.pop("order_dir", None) # determine new order if column == self.order_column and self.order_dir == "asc": new_order_dir = "dsc" else: new_order_dir = "asc" self.additional_kw['order_col'] = column self.additional_kw['order_dir'] = new_order_dir # generate new url for example url_generator uses # pylons's url.current() or pyramid's current_route_url() new_url = self.url_generator(**self.additional_kw) # set label for header with link label_text = HTML.tag("a", href=new_url, c=label_text) - class webhelpers.html.grid. ObjectGrid(itemlist, columns, column_labels=None, column_formats=None, start_number=1, order_column=None, order_direction=None, request=None, url=None, **kw)¶ A grid class for a sequence of objects. This grid class assumes that the rows are objects rather than dicts, and uses attribute access to retrieve the column values. It works well with SQLAlchemy ORM instances.
http://webhelpers.readthedocs.io/en/latest/modules/html/grid.html
2018-01-16T16:52:03
CC-MAIN-2018-05
1516084886476.31
[]
webhelpers.readthedocs.io
BMC AMI Backup and Recovery for IMS 5.1 This space contains information about version 5.1.00 of BMC AMI Backup and Recovery for IMS (formerly known as the Backup and Recovery Solution for IMS product). The BMC AMI Backup and Recovery for IMS offers the following products: - CHANGE ACCUMULATION PLUS - IMAGE COPY PLUS - RECOVERY MANAGER for IMS - RECOVERY PLUS for IMS Frequently asked questions This section answers frequently asked questions (FAQs) about the documentation portal.: -) Click Export. For more information, see Help for BMC Online Technical Documentation ..
https://docs.bmc.com/docs/brsims51/home-828949944.html
2021-02-25T03:06:32
CC-MAIN-2021-10
1614178350706.6
[]
docs.bmc.com
Understanding Anomaly Detection¶ Network Sensor listens for abnormalities in network traffic and identifies endpoints with Anomaly and blocks them based on your access policies. You can configure Anomaly Definitions to detect abnormal network traffic such as Ad hoc Network, ARP Bomb, Spoofed ARP, MAC+IP Clones, and more. For an anomaly to be detected, anomalies definitions must be assigned to node policies. ARP Bomb¶ While the network sensor is monitoring ARP, it detects a device that generates excessive ARP packets and designates it as a critical Node. It detects abnormal ARP behavior and prevents attempts to disable network access or disable network access control. An attacker Node continually keeps sending request packets to the target Node, thereby causing its cache to fill up quickly. Soon the target Node will spend more of its resources to maintain its cache, which may lead to buffer overflow. And real mapping would never be entered in the cache. MAC+IP Clones¶ The IP protocol uses IP and MAC addresses to identify the destination of the communication. Since there is no verification procedure at this time, it is easy to steal. If you have cloned the MAC / IP of the malicious device on the network, it is very difficult to check the normal system and the stolen system at the packet level. However, Genian NAC can detect MAC / IP theft in a variety of ways. The network sensor periodically sends an ARP request to check the operation status of the device. If two replies are received at the same time, suspend the MAC / IP clone and designate the Node as a critical Node. In addition, if the user changes the MAC on the endpoint where the Agent is installed and the MAC is already being used by another device, the device is designated as a critical Node. In addition, Genian NAC provides industry-leading platform detection to detect when a Node is changing to another platform, allowing administrators to see when changes are made, and to block devices when unauthorized platform changes are detected. Port Scanning¶ Detects any device trying to scan TCP or UDP ports. Genian NAC use a honeypot IP for detecting scanning devices. Sensor MAC Clones¶ Detects whether a Sensor MAC address is cloned (No configuration settings required) Spoofed ARP¶ While ARP Enforcement is a technology used to block communication of network devices, ARP Spoofing is mainly used in malicious codes and is used for eavesdropping communication of other parties. Genian NAC can detect ARP packets through a network sensor to detect devices attempting to be spoofed. In addition, it provides a function to block devices that attempted spoofing and to return to normal MAC through ARP cache detox.
https://docs.genians.com/release/en/threats/understanding-threat.html
2021-02-25T02:53:11
CC-MAIN-2021-10
1614178350706.6
[]
docs.genians.com
You can back up and restore the OnCommand Workflow Automation (WFA) database and supported configurations so that you can recover the data in case of a disaster. The supported configurations include data access, HTTP timeout, and SSL certificates. You must have admin privileges or architect credentials. You must create the backup in a secure location because restoring the backup will provide access to all the storage systems that are accessed by WFA.
https://docs.netapp.com/wfa-41/topic/com.netapp.doc.onc-wfa-isg/GUID-34223A15-C5CA-4333-8498-F8F596337498.html?lang=en
2021-02-25T03:27:40
CC-MAIN-2021-10
1614178350706.6
[]
docs.netapp.com
HTTPS (SSL) Netlify offers free HTTPS on all sites, including automatic certificate creation and renewal. Our certificates use the modern TLS protocol, which has replaced the now deprecated SSL standard. HTTPS brings a lot of advantages: - Content integrity: Without HTTPS, free Wi-Fi services can inject ads into your pages. - Security: If your site has a login or accepts form submissions, HTTPS is essential for your users’ security and privacy. - SEO: Google search results prioritize sites with HTTPS enabled. - Referral analytics: HTTPS-enabled sites will not send referral data to sites without HTTPS enabled. - HTTP/2: Boost your sites’ performance — HTTP/2 requires HTTPS. # Certificate service types Netlify offers three different ways of providing a certificate for HTTPS. Netlify-managed certificates are offered to all Netlify sites for free. Find details for this in the section on Netlify-managed certificates. Custom certificates are a way for you to provide a certificate that matches your specifications — things like a wildcard certificate or an Extended Validation (EV) certificate. If you’d like to provide your own custom certificate, refer to Custom certificates below for more details. Certificates with dedicated IPs are available for people who do not want to use SNI-based certificates. If you want your own unique certificate available to all browsers without requiring SNI and without a shared certificate as fallback, please contact us. (This feature may not be available on all plans.) # Netlify-managed certificates When you create a new site on Netlify, it’s instantly secured at the Netlify-generated URL (for example,). If you add a custom domain, we will automatically provision a certificate with Let’s Encrypt, enabling HTTPS on your domain. Certificates are generated and renewed automatically as needed. Use Netlify DNS for automatic wildcards If your domain uses Netlify DNS, we’ll automatically provision a wildcard certificate, which ensures instant HTTPS for all of the Netlify sites using subdomains of that domain. In rare circumstances, there can be problems when provisioning a certificate for some domains. You can check the status of your site’s certificates in Site settings > Domain management > HTTPS. If you’re having trouble with the automatic provisioning, visit the troubleshooting page for an error message guide and other tips. # Domain aliases Your certificate will include all your domain aliases when it’s issued, but note that DNS also needs to be configured in advance for all aliases for us to include them on your certificate. Visit the troubleshooting page for more information on confirming the new configuration. Avoid rate limiting for subdomains If you have more than 5 aliases that are subdomains of the same domain, you might run into rate limits with our certificate provider. In that case we recommend you provide your own wildcard certificate using Netlify DNS or contact support for our assistance for getting them set up with our certificate provider. Please do this before adding any aliases for best results! # Custom certificates If you already have a certificate for your domain and prefer that to Netlify’s domain-validated certificate, you can install your own. To install a certificate, you’ll need: - the certificate itself, in X.509 PEM format (usually a .crt file) - the private key you used to request the certificate - a chain of intermediary certificates from your Certificate Authority (CA) In Site settings > Domain management > HTTPS, select Set Custom Certificate, then enter the information above. Renewal is not automatic When the time comes to renew your custom certificate, Netlify cannot do this automatically. You will need to renew it at your Certificate Authority, then follow the steps above to install it on your Netlify site. For automatic renewal, you can switch to a Netlify-managed certificate. Netlify validates that the certificate matches the custom domain for your site and that the DNS record for the domain is pointed at Netlify, then installs your certificate. If your certificate covers several of your sites (in other words, if it’s a wildcard certificate or uses Subject Alternative Names), you can install it on one site, and it will apply to all other sites covered by the certificate. # Certificates with dedicated IPs Netlify’s standard HTTPS handling relies on a browser standard called Server Name Indication, or SNI. It makes provisioning and verifying certificates more efficient, but it’s not supported on very old browsers, like Internet Explorer 7 on Windows XP, or Android 4. Site visitors using these browsers will encounter a security message on your site before they can access it over HTTPS. You might also experience issues with certain automated tools, like PhantomJS below 2.0 (early 2015). If you don’t want to use an SNI-based certificate for your site, Netlify offers the option for a traditional certificate with a dedicated IP. Please contact us for more information. (This feature may not be available on all plans.) # HSTS preload Most major browsers use a list of predefined domains to automatically connect to websites using HTTPS. This list is called the HTTP Strict Transport Security (HSTS) preload list. Your site can be included in this list if you follow the requirements in hstspreload.org: Your custom domain must be accessible in the www subdomain. For example:. You must include this header in your _headersfile or Netlify configuration file: Strict-Transport-Security: max-age=63072000; includeSubDomains; preload When this is set, the browser assumes that your site, along with all subdomains, can be accessed using HTTPS, and it will force those connections. This action is not easily reversible Please make sure to only use the directive preload once you’re confident that the domain and all subdomains are ready to be served using only HTTPS, since this setting is hard to remove once it’s in place, as described at hstspreload.org. # HTTP/2 When HTTPS is enabled for your site, Netlify supports HTTP/2, a newer internet protocol engineered for faster web performance. This brings support for core HTTP/2 features like request multiplexing and compressed headers, but does not include server push capability. Did you find this doc useful? Your feedback helps us improve our docs.
https://docs.netlify.com/domains-https/https-ssl/?utm_source=blog&utm_medium=scotchio&utm_campaign=devex
2021-02-25T03:17:49
CC-MAIN-2021-10
1614178350706.6
[]
docs.netlify.com
used to render parts of the reflecion probe's surrounding selectively. If the GameObject's layerMask AND the probe's cullingMask is zero then the game object will be invisible from this probe. See Layers for more information. using UnityEngine; using UnityEditor; public class ExampleScript : MonoBehaviour { void Start() { var probe = GetComponent<ReflectionProbe>(); // Only render objects in the first layer (Default layer) when capturing the probe's texture probe.cullingMask = 1 << 0; } } See Also: Camera.cullingMask.
https://docs.unity3d.com/2018.4/Documentation/ScriptReference/ReflectionProbe-cullingMask.html
2021-02-25T02:04:37
CC-MAIN-2021-10
1614178350706.6
[]
docs.unity3d.com