Dataset columns: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars).
Microsoft takes user disclosures very seriously. Cortana Skills require you to provide your skill's users with the terms of use related to your service. Below are some guidelines and third-party resources to help you fulfill these requirements. Please be advised that these resources are provided for your convenience. You agree to assume all risk and liability arising from your use of these resources and that Microsoft is not responsible for any issues arising out of your use of them.

For a sample terms of use for your skill, see Sample Terms for Cortana Skills. These Sample Terms for Cortana Skills are provided for your convenience to use with your skill if you do not already have your own terms of use. They are used at your own risk and come with important conditions, which you should understand before you decide to use them. Only use the Sample Terms for Cortana Skills if you understand and agree with the following conditions:

- These Sample Terms for Cortana Skills may not accurately describe the relationship you want to have with end users or the manner in which they will interact with your skill. You should edit or revise them as needed.
- These Sample Terms for Cortana Skills are not localized to every jurisdiction in which your skill might be available. You are responsible for ensuring that your skill complies with local laws, including ensuring that any terms you use for your skill comply with local laws.
- Microsoft makes no representations, guarantees, or warranties about the Sample Terms for Cortana Skills. For example, Microsoft does not guarantee that the Sample Terms for Cortana Skills are legally enforceable or comply with local laws, nor that they anticipate every legal scenario.
- Your use of the Sample Terms for Cortana Skills is your agreement with your end users, and Microsoft is not liable for any liability, claims, damages, or losses which may be sustained in connection with use of the Sample Terms for Cortana Skills (such as a court concluding that they are unenforceable or a regulator deciding that they are illegal under local law).

Sample terms for Cortana skills

These terms are an agreement between you and the skill publisher for the use of the skill available through Cortana and Cortana-enabled devices ("Cortana Services"). Please read them. They apply to your use of the skill, including any updates to the skill, unless the skill publisher provides you with separate terms, in which case those terms apply. "Skill publisher" means the entity making the skill available to you, as identified in the Cortana Skill description. IF YOU DO NOT ACCEPT THESE TERMS, YOU HAVE NO RIGHT TO AND MUST NOT USE THE SKILL.

- USE: You may use the skill for the sole purpose of interacting with the service provided by the skill publisher, as the skill publisher has made it available. Unless the skill publisher has enabled such uses, you may not remove, modify, or tamper with any notice or link that is incorporated into the skill.
- TERMINATION: If the skill publisher believes that you are making unauthorized use of the skill or that you are in violation of these terms, it may suspend or terminate your access to the skill publisher's service with or without notice. This may result in a loss of your data.
- YOUR CONTENT: You grant to the skill publisher the right to use any content that you submit via the skill as necessary for the skill publisher to determine if any support services are available.
- CHANGES TO TERMS: The skill publisher may change these terms.
- APPLICABLE LAW.
- Outside the United States and Canada: If you acquired the skill outside the United States and Canada, local law may apply.
- LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES: To the extent not prohibited by law, if you have any basis for recovering damages, you can recover from the skill publisher only direct damages up to the amount you paid for the skill or USD $1.00, whichever is greater. This limitation applies even if the skill publisher knew or should have known about the possibility of the damages.

Next steps: Create your privacy policy. For information, see Privacy policy guidelines.
https://docs.microsoft.com/en-us/cortana/skills/terms-of-use
2019-11-12T09:09:25
CC-MAIN-2019-47
1573496664808.68
[]
docs.microsoft.com
Application Pool Identities and SQL Server Express

by Thomas Deml

Introduction

IIS 7.5 on Windows 7 or Windows Server 2008 R2 supports a new feature called "Application Pool Identity". It allows the effective isolation of Application Pools without having to maintain a user account for each and every Application Pool that is supposed to be sandboxed. Application Pool Identities are generated automatically and don't require passwords either, so management costs go down. As with every new feature there are certain drawbacks, and this is no different with the Application Pool Identity feature. This article describes an issue that a web application developer might face when using IIS 7.5 together with SQL Express.

The Pieces of the Puzzle

SQL Express supports a feature called "User Instances" or RANU (Run As Normal User). If this feature is turned on (turning it on is as easy as adding "User Instance=true" to your connection string), the SQL Server process that is started when a user opens a database connection will run under the same account as the connecting user. RANU is very desirable from a security perspective. When you are developing with Visual Studio 2008 or Visual Studio 2010, a default connection string is stored in the machine.config file. Here it is:

<connectionStrings>
<add name="LocalSqlServer" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient"/>
...

The default connection string is using RANU (User Instance=true), as you can see. The default connection string is used, for example, when a feature requires a database to store some data but no database is configured yet. ASP.NET Membership is a good example of this. If a developer adds Membership functionality to his web application, ASP.NET will automatically create a database and the necessary tables by using the default connection string in machine.config.

The Issue

Now here comes the problem: out of the box, RANU doesn't work with the new "Application Pool Identity" feature. The "Application Pool Identity" feature is the default identity in IIS 7.5. The IIS 7.5 DefaultAppPool, for example, runs under the "IIS AppPool\DefaultAppPool" identity and not as NetworkService anymore. For this reason you might run into this issue if you are also developing with Visual Studio. Here is the error page you might see:

The error is caused because SQL Server Express RANU requires a user profile to be loaded, and IIS 7.5 doesn't load a user profile by default. You don't see this error in previous versions of IIS because the DefaultAppPool was running as NetworkService and the operating system preloads the user profile for NetworkService.

The Fix

Fortunately the fix for this problem is pretty straightforward. IIS allows you to load the user profile for an Application Pool by simply setting the LoadUserProfile setting on an Application Pool to true. This can be done via the user interface:

- Click the "Application Pools" node in the IIS Manager.
- Select the Application Pool in question, e.g. DefaultAppPool.
- Click "Advanced Settings..." in the Actions menu on the right-hand side.
- In the "Process Model" sub-section you find the line with the name "Load User Profile". Switch it to "true".

If you want to do this via the command line, execute the following command in an elevated command prompt:

%windir%\system32\inetsrv\appcmd set config -section:applicationPools /[name='DefaultAppPool'].processModel.loadUserProfile:true

Side Effects of Loading the User Profile

The biggest side effect of loading the user profile can be the temp directory. If the user profile is not loaded, the %temp% environment variable points to the \windows\temp directory. Everybody can write to this directory. If the user profile is loaded, the %temp% environment variable points to a dedicated directory to which only the user has access. So if your DefaultAppPool runs as "IIS AppPool\DefaultAppPool", the %temp% variable would point to the C:\Users\DefaultAppPool\AppData\Local\Temp directory. Only the DefaultAppPool identity and Administrators would have write access to this directory.

So why is this a problem? Unfortunately the Windows operating system supports a feature called "impersonation". Impersonation allows a piece of code to run under an identity different from the identity the process is running as. Some web application frameworks take advantage of this feature. Classic ASP, for example, executes all code as impersonated. The identity used is either the anonymous user configured in the IIS configuration store or the user authenticated via the IIS-provided authentication schemes (Basic, Digest or Windows). ASP.NET doesn't impersonate the code it runs by default. Impersonation is still supported using the system.web identity section in configuration or using .NET APIs. The problem is that the impersonated identity probably doesn't have access to the %temp% directory. Here is an article that explains one instance of this problem. If you are only developing ASP.NET applications and don't use the identity section, you will probably never see these issues, and loading the user profile should be perfectly safe.

Summary

This article described an issue with SQL Express and the new IIS 7.5 Application Pool Identity. Because the Application Pool Identity feature is the default in IIS 7.5, users might see the problem described above when using SQL Express in a development environment, i.e. when developing with Visual Studio. This issue won't happen in production environments because SQL Express is not supported in production environments.
https://docs.microsoft.com/en-us/iis/manage/configuring-security/application-pool-identities-and-sql-server-express
2019-11-12T07:50:35
CC-MAIN-2019-47
1573496664808.68
[]
docs.microsoft.com
OVH Guides Microsoft collaborative solutions Misc - Exchange diagnostic: what to do if you encounter an error - Exchange 2013/2016 How to create an automatic signature - Exchange 2013/2016: How to use the groups feature (mailing lists) - Exchange 2013/2016: How to use resource accounts - Manually Configuring Outlook - Exchange 2013: Thunderbird Configuration - Exchange 2013: Configuration on Windows 8 - Exchange 2016: how to set up automatic replies in OWA - Exchange 2016: How to share calendars via OWA - Exchange 2016: How to share a folder via OWA - Enable and manage your OVH SharePoint
https://docs.ovh.com/ie/en/microsoft-collaborative-solutions/
2019-11-12T09:34:08
CC-MAIN-2019-47
1573496664808.68
[array(['/theme/img/logo-algolia-search.png', None], dtype=object)]
docs.ovh.com
GMT symbols for Whale-watchers and marine biologists¶

What's this?¶

This is a collection of custom symbols for the Generic Mapping Tools (GMT) software that can be used to plot whale and dolphin sightings or strandings on your maps, together with the locations of the main cities and ports in your area, or other useful data like plankton concentration or water temperatures. The collection currently comprises symbols for 8 species of Baleen Whales, 16 of Toothed Whales, 2 species of seals and 4 more for "unidentified" seals, beaked whales, dolphins or whales. Several versions (low/normal/high) of most symbols are also available, and you can easily switch between color and gray symbols in 57 of the 90 symbols provided, so you can really choose between more than 150 different symbols.

Author of the symbols: Pablo Valdés <[email protected]>

How to use them?¶

Before you start, think about the type of map you want to obtain and prepare your data. If you want to create a 2D map (the most common situation) you need plot; if you want a 3D map you should use plot3d instead, but you could also pass the data to text.

- Common dolphin (Delphinus delphis) - ddelphis_low.def, ddelphis_midlow.def, ddelphis.def, ddelphis_midhigh.def, ddelphis_high.def
- Striped dolphin (Stenella coeruleoalba) - stripped_low.def, stripped.def, stripped_high.def
- Bottlenose dolphin (Tursiops truncatus) - bottlenose_low.def, bottlenose.def, bottlenose_high.def
- Atlantic White-sided dolphin (Lagenorhynchus acutus) - atlanticwhitesided_low.def, atlanticwhitesided.def, atlanticwhitesided_high.def
- Killer whale (Orcinus orca) - killerwhale_low.def, killerwhale.def, killerwhale_high.def
- Risso's dolphin (Grampus griseus) - rissosdolphin_low.def, rissosdolphin.def, rissosdolphin_high.def
- Short-Finned Pilot whale (Globicephala macrorhynchus) - shortfinnnedpilotwhale_low.def, shortfinnnedpilotwhale.def
- Long-Finned Pilot whale (Globicephala melaena) - longfinnedpilotwhale_low.def, longfinnedpilotwhale.def
- Southern Rightwhale Dolphin (Lissodelphis peronii) - srightwhaledolphin_low.def, srightwhaledolphin.def, srightwhaledolphin_high.def
- Common porpoise (Phocoena phocoena) - commonporpoise_low.def, commonporpoise.def, commonporpoise_high.def
- Burmeister's porpoise (Phocoena spinipinnis) - burmeistersporpoise_low.def, burmeistersporpoise.def, burmeistersporpoise_high.def
- Spectacled porpoise (Australophocaena dioptrica) - spectacledporpoise_low.def, spectacledporpoise.def, spectacledporpoise_high.def
- Beluga (Delphinapterus leucas) - beluga_low.def, beluga.def, beluga_high.def
- Cuvier's beaked whale (Ziphius cavirostris) - cuviersbeaked_low.def, cuviersbeaked.def, cuviersbeaked_high.def
- Unidentified beaked whale (Mesoplodon spp.) - unidentifiedbeakedwhale_low.def, unidentifiedbeakedwhale.def, unidentifiedbeakedwhale_high.def
- Sperm whale (Physeter macrocephalus) - spermwhale_low.def, spermwhale.def, spermwhale_high.def, spermwhaletail_low.def, spermwhaletail.def, spermwhaletail_high.def
- Pygmy sperm whale (Kogia breviceps) - pigmyspermwhale_low.def, pigmyspermwhale.def, pigmyspermwhale_high.def
- A dolphin (gen. unknown) - unidentifieddolphin_low.def, unidentifieddolphin.def, unidentifieddolphin_high.def

Baleen Whales, SubO. Mysticeti:

- Minke whale (Balaenoptera acutorostrata) - minkewhale.def, minkewhale_low.def, minkewhale_high.def
- Fin Whale (Balaenoptera physalus) - finwhale.def, finwhale_low.def, finwhale_high.def
- Sei Whale (Balaenoptera borealis) - seiwhale_low.def, seiwhale.def, seiwhale_high.def
- Gray Whale (Eschrichtius robustus) - graywhale_low.def, graywhale.def, graywhale_high.def
- A whale (unknown species) - unidentifiedwhale_low.def, unidentifiedwhale.def, unidentifiedwhale_high.def

3. Call them by including the corresponding plot or plot3d lines in a GMT script like this:

#!/usr/bin/env bash
gmt coast -JM20c -R-10/6/33/36 -K -W0.5pt/0 -P -Gblack > myfile.ps
gmt plot cdolphin.xy -Skcommondolphin/0.5 -JM20c -R-10/6/33/36 -P -K -O >> myfile.ps
gmt plot bottlenose_dolphin.xy -Skbottlenose_high/0.5 -K -O ...etc >> myfile.ps

You can then view the resulting PostScript file, or convert it to PDF or raster formats with gmt psconvert.

FAQ and Troubleshooting¶

The symbols are not drawn¶

When I run the script I obtain "GMT ERROR: plot: Could not find custom symbol symbolname.def!"¶

You probably wrote something like -Sksymbolname.def/0.5. Please note that this is incorrect; you should remove .def and use -Sksymbolname/0.5 instead in your script.

"Could not find custom symbol mydirsymbolname!", "Cannot open file mydirmyfile.xy!"¶

You probably placed your symbols in a subdirectory and are using the wrong directory notation for your operating system. Please note that Linux uses "/" for directories whereas Microsoft Windows uses "\", so check whether you wrote something like -Skmydir\killerwhale when you meant -Skmydir/killerwhale, or mydir\killerwhale.xy instead of mydir/killerwhale.xy. This error can also occur if you try to run a script created in Windows on a Linux OS or vice versa. Try interchanging / and \ and run the script again.

Symbol customization¶

The symbols are too big! What size should I use?¶

In your script you'll need to reduce the normal size of the symbols. A range of sizes between 0.12 and 0.18 (rarely more than 0.2) should be OK. Some symbols are a little bigger than others, so play with the size in the script until you obtain the right one for you. Remember that you can easily modify the size of the symbol directly in your GMT script (-Skoorca/0.8, -Skoorca/0.2) or in your xy file. I recommend using different sizes for males, females and calves.

I don't want color symbols!¶

You can easily obtain the same symbol in graytones by editing the def file with your favourite text editor. Follow the instructions you will find inside the .def files. Some symbols like the killerwhale have only a b/w version for obvious reasons.

How can I change the colour of the symbols?¶

The colour of each area is specified inside the def file, so you can't simply specify a colour directly in your GMT script or you will obtain strange results. You should open the def file and edit the -W and -G values there.

After editing the def file I obtain strange polygonal patches instead of the desired symbol, but all points are the same as in the original .def!¶

Check that you haven't deleted the pt specification in a line with -W. This (-W100) is erroneous (a line width of 100 pt), while this (-W0.25pt/100) is OK.

Other questions¶

Why are there so many similar symbols (low, high, etc.) for the same species?¶

Sometimes some symbols will overlap severely with their neighbors, especially with the most common species like Delphinus dolphins. I think that this looks ugly, so you will obtain a nicer map if you use a slightly taller or shorter symbol for these specific animals. Try the different versions of the same symbol until you obtain a satisfactory presentation. Remember that you must place these problematic specimens in a different xy file first.

The representation of multiple strandings or sightings at the same point can also be problematic, and sometimes you will need to obtain more complex symbols to show a multiple and heterogeneous stranding, for instance a mother/calf stranding or two different species sighted at exactly the same point. You can deal with those cases if you stack several low/high symbols until you obtain the complex symbol desired. You will need to duplicate or triplicate the plot lines in the script and perhaps also play with the size and color of the symbols. For instance, if you see a killer whale harassing two dolphins and you want to show it all in the same map (see the sketch below):

plot a_killer_whale_data.xy -Skkillerwhale_high/0.8 … etc
plot a_common_dolphin_mother_data.xy -Skcommondolphin_midlow/0.7 … etc
plot and_its_calf_data.xy -Skcommondolphin_low/0.3 … etc

For a better result, place the lines calling the taller symbols first and the shorter symbols at the end.
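A complete version of that stacking example might look like the following sketch; the data file names are the placeholders used above, the region and sizes are arbitrary, and the command style simply mirrors the earlier example on this page.

#!/usr/bin/env bash
# Base map, kept open for overlays (-K).
gmt coast -JM20c -R-10/6/33/36 -K -W0.5pt/0 -P -Gblack > myfile.ps
# Stack the taller symbol first, then the shorter ones on top.
gmt plot a_killer_whale_data.xy -Skkillerwhale_high/0.8 -JM20c -R-10/6/33/36 -P -K -O >> myfile.ps
gmt plot a_common_dolphin_mother_data.xy -Skcommondolphin_midlow/0.7 -JM20c -R-10/6/33/36 -P -K -O >> myfile.ps
gmt plot and_its_calf_data.xy -Skcommondolphin_low/0.3 -JM20c -R-10/6/33/36 -P -O >> myfile.ps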
https://docs.generic-mapping-tools.org/latest/users_contrib_symbols/biology/Cetacea.html
2019-11-12T08:25:44
CC-MAIN-2019-47
1573496664808.68
[]
docs.generic-mapping-tools.org
Interlocked Class

Definition

Provides atomic operations for variables that are shared by multiple threads.

public ref class Interlocked abstract sealed
public static class Interlocked
type Interlocked = class
Public Class Interlocked

- Inheritance: Object → Interlocked

Examples

Imports System.Threading

Namespace InterlockedExchangeExample
    Class MyInterlockedExchangeExampleClass
        '0 indicates that the resource is not in use.
        Private Shared usingResource As Integer = 0
        Private Const numThreadIterations As Integer = 5

        Private Shared Sub MyThreadProc()
            Dim i As Integer
            For i = 0 To numThreadIterations - 1
                UseResource()
                'Wait 1 second before next attempt.
                Thread.Sleep(1000)
            Next i
        End Sub

        'A simple method that denies reentrancy.
        Shared Function UseResource() As Boolean
            '0 indicates that the method is not in use.
            If 0 = Interlocked.Exchange(usingResource, 1) Then
                'Access the protected resource here, then release it.
                Interlocked.Exchange(usingResource, 0)
                Return True
            End If
            Return False
        End Function
    End Class
End Namespace

Remarks
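For readers working in C#, a minimal sketch of the same deny-reentrancy pattern (not taken from the original page; the class and method names are arbitrary) could look like this:

using System;
using System.Threading;

static class ResourceGate
{
    // 0 = resource free, 1 = resource in use.
    private static int usingResource = 0;

    static bool UseResource()
    {
        // Atomically set the flag to 1 and read its previous value.
        if (Interlocked.Exchange(ref usingResource, 1) == 0)
        {
            try
            {
                Console.WriteLine("Acquired the resource.");
                // ... work with the protected resource here ...
                return true;
            }
            finally
            {
                // Release the flag so other threads can enter.
                Interlocked.Exchange(ref usingResource, 0);
            }
        }
        Console.WriteLine("Resource busy; skipping this attempt.");
        return false;
    }

    static void Main()
    {
        for (int i = 0; i < 3; i++)
            new Thread(() => UseResource()).Start();
    }
}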
https://docs.microsoft.com/en-us/dotnet/api/system.threading.interlocked?view=netframework-4.8
2019-11-12T08:15:02
CC-MAIN-2019-47
1573496664808.68
[]
docs.microsoft.com
Enhanced load balancers allow you to add forwarding policies to forward requests from different clients to backend server groups based on the domain names or URLs specified in the forwarding policies. Currently, you can add forwarding policies only to HTTP or HTTPS listeners. You can add a maximum of 20 forwarding policies. This allows you to easily forward video, image, audio, and text requests to different backend server groups, improving the flexibility of service traffic distribution and facilitating resource allocation. After a forwarding policy is added, the load balancer will forward frontend requests based on the following rules: Alternatively, locate the target listener in the load balancer list and click its name to switch to the Listeners area. Click Add on the right of Forwarding Policies and then add a forwarding policy. The following table shows how a URL is matched, and Figure 1 illustrates how a request is forwarded to a backend server group. In this figure, the system first searches the request URL (/elb_gls/glossary.html) using the Exact match rule. If no precisely matched URL is found, the Prefix match rule is used. If a URL matches the prefix of the request URL, the request is forwarded to backend server group 2. Even if the request URL also matches rule 3 (Regular expression match), the request is forwarded to backend server group 2 because Prefix match enjoys higher priority.
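As an illustration only (this is not the load balancer's actual implementation), the precedence described above — exact match first, then prefix match, then regular-expression match — can be sketched as follows; the rule structure and function name are invented for the example.

import re

def pick_backend(request_url, rules, default_group):
    """rules: list of (match_type, pattern, backend_group) tuples."""
    for rule_type in ("exact", "prefix", "regex"):  # precedence order
        for match_type, pattern, group in rules:
            if match_type != rule_type:
                continue
            if rule_type == "exact" and request_url == pattern:
                return group
            if rule_type == "prefix" and request_url.startswith(pattern):
                return group
            if rule_type == "regex" and re.search(pattern, request_url):
                return group
    return default_group

# /elb_gls/glossary.html: no exact rule matches, the /elb_gls/ prefix rule does,
# so the request goes to backend server group 2 even if a regex rule also matches.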
https://docs.otc.t-systems.com/en-us/usermanual/elb/en-us_topic_0114694934.html
2019-11-12T08:01:26
CC-MAIN-2019-47
1573496664808.68
[]
docs.otc.t-systems.com
This page provides information on the Raw Lighting Render Element.

Overview

The Raw Lighting Render Element stores the effects of direct lighting on scene objects with no diffuse components or GI contribution. This is useful for adjusting the brightness of direct lighting during compositing.

UI Path: ||Render Settings window|| > Render Elements tab > Raw Light

This render element is not supported with V-Ray GPU rendering.

Common uses

The Raw Lighting Render Element is useful for changing the appearance of direct lighting after rendering in a compositing or image editing software. Below are a couple of examples of its use. In this set of render elements, direct lighting affects the back of the alien figure the most, due to a strong back-light in the scene, as well as the top of the circular machine above the figure.

Example images: The Raw Lighting Render Element; The Original Beauty Composite; Brightened Lighting Render Element; Brightened and tinted Lighting Render Element; Brightened Lights; Raised and tinted Lights.

Underlying Compositing Equation

vrayRE_Raw_Light x vrayRE_Diffuse = vrayRE_Lighting
https://docs.chaosgroup.com/pages/viewpage.action?pageId=39814203&spaceKey=VRAY4MAYA
2019-11-12T09:31:23
CC-MAIN-2019-47
1573496664808.68
[]
docs.chaosgroup.com
About Facebook for BlackBerry smartphones

You can use Facebook for BlackBerry smartphones to stay connected with your friends. You can send messages to friends, chat with your friends, invite new friends, view notifications, upload pictures to your Facebook account, and more. You can upload pictures from the media application or picture attachments. If your BlackBerry smartphone has a camera, you can also upload pictures that you take with the camera. If you already have a Facebook account, you can use your existing login information to log in to Facebook for BlackBerry smartphones. To get a Facebook account, visit the Facebook website.
http://docs.blackberry.com/en/smartphone_users/deliverables/46513/262750.jsp
2013-12-05T02:54:01
CC-MAIN-2013-48
1386163038307
[]
docs.blackberry.com
Hello there. I was about to pitch in and add a whole other section on basically doing the exact same thing you did with $document, except using the JHTML class. I wasn't exactly sure why you'd use one over the other and thought I'd ask if you knew why this redundancy exists and if one is in some way more helpful. Any ideas? Haelix 23:45, 8 November 2008 (EST)

Haelix, I agree. It appears that the JHTML classes wrap the $document methods, and take care of anything else you might need (like mootools). I have updated the article to reflect this. Mike 21:24, 17 February 2009 (UTC)

what page should we keep ? this one or this one ?

Using the class loader method is the common way:

JHTML::_('script', $filename, $path);
JHTML::_('stylesheet', $filename, $path);
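For comparison, the lower-level $document approach discussed here would look roughly like the snippet below (Joomla 1.5-era API; the file paths are placeholders) — JHTML::_('script', ...) and JHTML::_('stylesheet', ...) essentially wrap calls like these:

<?php
// Get the current document object and attach assets directly.
$document = JFactory::getDocument();
$document->addScript(JURI::root(true) . '/media/example/example.js');
$document->addStyleSheet(JURI::root(true) . '/media/example/example.css');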
http://docs.joomla.org/index.php?title=Talk:Adding_JavaScript_and_CSS_to_the_page&oldid=26565
2013-12-05T02:42:46
CC-MAIN-2013-48
1386163038307
[]
docs.joomla.org
Users

Users are the employees of your organisation who can log in, access, and perform the relevant tasks in JoForce. You can add as many users as you need.

Adding New User

As an admin user you can easily add new users to your JoForce. To add a new user, click on the Profile icon → Settings → navigate to the User Management section → Users → Add User. By default, the list of active users is shown here. Add the following field values. JoForce lets you add various information about your user; the info is categorised as follows.

Basic Login info

- User Name - The name of the user (mandatory field)
- Primary Email - The user's email. Use a unique email address that hasn't been used before. (Mandatory field)
- First Name - The first name of your user
- Last Name - Your user's last name (mandatory field)
- Password - Provide the user login password, i.e. the password used by your user to log in to the JoForce account
- Confirm Password - Re-enter the password entered earlier
- Admin - Check Admin to give your user admin access (complete access to JoForce)
- Role - Choose the appropriate role for your user. Based on the role, users are restricted in what information they can access in the JoForce account.
- Default Lead View - Choose the default lead view for your user.

These are some of the fields that are necessary to create a new user. You can also add other information:

- Currency Info - Add the currency-related information
- More Information - Lets you add more info related to your user, including email, phone, and user signature
- User Address Info - Communication information
- User Image - Add an image of your user

Once done, hit Save. Share the credentials with the appropriate user and they can then access their own JoForce account.

Making changes to the user information

Changing information of other users - As an admin user, you can easily edit your own and other employees' account information at any time. To edit the user info, click on the Profile icon → Settings → navigate to the User Management section → Users → click on the appropriate user → Edit button in the upper right corner. Make the necessary changes and hit Save.

Changing their own information - Admins and users can easily change their own details: click on the Profile icon → My Account → Edit. Change the necessary information and click Save.

Changing the user credentials

Change credentials for other users - An admin user can easily change the user login name, password, and access key of any other user. Click on the appropriate user → More and choose whether to change the user login name, password, or access key.

- To change the user name → a popup window prompts; provide the necessary details and then click Save.
- To change the password → in the change password overlay, enter the new password, confirm the new password, and then click Save.
- To change the access key → click Yes on the "New Access key requested" overlay. Once done, the access key will be changed.

This can also be done from the list view of the users: click on the icon → choose the appropriate option and make the changes.

Change their own credentials - Both admins and users can change their password and access key on their own. Click on the Profile icon → My Account → More → choose the option (Change Password or Change Access Key). Make the necessary changes and click Save. When a non-admin user changes any of the above info, the changes are automatically updated for the admin as well.

Deleting a user

An admin user can delete any JoForce user at any time. To do so, click the icon in the list view → choose Delete, or click on the appropriate user → More → Delete → in the delete overlay → click Yes. After this, the "Transfer records of user" window prompts; from here you can easily transfer all the information related to this user to another user.

- Transfer records to user - Choose the desired user from the drop-down to whom you want to transfer all the information.
- Delete User Permanently - Check this to delete the user permanently; otherwise the user will be marked as Inactive and can no longer access the account, and you can keep it for future reference.

You can check all your inactive users using the InActive User option at the top. From here you can easily restore or delete the user: click on the icon in the list view → Yes.

- Restore User - All the information is restored.
- Delete - The user is deleted permanently and can't be restored.
https://docs.joforce.com/administrative-guide/user-management/users
2020-05-25T05:53:58
CC-MAIN-2020-24
1590347387219.0
[]
docs.joforce.com
Introduction

Medialibrary is a Laravel (5.5 and up) package that can associate all sorts of files with Eloquent models. It provides a simple, fluent API to work with. Are you a visual learner? Then watch this video that demonstrates what the package can do.

Here are some quick code examples:

$newsItem = News::find(1);
$newsItem->addMedia($pathToFile)->toMediaCollection('images');

It can also directly handle your uploads:

$newsItem->addMediaFromRequest('image')->toMediaCollection('images');

Want to store some large files on another filesystem? No problem:

$newsItem->addMedia($smallFile)->toMediaCollection('downloads', 'local');

The package can also generate converted images, such as thumbnails; retrieving the URL of a conversion looks like this:

$newsItem->getMedia('images')->first()->getUrl('thumb');
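Putting the calls above together, a small end-to-end usage sketch (using only the methods shown here; the model, collection names, and conversion name are placeholders) might look like this:

$newsItem = News::find(1);

// Attach a file from disk and one from the current request.
$newsItem->addMedia($pathToFile)->toMediaCollection('images');
$newsItem->addMediaFromRequest('image')->toMediaCollection('images');

// Retrieve the first image in the collection and its URLs.
$media = $newsItem->getMedia('images')->first();
$originalUrl = $media->getUrl();        // URL of the original file
$thumbUrl = $media->getUrl('thumb');    // URL of a registered 'thumb' conversion, if defined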
https://docs.spatie.be/laravel-medialibrary/v6/introduction/
2020-05-25T03:32:46
CC-MAIN-2020-24
1590347387219.0
[]
docs.spatie.be
The first step is to get your assets into a format suitable for what you want to do. It's very important to set up a proper workflow from your 3D modeling application, such as Autodesk® 3ds Max®, Autodesk® Maya®, Blender, and Houdini, into Unity. When exporting assets from 3D modeling applications for import into Unity, you need to consider:

Your project scale, and your preferred unit of measurement, play a very important role in a believable Scene. In many "real world" setups, we recommend you assume 1 Unity unit = 1 meter (100 cm), because many physics systems assume this unit size. For more advice, see the Art Asset best practice guide.

To maintain consistency between your 3D modeling application and Unity, always validate the imported GameObject scale and size. In your 3D modeling application, create a simple 1x1x1 m cube and import it into Unity. In Unity, create a default Cube (GameObject > 3D Object > Cube). This is 1x1x1 m. Use this as a scale reference to compare with your imported model. These cubes should look identical when the Transform component's Scale property is set to 1,1,1 in the Inspector.

Note that when a material's Shader reads smoothness data from the texture's alpha channel, the smoothness of the material with PNG textures is unexpectedly inverted, while the smoothness of the material with TGA textures is normal. Also be aware that Unity reads tangent space normal maps with its own interpretation, which can differ from that of your 3D modeling application, for example Autodesk® 3ds Max®.
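If you want to automate the scale comparison described above, a rough editor sketch like the following can log the world-space size of whatever is selected so you can compare it against the 1x1x1 m reference cube (the menu path and class name here are arbitrary, not part of Unity's documentation):

using UnityEngine;
using UnityEditor;

public static class ScaleCheck
{
    [MenuItem("Tools/Log Selected Object Size")]
    static void LogSize()
    {
        GameObject go = Selection.activeGameObject;
        if (go == null) return;

        // Combine the bounds of all renderers under the selected object.
        Bounds bounds = new Bounds(go.transform.position, Vector3.zero);
        foreach (Renderer r in go.GetComponentsInChildren<Renderer>())
            bounds.Encapsulate(r.bounds);

        // For the 1x1x1 m test cube this should print roughly (1.0, 1.0, 1.0).
        Debug.Log(go.name + " world-space size: " + bounds.size);
    }
}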
https://docs.unity3d.com/2018.4/Documentation/Manual/BestPracticeMakingBelievableVisuals1.html
2020-05-25T06:15:37
CC-MAIN-2020-24
1590347387219.0
[]
docs.unity3d.com
The EditorWindow which currently has keyboard focus. (Read Only) focusedWindow can be null if no window has focus. See Also: mouseOverWindow, Focus. Focus other windows with a mouse click. using UnityEngine; using UnityEditor; // Prints the name of the focused window to a label public class FocusedWindow : EditorWindow { [MenuItem("Examples/FocusedWindow")] public static void Init() { GetWindow<FocusedWindow>("FocusedWindow"); } void OnGUI() { GUILayout.Label(EditorWindow.focusedWindow.ToString()); } }
https://docs.unity3d.com/kr/2018.3/ScriptReference/EditorWindow-focusedWindow.html
2020-05-25T05:20:25
CC-MAIN-2020-24
1590347387219.0
[]
docs.unity3d.com
A precomputed plan can either be passed in via the plan argument or used as a context manager. The plan argument is currently experimental and the interface may be changed in a future version. The get_fft_plan() function has no counterpart in scipy.fftpack. The boolean switch cupy.fft.config.use_multi_gpus also affects the FFT functions in this module; see FFT Functions. Moreover, this switch is honored when planning manually using get_fft_plan().
https://docs-cupy.chainer.org/en/latest/reference/fftpack.html
2020-05-25T06:06:59
CC-MAIN-2020-24
1590347387219.0
[]
docs-cupy.chainer.org
cupyx.scipy.fftpack.ifft¶

cupyx.scipy.fftpack.ifft(x, n=None, axis=-1, overwrite_x=False, plan=None)¶

Compute the one-dimensional inverse FFT.

Parameters include plan (cupy.cuda.cufft.Plan1d or None) – a cuFFT plan for transforming x over axis, which can be obtained using:

plan = cupyx.scipy.fftpack.get_fft_plan(x, axis)
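As a quick usage sketch (the array shape and dtype here are arbitrary), a plan obtained from get_fft_plan() can either be passed explicitly or used as a context manager, as described above:

import cupy as cp
import cupyx.scipy.fftpack as fftpack

x = cp.random.random(1024).astype(cp.complex64)

plan = fftpack.get_fft_plan(x)   # plan for transforming x along the last axis

# Option 1: use the plan as a context manager.
with plan:
    y = fftpack.fft(x)
    x_back = fftpack.ifft(y)

# Option 2: pass the plan explicitly.
y2 = fftpack.fft(x, plan=plan)
x_back2 = fftpack.ifft(y2, plan=plan)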
https://docs-cupy.chainer.org/en/latest/reference/generated/cupyx.scipy.fftpack.ifft.html
2020-05-25T05:50:34
CC-MAIN-2020-24
1590347387219.0
[]
docs-cupy.chainer.org
Software Download Directory

There are many ways to get help using frevvo in addition to the product documentation found here. If you are using Live Forms™ for Confluence you can find that documentation here.

Access frevvo's Public Solutions Portal to find answers to your questions. Coming soon: customers will be able to log in to the Support Portal to open support cases, view case status, and add comments. Join an upcoming frevvo webinar. Use the forum for posting questions, getting answers, and sharing information with other users.

frevvo offers online classes and seminars as well as customized on-site training to suit your company's specific needs. Classes are offered for frevvo partners and end customers. See the Product Training page. You may also contact us for more information.

We try very hard to solve all issues prior to release. Sometimes problems slip through that will be solved in a patch or future product release. Browse the Forum for known issues and answers. Our blog has helpful usage tips and is accessible at Blog.

If you haven't found an answer to your question, please contact us. We love hearing from our users and will do everything we can to help you as quickly as possible. Here's how: you can post questions, get answers, and make suggestions using the Forum, or contact us by filling in a form.

frevvo software subscriptions and one-time license purchases include the Standard Support level. Premium support is an available upgrade option. Project assistance is available from frevvo's Client Services team. Please Contact Us.
https://docs.frevvo.com/d/display/frevvo/Getting+Help
2020-05-25T04:21:09
CC-MAIN-2020-24
1590347387219.0
[array(['/d/images/icons/linkext7.gif', None], dtype=object)]
docs.frevvo.com
CPTUI-Extended Frequently Asked Questions Do I need to be running WordPress multisite to make use of Custom Post Type UI Extended? No, you do not. If your primary interest is the shortcode builder, then you will be just fine. Multisite is not a requirement. The only requirement is Custom Post Type UI Will installing Custom Post Type UI Extended affect my post types and taxonomies that I created using Custom Post Type UI? No, it will not have any effect on existing registered post types and taxonomies. The network-level settings for Custom Post Type UI Extended creates a second saved option that registers just those across all the sites. Is the shortcode builder limited to just post types registered with Custom Post Type UI? No, we have made it inclusive of all public registered post types available on your website. I don't see the button to use the shortcode builder, where is it at? The appearance of the button is dependent on which version of the post editor you are presently looking at. There are two available with out-of-the-box WordPress. The visual editor and then the text editor. If you are on the visual editor tab, it will look like the button shown below. If you are on the text tab, it will look like this one. What is the difference between single site settings and network-wide settings? When working with Custom Post Type UI Extended, this will be an important distinction to make. You will be able to create post types and taxonomies in both levels of your website. Single site settings Creating post types and taxonomies from within an individual subsite are going to do so only for the site you created them in. This is the exact same behavior as with Custom Post Type UI without the Extended addon enabled. You can have many different post types and taxonomies created for individual subsite needs with this. Network-wide site settings Creating post types and taxonomies from the network admin panel is going to do so for every site in your network. CPTUI Extended will handle the registration of them for you and no extra action is needed from you. With that, tweaking the settings for the network-wide settings is just as easy and propagates across all automatically as well. How do I hide or show the Dashboard widget? When you are viewing your Dashboard, if you want to hide or re-show the Pluginize-provided Dashboard widget, you will need to click the "Screen options" tab at the top of the page, and either check or uncheck the "Pluginize Support" option presented. If you do not have a "Screen options" tab available, check and see if you have any plugins installed that may be visually removing it for you.
https://docs.pluginize.com/article/39-frequently-asked-questions
2020-05-25T05:58:21
CC-MAIN-2020-24
1590347387219.0
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56e826ea90336026d871841e/images/56fb2182c6979111dc399fa7/file-ZNSGAE8qKr.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56e826ea90336026d871841e/images/56fb219fc6979111dc399fa8/file-9GFjLLO1ZM.png', None], dtype=object) ]
docs.pluginize.com
This guide assumes you have already completed the Creating a Simple Model guide and are familiar with creating a reducer and loading your data into a TFrame. To start, we are going to create a widget. To do so, navigate to the TFrame you are trying to visualize, then right-click on the column you want to visualize. Select the independent and dependent variables for your chart as well as the plot type, then click on Save Widget. Once your chart has been created, click on New Dashboard in the left-hand menu. Once you have navigated to the dashboard page, click on Insert New Widget in the bottom-right menu. When the panel opens, click on the + button. Now that you know how to create widgets with Terrene, we suggest checking out the full widgets documentation to learn about the different widgets Terrene offers.
https://docs.terrene.co/guides/creating-a-simple-dashboard
2020-05-25T03:43:48
CC-MAIN-2020-24
1590347387219.0
[]
docs.terrene.co
PyPickupBot's config is separated into two files: init.cfg and config.cfg. The former typically includes general bot configuration (nickname, command prefix, ...), connection details such as IRC server and channels to join, and most importantly modules to load. The latter is where all plugin configuration goes, more or less everything else.

You should first create a directory to contain the config files:

mkdir ~/my_shiny_pypickupbot_config/
cd ~/my_shiny_pypickupbot_config/

Note that it will also host the bot's persistent database where a lot of stuff will be saved, for instance started games logs or channel bans. This makes it convenient because the config directory effectively becomes all the bot's knowledge, making it easy to save or transfer. Always remember, however, that you are likely to have a password or two in the config (NickServ/Q authentication).

As mentioned earlier, init.cfg only contains a few items.

[Bot]
nickname=ZeroBot

[Server]
host=irc.quakenet.org
port=6667
channels=#qlpickup.eu
username=ZeroBot
password=secret

[Modules]
modules+=q_auth

We will describe here the most important ones.

[Server]
host=irc.quakenet.org
port=6667
channels=#example

If you plan on running your bot on a network that allows authenticating via server username/password (like FreeNode), or if you connect to a bouncer, these might be of use:

[Server]
...
username=johnbotaccount
password=secret

PyPickupBot's main functionality comes from modules (also known as plug-ins). In fact, without modules, it could only join a channel then sit there and do nothing. Fortunately, several modules are preloaded. Translated into PyPickupBot's config, it would read like this (you don't have to type it):

[Modules]
modules=ban, chanops, help, info, pickup, pickup_playertracking, topic

If you plan on running your bot on a network using UnrealIRCd, such as QuakeNet, you can use the q_auth module to allow your bot to auth with Q before joining channels. You can tell PyPickupBot to load q_auth in addition to the preloaded modules like this:

[Modules]
modules+=q_auth

You don't need to repeat the preloaded modules at all in your config, as long as you use the modules+= syntax.

config.cfg however contains most of the settings:

[Q Auth]
username=JohnBot
password=secret

[Pickup games]
tdm= 4v4 TDM
ctf= 4v4 CTF

[Pickup: tdm]
captains=2
players=8

[Pickup: ctf]
captains=2
players=8

If you enabled the q_auth module as described earlier, you need to configure it. Typically, you only need to enter the account's username and password:

[Q Auth]
username=JohnBot
password=secret

First, define in the Pickup games section the games that can be played:

[Pickup games]
ctf= 5v5 CTF
tdm= 4v4 TDM

Each config option here is treated as a game. The left-hand value is the short name for the game, which people will use to join it. The right-hand value is a title for the game. It can be anything you like; it isn't interpreted anyway. It is shown in the output of !pickups and when a game fills up and starts.

For each game, you can have an optional section defining the game's settings:

[Pickup: ctf]
captains=2
players=10

You can omit it if the game's settings match the defaults, which are as follows:

[Pickup: tdm]
captains=2
players=8

Your bot is now configured! To start the bot, run the pickupbot script from the main folder as follows:

/path/to/pickupbot -c ~/my_shiny_pypickupbot_config/

It will do its thing and connect, currently with a lot of output you probably don't care about.
You usually want to run it within GNU screen or a similar application, so the bot can continue running when you close the terminal or when you disconnect from the server the bot is hosted on.
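For example, with GNU screen (assuming it is installed; the session name is arbitrary), that could look like:

screen -S pickupbot                              # start a named screen session
/path/to/pickupbot -c ~/my_shiny_pypickupbot_config/
# press Ctrl-A then D to detach; the bot keeps running
screen -r pickupbot                              # reattach to the session later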
https://pypickupbot.readthedocs.io/en/latest/quick/config.html
2020-05-25T03:38:55
CC-MAIN-2020-24
1590347387219.0
[]
pypickupbot.readthedocs.io
Activator 6.0.0 Administrator Guide

Configure the UI

Use the Configure UI connection page to:

- Specify how browsers connect to the Axway Activator Server
- Configure user Single Sign On (SSO) and Single Sign Off

Browser connections

Users connect to Activator using a browser, which opens by default via HTTPS and is secured accordingly. Use the following address to access the Activator UI: https://<hostname>:6443/ui

A self-signed certificate for the HTTPS UI server is generated at startup time. You should replace the self-signed certificate with a certificate issued by a CA.

Configure Single Sign On (SSO)

You can work on the Configure UI connection page to set up Single Sign On (SSO) and Single Log-off (SLO) for user browser connections.

Open the Configure UI connection page

To open the Configure UI connection page, go to the System management page and click the Configure UI connection link in the Related tasks list. In Activator, HTTPS port 6443 is available by default. As an option, you can enable HTTP after logging into the Activator UI, from Configure UI connection > UI connections made via HTTP. See Configure HTTP.

HTTPS/HTTP connections

HTTPS connections are required if you are implementing Single Sign On (SSO). HTTPS connections are typically made using port 6443 and are secured by default. You can modify the ports required for both HTTPS and HTTP access. If you configure Activator UI access over HTTPS only, access to the HTTP port (default 6080) is forbidden for UI access. When you use HTTPS, you must add a server certificate as described in the following procedures. You have options to:

- Use the self-signed certificate, as configured by default, or replace it with a CA certificate
- Import a certificate and private key from a file
- Retrieve a certificate from a certificate authority

This certificate secures the connection between browsers and the server. If you select HTTPS and also select the option Require client authentication, you must add the client's trusted root certificate.

Username and password handling

Activator user names are not encrypted; however, all user passwords are encrypted if they are stored to a disk or database. During the UI connection authentication phase, these passwords are exchanged over the network over an encrypted communications channel.

Configure HTTP

To enable the browser to log on using HTTP, use the following procedure:

1. Click System management on the toolbar to open the System management page.
2. Click the task Configure UI connection near the bottom of the page to open the Configure UI connection page. When you open this page for the first time, the secure connection via HTTPS is configured by default. You can accept the default or add the configuration for connecting via HTTP. You cannot disable connections via HTTPS until you have configured HTTP. After HTTP has been configured, you can return to this page.
3. On the General tab, select UI connections made via HTTP. Port 6080 is displayed by default; however, you can change the number as your situation requires.
4. Click Save.
5. Restart the server to complete the configuration.
6. Inform users of the URL needed to connect from a browser to the user interface. If you use the suggested port of 6080, the URL is http://<host>:6080/ui, where <host> is the fully-qualified domain name or IP address of the computer running the server.
Configure HTTPS

Use this procedure to configure the server so browsers can log on to the user interface via HTTPS. If you changed the default configuration from HTTPS to HTTP, and then decide you want HTTPS again, you can configure HTTPS using the following procedure.

1. Accept the default HTTPS port (6443), or change the number as your situation requires.
2. Optionally, select the Override SSL and TLS cipher suites option for overriding a cipher suite. Select this option, and use the Add and Remove buttons to specify the cipher suites that are supported for the embedded server. If none are selected, all cipher suites are supported by default. The default is less secure than specifying only certain cipher suites. The default order in the Available column is the preferred order of use. Once ciphers are moved to the Selected column, you can arrange the order. Activator uses the ciphers in the order they are listed. Ciphers provide varying levels of security. Some provide the highest levels of security, but require a large amount of computation for encryption and decryption. Others are less secure, but provide rapid encryption and decryption. The length of the key used for encryption affects the level of security: the longer the key, the more secure the data.
3. Click Save.
4. Select the Personal certificates tab and click Add a certificate to open the certificate wizard. You can add a self-signed certificate or replace it with a CA certificate. The certificate has a public-private key pair and is used to secure connections between browsers and the server. If you choose to add a self-signed certificate, you can accept all default values in the certificate wizard. The steps for adding a server certificate are the same as adding a certificate for a community. See Add a certificate for more information.
5. After you add a certificate, the General tab displays again. Select the Personal certificates tab again. The certificate you added in an earlier step is listed. You can click the certificate's name to display details. If there is more than one certificate, select the certificate you want as the default and click Save.
6. Check your settings on the General tab again.
7. Restart the server to complete the configuration.

Switch HTTPS off and on

Once connections via HTTP or HTTPS have been added, you can return to the Configure UI connection page and select to allow browser connections via both HTTP and HTTPS, or HTTP only. If you change the configuration, click Save. You must also restart the server.

Configure SSO SAML

This topic describes SSO configuration in the Activator user interface. The SSO connection is always secured (HTTPS). The cipher settings that are configured for HTTPS apply when a user attempts to connect via the SSO port. See Configure HTTPS. For general information about the SSO functionality, see Single sign-on. For an example of a configuration of Activator with an IdP, see Example SSO IdP configuration.

SSO SAML configuration prerequisites

General prerequisite: A SAML Identity Provider (IdP) must be installed and running on your network.

To complete the configuration of the Identity Provider application: The public certificate from the Service Provider (Activator) for validating the signature of the SAML Requests. This is the certificate that has been added to the Identity provider certificates tab of the Configure UI connection page and has been selected as the signing certificate in the SSO SAML Configuration Details page.
The Activator assertion consumer URL: https://<interchange>/ui/core/SsoSamlAssertionConsumer The Interchange Metadata URL (optional): https://<interchange>/ui/core/SamlSsoMetadata This URL is useful if the Identity Provider implements the usage of Metadata Profiles. The Interchange Logout Service endpoint URL (optional): https://<interchange>/ui/core/SsoSamlLogoutRequest Required only if the support for SLO is supported and implemented by the IdP. To configure Activator: A private certificate to decrypt assertions from the IdP The public certificate from the IdP for validating the signature of the SAML Response and/or the signature of the assertions The HTTP POST SSO Binding URL to the IdP The signing algorithm the IdP will use (SHA1 or SHA256) To configure Single Log Out (SP or IdP initiated) in Activator, you must provide the following IdP URLs: HTTP Redirect Single Logout Service Response endpoint URL (IdP-initiated logout) HTTP POST Binding Single Logout Service endpoint URL (SP-initiated logout) HTTP POST Single Logout Service endpoint Response URL (IdP-initiated logout) Logout redirect URL. Users are navigated to this URL users after initiating the logout. For more information on HTTP-Redirect and HTTP-POST binding configuration, see SSO SAML configuration fields and options. To enable the SSO SAML connection in Activator Make sure that HTTPS is enabled. See Configure HTTPS. Select the option UI connections via SSO with SAML HTTP Post Binding over HTTPS. Accept the default HTTPS SAML Port (6643) or enter an alternative port number. Click SAML Configuration to open the SSO SAML configuration details page. Complete the tabs and fields, described below. Click Save. Restart Activator. SSO SAML configuration fields and options The SSO SAML configuration details page displays the following tabs, fields, and options. General Configuration tab Identity Provider section Identity provider HTTP POST URL – The URL for sending identification requests to the IdP. Single logout service – (Optional) Single logout provides for the simultaneous termination of all user sessions for the browser that initiated the logout. Closing all user sessions prevents unauthorized users from gaining access to resources at the SPs. Activator supports both HTTP-Redirect and HTTP-POST binding for single logout. Response location (HTTP-Redirect) – Specify the IdP’s Single Logout Service endpoint URL. This is the URL where the IdP receives the LogoutResponse in the case of an IdP-initiated logout with HTTP.Location (HTTP-POST) – Specify the IdP’s Single Logout Service endpoint URL. This is the URL for sending LogoutRequests in the case of an SP-initiated logout with HTTP POST binding.Response location (HTTP-POST) – Specify the IdP’s Single Logout Service endpoint URL. This is the URL where the IdP receives the LogoutResponse in the case of an IdP-initiated logout with HTTP POST binding. Identity provider certificate – Specify the public certificate from the IdP used to validate the signature of the SAML Response and/or the signature of the assertions. Items in the drop-down list are the certificates previously added to the Identity provider certificates tab of the Configure UI connection page. Service Provider section Service provider – Activator ID for communications with the IdP. Metadata URL – (text display) The URL that is displayed exposes the following Activator configuration settings. 
You can use the metadata configuration information to provision the IdP environment to provide Single Logout Service:SSO Service Provider certificate used by Activator to sign log-off requests and responses to the IdP.Single Logout Service for HTTP-POST binding, where the IdP sends the LogoutRequest and LogoutResponse to Activator.Single Logout Service for HTTP-Redirect binding, where the IDP sends the LogoutRequest and LogoutResponse to Activator.Assertion Consumer Service HTTP-POST binding for the Activator endpoint that consumes assertions.Assertion Consuming Service requested attribute names. Assertion Consumer Service – (text display) The Activator endpoint URL for consuming assertions. Single logout service Location (HTTP - Redirect/HTTP - POST) – (text display) The Activator Single Logout Service endpoint. This is where the IdP sends the LogoutRequest to Activator.Response location (HTTP - Redirect/HTTP - POST) – (text display) This field displays the SP’s Single Logout Service endpoint. This is where the IdP sends the LogoutResponse to Interchange. Logout redirect URL – The URL of the IdP logout page. Service provider certificate – Activator certificate (select from drop-down list). Activator uses this certificate to sign the authentication requests sent to the Identity Provider. The drop-down list displays certificates that have been previously added to the Identity provider certificates tab of the Configure UI connection page. Signing algorithm for login and logout – Select either SHA1 or SHA256. Sign authentication requests sent to Identity Provider – Select this option if the IdP requires signing of authentication requests. Reject responses with assertions that are not encrypted – Select this option to require encryption of responses from the IdP. User Attributes tab Use the radio buttons on this tab to define how Activator obtains user attributes from the Identity Provider. Activator compares the values that it retrieves to its defined users. Select an assertion option for each of the following user attributes: User ID Username Useremail The assertion options are for these user attributes: Assertion from subject identifier Assertion from assertion attribute Activator receives the SAML subject identifier with the specified assertion subject identifier or assertion attributes from the Identity Provider. The Service Provider uses the assertion subject identifier or another assertion attribute to retrieve the user identifier. Roles Mapping tab Every attribute has its own unique representation in a SAML attribute assertion, to ensure that there are no misinterpretations or communication failures. SAML exchanges rely on consistent attribute naming to deliver information about users in a mutually understood way between the IdP and the SP. Use the mapping wizard, available on this tab, to map predefined Activator roles to the roles defined in the Identity Provider. This is how you align the roles and permissions that you create in Activator with the SAML assertion attributes. Procedure: On the Roles Mapping tab, click Add a new role mapping to open the Map SSO roles wizard. Complete the fields:Role name – From the drop-down list, select the name of the Activator role you want to map to a SAML assertion attribute.Mapped to SAML Assertion attribute – Complete the fields to fully identify the assertion attribute to be mapped to the selected Activator role:NameFriendly nameValue Click Finish. Repeat the steps for each role you want to map. 
Manage IdP certificates For an example of a configuration of Activator with an IdP, see Example SSO IdP configuration. Use the Identity provider certificates tab of the Configure UI connection page to manage the certificates that Activator uses to sign the SAML messages that are sent from Activator to the IdP. On this tab, you can perform the following procedures: Import an IdP certificate to use for signing the SAML messages sent to the IdP. Remove IDP certificates, for example, if the certificate is not in use or is disabled. Import an IdP certificate Prerequisite: You must first obtain the public certificate from the IdP for validating the signature of the SAML Response and/or the signature of the assertions. Procedure: On the Configure UI connection page, select the Identity provider certificates tab. From the Related tasks list, click Add an identity provider certificate to open the Add a certificate wizard. On the Add a certificate page of the wizard, select Import a certificate from a file, and click Next. On the Locate the certificate file page of the wizard, use the Browse... tool to select the certificate file to use. Click Next to view the certificate details. Optionally enter a meaningful name in the certificate Name field. This name can help you tell one certificate from another. Click Finish to import the certificate. Delete an IdP certificate On the Configure UI connection page, select the Identity provider certificates tab. From the list of certificates, on the line of the certificate you want to remove, click Delete. Control the IdP session validation behavior The System properties page enables system administrators to configure a certain number of Activator trading engine parameters. One of the tunable properties on this page is: sso.saml.reauthenticateOnSessionTimeout If the property is set to "true": By default this property is set to "true". As long as the IdP session is valid, the session of the Activator user is refreshed and kept valid as well. If the property is set to "false": If you set the property to false, when the Activator user session expires, Activator forces a logout of the user to invalidate both the Activator and IdP user sessions. Caution Incorrectly modifying values on this page can severely degrade product behavior. Do not modify values on this page without explicit guidance from Axway support. To access the System Properties page, point your browser to https://<hostname>:6443/ui/core/SystemProperties. Related Links
https://docs.axway.com/bundle/Activator_600_AdministratorsGuide_allOS_en_HTML5/page/Content/System_mgt/sys_mgt_ui_connection.htm
2020-05-25T04:44:50
CC-MAIN-2020-24
1590347387219.0
[]
docs.axway.com
3.0.0 API API is the root concept defined and used by Gravitee.io. You can see it as a starting point to expose services through the gateway. Publisher A publisher (also called API publisher) is one of the two concrete roles defined in the platform. This role represents someone who can declare an API and manage it. Consumer A consumer (also called API consumer) is the role defined to consume an API. An API can only be consumed after subscribing to it.
https://docs.gravitee.io/apim/3.x/apim_overview_concepts.html
2020-05-25T04:53:30
CC-MAIN-2020-24
1590347387219.0
[]
docs.gravitee.io
Customizing shortcode templates with CPTUI-Extended This is only for CPTUI-Extended 1.4.0 or higher With the release of CPTUI-Extended version 1.4.0, our users are now able to fully customize the templates provided with the plugin. If you are familiar with customizing templates the way WooCommerce, The Events Calendar, and similar plugins do, you will feel right at home with CPTUI-Extended's templates. Where to find CPTUI-Extended's template files. You can find the template files inside the available zip file and in cptui-extended/templates/. Each will relate to a different shortcode available in the UI when inserting the shortcode. Where to place the template files. To override the templates safely, without losing changes on the next update, they should be put in a folder inside your active theme folder. The name of the folder should be cptui-extended. After that, simply upload your modified template file into that folder. In order to successfully override, make sure you are leaving the file name the same (a command-line sketch is shown at the end of this article). What now? From here, you can edit your copies of the template files and those changes should be used when using the associated shortcode. Make sure to read the comments and notes provided in the template so you can get a good idea of what all may be needed or what's available to use for your custom output. Without making use of some sort of conditional checking, changes will be applied to all shortcode instances using the template.
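As a sketch, overriding one template from the command line might look like this. The template file name cptui-example-template.php and the theme folder your-theme are placeholders, not actual file names shipped with the plugin; substitute the real template you want to override and your active theme's folder.

# Create the override folder inside the active theme
mkdir -p wp-content/themes/your-theme/cptui-extended

# Copy the plugin template into the theme, keeping the same file name
cp wp-content/plugins/cptui-extended/templates/cptui-example-template.php \
   wp-content/themes/your-theme/cptui-extended/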
https://docs.pluginize.com/article/138-customizing-shortcode-templates-with-cptui-extended
2020-05-25T05:37:41
CC-MAIN-2020-24
1590347387219.0
[]
docs.pluginize.com
Problem: The XMLSerializer has a problem serializing generic IList<> instances. Solution: Firstly, you can use ArrayList as in the past – it works and you can serialize these objects. However, if you want to use Generics you can use the pattern described below to make your persistent classes ready for the XMLSerializer. The XMLSerializer accesses, via the reflection API, all the public fields and public properties to retrieve the data; for creating a new object it needs a public constructor, and here again it accesses all the public fields and properties via reflection. On the other hand, Telerik OpenAccess ORM wraps all accesses to fields with generated methods to perform lazy loading and change tracking. What this means is that read or write access from the XMLSerializer to public fields (unlike the access to public properties) is not noticed by the Telerik OpenAccess ORM tool, since the generated methods are not used but bypassed by reflection. This poses a problem when a field is not initialized (refer to Transparent Persistence for more information about lazy loading and FetchGroups for more information about fetch groups). When an object is created by the XMLSerializer it can also be a problem, since the Telerik OpenAccess ORM Library cannot see that the field has been changed, and it is possible that a reference is assumed to be not set – even if it is. To avoid such problems, the class must be implemented very carefully. All fields must be non-public (so they are not touched by the XMLSerializer), and the XMLSerializer must get/set all its information via public properties. As described above, the IList<xxx> interface from OpenAccess ORM cannot be directly converted to a List<xxx>, so the objects from the persistent IList<xxx> field must be copied to a transient List<xxx>. When is the best point to do it? It can be done within IInstanceCallbacks or partly inside the public property. In our example, we use the IInstanceCallbacks for all the needed conversions:
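The code sample referenced above is not reproduced in this extract. A minimal sketch of the pattern it describes might look like the following; the class, field, and method names are illustrative, the persistent/transient configuration is assumed to be done in the OpenAccess mapping (not shown), and the two copy methods stand in for whatever IInstanceCallbacks hooks the original example wires them to.

using System;
using System.Collections.Generic;
using System.Xml.Serialization;

public class Customer
{
    // Persistent field: non-public, so the XmlSerializer never touches it
    // directly and OpenAccess can intercept access for lazy loading.
    private IList<string> orderNumbers;

    // Transient copy used only for XML serialization.
    private List<string> serializableOrderNumbers = new List<string>();

    // The XmlSerializer reads and writes this public property, which exposes
    // only the transient list.
    [XmlArray("OrderNumbers")]
    public List<string> OrderNumbers
    {
        get { return serializableOrderNumbers; }
        set { serializableOrderNumbers = value; }
    }

    // Call before serialization (for example from a post-load callback):
    // copy the persistent IList into the transient List.
    public void CopyPersistentToTransient()
    {
        serializableOrderNumbers = orderNumbers == null
            ? new List<string>()
            : new List<string>(orderNumbers);
    }

    // Call after deserialization (for example from a pre-store callback):
    // push the transient data back into the persistent IList.
    public void CopyTransientToPersistent()
    {
        orderNumbers = serializableOrderNumbers;
    }
}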
https://docs.telerik.com/help/openaccess-classic/the-.net-data-model-xml-serialization.html
2020-05-25T06:06:09
CC-MAIN-2020-24
1590347387219.0
[]
docs.telerik.com
Running the Full ICGS Workflow from FASTQ Files Using the new integrated Kallisto pseudoalignment and TPM quantification method, AltAnalyze completely automates gene expression analysis from beginning to end. Bulk and Single-Cell RNA-Seq Analysis Video illustrating how easy it is to analyze transcriptome datasets in AltAnalyze. Here we analyze a published single-cell gene expression dataset, supplied as a supplemental table, according to known biological groups and using a de novo group discovery workflow.
http://altanalyze.readthedocs.io/en/latest/YouTubeVidoes/
2018-03-17T16:14:38
CC-MAIN-2018-13
1521257645248.22
[]
altanalyze.readthedocs.io
Using IR sensors and packages Use IR sensors for rapid response to and scoping of incidents. Incident response can require computationally intensive hashing algorithms and extensive file system scans. For this reason, IR sensors are written with a narrow scope to minimize processing and retrieve specific information within seconds. Few search operations are recursive; most sensors perform a hexadecimal search or hash match on a single file and target a single directory. This strategy takes advantage of the Tanium linear chaining topology to rapidly deliver critical information at enterprise scale. About deploying parameterized sensors as actions Sensors that require extensive computation across the security enterprise, for example, sensors that hash files and perform binary searches, are deployed as actions. Deploying parameterized sensors as actions increases the speed of larger tasks, including: - Searching across directories for binary data - Matching the hash values of files across many directories - Hashing and matching executables and their loaded modules Actions are not processed one at a time. Short actions run at the same time as longer actions. Because they are not strictly queued, shorter actions are not delayed by the execution of more extensive actions. Actions do not time out. Because the processing time of an action depends on the nature of the task, an action is considered complete when the job begins. The results, however, might not be immediately available. When you deploy the action, you must provide an IR job ID. Then, you can view results files from Windows-based endpoints with the Incident Response Job Results sensor by specifying the job ID as a parameter. You can retrieve and copy job results files to a central location by using one of the platform-specific collection actions. Before you begin - The Tanium Incident Response solution must be installed. For more information, see Install Tanium Incident Response. - The IR tools must be deployed to the endpoints. For more information, see Deploying IR tools. Deploy a parameterized sensor as an action - Identify the endpoints that you want to target. - Ask a question to return a set of endpoints. - Select the endpoints and click Deploy Action. - Specify the parameterized sensor. - Type the name of the parameterized sensor in the Deployment Package field. For example, type: Incident Response - Search for Files. - Specify parameters for the sensor. For example, for the Incident Response - Search for Files sensor, indicate a Pattern of files to match and the IR Job ID. The IR Job ID can be any value that you choose. Use this value to get the results of the action. The value must be unique. If two actions share the same job ID, the files identified by those actions might be destroyed. Remember the value so that you can retrieve the job results later. - Complete deployment of the action. Click Deploy Action. - Get the results of the parameterized sensor action. Reference: IR sensors and packages For details about the parameters for each IR sensor and package, see Tanium Support Knowledge Base: Tanium IR Reference (login required). Last updated: 2/13/2018 3:25 PM | Feedback
https://docs.tanium.com/ir/ir/sensors.html
2018-03-17T16:15:58
CC-MAIN-2018-13
1521257645248.22
[]
docs.tanium.com
Globals Category: Core Description Contains global variables accessible from everywhere. Use the normal Object API, such as “Globals.get(variable)”, “Globals.set(variable,value)” or “Globals.has(variable)” to access them. Variables stored in engine.cfg are also loaded into globals, making this object very useful for reading custom game configuration options. Member Function Description - void add_property_info ( Dictionary hint ) Add a custom property info to a property. The dictionary must contain: name: String and type: int, and optionally hint: int and hint_string: String. Example:
Globals.set("category/property_name", 0)
var property_info = {
    "name": "category/property_name",
    "type": TYPE_INT,
    "hint": PROPERTY_HINT_ENUM,
    "hint_string": "one,two,three"
}
Globals.add_property_info(property_info)
Clear the whole configuration (not recommended, may break things). Return the order of a configuration value (influences when saved to the config file). Convert a localized path (res://) to a full native OS path. Return true if a configuration value is present. If returns true, this value can be saved to the configuration file. This is useful for editors. Convert a path to a localized path (res:// path). Set the order of a configuration value (influences when saved to the config file). If set to true, this value can be saved to the configuration file. This is useful for editors.
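As a quick usage sketch, reading one of these values from a script might look like this; "game/start_level" is an illustrative setting name defined in engine.cfg, not a built-in option.

# Read a custom option loaded from engine.cfg, with a fallback default.
var start_level = 1
if Globals.has("game/start_level"):
    start_level = Globals.get("game/start_level")
print(start_level)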
http://docs.godotengine.org/en/stable/classes/class_globals.html
2017-06-22T14:17:35
CC-MAIN-2017-26
1498128319575.19
[]
docs.godotengine.org
How to Customize Post Meta The Post Meta appears above or below posts and includes elements like the author, date, category link and tags. In the Layers Post Widget, you can turn off single meta elements under the Display option in the Design Bar. Layers Pro Layers Pro allows you to easily turn off single meta elements in the Blog and post archives under the Blog → Archives → Styling panel in the Customizer. CSS The Post Meta on the archives can be filtered globally using layers_post_meta or replaced with custom code using layers_before_list_post_meta or layers_after_list_post_meta. Follow the links to the hook reference to see how to change these in a child theme or extension. If you simply want to change the look of the meta, the following are a couple of things you can do: Put Meta on the Same Line – add CSS to the CSS panel in the Customizer. Remove Icons – hide the meta icons with CSS. A sketch of both is shown below.
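The original CSS snippets are not included in this extract; something along the following lines could work, but the selectors are illustrative only, so inspect your site's markup to confirm the class names your theme actually uses before adding this to the CSS panel.

/* Illustrative selectors - confirm the real class names in your theme */
.post-meta li {
    display: inline-block; /* put meta items on the same line */
    margin-right: 10px;
}

.post-meta li .fa {
    display: none; /* remove the meta icons */
}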
http://docs.layerswp.com/doc/how-to-customize-layers-post-meta/
2017-06-22T14:09:07
CC-MAIN-2017-26
1498128319575.19
[]
docs.layerswp.com
Applies To: Windows Server 2016 This topic covers information you need to deploy Nano Server images that are more customized to your needs compared to the simple examples in the Nano Server Quick Start topic. You'll find information about making a custom Nano Server image with exactly the features you want, installing Nano Server images from VHD or WIM, editing files, working with domains, dealing with packages by several methods, and working with server roles. Nano Server Image Builder The Nano Server Image Builder is a tool that helps you create a custom Nano Server image and bootable USB media with the aid of a graphical interface. Based on the inputs you provide, it generates reusable PowerShell scripts that allow you easily automate consistent installations of Nano Server running either Windows Server 2016 Datacenter or Standard editions. Obtain the tool from the Download Center. The tool also requires Windows Assessment and Deployment Kit (ADK). Nano Server Image Builder creates customized Nano Server images in VHD, VHDX or ISO formats and can create bootable USB media to deploy Nano server or detect the hardware configuration of a server. It also can do the following: - Accept the license terms - Create VHD, VHDX or ISO formats - Add server roles - Add device drivers - Set machine name, administrator password, logfile path, and timezone - Join a domain by using an existing Active Directory account or a harvested domain-join blob - Enable WinRM for communication outside the local subnet - Enable Virtual LAN IDs and configure static IP addresses - Inject new servicing packages on the fly - Add a setupcomplete.cmd or other customer scripts to run after the unattend.xml is processed - Enable Emergency Management Services (EMS) for serial port console access - Enable development services to enable test signed drivers and unsigned applications, PowerShell default shell - Enable debugging over serial, USB, TCP/IP, or IEEE 1394 protocols - Create USB media using WinPE that will partition the server and install the Nano image - Create USB media using WinPE that will detect your existing Nano Server hardware configuration and report all the details in a log and on-screen. This includes network adapters, MAC addresses, and firmware Type (BIOS or UEFI). The detection process will also list all of the volumes on the system and the devices that do not have a driver included in the Server Core drivers package. If any of these are unfamiliar to you, review the remainder of this topic and the other Nano Server topics so that you'll be prepared to provide the tool with the information it will need. Creating a custom Nano Server image. You can also find and install these packages with the the NanoServerPackage provider of PackageManagement (OneGet) PowerShell module. See the "Installing roles and features online" section of this topic. This table shows the roles and features that are available in this release of Nano Server, along with the Windows PowerShell options that will install the packages for them. Some packages are installed directly with their own Windows PowerShell switches (such as -Compute); others you install by passing package names to the -Package parameter, which you can combine in a comma-separated list. You can dynamically list available packages using the Get-NanoServerPackage cmdlet. Note When you install packages with these options, a corresponding language pack is also installed based on selected server media locale. 
You can find the available language packs and their locale abbreviations in the installation media in subfolders named for the locale of the image. Note When you use the -Storage parameter to install File Services, File Services is not actually enabled. Enable this feature from a remote computer with Server Manager. Failover Clustering items installed by the -Clustering parameter - Failover Clustering role - VM Failover Clustering - Storage Spaces Direct (S2D) - Storage Quality of Service - Volume Replication Clustering - SMB Witness Service File and storage items installed by the -Storage parameter - File Server role - Data Deduplication - Multipath I/O, including a driver for Microsoft Device-Specific Module (MSDSM) - ReFS (v1 and v2) - iSCSI Initiator (but not iSCSI Target) - Storage Replica - Storage Management Service with SMI-S support - SMB Witness Service - Dynamic Volumes - Basic Windows storage providers (for Windows Storage Management) Installing a Nano Server VHD New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath \\Path\To\Media\server_en-us -BasePath .\Base -TargetPath .\FirstStepsNano.vhdx -ComputerName FirstStepsNano The cmdlet will accomplish all of these tasks: Select Standard as a base edition Prompt you for the Administrator password Copy installation media from \\Path\To\Media\server. Note You now have the option to specify the Nano Server edition to build either the Standard or Datacenter edition. Use the -Edition parameter to specify Standard or Datacenter editions. Once you have an existing image, you can modify it as needed using the Edit-NanoServerImage cmdlet. If you do not specify a computer name, a random name will be generated. Installing a Nano Server WIM - Copy the NanoServerImageGenerator folder from the \NanoServer folder in the Windows Server 2016 ISO to a local folder on your computer. Start Windows PowerShell as an administrator, change directory to the folder where you placed the NanoServerImageGenerator folder and then import the module with Import-Module .\NanoServerImageGenerator -Verbose. Note You might have to adjust the Windows PowerShell execution policy. Set-ExecutionPolicy RemoteSigned should work well. To create a Nano Server image to serve as a Hyper-V host, run the following: New-NanoServerImage -Edition Standard -DeploymentType Host -MediaPath <path to root of media> -BasePath .\Base -TargetPath .\NanoServerPhysical\NanoServer.wim -ComputerName <computer name> -OEMDrivers -Compute -Clustering Where - -MediaPath is the root of the DVD media or ISO image containing Windows Server 2016. - -BasePath will contain a copy of the Nano Server binaries, so you can use New-NanoServerImage -BasePath without having to specify -MediaPath in future runs. - -TargetPath will contain the resulting .wim file containing the roles & features you selected. Make sure to specify the .wim extension. - -Compute adds the Hyper-V role. - -OemDrivers adds a number of common drivers. You will be prompted to enter an administrator password. For more information, run Get-Help New-NanoServerImage -Full. Boot into WinPE and ensure that the .wim file just created is accessible from WinPE. (You could, for example, copy the .wim file to a bootable WinPE image on a USB flash drive.) Once WinPE boots, use Diskpart.exe to prepare the target computer's hard drive. Run the following Diskpart commands (modify accordingly, if you're not using UEFI & GPT): Warning These commands will delete all data on the hard drive.
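The exact Diskpart listing is not reproduced in this extract; a typical UEFI/GPT layout that produces the s: (EFI system) and n: (Nano Server) volumes used by the next step might look like the sketch below. Adjust the disk number, sizes, and drive letters for your hardware.

select disk 0
clean
convert gpt
create partition efi size=100
format quick fs=fat32 label="System"
assign letter="s"
create partition msr size=128
create partition primary
format quick fs=ntfs label="NanoServer"
assign letter="n"
list volume
exit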
Apply the Nano Server image (adjust the path of the .wim file): Dism.exe /apply-image /imagefile:.\NanoServer.wim /index:1 /applydir:n:\ Bcdboot.exe n:\Windows /s s: Remove the DVD media or USB drive and reboot your system with Wpeutil.exe reboot Editing files on Nano Server locally and remotely In either case (whether the file lives on your local computer or on the Nano Server), connect to Nano Server, such as with Windows PowerShell remoting. Once you've connected to Nano Server, you can edit a file residing on your local computer by passing the file's relative or absolute path to the psEdit command, for example: psEdit C:\Windows\Logs\DISM\dism.log or psEdit .\myScript.ps1 Edit a file residing on the remote Nano Server by starting a remote session with Enter-PSSession -ComputerName "192.168.0.100" -Credential ~\Administrator and then passing the file's relative or absolute path to the psEdit command like this: psEdit C:\Windows\Logs\DISM\dism.log Installing roles and features online Note If you install an optional Nano Server package from media or online repository, it won't have recent security fixes included. To avoid a version mismatch between the optional packages and base operating system, you should install the latest cumulative update immediately after installing any optional packages and before restarting the server. Installing roles and features from a package repository You can find and install Nano Server packages from the online package repository by using the NanoServerPackage provider of the PackageManagement PowerShell module. To install this provider, use these cmdlets: Install-PackageProvider NanoServerPackage Import-PackageProvider NanoServerPackage Note If you experience errors when running Install-PackageProvider, check that you have installed the latest cumulative update (KB3206632 or later), or use Save-Module as follows: Save-Module -Path "$Env:ProgramFiles\WindowsPowerShell\Modules\" -Name NanoServerPackage -MinimumVersion 1.0.1.0 Import-PackageProvider NanoServerPackage Once this provider is installed and imported, you can search for, download, and install Nano Server packages using cmdlets designed specifically for working with Nano Server packages: Find-NanoServerPackage Save-NanoServerPackage Install-NanoServerPackage You can also use the generic PackageManagement cmdlets and specify the NanoServerPackage provider: Find-Package -ProviderName NanoServerPackage Save-Package -ProviderName NanoServerPackage Install-Package -ProviderName NanoServerPackage Get-Package -ProviderName NanoServerPackage To use any of these cmdlets with Nano Server packages on Nano Server, add -ProviderName NanoServerPackage. If you don't add the -ProviderName parameter, PackageManagement will iterate all of the providers. For more details on these cmdlets, run Get-Help <cmdlet>. Here are some common usage examples: Searching for Nano Server packages You can use either Find-NanoServerPackage or Find-Package -ProviderName NanoServerPackage to search for and return a list of Nano Server packages that are available in the online repository. For example, you can get a list of all the latest packages: Find-NanoServerPackage Running Find-Package -ProviderName NanoServerPackage -DisplayCulture displays all available cultures. If you need a specific locale version, such as US English, you could use Find-NanoServerPackage -Culture en-us or Find-Package -ProviderName NanoServerPackage -Culture en-us or Find-Package -Culture en-us -DisplayCulture. To find a specific package by package name, use the -Name parameter. This parameter also accepts wildcards.
For example, to find all packages with VMM in the name, use Find-NanoServerPackage -Name *VMM* or Find-Package -ProviderName NanoServerPackage -Name *VMM*. You can find a particular version with the -RequiredVersion, -MinimumVersion, or -MaximumVersion parameters. To find all available versions, use -AllVersions. Otherwise, only the latest version is returned. For example: Find-NanoServerPackage -Name *VMM* -RequiredVersion 10.0.14393.0. Or, for all versions: Find-Package -ProviderName NanoServerPackage -Name *VMM* -AllVersions Installing Nano Server packages You can install a Nano Server package (including its dependency packages, if any) to Nano Server either locally or an offline image with either Install-NanoServerPackage or Install-Package -ProviderName NanoServerPackage. Both of these accept input from the pipeline. To install the latest version of a Nano Server package to an online Nano Server, use either Install-NanoServerPackage -Name Microsoft-NanoServer-Containers-Package or Install-Package -Name Microsoft-NanoServer-Containers-Package. PackageManagement will use the culture of the Nano Server. You can install a Nano Server package to an offline image while specifying a particular version and culture, like this: Install-NanoServerPackage -Name Microsoft-NanoServer-DCB-Package -Culture de-de -RequiredVersion 10.0.14393.0 -ToVhd C:\MyNanoVhd.vhd or: Install-Package -Name Microsoft-NanoServer-DCB-Package -Culture de-de -RequiredVersion 10.0.14393.0 -ToVhd C:\MyNanoVhd.vhd Here are some examples of pipelining package search results to the installation cmdlet: Find-NanoServerPackage *dcb* | Install-NanoServerPackage finds any packages with "dcb" in the name and then installs them. Find-Package *nanoserver-compute-* | Install-Package finds packages with "nanoserver-compute-" in the name and installs them. Find-NanoServerPackage -Name *nanoserver-compute* | Install-NanoServerPackage -ToVhd C:\MyNanoVhd.vhd finds packages with "compute" in the name and installs them to an offline image. Find-Package -ProviderName NanoserverPackage *nanoserver-compute-* | Install-Package -ToVhd C:\MyNanoVhd.vhd does the same thing with any package that has "nanoserver-compute-" in the name. Downloading Nano Server packages Save-NanoServerPackage or Save-Package allow you to download packages and save them without installing them. Both cmdlets accept input from the pipeline. For example, to download and save a Nano Server package to a directory that matches the wildcard path, use Save-NanoServerPackage -Name Microsoft-NanoServer-DNS-Package -Path C:\ In this example, -Culture wasn't specified, so the culture of the local machine will be used. No version was specified, so the latest version will be saved. Save-Package -ProviderName NanoServerPackage -Name Microsoft-NanoServer-IIS-Package -Path C:\ -Culture it-IT -MinimumVersion 10.0.14393.0 saves a particular version and for the Italian language and locale. You can send search results through the pipeline as in these examples: Find-NanoServerPackage -Name *containers* -MaximumVersion 10.2 -MinimumVersion 1.0 -Culture es-ES | Save-NanoServerPackage -Path C:\ or Find-Package -ProviderName NanoServerPackage -Name *shield* -Culture es-ES | Save-Package -Path Inventory installed packages You can discover which Nano Server packages are installed with Get-Package. For example, see which packages are on Nano Server with Get-Package -ProviderName NanoserverPackage. 
To check the Nano Server packages that are installed in an offline image, run Get-Package -ProviderName NanoserverPackage -FromVhd C:\MyNanoVhd.vhd. Installing roles and features from local source Though offline installation of server roles and other packages is recommended, you might need to install them online (with the Nano Server running) in container scenarios. To do this, follow these steps: Copy the Packages folder from the installation media locally to the running Nano Server (for example, to C:\packages). Create a new Unattend.xml file on another computer and then copy it to Nano Server. You can copy and paste this XML content into the XML file you created (this example shows installing the IIS package): <?xml version="1.0" encoding="utf-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <servicing> <package action="install"> <assemblyIdentity name="Microsoft-NanoServer-IIS-Feature-Package" version="10.0.14393.0" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" /> <source location="c:\packages\Microsoft-NanoServer-IIS-Package.cab" /> </package> <package action="install"> <assemblyIdentity name="Microsoft-NanoServer-IIS-Feature-Package" version="10.0.14393.0" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="en-US" /> <source location="c:\packages\en-us\Microsoft-NanoServer-IIS-Package_en-us.cab" /> </package> </servicing> <cpi:offlineImage cpi: </unattend> In the new XML file you created (or copied), edit C:\packages to the directory you copied the content of Packages to. Switch to the directory with the newly created XML file and run dism /online /apply-unattend:.\unattend.xml Confirm that the package and its associated language pack is installed correctly by running: dism /online /get-packages You should see "Package Identity : Microsoft-NanoServer-IIS-Package~31bf3856ad364e35~amd64~en-US~10.0.10586.0" listed twice, once for Release Type : Language Pack and once for Release Type : Feature Pack. Customizing an existing Nano Server VHD You can change the details of an existing VHD by using the Edit-NanoServerImage cmdlet, as in this example: Edit-NanoServerImage -BasePath .\Base -TargetPath .\BYOVHD.vhd This cmdlet does the same things as New-NanoServerImage, but changes the existing image instead of converting a WIM to a VHD. It supports the same parameters as New-NanoServerImage with the exception of -MediaPath and -MaxSize, so the initial VHD must have been created with those parameters before you can make changes with Edit-NanoServerImage. Additional tasks you can accomplish with New-NanoServerImage and Edit-NanoServerImage -Edition Standard -DeploymentType Host -DeploymentType Host -Edition Standard . Adding additional drivers Nano Server offers a package that includes a set of basic drivers for a variety of network adapters and storage controllers; it's possible that drivers for your network adapters might not be included. You can use these steps to find drivers in a working system, extract them, and then add them to the Nano Server image. - Install Windows Server 2016 on the physical computer where you will run Nano Server. - Open Device Manager and identify devices in the following categories: - Network adapters - Storage controllers - Disk drives - For each device in these categories, right-click the device name, and click Properties. In the dialog that opens, click the Driver tab, and then click Driver Details. - Note the filename and path of the driver file that appears. 
For example, let's say the driver file is e1i63x64.sys, which is in C:\Windows\System32\Drivers. - In a command prompt, search for the driver file and all of its instances with dir e1i*.sys /s /b. In this example, the driver file is also present in the path C:\Windows\System32\DriverStore\FileRepository\net1ic64.inf_amd64_fafa7441408bbecd\e1i63x64.sys. In an elevated command prompt, navigate to the directory where the Nano Server VHD is and run the following commands: md mountdir dism\dism /Mount-Image /ImageFile:.\NanoServer.vhd /Index:1 /MountDir:.\mountdir dism\dism /Add-Driver /image:.\mountdir /driver:C:\Windows\System32\DriverStore\FileRepository\net1ic64.inf_amd64_fafa7441408bbecd dism\dism /Unmount-Image /MountDir:.\MountDir /Commit - Repeat these steps for each driver file you need. Note In the folder where you keep your drivers, both the SYS files and corresponding INF files must be present. Also, Nano Server only supports signed, 64-bit drivers. To inject such drivers while creating the image instead, use the -DriverPath parameter: New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\InjectingDrivers.vhdx -DriverPath .\Extra\Drivers Note In the folder where you keep your drivers, both the SYS files and corresponding INF files must be present. Also, Nano Server only supports signed, 64-bit drivers. Using the -DriverPath parameter, you can also pass an array of paths to driver .inf files: New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\InjectingDrivers.vhdx -DriverPath .\Extra\Drivers\netcard64.inf To set static IP address details, use the -Ipv6Dns, -Ipv4Address, -Ipv4SubnetMask, -Ipv4Gateway and -Ipv4Dns parameters to specify the configuration, as in this example: New-NanoServerImage -DeploymentType Host -Edition Standard -Ipv4Dns 192.168.1.1 Custom image size You can configure the Nano Server image to be a dynamically expanding VHD or VHDX with the -MaxSize parameter, as in this example: New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\BigBoss.vhd -MaxSize 100GB Embedding custom data To embed your own script or binaries in the Nano Server image, use the -CopyPath parameter to pass an array of files and directories to be copied. The -CopyPath parameter can also accept a hashtable to specify the destination path for files and directories. New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\BigBoss.vhd -CopyPath .\tools Running custom commands after the first boot To run custom commands as part of setupcomplete.cmd, use the -SetupCompleteCommand parameter to pass an array of commands: New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -SetupCompleteCommand @("echo foo", "echo bar") Running custom PowerShell scripts as part of image creation To run custom PowerShell scripts as part of the image creation process, use the -OfflineScriptPath parameter to pass an array of paths to .ps1 scripts. If those scripts take arguments, use the -OfflineScriptArgument to pass a hashtable of additional arguments to the scripts.
New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -OfflineScriptPath C:\MyScripts\custom.ps1 -OfflineScriptArgument @{Param1="Value1"; Param2="Value2"} Support for development scenarios If you want to develop and test on Nano Server, you can use the -Development parameter. This will enable PowerShell as the default local shell, enable installation of unsigned drivers, copy debugger binaries, open a port for debugging, enable test signing, and enable installation of AppX packages without a developer license: New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -Development Custom unattend file If you want to use your own unattend file, use the -UnattendPath parameter: New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -UnattendPath \\path\to\unattend.xml Specifying an administrator password or computer name in this unattend file will override the values set by -AdministratorPassword and -ComputerName. Note Nano Server does not support setting TCP/IP settings via unattend files. You can use Setupcomplete.cmd to configure TCP/IP settings. Collecting log files If you want to collect the log files during image creation, use the -LogPath parameter to specify a directory where all the log files are copied. New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -LogPath C:\Logs Note Some parameters on New-NanoServerImage and Edit-NanoServerImage are for internal use only and can be safely ignored. These include the -SetupUI and -Internal parameters. Installing apps and drivers Windows Server App installer Windows Server App (WSA) installer provides a reliable installation option for Nano Server. Since Windows Installer (MSI) is not supported on Nano Server, WSA is also the only installation technology available for non-Microsoft products. WSA leverages Windows app package technology designed to install and service applications safely and reliably, using a declarative manifest. It extends the Windows app package installer to support Windows Server-specific extensions, with the limitation that WSA does not support installing drivers. Creating and installing a WSA package on Nano Server involves steps for both the publisher and the consumer of the package. The package publisher should do the following: - Install Windows 10 SDK, which includes the tools needed to create a WSA package: MakeAppx, MakeCert, Pvk2Pfx, SignTool. - Declare a manifest: Follow the WSA manifest extension schema to create the manifest file, AppxManifest.xml. - Use the MakeAppx tool to create a WSA package. - Use MakeCert and Pvk2Pfx tools to create the certificate, and then use Signtool to sign the package. Next, the package consumer should follow these steps: - Run the Import-Certificate PowerShell cmdlet to import the publisher's certificate from Step 4 above to Nano Server with the certStoreLocation at "Cert:\LocalMachine\TrustedPeople". For example: Import-Certificate -FilePath ".\xyz.cer" -CertStoreLocation "Cert:\LocalMachine\TrustedPeople" - Install the app on Nano Server by running the Add-AppxPackage PowerShell cmdlet to install a WSA package on Nano Server. 
For example: Add-AppxPackage wsaSample.appx Additional resources for creating apps WSA is a server extension of Windows app package technology (though it is not hosted in the Windows Store). If you want to publish apps with WSA, these topics will help you familiarize yourself with the app package pipeline: - How to create a basic package manifest - App Packager (MakeAppx.exe) - How to create an app package signing certificate - SignTool Installing drivers on Nano Server You can install non-Microsoft drivers on Nano Server by using INF driver packages. These include both Plug-and-Play (PnP) driver packages and File System Filter driver packages. Network Filter drivers are not currently supported on Nano Server. Both PnP and File System Filter driver packages must follow the Universal driver requirements and installation process, as well as general driver package guidelines such as signing. They are documented at these locations: Installing driver packages offline Supported driver packages can be installed on Nano Server offline via DISM.exe or DISM PowerShell cmdlets. Installing driver packages online PnP driver packages can be installed to Nano Server online by using PnpUtil. Online driver installation for non-PnP driver packages is not currently supported on Nano Server. Joining Nano Server to a domain Check the domain you want to join Nano Server to and ensure that DNS is configured. Also, verify that name resolution of the domain or a domain controller works as expected. Nslookup is not available on Nano Server, so you can verify name resolution with Resolve-DNSName. If name resolution succeeds, then in the same Windows PowerShell session, run this command to join the domain: djoin /requestodj /loadfile c:\Temp\odjblob /windowspath c:\windows /localos Restart the Nano Server computer, and then exit the Windows PowerShell session: shutdown /r /t 5 Exit-PSSession After you have joined Nano Server to a domain, add the domain user account to the Administrators group on the Nano Server. For security, remove the Nano Server from the trusted hosts list with this command: Set-Item WSMan:\localhost\client\TrustedHosts "" Working with server roles on Nano Server - Hyper-V Replica. Using MPIO on Nano Server For steps to use MPIO, see MPIO on Nano Server Using SSH on Nano Server For instructions on how to install and use SSH on Nano Server with the OpenSSH project, see the Win32-OpenSSH wiki. Appendix: Sample Unattend.xml file that joins Nano Server to a domain Note Be sure to delete the trailing space in the contents of "odjblob" once you paste it into the Unattend file.
<?xml version='1.0' encoding='utf-8'?> <unattend xmlns="urn:schemas-microsoft-com:unattend" xmlns: <settings pass="offlineServicing"> <component name="Microsoft-Windows-UnattendedJoin" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <OfflineIdentification> <Provisioning> <AccountData> AAAAAAARUABLEABLEABAoAAAAAAAMABSUABLEABLEABAwAAAAAAAAABbMAAdYABc8ABYkABLAABbMAAEAAAAMAAA0ABY4ABZ8ABbIABa0AAcIABY4ABb8ABZUABAsAAAAAAAQAAZoABNUABOYABZYAANQABMoAAOEAAMIAAOkAANoAAMAAAXwAAJAAAAYAAA0ABY4ABZ8ABbIABa0AAcIABY4ABb8ABZUABLEAALMABLQABU0AATMABXAAAAAAAKdf/mhfXoAAUAAAQAAAAb8ABLQABbMABcMABb4ABc8ABAIAAAAAAb8ABLQABbMABcMABb4ABc8ABLQABb0ABZIAAGAAAAsAAR4ABTQABUAAAAAAACAAAQwABZMAAZcAAUgABVcAAegAARcABKkABVIAASwAAY4ABbcABW8ABQoAAT0ABN8AAO8ABekAAJMAAVkAAZUABckABXEABJUAAQ8AAJ4AAIsABZMABdoAAOsABIsABKkABQEABUEABIwABKoAAaAABXgABNwAAegAAAkAAAAABAMABLIABdIABc8ABY4AADAAAA4AAZ4ABbQABcAAAAAAACAAkKBW0ID8nJDWYAHnBAXE77j7BAEWEkl+lKB98XC2G0/9+Wd1DJQW4IYAkKBAADhAnKBWEwhiDAAAM2zzDCEAM6IAAAgAAAAAAAQAAAAAAAAAAAABwzzAAA </AccountData> </Provisioning> </OfflineIdentification> </component> </settings> <settings pass="oobeSystem"> <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <UserAccounts> <AdministratorPassword> <Value>Tuva</Value> <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> <TimeZone>Pacific Standard Time</TimeZone> </component> </settings> <settings pass="specialize"> <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <RegisteredOwner>My Team</RegisteredOwner> <RegisteredOrganization>My Corporation</RegisteredOrganization> </component> </settings> </unattend>
https://docs.microsoft.com/en-us/windows-server/get-started/deploy-nano-server
2017-06-22T14:44:09
CC-MAIN-2017-26
1498128319575.19
[]
docs.microsoft.com
Blog Settings The Blog Layout Settings can be found in the layout section of the theme options. On the 3rd tab you have the Blog options. Here you have the following available options (note that these changes apply only to category, blog and static pages/posts): Title bar look – enable/disable the title bar at the top of the site. Default option is set to On. The title bar looks like this: Background Image – add a background image for the title bar. The default is presented above. How to fit background image – set how to fit the background image assigned to the title bar – available options are: cover, contain, fit vertically, fit horizontally, just center, repeat. Default option is set to Cover. Background color – add a background color to the title bar instead of the background image.
http://docs.aa-team.com/personal-trainer-wordpress-theme/documentation/blog-settings/
2017-06-22T14:21:02
CC-MAIN-2017-26
1498128319575.19
[array(['http://docs.aa-team.com/wp-content/uploads/2014/10/blog-options-pt.png', 'blog-options-pt'], dtype=object) array(['http://docs.aa-team.com/wp-content/uploads/2014/10/blog-options-pt-2.png', 'blog-options-pt-2'], dtype=object) ]
docs.aa-team.com
Kazoo Installation Guide This is a guide to building Kazoo from source on a Debian 8 (Jessie) base installation. Other GNU/Linux distros should work similarly, though the dependencies may differ a bit. If you want to just install and use Kazoo (and not build it) try using the installation instructions. The rest of this guide assumes you want to run a development environment for Kazoo. Dependencies Packages Required
sudo apt-get install build-essential libxslt-dev \
zip unzip expat zlib1g-dev libssl-dev curl \
libncurses5-dev git-core libexpat1-dev \
htmldoc
Note: htmldoc is required only if you want to be able to download PDFs. Docs-related When running make docs, some Python libraries are useful:
sudo apt-get install python2.7 python-yaml
sudo pip install mkdocs mkdocs-bootstrap mkdocs-bootswatch pymdown-extensions
You can also run a local version of the docs with make docs-serve which will start a local server so you can view how the docs are rendered. If you have a custom theme, you can copy it to doc/mkdocs/theme and build the docs again. When you serve the docs the theme should have been applied to the site. Erlang Kazoo 4 targets Erlang 18+. There are a couple ways to install Erlang: From Source I prefer to use a tool like kerl to manage my installations. If you want to play around with multiple versions of Erlang while hacking on Kazoo, this is probably the best way.
curl -O
chmod a+x kerl
mv kerl /usr/bin
kerl list releases
kerl build 18.2 r18.2 # this takes a while
kerl install r18.2 /usr/local/erlang
. /usr/local/erlang/activate
Erlang Solutions Install from the Erlang Solutions packages. These tend to be kept up-to-date better than the default distro's packages. Building Kazoo Short version
cd /opt
git clone
cd kazoo
make
Longer version Clone the Kazoo repo:
git clone
Build Kazoo:
cd kazoo
make
Additional make targets When developing, one can cd into any app directory (within applications/ or core/) and run: make (make all or make clean), make xref to look for calls to undefined functions (uses Xref), make dialyze to statically type-check the app (uses Dialyzer), or make test to run the app / sub-apps test suite, if any. - Note: make sure to make clean all after running your tests, as test BEAMs are generated in ebin/! Running the tests To run the full test suite it is advised to: cd into the root of the project, run make compile-test to compile every app with the TEST macro defined - this way apps can call code from other apps in a kind of TEST mode, then run make eunit (instead of make test) to run the test suite without recompiling each app, or make proper to run the test suite, including property-based tests (uses PropEr). Generate an Erlang release make build-release will generate a deployable release SUP The SUP command (sup) is found under core/sup/priv/sup and should be copied or symlinked to /usr/bin/sup (or somewhere in your $PATH). It is a shell file that calls sup.escript.
sudo ln -s core/sup/priv/sup /usr/bin/sup
Make sure that the path to Kazoo's installation directory is right (in /usr/bin/sup). Otherwise you can change it by setting the KAZOO_ROOT environment variable (not set by default). If one needs KAZOO_ROOT, an alias should be created:
alias sup='KAZOO_ROOT=/opt/kazoo sup'
Auto-completion make sup_completion creates sup.bash: a Bash completion file for the SUP command - Copy or symlink this file to /etc/bash_completion.d/sup.bash
https://docs.2600hz.com/dev/doc/installation/
2017-06-22T14:17:19
CC-MAIN-2017-26
1498128319575.19
[]
docs.2600hz.com
The SQLServer:Availability Replica performance object contains performance counters that report information about the availability replicas in Always On availability groups in SQL Server 2017. See Also Monitor Resource Usage (System Monitor) SQL Server, Database Replica Always On Availability Groups (SQL Server)
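As a quick way to inspect these counters without opening Performance Monitor, you can query the general-purpose sys.dm_os_performance_counters dynamic management view and filter on this object; note that the object name prefix differs for named instances (for example MSSQL$InstanceName instead of SQLServer).

SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Availability Replica%'
ORDER BY counter_name, instance_name;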
https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-availability-replica
2017-06-22T15:06:54
CC-MAIN-2017-26
1498128319575.19
[]
docs.microsoft.com
To test your iOS App on one or more devices you must create an .ipa package file for an Ad Hoc Distribution. 1- Fill in the fields in Project Settings, General Tab. For an Ad Hoc Distribution, make sure that App Identifier, Code Signing Identity (Distribution), Developer Account Team ID and Provisioning Profile (Ad Hoc Distribution) are filled. Refer to the sections iOS HowTos: How to Create a Provisioning Profile, iOS HowTos: How to Create a Code Signing Identity, iOS HowTos: How to Create an App Identifier to generate the required fields. 2- After setting Icons and Covers and the Package configurations, click on Publish. In the first window, under Pack iOS App for select “Ad Hoc Distribution”. The .ipa package file will be generated, ready for distribution on devices authorized by the incorporated Provisioning Profile. 3- PubCoder will go through the compiling process and will ask you to save the .ipa package file in your Finder. 4- To install the .ipa file on your local device: – With Xcode 5.1 installed: Plug your device into the computer. From the Window menu open Organizer. From the list of devices click on the preferred device, then drag and drop the file, or add it via the plus button. – With Xcode 6.1 installed: Plug your device into the computer. From the Window menu open Device. Click on the preferred device, then drag and drop the file, or add it via the plus button.
https://docs.pubcoder.com/en/test-your-ios-app/
2017-06-22T14:02:09
CC-MAIN-2017-26
1498128319575.19
[]
docs.pubcoder.com
Final Exam Procedure Students enrolled in the courses of the Degree Campus are required to attempt one final exam per semester, except students of the Intensive Arabic Program (IAP), who are required to give two final exams per semester. The exam dates for your respective stream can be found on the Events Schedule. Outlined below is the procedure for giving the exam from start to finish. Preliminary Checks - Ensure that you have registered your choice of exam center with IOU before the scheduled deadlines. Kindly follow this link for further details. - Contact your centre a week before the exam to check if they: - Are still ready to accommodate you during the exam period. This is your responsibility and IOU will not be held responsible for any changes made by the centre at the last moment. However, if the centre is unable to host the exam for you, kindly contact us here immediately. - Have received password details from IOU. If they have not received it yet, report it to us immediately here. The Exam Period - The link for the final exam is located at the end of each course page. - You will be prompted for a password. - Ask the proctor of your center to enter the password. Your final exam has commenced now. - Proceed to attempt the exam and submit the answers after checking your choices thoroughly. - Your final exam grades will be displayed immediately. - Proceed in a similar manner for the remaining enrolled courses. Should you or your proctor face any technical/password/ reset issues during the final exam, you can contact us immediately at [email protected] or via Live Chat Support. Our HelpDesk executives are available 24/7 during the exam period to assist you with any complications. Additionally, acquaint yourself with the Final Exam Reset Procedure. Please Note: - There is no specific timetable for each course. - Students can attempt any exam on any day within the prescribed period. - If you are unable to attempt the exam in the normal exam period, you can attempt it in the late exam period with a 15% deduction of marks. The final exam link is deactivated between the normal exam period and the late exam period. It is active only during the exam periods provided.
http://docs.islamiconlineuniversity.com/article/961-final-exam-procedure
2017-06-22T14:17:16
CC-MAIN-2017-26
1498128319575.19
[]
docs.islamiconlineuniversity.com
Getting involved¶ GeoServer exists because of the efforts of people like you. There are many ways that you can help out with the GeoServer project. GeoServer fully embraces an open source development model that does not see a split between user and developer, producer and consumer, but instead sees everyone as a valuable contributor. Development¶ Helping to develop GeoServer is the obvious way to help out. Developers usually start with bug fixes and other small patches, and then move into larger contributions as they learn the system. Our developers are more than happy to help out as you learn and get acquainted. We try our hardest to keep our code clean and well documented. You can find the project on GitHub. As part of the GitHub model, anyone can submit patches as pull requests, which will be evaluated by the team. To learn more about contributing to the GeoServer codebase, we highly recommend joining the GeoServer developers mailing list. See details below. Documentation¶ Another crucial way to help out is with documentation. Whether it’s adding tutorials or just correcting mistakes, every contribution serves to make the project more healthy. And the best part is that you do not need to be a developer in order to contribute. Our official documentation is contained as part of our official code repository. As part of the GitHub model, anyone can submit patches as pull requests, which will be evaluated by the team. To learn more about contributing to the GeoServer codebase, we highly recommend joining the GeoServer developers mailing list (see details below). For typos and other small changes, please see our Documentation Guide for how to make quick fixes. Mailing lists¶ GeoServer maintains two email lists: The Users list is mainly for those who have questions relating to the use of GeoServer, and the Developers list is for more code-specific and roadmap-based discussions. If you see a question asked on these lists that you know the answer to, please respond! These lists are publicly available and are a great resource for those who are new to GeoServer, who need a question answered, or who are interested in contributing code. IRC¶ Users can join the IRC channel, #geoserver, on the Freenode network, in order to collaborate in real time. GeoServer developers occasionally will be in this channel as well. Bug tracking¶ If you have a problem when working with GeoServer, then please let us know through the mailing lists. GeoServer uses JIRA , a bug tracking website, to manage issue reports. In order to submit an issue, you’ll need to create an account first. Everyone is encouraged to submit patches and, if possible, fix issues as well. We welcome patches through JIRA, or pull requests to GitHub. Responsible Disclosure Warning. Translation¶ We would like GeoServer available in as many languages as possible. The two areas of GeoServer to translate are the text that appears in the Web administration interface and this documentation. If you are interested in helping with this task, please let us know via the mailing lists. Suggest improvements¶ If you have suggestions as to how we can make GeoServer better, we would love to hear them. You can contact us through the mailing lists or submit a feature request through JIRA. Spread the word¶ A further way to help out the GeoServer project is to spread the word. Word-of-mouth information sharing is more powerful than any marketing, and the more people who use our software, the better it will become. 
Fund improvements¶ A final way to help out is to push for GeoServer to be used in your own organization. A number of commercial organizations offer support for GeoServer, and any improvements made due to that funding will benefit the entire GeoServer community.
http://docs.geoserver.org/stable/en/user/introduction/gettinginvolved.html
2017-01-16T14:57:53
CC-MAIN-2017-04
1484560279189.36
[]
docs.geoserver.org
Loads a CUIx file. Opens the Load/Unload Customizations dialog box, where you can locate and load a CUIx file to customize or transfer user interface settings. When FILEDIA is set to 0 (off), CUILOAD displays the following Command prompt. Enter name of customization file to load: Enter a file name
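For example, with FILEDIA set to 0 the exchange at the command line might look like the following; acad.cuix is shown only as an illustrative file name for the main customization file.

Command: CUILOAD
Enter name of customization file to load: acad.cuix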
http://docs.autodesk.com/ACD/2010/ENU/AutoCAD%202010%20User%20Documentation/files/WS1a9193826455f5ffa23ce210c4a30acaf-4cd1.htm
2017-01-16T15:05:31
CC-MAIN-2017-04
1484560279189.36
[]
docs.autodesk.com
How to get admin. Support for Intune, and for Intune when used with Configuration Manager, is free of charge. Premier Support customers incur charges for procedure questions (for example, how to go about configuring an Intune feature). Create an online service ticket Sign in to the Office 365 admin center with your Intune credentials. Note Premier Support customers can open an Intune support ticket on the Premier support page. Choose the Admin tile. On the left, under Support, choose Support to open a ticket. Note Customers who have, or have had, an O365 account with 100 or fewer licenses, will see this message. If you see it, refer to Create a support ticket with alternate methods. For billing, licensing, and account issues, select Billing and product info. For all other Intune issues, select Mobile device management. Note You might have to choose more at the bottom of the list to see all of the categories. Follow the instructions to open your request. Create a support ticket with alternate methods Follow this procedure if your support page looks like this: - Choose Need help. In the text box, provide a description of your issue, and then choose Get help. Review the suggested online resources, or choose Let us call you to receive a call from Microsoft Support. Get phone support See Contact assisted phone support for Microsoft Intune for a list of support phone numbers by country and region, support hours, and supported languages for each region. Track your service requests - Sign in to the Office 365 admin center with your Intune credentials. - Choose the Admin tile. - On the left, under Support, choose Service requests. Then you can review your requests. Our initial responses to service requests depend on the severity of the issue. For the most severe issues, our first response for Professional customers is within two hours. For Premier Support customers, the response varies according to your support agreement. These are cases where: - One or more services aren’t accessible or are unusable. - Production, operations, or deployment deadlines are severely affected, or there will be a severe impact on production or profitability. - Multiple users or services are affected. For moderately severe issues, our first response for Professional customers is within four hours. For Premier Support customers, the response varies according to your support agreement. These are cases where: - The service is usable but is not functioning as well as usual. - The situation has moderate business impact and can be dealt with during business hours. - A single user, customer, or service is partially affected. For other issues, our first response for Professional customers is within eight hours. For Premier Support customers, the response varies according to your support agreement. These are cases where: - The situation has minimal business impact. - The issue is important but does not have an immediate, significant service or productivity impact for the customer. - A single user is experiencing partial disruption, but an acceptable workaround exists.. It also offers the option of opening a support request online or over the phone. Technical support for System Center Configuration Manager or System Center Endpoint Protection requires either payment or is decremented from your existing licensing or Premier Support agreements. Resolve issues without opening a support ticket You might be able to resolve your issue without opening a support ticket. 
For self-help with Intune, see General troubleshooting tips for Microsoft Intune or any of the troubleshooting topics for specific issues. You can also search for a solution or post your question to the Intune forum. Find support for volume licensing If you have already purchased licenses from Microsoft under a volume licensing program, use the following resources for support: For support related to licenses and locating keys, see Volume Licensing Service Center. For billing questions, see Billing and subscription management support. For general information about volume licensing, see Volume licensing.
https://docs.microsoft.com/en-us/intune/troubleshoot/how-to-get-support-for-microsoft-intune
2017-01-16T15:35:57
CC-MAIN-2017-04
1484560279189.36
[array(['../media/alternate-support-ui.png', 'Alternate Intune support'], dtype=object) ]
docs.microsoft.com
Keymap Customization Keys Available Keys When customizing keymaps it's useful to use keys which won't conflict with Blender's default keymap. Here are keys which aren't used and aren't likely to be used in the future. - F-Keys (F5 - F8) These F-keys (including modifier combinations) have been intentionally kept free for users to bind their own keys to. - OSKey (also known as the Windows-Key, Cmd or Super) Blender doesn't use this key for any bindings. macOS is an exception, where Cmd replaces Ctrl except in cases where it would conflict with the system's key bindings. - Modifier Double Click Binding modifier keys as primary keys is supported; to avoid conflicts with regular usage you can bind them to double click. Multi-Action Keys Click/Drag It's possible to configure a single key to perform multiple operations using the Click event instead of Press. Then you may bind Drag to a separate action. This is useful for mixing actions where one uses a drag event, e.g.: toggle a setting with Tab, drag to open a pie menu showing all options related to the setting. Click/Tweak Unlike click/drag, this only works for the mouse buttons, but has the advantage that tweak events can be directional. To use this, events in this keymap must use Click instead of Press; then you can bind Tweak actions to the mouse buttons. This is used in the default keymap in the 3D Viewport, where Alt-MMB dragging in different directions rotates the view. Common Operations This section lists useful generic operations which can be used. Key Bindings for Pop-Ups Menus and panels can be assigned key shortcuts, even if they're only accessible from submenus elsewhere. - Open a Pop-up Menu (wm.call_menu) Open any menu on key press. - Open a Pie Menu (wm.call_menu_pie) Open any pie menu on key press. - Open a Panel (wm.call_panel) Open a pop-up panel (also known as a pop-over). Key Bindings for Properties There are many properties you might want to bind a key to. To avoid having to define operators for each property, there are generic operators for this purpose: Operators for adjusting properties begin with wm.context_. Some of these include: wm.context_toggle toggles a Boolean property. wm.context_cycle_enum cycles an enum property forwards or backwards. wm.context_menu_enum shows a pop-up menu for an enum property. wm.context_pie_enum shows a pie menu for an enum property. wm.context_scale_float scales a number (used for increasing / decreasing brush size, for example). wm.context_toggle_enum toggles between two options of an enum. wm.context_modal_mouse lets you move the cursor to interactively change a value. See bpy.ops.wm for a complete list. Each of these operators has a data_path setting to reference the property to change. To find the data_path, basic Python knowledge is needed. For example, you can use the Python Console to access a Boolean property you wish to map to a key: bpy.context.object.show_name To bind this to a key, add a new keymap item using the operator wm.context_toggle with data_path set to object.show_name (notice the bpy.context prefix is implicit). See bpy.context for other context attributes. The Python API documentation can be used to find properties, or you may use the Python Console's auto-complete to inspect available properties.
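To make the wm.context_toggle example above concrete, here is a minimal Python sketch of registering such a binding from an add-on. It is only an illustration under a few assumptions: the add-on keyconfig is available (it is not when Blender runs in background mode), the "3D View" keymap is an appropriate home for the shortcut, and F5 is used because it is one of the keys listed above as intentionally kept free.

```python
import bpy

addon_keymaps = []  # keep references so the bindings can be removed again

def register():
    wm = bpy.context.window_manager
    kc = wm.keyconfigs.addon  # None in background mode, so guard against it
    if kc is None:
        return
    km = kc.keymaps.new(name="3D View", space_type="VIEW_3D")
    # Bind F5 to the generic toggle operator; data_path is relative to bpy.context,
    # so "object.show_name" maps to bpy.context.object.show_name.
    kmi = km.keymap_items.new("wm.context_toggle", type="F5", value="PRESS")
    kmi.properties.data_path = "object.show_name"
    addon_keymaps.append((km, kmi))

def unregister():
    for km, kmi in addon_keymaps:
        km.keymap_items.remove(kmi)
    addon_keymaps.clear()
```

Keeping the created keymap items in a list is the usual convention so that unregister() can remove exactly what the add-on added.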
https://docs.blender.org/manual/zh-hant/dev/advanced/keymap_editing.html
2021-11-27T07:46:45
CC-MAIN-2021-49
1637964358153.33
[]
docs.blender.org
RepositoryItemLookUpEdit.ListChanged Event Occurs after a record(s) in the RepositoryItemLookUpEditBase.DataSource has been changed. Namespace: DevExpress.XtraEditors.Repository Assembly: DevExpress.XtraEditors.v21.2.dll Declaration [DXCategory("Events")] public event ListChangedEventHandler ListChanged <DXCategory("Events")> Public Event ListChanged As ListChangedEventHandler Event Data The ListChanged event's data class is ListChangedEventArgs. The following properties provide information specific to this event: Remarks The ListChanged event is raised as a result of adding, inserting and removing records from the editor’s data source specified by the RepositoryItemLookUpEditBase.DataSource property. The editor subscribes to the ListChanged event and updates itself as needed. The editor’s LookUpEdit.ListChanged event is equivalent to the current event. See Also Feedback
https://docs.devexpress.com/WindowsForms/DevExpress.XtraEditors.Repository.RepositoryItemLookUpEdit.ListChanged
2021-11-27T09:37:32
CC-MAIN-2021-49
1637964358153.33
[]
docs.devexpress.com
Date: Sat, 19 Jul 1997 00:13:15 -0600 (MDT) From: Brandon Gillespie <[email protected]> To: "Daniel O'Callaghan" <[email protected]> Cc: Dan Busarow <[email protected]>, [email protected], [email protected], [email protected] Subject: BIND-8 port (was Re: upgrading to a safe BIND?) Message-ID: <[email protected]> In-Reply-To: <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On Sat, 19 Jul 1997, Daniel O'Callaghan wrote: > > On Fri, 18 Jul 1997, Dan Busarow wrote: > > > On Fri, 18 Jul 1997, Brandon Gillespie wrote: > > > Why don't we ship FreeBSD with bind-8? From what I've read, it seems like > > > the better of the two.. > > > > The new named.conf syntax. > > If I get a chance this w/e, I'll build 811 and release it as a package. > That would be the best way to transition (and to get everyone to upgrade > faster). If someone beats me to it, I won't complain, though. I'll see if I can do it.. just a note for whoever does.. perhaps include in the port a hook to run the conversion scripts from bind-4 to bind-8 (after prompting, of course). -Brandon Gillespie Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=790432+0+archive/1997/freebsd-questions/19970713.freebsd-questions
2021-11-27T07:58:11
CC-MAIN-2021-49
1637964358153.33
[]
docs.freebsd.org
Chart Series Coloring in Presentation Document Creating a Column Chart By following the steps below, you can insert a Column Chart in MS PowerPoint 2013: - Create a new presentation slide - Presentation Document Adding Syntax to be evaluated by GroupDocs.Assembly Engine Chart Title Total Contract Prices by Managers<<foreach [m in managers]>><<x [m.Manager]>> Chart Data (Excel) Download Template Please download the sample Dynamic Chart Series Color document we created in this article: - Chart Template.pptx
https://docs.groupdocs.com/assembly/net/chart-series-coloring-in-presentation-document/
2021-11-27T08:42:49
CC-MAIN-2021-49
1637964358153.33
[]
docs.groupdocs.com
GroupDocs.Assembly for .NET 17.11 Release Notes This page contains release notes for GroupDocs.Assembly for .NET 17.11. Major Features This release of GroupDocs.Assembly comes with several new features to dynamically manipulate non-textual document elements. Full List of Features Covering all Changes in this Release Public API and Backward Incompatible Changes This section lists public API changes that were introduced in GroupDocs.Assembly for .NET 17.11. Dynamic insertion of images for email messages with HTML body Dynamic insertion of images and barcodes is now supported for MSG, EML, and MHTML documents with HTML body created using Microsoft Outlook or Microsoft Word. Added ability to remove selective chart series dynamically Added support for Codablock F and GS1 Codablock barcodes The following identifiers can be used to generate Codablock F and GS1 Codablock F barcodes:
https://docs.groupdocs.com/assembly/net/groupdocs-assembly-for-net-17-11-release-notes/
2021-11-27T09:21:17
CC-MAIN-2021-49
1637964358153.33
[]
docs.groupdocs.com
Crate twelve_bit The U12 Library A Rust library for representing 12-bit unsigned values. This is primarily useful for implementing Chip-8 assemblers and interpreters safely. The type implements the bulk of the standard Rust literal semantics and operators, and much of the documentation is adapted from the u16 intrinsic type.
https://docs.rs/twelve_bit/0.1.1/x86_64-apple-darwin/twelve_bit/?search=lib
2021-11-27T08:03:55
CC-MAIN-2021-49
1637964358153.33
[]
docs.rs
Attentive¶ This is a custom integration that can be implemented with extra effort. If you wish to integrate this vendor, please contact your Customer Success Manager to apply it to your campaigns. Talkable converts customers into brand advocates by enabling trusted, word-of-mouth referrals. Attentive connects with customers on the most engaging channel: SMS. Together, Talkable and Attentive drive personalized brand engagement and the acquisition of high-value, loyal customers. Use cases: Capture phone numbers through Talkable to seamlessly grow your SMS subscriber list. The integration automatically passes these numbers to your Attentive messaging flow. Promote refer-a-friend to your SMS list to facilitate increased sharing. Mobile messages have a 99% open rate. Sample SMS triggered post-purchase: Thanks for your recent purchase! A referral from you would mean a lot to us! Share ‘Brand name’ with friends & we’ll share $10 with you 😃 Trigger referral messages through SMS for increased engagement. For example, send advocates their referral reward via text to increase redemptions and repeat shoppers. Interested in setting this up? Contact your CSM or get in touch here.
https://docs.talkable.com/email_marketing_and_automation/attentive.html
2021-11-27T08:13:39
CC-MAIN-2021-49
1637964358153.33
[]
docs.talkable.com
Add & manage taxes that Organisers can apply to their event's tickets, as well as Admin taxes, which are applied by default to all tickets. {primary} Click on Taxes on the Admin Panel Let's first start by adding a new Tax, so that you can see all the options that can be set for a tax. {primary} Click on Add New About input fields Tax Title - Name of the Tax; Rate; Type; Net-Price; Status; Admin Tax. When the Admin sets a tax as an Admin Tax, it becomes a separate tax that is charged by the Admin to the customers. {primary} Admin (site-owners) can create multiple admin taxes that are applicable on all tickets by default, like a fixed admin fee charged to the customers.
https://eventmie-pro-docs.classiebit.com/docs/1.5/admin/taxes
2021-11-27T09:34:12
CC-MAIN-2021-49
1637964358153.33
[]
eventmie-pro-docs.classiebit.com
Werkzeug parses the incoming data under the following situations: you access form, files, or stream and the request method was POST or PUT. The standard Werkzeug parsing behavior handles three cases: if the data is multipart form data, stream will be empty, form will contain the regular POST / PUT data, and files will contain the uploaded files as FileStorage objects; if the data is URL encoded, stream will be empty, form will contain the regular POST / PUT data, and files will be empty; otherwise, the raw data is left on stream and form and files are empty. To avoid being the victim of a DDOS attack you can set the maximum accepted content length and request field sizes. Modern web applications transmit a lot more than multipart form data or URL encoded data. To extend the capabilities, subclass Request and add or extend methods.
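As a rough sketch of the parsing rules and size limits described above (not an excerpt from the Werkzeug documentation), the snippet below subclasses Request to cap accepted request sizes and then reads form fields and uploaded files. The class name, the field names, and the byte limits are assumptions chosen for the example.

```python
from werkzeug.utils import secure_filename
from werkzeug.wrappers import Request, Response

class LimitedRequest(Request):
    # Cap what the form-data parser will accept (example values only).
    max_content_length = 16 * 1024 * 1024    # reject bodies larger than 16 MiB
    max_form_memory_size = 2 * 1024 * 1024   # limit in-memory form field data

@LimitedRequest.application
def upload(request):
    # form holds the regular POST / PUT fields; files holds FileStorage objects.
    name = request.form.get("name", "anonymous")
    attachment = request.files.get("attachment")
    if attachment is not None:
        attachment.save("/tmp/" + secure_filename(attachment.filename))
    return Response("received upload from %s" % name)

if __name__ == "__main__":
    from werkzeug.serving import run_simple
    run_simple("localhost", 5000, upload)
```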
https://getdocs.org/Werkzeug/docs/2.0.x/request_data
2021-11-27T08:53:44
CC-MAIN-2021-49
1637964358153.33
[]
getdocs.org
Porting guide¶ Until the versions 0.4.x, python-mpd2 was a drop-in replacement for applications which were using the original python-mpd. That is, you could just replace the package’s content of the latter one by the former one, and things should just work. However, starting from version 0.5, python-mpd2 provides enhanced features which are NOT backward compatible with the original python-mpd package. The goal of this document is to explain the differences between the releases and, where it makes sense, how to migrate from one version to another. Stickers API¶ When fetching stickers, python-mpd2 used to return mostly the raw results MPD was providing: >>> client.sticker_get('song', 'foo.mp3', 'my-sticker') 'my-sticker=some value' >>> client.sticker_list('song', 'foo.mp3') ['my-sticker=some value', 'foo=bar'] Starting from version 0.5, python-mpd2 provides a higher-level representation of the stickers’ content: >>> client.sticker_get('song', 'foo.mp3', 'my-sticker') 'some value' >>> client.sticker_list('song', 'foo.mp3') {'my-sticker': 'some value', 'foo': 'bar'} This removes the burden from the application to do the interpretation of the stickers’ content by itself. New in version 0.5.
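For applications that must run against both python-mpd versions during a migration, a small compatibility shim along the following lines may help; the host, port, and song URI are placeholders rather than values from the guide.

```python
from mpd import MPDClient

def stickers_as_dict(client, uri):
    """Return song stickers as a dict on python-mpd2 releases before and after 0.5."""
    raw = client.sticker_list("song", uri)
    if isinstance(raw, dict):
        return raw  # 0.5 and later already return a mapping
    # Releases before 0.5 returned a list of 'name=value' strings.
    return dict(item.split("=", 1) for item in raw)

client = MPDClient()
client.connect("localhost", 6600)  # placeholder host and port
print(stickers_as_dict(client, "foo.mp3"))
client.close()
client.disconnect()
```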
https://python-mpd2.readthedocs.io/en/latest/topics/porting.html
2021-11-27T07:40:41
CC-MAIN-2021-49
1637964358153.33
[]
python-mpd2.readthedocs.io
How to install¶ Using pip¶ If you are using pyenv or don’t need special root access to install: $ pip install trepan2 # or trepan3k for Python 3.x If you need root access you may insert sudo in front or become root: $ sudo pip install trepan2 or: $ su root # pip install trepan Using easy_install¶ Basically the same as using pip, but change “pip install” to “easy_install”: $ easy_install trepan # or trepan3k $ git clone $ cd python-trepan $ make check-short # to run tests $ make install # if pythonbrew or you don't need root access $ sudo make install # if pythonbrew or you do need root access Above I used GNU “make” to run and install. However this just calls python setup.py to do the right thing. So if you are more familiar with setup.py you can use that directly. For example: $ ./setup.py test $ ./setup.py install
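After installing, a quick way to confirm that the debugger can be invoked programmatically is a short script like the one below. It assumes the package exposes the trepan.api.debug() entry point documented for trepan2/trepan3k; adjust the import if your installed variant differs.

```python
# Drop into the trepan debugger at a chosen point in a program.
from trepan.api import debug

def buggy_sum(numbers):
    total = 0
    for n in numbers:
        if n == 2:
            debug()  # execution stops here; inspect 'n' and 'total' interactively
        total += n
    return total

if __name__ == "__main__":
    print(buggy_sum([1, 2, 3]))
```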
https://python2-trepan.readthedocs.io/en/stable/install.html
2021-11-27T09:08:37
CC-MAIN-2021-49
1637964358153.33
[]
python2-trepan.readthedocs.io
Xdebug The following lines detail the options that Vlad sets as the Xdebug setup. xdebug.remote_enable=1 xdebug.remote_port=9000 xdebug.remote_handler="dbgp" xdebug.remote_connect_back = 1 xdebug.profiler_enable=0 xdebug.profiler_output_dir=/tmp/xdebug_profiles xdebug.profiler_enable_trigger=1 xdebug.profiler_output_name=cachegrind.out.%p This allows for both interactive debugging and profiling using Xdebug. The Xdebug profiler needs to be activated by passing the XDEBUG_PROFILE variable as a GET or POST parameter. When passed during a page load this will generate a file starting with cachegrind.out. in the /tmp/xdebug_profiles directory. Open these files with a program like KCachegrind to see data on how your application is performing. Setting Up XDebug Debugger - You must set up your host file mapping so that the local files point to the correct files on the host. This should be a setting in your IDE. The default location inside the virtual machine is /var/www/site/docroot/, so this is where the mapping should point to. - On the host machine start your debug listener with default settings. This should use port 9000 and the guest box IP address. The IDE Key does not matter for some IDEs, but PHPStorm likes to have a known IDE key. - Request any page with a query string like ?XDEBUG_SESSION_START=foo. The value 'foo' does not matter - all values will cause PHP in the virtual machine to connect to the host. If you are running PHPStorm then you can create a bookmarklet that will allow you to start the debugger from any page with the correct IDE key on the following page. - When your IDE receives a connection from the VM, it should prompt you to create a Path Mapping. This helps your IDE pull up an editable file instead of a read-only copy that's provided by XDebug. Debugging Drush Commands If you want to use Xdebug with Drush commands, you must place drush into the docroot of the project and set the following Linux variables (run the following in terminal): export PHP_IDE_CONFIG="serverName=drupal.local" export XDEBUG_CONFIG="idekey=PHPSTORM remote_host=192.168.100.1 remote_port=9000"
https://vlad-docs.readthedocs.io/en/latest/applications/xdebug/
2021-11-27T08:29:38
CC-MAIN-2021-49
1637964358153.33
[]
vlad-docs.readthedocs.io
infix ~< Documentation for infix ~< assembled from the following types: language documentation Operators (Operators) infix ~< Coerces the left argument to a non-variable-encoding string buffer type (e.g. buf8, buf16, buf32) and then performs a numeric bitwise left shift on the bits of the buffer. Please note that this has not yet been implemented.
http://docs.perl6.wakelift.de/routine/~%3C
2019-10-14T06:30:21
CC-MAIN-2019-43
1570986649232.14
[]
docs.perl6.wakelift.de
This page provides a tutorial on using XGen Archive Primitives with V-Ray for Maya. Overview These tutorials explain how to use instanced, animated objects (archives) with XGen and V-Ray for Maya. Before starting these tutorials, it is recommended that you review the XGen General page. Part I: Export an Archive - To export geometry with materials as an XGen archive, the scene has to be saved on disk. Select the geometry that you want to export. Ensure that the name has no colons. For example, a mesh named Torso:torso1Shape will not export properly as an archive using XGen. Deleting the colon will fix the problem. Transforms should also not contain any colons. Ensure that the AbcExport maya plugin is loaded before exporting archives - XGen uses it. Choose Export Selection as Archive(s) from the XGen menu. For newer versions of Maya, this option can be found under the Generate menu. XGen menu Generate menu In the dialog that opens, give the archive a name and export it To download the scene created with these steps, please click the button below. When an XGen archive export fails (no .xarc file is generated), find the <archive_name>.log file in the output directory of the export and check for errors. Part II: Create a Simple Scene In this step, we will create a simple scene for the XGen Archive primitives. Set Up a Plane Create a plane. Assign a description with archives randomly across the surface. - Set the density to 0.5 Set up the Archive Primitives Set the size of the archive to 50 and add the cookie_monster archive A dialog will appear that asks you if the materials for the archives should be imported into the scene. Choose Yes. - Save the scene after the materials have been imported. Give the twist attribute the following expression: rand(-180, 180) Set Up the Render You should see something like this in the viewport: - Add a VRay Dome Light to the scene and render. The result should be like this: Part III: Adding Multiple Archives. Part IV: Per Patch/Description Materials A material can be assigned to a patch/description and all archive instances can be forced to use that material - Select the description from the Outliner and right click in the viewport > Assign New Material. To make everything yellow we can assign a VRayMtl to the description and tick the Use Per Patch/Description Material For Archives option from the VRay settings in the Preview/Output tab: There may not be a change in the viewport, but when rendered it should look like this: - Projected textures can be used too! Here is an example with a projected file texture instead of the yellow diffuse color: Part V: Shading Archive Instances Uniquely Archive instances can be shaded uniquely through the use of Custom Shader Parameters Unlike XGen Hair there are no specified/hardcoded names for such parameters - colors and floats with any name can be used with VRay User Color and VRay User Scalar and can be plugged to anything. Here we will get rid of the tori archive and make each cookie monster body with a unique random color Add a V-Ray User Color Map - Disable the Use Per Patch/Description Material For Archives option from the VRay settings in the Preview/Output tab - Since the materials for the archive have been imported we can select the cookie_monster:body_blue material and attach a VRay User Color to the Diffuse Color. Give User attribute name "body_color" as a value (or whatever name you like - It should just match the name of the parameter we will add later). 
Add a Custom Shader Parameter - Go to the Preview/Output tab of the XGen window and scroll down to the Custom Shader Parameters section. Add body_color as type color like this: Randomize the Color with an Expression Go to the expression editor for the new attribute Set the following expression [rand(0,1), rand(0,1),rand(0,1)] And when rendered it should look like this: You can also set this expression to the Primitive Color attribute of the Preview Settings in the Preview/Output tab to get some sort of visual feedback in the viewport Add a V-Ray User Scalar Map - To render each cookie with a random amount of reflectivity we can create a VRay User Scalar and plug it to the Amount of the reflection of cookie_monster:BrownMtl - Set the Reflection Color to white so that the Amount dictates the reflectivity - Set the User attribute name of the VRay User Scalar to something like "reflectivity" Randomize the Reflectivity Add a custom shader parameter of type float like this Set the expression of the new parameter to the following: rand(0,1) And when rendered it should look like this (with every cookie having different reflectivity): Additional Options - A texture map could be also used to drive the Custom Shader Parameters like explained in the XGen General page Here is another render with randomized refraction for the cookies instead of reflectivity Part VI: Animated Archives XGen can export an archive with animation within it from the export menu as shown below: As an example here is the cookie monster with its eyes animated for 26 frames: The whole archive instances can be animated uniquely as well - for more information refer to the XGen Animation and Motion Blur page. The animation in the archive does not contain samples for non-integer times! If rendering with motion blur and more than 2 geometry samples for the scene the animated archives will have only 2 geometry samples! Part VII: Frame Attribute Per Archive To make each archive instance be in a different frame of it's inner animation you can use XGen Frame attribute from the Primitives tab of the XGen window - you can put an expression like rand(0,10) or $frame. You will also need to enable the Use XGen Frame Attribute Per Archive option from the Preview/Output tab of the XGen window. Notes - Everything that can be done for XGen hair (animation, motion blur, batch mode, scene modifications, XGen attributes, IPR) can also be applied to XGen archives - If an archive is exported with a material from the "initialShadingGroup", then it will not be imported by xgen into the scene when importing the archive.
https://docs.chaosgroup.com/pages/viewpage.action?pageId=39816453
2019-10-14T05:26:02
CC-MAIN-2019-43
1570986649232.14
[]
docs.chaosgroup.com
export JAVA_HOME=/usr/java/jdk1.6.0_25 export PATH=${JAVA_HOME}/bin:${PATH} The file should now look like this: Save the file. - To verify that the JAVA_HOME variable is set correctly, run echo $JAVA_HOME and confirm that it prints the JDK installation path.
https://docs.wso2.com/display/DSS322/Installing+on+Solaris
2019-10-14T05:40:04
CC-MAIN-2019-43
1570986649232.14
[]
docs.wso2.com
Tutorial: Monitor network communication between two virtual machines using the Azure portal Successful communication between a virtual machine (VM) and an endpoint such as another VM, can be critical for your organization. Sometimes, configuration changes are introduced which can break communication. In this tutorial, you learn how to: - Create two VMs - Monitor communication between VMs with the connection monitor capability of Network Watcher - Generate alerts on Connection Monitor metrics - Diagnose a communication problem between two VMs, and learn how you can resolve it If you don't have an Azure subscription, create a free account before you begin. Create VMs Create two VMs. Create the first VM Select + Create a resource found on the upper, left corner of the Azure portal. Select Compute, and then select an operating system. In this tutorial, Windows Server 2016 Datacenter is used. Enter, or select, the following information, accept the defaults for the remaining settings, and then select OK: Select a size for the VM and then select Select. Under Settings, select Extensions. Select Add extension, and select Network Watcher Agent for Windows, as shown in the following picture: Under Network Watcher Agent for Windows, select Create, under Install extension select OK, and then under Extensions, select OK. Accept the defaults for the remaining Settings and select OK. Under Create of the Summary, select Create to start VM deployment. Create the second VM Complete the steps in Create the first VM again, with the following changes: The VM takes a few minutes to deploy. Wait for the VM to finish deploying before continuing with the remaining steps. Create a connection monitor Create a connection monitor to monitor communication over TCP port 22 from myVm1 to myVm2. On the left side of the portal, select All services. Start typing network watcher in the Filter box. When Network Watcher appears in the search results, select it. Under MONITORING, select Connection monitor. Select + Add. Enter or select the information for the connection you want to monitor, and then select Add. In the example shown in the following picture, the connection monitored is from the myVm1 VM to the myVm2 VM over port 22: View a connection monitor Complete steps 1-3 in Create a connection monitor to view connection monitoring. You see a list of existing connection monitors, as shown in the following picture: Select the monitor with the name myVm1-myVm2(22), as shown in the previous picture, to see details for the monitor, as shown in the following picture: Note the following information: Generate alerts Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. A generated alert can automatically run one or more actions, such as to notify someone or start another process. When setting an alert rule, the resource that you target determines the list of available metrics that you can use to generate alerts. In Azure portal, select the Monitor service, and then select Alerts > New alert rule. Click Select target, and then select the resources that you want to target. Select the Subscription, and set Resource type to filter down to the Connection Monitor that you want to use. Once you have selected a resource to target, select Add criteria.The Network Watcher has metrics on which you can create alerts. 
Set Available signals to the metrics ProbesFailedPercent and AverageRoundtripMs: Fill out the alert details like alert rule name, description and severity. You can also add an action group to the alert to automate and customize the alert response. View a problem By default, Azure allows communication over all ports between VMs in the same virtual network. Over time, you, or someone in your organization, might override Azure's default rules, inadvertently causing a communication failure. Complete the following steps to create a communication problem and then view the connection monitor again: In the search box at the top of the portal, enter myResourceGroup. When the myResourceGroup resource group appears in the search results, select it. Select the myVm2-nsg network security group. Select Inbound security rules, and then select Add, as shown in the following picture: The default rule that allows communication between all VMs in a virtual network is the rule named AllowVnetInBound. Create a rule with a higher priority (lower number) than the AllowVnetInBound rule that denies inbound communication over port 22. Select, or enter, the following information, accept the remaining defaults, and then select Add: Since connection monitor probes at 60-second intervals, wait a few minutes and then on the left side of the portal, select Network Watcher, then Connection monitor, and then select the myVm1-myVm2(22) monitor again. The results are different now, as shown in the following picture: You can see that there's a red exclamation icon in the status column for the myvm2529 network interface. To learn why the status has changed, select 10.0.0.5, in the previous picture. Connection monitor informs you that the reason for the communication failure is: Traffic blocked due to the following network security group rule: UserRule_DenySshInbound. If you didn't know that someone had implemented the security rule you created in step 4, you'd learn from connection monitor that the rule is causing the communication problem. You could then change, override, or remove the rule, to restore communication between the VMs. Clean up resources When no longer needed, delete the resource group and all of the resources it contains: - Enter myResourceGroup in the Search box at the top of the portal. When you see myResourceGroup in the search results, select it. - Select Delete resource group. - Enter myResourceGroup for TYPE THE RESOURCE GROUP NAME: and select Delete. Next steps In this tutorial, you learned how to monitor a connection between two VMs. You learned that a network security group rule prevented communication to a VM. To learn about all of the different responses connection monitor can return, see response types. You can also monitor a connection between a VM, a fully qualified domain name, a uniform resource identifier, or an IP address. At some point, you may find that resources in a virtual network are unable to communicate with resources in other networks connected by an Azure virtual network gateway. Advance to the next tutorial to learn how to diagnose a problem with a virtual network gateway. Feedback
https://docs.microsoft.com/en-us/azure/network-watcher/connection-monitor
2019-10-14T06:52:12
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Analytics and User Feedback Adding Analytics The Docsy theme contains built-in support for Google Analytics via Hugo’s internal template, which is included in the theme. Once you set Analytics up as described below, usage information for your site (such as page views) is sent to your Google Analytics account. Setup - Ensure you have set up a Google Analytics property for your site: this gives you an Analytics ID to add to your config, which Docsy in turn adds to all your site’s pages. - Open config.toml. Enable Google Analytics by setting the Tracking ID property to your site’s Analytics ID. [services.googleAnalytics] id = "UA-00000000-0" Save and close config.toml. Ensure that your site is built with HUGO_ENV="production", as Docsy only adds Analytics tracking to production-ready sites.. User Feedback By default Docsy puts a “was this page helpful?” feedback widget at the bottom of every documentation page, as shown in Figure 1. ![ The user is presented with the text 'Was this page helpful?' followed by 'Yes' and 'No' buttons.](/images/feedback.png) After clicking Yes the user should see a response like Figure 2. You can configure the response text in config.toml. ![ After clicking 'Yes' the widget responds with 'Glad to hear it! Please tell us how we can improve.' and the second sentence is a link which, when clicked, opens GitHub and lets the user create an issue on the documentation repository.](/images/yes.png) How is this data useful? When you have a lot of documentation, and not enough time to update it all, you can use the “was this page helpful?” feedback data to help you decide which pages to prioritize. In general, start with the pages with a lot of pageviews and low ratings. “Low ratings” in this context means the pages where users are clicking No — the page wasn’t helpful — more often than Yes — the page was helpful. You can also study your highly-rated pages to develop hypotheses around why your users find them helpful. In general, you can develop more certainty around what patterns your users find helpful or unhelpful if you introduce isolated changes in your documentation whenever possible. For example, suppose that you find a tutorial that no longer matches the product. You update the instructions, check back in a month, and the score has improved. You now have a correlation between up-to-date instructions and higher ratings. Or, suppose you study your highly-rated pages and discover that they all start with code samples. You find 10 other pages with their code samples at the bottom, move the samples to the top, and discover that each page’s score has improved. Since this was the only change you introduced on each page, it’s more reasonable to believe that your users find code samples at the top of pages helpful. The scientific method, applied to technical writing, in other words! Setup - Open config.toml. - Ensure that Google Analytics is enabled, as described above. Set the response text that users see after clicking Yes or No. [params.ui.feedback] enable = true yes = 'Glad to hear it! Please <a href="">tell us how we can improve</a>.' no = 'Sorry to hear that. Please <a href="">tell us how we can improve</a>.' Save and close config.toml. Access the feedback data This section assumes basic familiarity with Google Analytics. For example, you should know how to check pageviews over a certain time range and navigate between accounts if you have access to multiple documentation sites. - Open Google Analytics. - Open Behavior > Events > Overview. 
- In the Event Category table click the Helpful row. Click view full report if you don’t see the Helpful row. - Click Event Label. You now have a page-by-page breakdown of ratings. Here’s what the 4 columns represent: - Total Events is the total number of times that users clicked either Yes or No. - Unique Events provides a rough indication of how frequently users are rating your pages per session. For example, suppose your Total Events is 5000, and Unique Events is 2500. This means that you have 2500 users who are rating 2 pages per session. - Event Value isn’t that useful. - Avg. Value is the aggregated rating for that page. The value is always between 0 and 1. When users click No a value of 0 is sent to Google Analytics. When users click Yes a value of 1 is sent. You can think of it as a percentage. If a page has an Avg. Value of 0.67, it means that 67% of users clicked Yes and 33% clicked No. The underlying Google Analytics infrastructure that stores the “was this page helpful?” data is called Events. See docsy pull request #1 to see exactly what happens when a user clicks Yes or No. It’s just a click event listener that fires the Google Analytics JavaScript function for logging an Event, disables the Yes and No buttons, and shows the response text. Disable feedback on a single page Add hide_feedback: true to the page’s front matter. Disable feedback on all pages Set params.ui.feedback.enable to false in config.toml: [params.ui.feedback] enable = false Feedback Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve.
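Because Avg. Value is simply the mean of the 0 (No) and 1 (Yes) values sent for each click, a tiny sketch makes the percentage reading explicit; the click data below is invented for illustration.

```python
# Each recorded event value: 1 for a "Yes" click, 0 for a "No" click.
clicks = [1, 1, 0, 1, 0, 1]

avg_value = sum(clicks) / len(clicks)
print(f"Avg. Value: {avg_value:.2f} ({avg_value:.0%} of users clicked Yes)")
```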
https://www.docsy.dev/docs/adding-content/feedback/
2019-10-14T07:06:33
CC-MAIN-2019-43
1570986649232.14
[]
www.docsy.dev
2017 Senate Bill 143 - S - Transportation and Veterans Affairs
http://docs-preview.legis.wisconsin.gov/2017/proposals/ab210
2019-10-14T05:24:06
CC-MAIN-2019-43
1570986649232.14
[]
docs-preview.legis.wisconsin.gov
@Request.SortOrder getSortOrder() The sort order to use, either ‘asc’ or ‘desc’. public String getSortBy() The field to sort by. If not specified, the default is timeCreated. The default sort order for timeCreated is DESC. The default sort order for displayName is ASC in alphanumeric order. public String getLifecycleState() The current state of the resource to filter by. public String getDisplayName() A user-friendly name. Does not have to be unique, and it’s changeable. Example: My new resource public String getOpcRequestId() The client request ID for tracing.
https://docs.cloud.oracle.com/iaas/tools/java/latest/com/oracle/bmc/budget/requests/ListAlertRulesRequest.html
2019-10-14T06:47:27
CC-MAIN-2019-43
1570986649232.14
[]
docs.cloud.oracle.com
UIElement.PreviewGiveFeedback Event Definition Occurs when a drag-and-drop operation is started. public: event System::Windows::GiveFeedbackEventHandler ^ PreviewGiveFeedback; public event System.Windows.GiveFeedbackEventHandler PreviewGiveFeedback; member this.PreviewGiveFeedback : System.Windows.GiveFeedbackEventHandler Public Custom Event PreviewGiveFeedback As GiveFeedbackEventHandler Remarks Routed Event Information The corresponding bubbling event is GiveFeedback. Override OnPreviewGiveFeedback to implement class handling for this event in derived classes.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.uielement.previewgivefeedback?redirectedfrom=MSDN&view=netframework-4.8
2019-10-14T06:08:42
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
IPsec Offload Version 1 [The IPsec Task Offload feature is deprecated and should not be used.] This section describes the version 1 interface for offloading Internet protocol security (IPsec) tasks in NDIS 6.0 and later. IPsec Offload Version 2 is also supported in NDIS 6.1 and later. This section includes the following topics: Offloading the Processing of ESP-Protected and AH-Protected Packets Offloading the Processing of UDP-Encapsulated ESP Packets Feedback
https://docs.microsoft.com/en-us/windows-hardware/drivers/network/ipsec-offload-version-1
2019-10-14T06:36:48
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Version: 6.x-21 This is an older version of Search Guard. Switch to Latest version This is an older version of Search Guard. Switch to Latest version Search Guard Versions All Search Guard releases, past and present, are available on Maven Central. You can either download the releases from there (offline install) or use the Elasticsearch plugin install command to install Search Guard directly (online install). This page lists all available versions for Elasticsearch 6.x. For other releases please refer to: Please also refer to Search Guard End of Life policy to make sure that you are not running an outdated Search Guard or Elasticsearch version. Search Guard 6 If you are upgrading from Elasticsearch 5.6.x to Elasticsearch >= 6.0.0, please read the upgrade instructions to Elasticsearch to 6.x.
https://docs.search-guard.com/6.x-21/search-guard-versions
2019-10-14T06:45:16
CC-MAIN-2019-43
1570986649232.14
[]
docs.search-guard.com
The Turbo 360 platform is the fastest, easiest environment for creating and deploying full stack Node/Express sites. Our collection of tools and code templates abstract away repetitive tasks and infrastructure concerns (such as scaling and SSL integration) so you can focus on creating great projects for your job, clients and for yourself. Make sure your machine runs an up-to-date version of Node JS. Then install the necessary libraries globally to run the Turbo build commands (NOTE - "sudo" is not necessary on Windows environments). To scaffold a new project, open your terminal and type the following: $ turbo new <PROJECT_NAME> Next, change directory into the project and install dependencies: $ cd <PROJECT_NAME> $ npm install This scaffolds a base Node/Express application with configuration settings for the Turbo hosting environment. The base project is "opinionated" in that a handful of options are pre-set such as: These settings can be changed and the initial scaffold serves as a "starting" point for a project right out of the box. Run the devserver, then navigate to the local development URL, which should show the following: $ turbo devserver To turn off the devserver: CONTROL + C To connect your local source code to the Turbo 360 staging environment, create a project in your Turbo dashboard by filling out the area shown below and click "CREATE SITE": Back in the terminal, login to your Turbo account using the CLI: $ turbo login Then from the root directory of your project, connect your local source code to the project on Turbo: $ turbo connect This will prompt you to enter the SITE ID and API KEY for your project. Head back to the Turbo 360 dashboard and on your site admin console, find those two values in the area shown below: Finally, deploy your site: $ turbo deploy When the deployment is complete, a staging link will be provided in your terminal. Copy & paste that link in your browser and you should see the same home page from earlier but now on a live staging server! A Turbo 360 project can also be "extended" with configurations that add more functionality "out-of-the-box." For example, you can add a React with Redux integration to your Turbo site with the following command (from the root directory): $ turbo extend react This adds a React with Redux base project onto your app including the required packages in the package.json file. It also installs the new packages and runs the build script in order to ensure the source compiles properly. The React and Redux source code is located under the /src directory in the root level of the project. A webpack config file is also provided in the root level and the index.mustache file under the /views directory is where the compiled React source is mounted. Turbo 360 projects support the following extensions to speed up your development flow: $ turbo extend gulp Adds a gulpfile.js with basic configuration for CSS and JS concatenation and minification. $ turbo extend react Adds a React with Redux codebase under the "src" directory and updates package.json with corresponding dependencies. Also includes a webpack.config.js file with basic loaders and scripts installed. $ turbo extend graphql Adds a GraphQL endpoint on the server with a corresponding route. The endpoint is NOT connected and has to be manually imported in the "app.js" file in order to handle requests.
* if you have a request for an extension, please email Dan at [email protected] In addition to scaffolding projects from scratch with the Turbo CLI, you can clone projects by directly downloading source code, then connecting the project to an environment on your account and redeploying. To begin, navigate to the Gallery page and select a project to download. Click the "Download Source" button as shown in the image below: This will download a ZIP file called "package.zip" which contains the source of the selected project as constituted on the current deployment. Unzip the file and remove the "node_modules" folder. Then open your terminal, change directory into the root of the project, and reinstall the dependencies: $ npm install When the dependencies are installed, run the dev server and navigate to the local development URL: $ turbo devserver This should show the same site now running locally on your machine. In order to deploy the project onto your Turbo account, the environment from the original site needs to be cloned to your account. To do so, navigate to your account page on Turbo and select the "Downloaded" option on the left-side menu bar: Select the site that the project was originally downloaded from and click the "Clone Environment" button: This will prompt you to enter a name for the environment. Enter a name then click the "CLONE ENVIRONMENT" button. This will create a new site with the same configuration as the original site that your project was downloaded from. The deployment process is the same for any other Turbo 360 site. First, we connect our local source code to the staging environment by navigating to the root directory of the project and entering the following command: $ turbo connect Then follow the instructions for deployment from HERE. Vertex 360 is a complementary platform in which Turbo projects are augmented with visual editing tools as well as a robust CMS system. Several templates on Turbo 360 are formatted for the Vertex management console, which are divided into the following areas: Vertex templates are indicated by the green stripe shown in the screenshot below. By cloning a Vertex-compatible template and redeploying it on your environment, you gain access to the editing tools and features above. If you have any questions, comments or feedback, feel free to contact us at [email protected]
https://docs.turbo360.co/?selected=vertex
2019-10-14T07:02:50
CC-MAIN-2019-43
1570986649232.14
[]
docs.turbo360.co
Latest Release - Zextras Suite 3.0.2 Release Date: October 8th, 2019 Changelog. Powerstore The doCheckBlobs operation now includes Drive NG
https://docs.zextras.com/zextras-suite-documentation/latest/home.html
2019-10-14T06:53:31
CC-MAIN-2019-43
1570986649232.14
[]
docs.zextras.com
Events You can view events and also download them from the Event Viewer page. To access the Event Viewer page: In the SD-WAN Center web interface click the Fault tab. The Event Viewer page appears by default. You can select and view events of a particular time frame by using the timeline controls. For more information, see How to use timeline controls. You can also create, save and open event views. For more information, see How to manage views. You can create custom filters for narrowing the Events table results. Using Filters. To download the events table as a CSV file: Click the Download icon at the upper right corner of the events table. You can configure SD-WAN Center to send external event notifications for different event types as email, SNMP traps or syslog messages. For more information, see How to Configure Event Notifications. For more information on event statistics, see How to View Event Statistics.
https://docs.citrix.com/en-us/netscaler-sd-wan-center/9-3/events.html
2019-10-14T07:21:33
CC-MAIN-2019-43
1570986649232.14
[array(['/en-us/netscaler-sd-wan-center/9-3/media/Events.png', 'localized image'], dtype=object) ]
docs.citrix.com
The Web Dashboard control allows end-users to export an entire dashboard or individual dashboard items. You can export the dashboard/dashboard items to PDF and Image formats; additionally, you can export dashboard item's data to Excel/CSV. To learn more about basic exporting capabilities, see Printing and Exporting. To export the entire dashboard, click the button in the dashboard title and choose the required action. You can export only dashboard items when the Web Dashboard displays dashboards on mobile phones. Export to PDF - Invokes a corresponding dialog that allows end-users to export a dashboard to a PDF file with specific options. The following options are available: Export to Image - Invokes a corresponding dialog that allows end-users to export a dashboard to image of the specified format. The following options are available: Export to Excel - Invokes a corresponding dialog that allows end-users to export dashboard's data to the Excel file. The following options are available: Specify the required options in the dialog and click the Export button to export the dashboard. To reset the changes to the default values, click the Reset button. To export a dashboard item, click the button in the dashboard item caption and choose the required action. To learn more about exporting specifics of different dashboard items, see the Exporting topic for the required dashboard item. The following topics contain a list of API used to customize default export options and export dashboard / dashboard items: See Member Table: Printing and Exporting for a detailed table related to export. Export to image is disabled for .NET Core because of some problems in the Libgdiplus library. See T685212 and T685811 for more information.
https://docs.devexpress.com/Dashboard/116694/create-dashboards/create-dashboards-on-the-web/exporting
2019-10-14T05:56:04
CC-MAIN-2019-43
1570986649232.14
[]
docs.devexpress.com
IAR EW integration is provided as follows: The following components are provided to facilitate testing IAR Embedded Workbench projects. void f(int __data16 *) and void f(int __data24 *) functions may be displayed as coverage for two different functions with the same void f(int *) signature. Add the -e compiler option to the C++test compiler options for the project (enter -e in the Compiler options field and click Apply). If the gui.properties file contains a line that begins with cppCompilationModeOption=, then you must also change the --eec++ compiler option to the required option. This line may have the following appearance: cppCompilationModeOption=--eec++ In the cpp.psrc file: Locate the lines that begin with edgtk.preprocessorCommand and edgtk.gccAutoconfiguratorCommand. Warnings about keywords (such as inline or function_effects) used inside compiler header files may be printed to the C++test console. In most cases, these warnings indicate that the optimization level of tested code is different than the optimization level of the same code in the original project and can be ignored.
https://docs.parasoft.com/plugins/viewsource/viewpagesrc.action?pageId=6387225
2019-10-14T06:57:19
CC-MAIN-2019-43
1570986649232.14
[]
docs.parasoft.com
source code Command-line utility for querying ROS services, along with library calls for similar functionality. The main benefit of the rosservice Python library over the rospy ServiceProxy library is that rosservice supports type-introspection on ROS Services. This allows for both introspecting information about services, as well as using this introspection to dynamically call services.
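A brief sketch of that type-introspection is shown below. It assumes a running ROS master, uses '/add_two_ints' purely as an example service name, and relies on the introspection helpers exposed by the rosservice module in ROS 1 distributions such as Indigo.

```python
import rosservice

# List every advertised service together with its type.
for name in rosservice.get_service_list():
    print(name, rosservice.get_service_type(name))

# Resolve a service class dynamically instead of importing it explicitly,
# then inspect the request fields it expects.
srv_class = rosservice.get_service_class_by_name("/add_two_ints")
print(srv_class._request_class.__slots__)
```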
http://docs.ros.org/indigo/api/rosservice/html/
2019-10-14T06:08:17
CC-MAIN-2019-43
1570986649232.14
[]
docs.ros.org
Tutorial: Using Amazon EFS File Systems with Amazon ECS Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files. Your applications can have the storage they need, when they need it. You can use Amazon EFS file systems with Amazon ECS to export file system data across your fleet of container instances. That way, your tasks have access to the same persistent storage, no matter the instance on which they land. However, you must configure your container instance AMI to mount the Amazon EFS file system before the Docker daemon starts. Also, your task definitions must reference volume mounts on the container instance to use the file system. The following sections help you get started using Amazon EFS with Amazon ECS. Note Amazon EFS is not available in all regions. For more information about which regions support Amazon EFS, see Amazon Elastic File System in the AWS Regions and Endpoints section of the AWS General Reference. Step 1: Gather Cluster Information Before you can create all of the required resources to use Amazon EFS with your Amazon ECS cluster, gather some basic information about the cluster, such as the VPC it is hosted inside of, and the security group that it uses. To gather the VPC and security group IDs for a cluster Open the Amazon EC2 console at. Select one of the container instances from your cluster and view the Description tab of the instance details. If you created your cluster with the Amazon ECS first-run or cluster creation wizards, the cluster name should be part of the EC2 instance name. For example, a cluster named default has this EC2 instance name: ECS Instance - EC2ContainerService-default. Record the VPC ID value for your container instance. Later, you create a security group and an Amazon EFS file system in this VPC. Open the security group to view its details. Record the Group ID. Later, you allow inbound traffic from this security group to your Amazon EFS file system. Step 2: Create a Security Group for an Amazon EFS File System In this section, you create a security group for your Amazon EFS file system that allows inbound access from your container instances. Choose Inbound, Add rule. For Type, choose NFS. For Source, choose Custom and then enter the security group ID that you identified earlier for your cluster. Choose Create. Step 3: Create an Amazon EFS File System Before you can use Amazon EFS with your container instances, you must create an Amazon EFS file system. To create an Amazon EFS file system for Amazon ECS container instances Open the Amazon Elastic File System console at. Note Amazon EFS is not available in all regions. For more information about which regions support Amazon EFS, see Amazon Elastic File System in the AWS Regions and Endpoints section of the AWS General Reference. Choose Create file system. On the Configure file system access page, choose the VPC that your container instances are hosted in. By default, each subnet in the specified VPC receives a mount target that uses the default security group for that VPC. Note Your Amazon EFS file system and your container instances must be in the same VPC. Under Create mount targets, for Security groups, add the security group that you created in the previous section. Choose Next step. Choose a throughput mode for your file system. Note Bursting is the default, and it is recommended for most file systems. Choose a performance mode for your file system.
Note General Purpose is the default, and it is recommended for most file systems. (Optional) Enable encryption. Select the check box to enable encryption of your Amazon EFS file system at rest. Review your file system options and choose Create File System. Step 4: Configure Container Instances After you've created your Amazon EFS file system in the same VPC as your container instances, you must configure the container instances to access and use the file system. Configure a running container instance to use an Amazon EFS file system Log in to the container instance via SSH. For more information, see Connect to Your Container Instance. Create a mount point for your Amazon EFS file system. For example, /mnt/efs. sudo mkdir /mnt/efs Install the amazon-efs-utils client software on your container instance. For Amazon Linux: sudo yum install -y amazon-efs-utils For other Linux distributions, see Installing the amazon-efs-utils Package on Other Linux Distributions in the Amazon Elastic File System User Guide. Make a backup of the /etc/fstab file. sudo cp /etc/fstab /etc/fstab.bak Update the /etc/fstab file to automatically mount the file system at boot. echo ' fs-12345678:/ /mnt/efs efs defaults,_netdev 0 0' | sudo tee -a /etc/fstab Reload the file system table to verify that your mounts are working properly. sudo mount -a Note If you receive an error while running the above command, examine your /etc/fstab file for problems. If necessary, restore it with the backup that you created earlier. Validate that the file system is mounted correctly with the following command. You should see a file system entry that matches your Amazon EFS file system. If not, see Troubleshooting Amazon EFS in the Amazon Elastic File System User Guide. mount | grep efs Bootstrap an instance to use Amazon EFS with user data You can use an Amazon EC2 user data script to bootstrap an Amazon ECS–optimized AMI at boot. For more information, see Bootstrapping Container Instances with Amazon EC2 User Data. Follow the container instance launch instructions at Launching an Amazon ECS Container Instance. On Step 6.g, pass the following user data to configure your instance. If you are not using the default cluster, be sure to replace the ECS_CLUSTER= line in the configuration file to specify your own cluster name. Content-Type: multipart/mixed; boundary="==BOUNDARY==" MIME-Version: 1.0 --==BOUNDARY== Content-Type: text/cloud-boothook; charset="us-ascii" # Install amazon-efs-utils cloud-init-per once yum_update yum update -y cloud-init-per once install_amazon-efs-utils yum install -y amazon-efs-utils # Create /efs folder cloud-init-per once mkdir_efs mkdir /efs # Mount /efs cloud-init-per once mount_efs echo -e ' fs-12345678:/ /efs efs defaults,_netdev 0 0' >> /etc/fstab mount -a --==BOUNDARY== Content-Type: text/x-shellscript; charset="us-ascii" #!/bin/bash # Set any ECS agent configuration options echo "ECS_CLUSTER=default" >> /etc/ecs/ecs.config --==BOUNDARY==-- Step 5: Create a Task Definition to Use the Amazon EFS File System Because the file system is mounted on the host container instance, you must create a volume mount in your Amazon ECS task definition that allows your containers to access the file system. For more information, see Using Data Volumes in Tasks. The following task definition creates a data volume called efs-html at /efs/html on the host container instance Amazon EFS file system. The nginx container mounts the host data volume at the NGINX root, /usr/share/nginx/html.
{ "containerDefinitions": [ { "memory": 128, "portMappings": [ { "hostPort": 80, "containerPort": 80, "protocol": "tcp" } ], "essential": true, "mountPoints": [ { "containerPath": "/usr/share/nginx/html", "sourceVolume": "efs-html" } ], "name": "nginx", "image": "nginx" } ], "volumes": [ { "host": { "sourcePath": "/efs/html" }, "name": "efs-html" } ], "family": "nginx-efs" } You can save this task definition to a file called nginx-efs.json and register it to use in your own clusters with the following AWS CLI command. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. aws ecs register-task-definition --cli-input-json file://nginx-efs.json Step 6: Add Content to the Amazon EFS File System For the NGINX example task, you created a directory at /efs/html on the container instance to host the web content. Before the NGINX containers can serve any web content, you must add the content to the file system. In this section, you log in to a container instance and add an index.html file. To add content to the file system Connect using SSH to one of your container instances that is using the Amazon EFS file system. For more information, see Connect to Your Container Instance. Write a simple HTML file by copying and pasting the following block of text into a terminal. sudo bash -c "cat >/efs/html/index.html" <<'EOF' <html> <body> <h1>It Works!</h1> <p>You are using an Amazon EFS file system for persistent container storage.</p> </body> </html> EOF Step 7: Run a Task and View the Results Now that your Amazon EFS file system is available on your container instances and there is web content for the NGINX containers to serve, you can run a task using the task definition that you created earlier. The NGINX web servers serve the content that you added to the Amazon EFS file system. Choose Tasks, Run new task. For Task Definition, choose the nginx-efs task definition that you created earlier and choose Run Task. For more information on the other options in the run task workflow, see Running Tasks. Below the Tasks tab, choose the task that you just ran. Expand the container name at the bottom of the page, and choose the IP address that is associated with the container. Your browser should open a new tab with the following message: Note If you do not see the message, make sure that the security group for your container instances allows inbound network traffic on port 80.
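If you script deployments with the AWS SDK for Python rather than the AWS CLI, the same registration can be sketched with boto3 as shown below; the region name is a placeholder assumption and the definition simply mirrors the JSON above.

```python
import boto3

ecs = boto3.client("ecs", region_name="cn-north-1")  # placeholder region

response = ecs.register_task_definition(
    family="nginx-efs",
    containerDefinitions=[
        {
            "name": "nginx",
            "image": "nginx",
            "memory": 128,
            "essential": True,
            "portMappings": [
                {"hostPort": 80, "containerPort": 80, "protocol": "tcp"}
            ],
            "mountPoints": [
                {"containerPath": "/usr/share/nginx/html", "sourceVolume": "efs-html"}
            ],
        }
    ],
    volumes=[{"name": "efs-html", "host": {"sourcePath": "/efs/html"}}],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```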
https://docs.amazonaws.cn/AmazonECS/latest/developerguide/using_efs.html
2019-10-14T06:15:07
CC-MAIN-2019-43
1570986649232.14
[]
docs.amazonaws.cn
Overview

Available Materials: Lecture, Demonstration, Activity, Additional resources

Goal – In this lesson you will learn about the different render engines in V-Ray and their pros and cons. This will allow you to properly select the render engine you need based on the task at hand.

Objective – Cover the process of setting up and rendering animations with V-Ray

Outcome – You will be able to efficiently render your animated scene

To set up the lesson follow the links below and download all available materials.

Lesson plan download
Presentation (Lecture) download
Demonstration tutorial 1 download
Demonstration tutorial 2 download
Scenes & Assets download

a) The V-Ray rendering system
V-Ray Frame Buffer
Irradiance Map
Rendering Animation with V-Ray RT
Time to see it work!
Time to do it yourself!
https://docs.chaosgroup.com/exportword?pageId=40863163
2019-10-14T05:24:52
CC-MAIN-2019-43
1570986649232.14
[]
docs.chaosgroup.com
Configuring Word Automation Services for Development Applies to: SharePoint Server 2010 This topic describes how to configure Word Automation Services for development. Note If you configured the server as a stand-alone installation, or if you configured Word Automation Services as part of the SharePoint Server 2010 Farm Configuration Wizard, then you do not need to manually configure Word Automation Services; however, you still might want to complete the steps in the section, "Additional Developer-Specific Configuration" later in this topic. Setting Up Word Automation Services Once you complete the initial configuration for SharePoint Server 2010, you can configure Word Automation Services by using: SharePoint 2010 Central Administration. Windows PowerShell. Using SharePoint 2010 Central Administration You can configure Word Automation Services by using SharePoint Server 2010 Central Administration. To configure Word Automation Services by using SharePoint 2010 Central Administration On the SharePoint Server 2010 Central Administration page, under Application Management, click Manage Service Applications. On the Service Applications tab, click New, and then click Word Automation Services Application. In the Create New Word Automation Services Application dialog box, complete the following: Name. Type a unique name for this instance of Word Automation Services application. Word Automation Services uses this name to locate this service application instance when creating new conversion jobs in the object model. Application Pool. Select the application pool for this instance of Word Automation Services application. Run In Partitioned Mode. Use this setting to specify whether this instance of the Word Automation Services application is being run in an environment with multiple partitions. Typical configurations do not include this setting. Note If you are configuring Word Automation Services in order to complete the walkthrough presented in this documentation, you do not need to configure this setting. Add to Default Proxy List. Use this setting to specify whether you want to add the application proxy for this instance of the Word Automation Services application to the default proxy group. Database. Type the name of the database that you want to use to store the document queue for this instance, and type the name of the server where that database is installed. Note In most cases, we recommend that you specify the default database server and database name. Click Finish. The new instance of Word Automation Services appears in the list of service applications on the Service Applications tab. Using Windows PowerShell You can configure Word Automation Services by using Windows PowerShell 1.0. To configure Word Automation Services by using Windows PowerShell Open Windows PowerShell on the machine where SharePoint Server 2010 is installed. To do so, click Start, click All Programs, click Windows PowerShell, and then click Windows PowerShell. Note Windows PowerShell may be under Accessories. Add the SharePoint Server 2010 snap-in to the current Windows PowerShell 1.0 session by typing the following command and then pressing Enter. Add-pssnapin Microsoft.SharePoint.PowerShell To run the service application under a new application pool, type the following command and then press Enter. 
New-SPIisWebServiceApplicationPool -Name "Word Conversion Services Application Pool" -Account <<service application account>>

Running the command creates a new application pool called "Word Conversion Services Application Pool".

Type the following command to create an instance of the Word Automation Services application and set the application pool under which it runs.

Get-SPIisWebServiceApplicationPool -Identity <<application pool name>> | New-SPWordConversionServiceApplication -Name "Word Conversion Services"

To automate the previous steps, use the following Windows PowerShell script.

param($appPoolName, $admin)
$serviceName = "Word Conversion Services"
$appPool = $null
# Load the SharePoint cmdlets
Add-pssnapin Microsoft.SharePoint.PowerShell
# Create the application pool only if it does not already exist
$appPool = Get-SPIisWebServiceApplicationPool -Identity $appPoolName
if ($appPool -eq $null) {$appPool = New-SPIisWebServiceApplicationPool -Name $appPoolName -Account $admin}
New-SPWordConversionServiceApplication -Name $serviceName -ApplicationPool $appPool

The script takes two parameters:

The name of the application pool that the service application uses. Required.

The name of the account that the service application should use to run the application pool. Required only if the application pool does not exist.

Additional Developer-Specific Configuration

To configure Word Automation Services for application development and debugging, it can be helpful to reduce the frequency with which Word Automation Services performs conversion jobs.

To set the conversion frequency

On the SharePoint Server 2010 Central Administration page, under Application Management, click Manage Service Applications.

Click the instance of Word Automation Services that you use for development.

On the Service Applications tab, click Manage.

Under Conversion Throughput, in the Frequency with which to start conversions (minutes) setting, type 1.

See Also

Concepts

Word Automation Services Object Model
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ee557330(v=office.14)?redirectedfrom=MSDN
2019-10-14T05:33:43
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Inherits: Node2D < CanvasItem < Node < Object

Category: Core

Copies a region of the screen (or the whole screen) to a buffer so it can be accessed with the texscreen() shader instruction.

The copy mode is selected with the COPY_MODE_* constants (see enum CopyMode); the default copy_mode is COPY_MODE_RECT.

© 2014–2018 Juan Linietsky, Ariel Manzur, Godot Engine contributors. Licensed under the MIT License.
https://docs.w3cub.com/godot~3.0/classes/class_backbuffercopy/
2019-10-14T05:52:19
CC-MAIN-2019-43
1570986649232.14
[]
docs.w3cub.com
Each event generates several pieces of data, some provided by the user and some automatically extracted. Let's examine the user-defined data first:

The log level, a value of type LogLevel; user-defined levels are also possible.

Debug, for verbose information that could be useful when debugging an application or module. These events are disabled by default.

Info, to inform the user about the normal operation of the program.

Warn, when a potential problem is detected.

Error, to report errors where the code has enough context to recover and continue. (When the code doesn't have enough context, an exception or early return is more appropriate.)

AbstractStrings passed as messages are assumed to be in markdown format. Other types will be displayed using show(io, mime, obj) according to the display capabilities of the installed logger.

The system also generates some standard information for each event:

The module in which the logging macro was expanded.

The file and line where the logging macro occurs in the source code.

An id that is unique for each logging macro invocation. This is very useful as a key for caching information or actions associated with an event. For instance, it can be used to limit the number of times a message is presented to the user.

As you can see in the examples, logging statements make no mention of where log events go or how they are processed. This is a key design feature that makes the system composable and natural for concurrent use. It does this by separating two different concerns: creating log events, and processing them later.

When an event occurs, a few steps of early filtering occur to avoid generating messages that will be discarded: the event level is checked against a global minimum level (set via disable_logging), which is a crude but extremely cheap global setting, and then against the current logger's Logging.min_enabled_level. This behavior can be overridden via environment variables (more on this later).

Log events are a side effect of running normal code, but you might find yourself wanting to test particular informational messages and warnings. The Test module provides a @test_logs macro that can be used to pattern match against the log event stream.

Base.CoreLogging.@logmsg (Macro)

Base.CoreLogging.LogLevel (Type)

LogLevel(level)

Severity/verbosity of a log record. The log level provides a key against which potential log records may be filtered, before any other work is done to construct the log record data structure itself.

Event processing is controlled by overriding functions associated with AbstractLogger:

Base.CoreLogging.AbstractLogger (Type)

A logger controls how log records are filtered and dispatched. When a log record is generated, the logger is the first piece of user configurable code which gets to inspect the record and decide what to do with it.

Base.CoreLogging.handle_message (Function)

handle_message(logger, level, message, _module, group, id, file, line; key1=val1, ...)

Log a message to logger at level. The logical location at which the message was generated is given by module _module and group; the source location by file and line. id is an arbitrary unique Symbol to be used as a key to identify the log statement when filtering.

Base.CoreLogging.shouldlog (Function)

shouldlog(logger, level, _module, group, id)

Return true when logger accepts a message at level, generated for _module, group and with unique log identifier id.

Base.CoreLogging.min_enabled_level (Function)

min_enabled_level(logger)

Return the maximum disabled level for logger for early filtering. That is, the log level below or equal to which all messages are filtered.

Base.CoreLogging.catch_exceptions (Function)

Base.CoreLogging.disable_logging (Function)

disable_logging(level)

Disable all log messages at log levels equal to or less than level. This is a global setting, intended to make debug logging extremely cheap when disabled.

Logger installation and inspection:

Base.CoreLogging.global_logger (Function)

global_logger()

Return the global logger, used to receive messages when no specific logger exists for the current task.

global_logger(logger)

Set the global logger to logger, and return the previous global logger.

Base.CoreLogging.with_logger (Function)

with_logger(function, logger)

Execute function, directing all log messages to logger.

Example:

function test(x)
    @info "x = $x"
end

with_logger(logger) do
    test(1)
    test([1,2])
end

Base.CoreLogging.current_logger (Function)

current_logger()

Return the logger for the current task, or the global logger if none is attached to the task.

Loggers that are supplied with the system:

Base.CoreLogging.NullLogger (Type)

NullLogger()

Logger which disables all messages and produces no output - the logger equivalent of /dev/null.

Logging.ConsoleLogger (Type)

ConsoleLogger(stream=stderr, min_level=Info; meta_formatter=default_metafmt, show_limited=true, right_justify=0)

Logger with formatting optimized for readability in a text console, for example interactive work with the Julia REPL.

Base.CoreLogging.SimpleLogger (Type)

SimpleLogger(stream=stderr, min_level=Info)

Simplistic logger for logging all messages with level greater than or equal to min_level to stream.

© 2009–2018 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors. Licensed under the MIT License.
https://docs.w3cub.com/julia~1.0/stdlib/logging/
2019-10-14T05:24:00
CC-MAIN-2019-43
1570986649232.14
[]
docs.w3cub.com
Links

We should always strive for consistency in how we present links.

Guidelines for Navigational Links

- The link should be noticeably different from other content, considering our accessibility guidelines for links ↗ (i.e. don't rely on color alone to identify a link, which is why Rivendell's links use a different font-weight as well as color).
- Use a bold font-weight, Aperçu-Medium to be specific.
- Use the $blue50 color.
- No underline (text-decoration) in a normal state.
- On-hover and focus states should have a simple $blue50 1px border-bottom.

Alternative Styles for Links

Different product scenarios may require a different treatment for a link or action. Consider the following guidelines in these cases:

- Bold font-weight (Aperçu-Medium), using the $black50 color.
- Use a simple $black50 1px border-bottom on-hover for $black50 links.
- Avoid using $yellow50 on a light background as it won't be legible.
- When using smaller font sizes (e.g. Copy 3 or Copy 4), a bold font-weight may draw too much attention to itself. If the intention of the smaller text is to be subtle, maintain a regular font-weight (Aperçu-Light) and use a simple underline (text-decoration), or a $black50 1px border-bottom, on-hover.

$blue50 is overkill when aligned with the WeWork logo. $black50 is more appropriate for navigation patterns. Reserve $blue50 for links in copy.
http://rivendell-docs.netlify.com/links/
2018-02-18T05:14:58
CC-MAIN-2018-09
1518891811655.65
[array(['link.png', 'Link example 1'], dtype=object) array(['link-2.png', 'Link example 2'], dtype=object)]
rivendell-docs.netlify.com
style for this example, which is the simplest possible situation. All subsequent examples will share this characteristic.

Styling of rasters is done via the raster symbolizer (lines 2-7). This example creates a smooth gradient between two colors corresponding to two elevation values. The gradient is created via the color-map on lines 8-12. Each entry in the color-map represents one entry or anchor in the gradient. Line 11 sets the lower value of 70 and its color to a dark green ('#008000'). Line 12 sets the upper value of 256 and its color to a dark brown ('#663333'). Line 9 sets the type to ramp, which means that values between the two anchors are smoothly interpolated.

Brightness and contrast are adjusted via the contrast enhancement parameter on lines 13-15. Line 14 normalizes the output by increasing the contrast to its maximum extent. Line 15 then adjusts the brightness by a factor of 0.5. Since values less than 1 make the output brighter, a value of 0.5 makes the output twice as bright. As with previous examples, lines 8-12 determine the color-map, with line 11 setting the lower bound (70) to be colored dark green ('#008000') and line 12 setting the upper bound (256) to be colored dark brown ('#663333').

Three-color gradient

This example creates a three-color gradient in primary colors.

Details

This example creates a three-color gradient based on a color-map with three entries on lines 8-13: line 11 specifies the lower bound (150) be styled in blue ('#0000FF'), line 12 specifies an intermediate point (200) be styled in yellow ('#FFFF00'), and line 13 specifies the upper bound.

Each entry in a color-map can also have a value for opacity (with the default being 1.0, or completely opaque). In this example, there is a color-map with two entries: line 11 specifies the lower bound of 70 be colored dark green ('#008000'), while line 13 specifies the upper entry.

Discrete colors

Details

Sometimes color bands in discrete steps are more appropriate than a color gradient. The type: intervals parameter added to the color-map on line 9 means that the colors are applied as discrete bands rather than a smooth gradient: line 11 colors all values less than 150 dark green ('#008000') and line 12 colors all values less than 256 but greater than or equal to 150 dark brown ('#663333').

Many color gradient

This example shows a gradient interpolated across eight different colors.
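To make the difference between the ramp and intervals color-map types concrete, here is a small illustrative Python sketch. It is not GeoServer code, just the lookup logic the two types imply, using the 70/256 two-color entries from the example above.

# Pure-Python illustration (not GeoServer code) of how the two color-map types
# treat a raster value; the entries mirror 70 -> dark green, 256 -> dark brown.
entries = [(70, (0x00, 0x80, 0x00)),    # '#008000' dark green
           (256, (0x66, 0x33, 0x33))]   # '#663333' dark brown

def ramp_color(value):
    """type: ramp -- interpolate linearly between the two anchor entries."""
    (q0, c0), (q1, c1) = entries
    t = max(0.0, min(1.0, (value - q0) / (q1 - q0)))
    return tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))

def intervals_color(value):
    """type: intervals -- discrete bands, no interpolation between entries."""
    for quantity, color in entries:
        if value < quantity:
            return color
    return entries[-1][1]

print(ramp_color(163))       # roughly halfway between green and brown
print(intervals_color(163))  # falls in the dark brown band (>= 70 and < 256)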
http://docs.geoserver.org/latest/en/user/styling/ysld/cookbook/rasters.html
2018-02-18T05:05:24
CC-MAIN-2018-09
1518891811655.65
[array(['../../../_images/raster4.png', '../../../_images/raster4.png'], dtype=object) array(['../../../_images/raster_twocolorgradient2.png', '../../../_images/raster_twocolorgradient2.png'], dtype=object) array(['../../../_images/raster_transparentgradient2.png', '../../../_images/raster_transparentgradient2.png'], dtype=object) array(['../../../_images/raster_brightnessandcontrast2.png', '../../../_images/raster_brightnessandcontrast2.png'], dtype=object) array(['../../../_images/raster_threecolorgradient2.png', '../../../_images/raster_threecolorgradient2.png'], dtype=object) array(['../../../_images/raster_alphachannel2.png', '../../../_images/raster_alphachannel2.png'], dtype=object) array(['../../../_images/raster_discretecolors2.png', '../../../_images/raster_discretecolors2.png'], dtype=object) array(['../../../_images/raster_manycolorgradient2.png', '../../../_images/raster_manycolorgradient2.png'], dtype=object)]
docs.geoserver.org
Event ID 1298 — BCD File Access and Creation Associate the correct boot image Windows Deployment Services must be able to access the boot image that is associated with the client computer. To resolve this issue, ensure that the client computer is prestaged with the correct image and that the image has been uploaded to the Windows Deployment Services image store. To perform this procedure, you must either be a member of the local Administrators group or have been delegated the appropriate authority. To ensure that the associated boot image is correct: - Open the Command Prompt window. - Using the globally unique identifier (GUID) specified in the event log message, run the following command to see which boot image is associated with the client: WDSUTIL /get-device /id:<MAC or GUID> Note: The name of the image and client GUID that caused this issue is specified in BINLVC event 1298. To find this event, open Event Viewer, expand Custom Views, expand Server Roles, and then click Windows Deployment Services. - If you need to change the boot image, run the wdsutil /get-allimages /show:boot command to see a list of the boot images. - To specify another boot image for the device, run the WDSUTIL /set-device /device:<name> /bootprogram:<path> command, where <path> is the relative path to the specified boot program from the shared RemoteInstall folder. For more information about adding images and configuring computers, see "How to Perform Common Tasks" at.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc726731(v=ws.10)
2018-02-18T06:03:42
CC-MAIN-2018-09
1518891811655.65
[array(['images/ee406008.red%28ws.10%29.jpg', None], dtype=object)]
docs.microsoft.com
Event ID 1105 — DHCP Server Rogue Detection. Event Details Resolve Authorize the DHCP server To perform these procedures, you must be a member of the Administrators group, or you must have been delegated the appropriate authority. To authorize a DHCP server in Active Directory Domain Services: - At the DHCP server, click Start, point to Administrative Tools, and then click DHCP. - In the console tree, click DHCP. - On the Action menu, click Manage authorized servers. - In the Manage Authorized Servers dialog box, click Authorize. - When prompted, type the name or IP address of the DHCP server to be authorized, and then click OK. Verify To perform these procedures, you must be a member of the Administrators group, or you must have been delegated the appropriate authority. To verify that the DHCP server is authorized in Active Directory Domain Services, perform the following steps: - At the DHCP server computer, click Start, click Run, type dhcpmgmt.msc, and then press ENTER. - Right-click DHCP, and then click Manage authorized servers. - If the DHCP server is authorized, it appears in the list. To verify that clients are getting leased IP addresses from the DHCP server, perform the following steps: - At the DHCP-enabled client computer, click Start, in Start Search type *cmd*, and then press ENTER. - To verify the lease of the client with a DHCP server, type ipconfig /all to view lease-status information. - The DHCP server should be distributing leases to clients. Related Management Information DHCP Server Rogue Detection
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc726962(v=ws.10)
2018-02-18T06:04:11
CC-MAIN-2018-09
1518891811655.65
[array(['images/ee406008.red%28ws.10%29.jpg', None], dtype=object)]
docs.microsoft.com
Installation Please follow the deployment documentation to install the Kubernetes Ingress Controller onto your Kubernetes cluster. Pre-requisite To make gRPC requests, you need a client which can invoke gRPC requests. In this guide, we use grpcurl. Please ensure that you have that installed in on your local system. Testing connectivity to Kong This guide assumes that PROXY_IP environment variable is set to contain the IP address or URL pointing to Kong. If you’ve not done so,. Running GRPC - Add a grpc deployment and service $ kubectl apply -f service/grpcbin created deployment.apps/grpcbin created - Create a demo grpc ingress rule: $ echo "apiVersion: extensions/v1beta"}}}' - Next,
https://docs.konghq.com/kubernetes-ingress-controller/1.0.x/guides/using-ingress-with-grpc/
2021-01-15T21:38:38
CC-MAIN-2021-04
1610703496947.2
[]
docs.konghq.com
LibreOffice » distro-configs Pre-canned distribution configurations These files are supposed to correspond to the options used when creating the Document Foundation (or other "canonical") builds of LibreOffice for various platforms. They are *not* supposed to represent the "most useful" options for developers in general. On the contrary, the intent is that just running ./autogen.sh without any options at all should produce a buildable configuration for developers with interest in working on the most commonly used parts of the code. See [] for how TDF builds make use of these switches. (Especially, since --with-package-format now triggers whether or not installation sets are built, all the relevant *.conf files specify it, except for LibreOfficeLinux.conf, where the TDF build instructions pass an explicit --with-package-format="rpm deb" in addition to --with-distro=LibreOfficeLinux.) (Possibly the above is a misunderstanding, or maybe there never even has been any clear consensus what situations these files actually are intended for.) The files contain sets of configuration parameters, and can be passed on the autogen.sh command line thus: ./autogen.sh --with-distro=LibreOfficeFoo Contrary to the above, in the Android case the amount of parameters you just must use is so large, that for convenience it is always easiest to use the corresponding distro-configs file. This is a bug and needs to be fixed; also configuring for Android should ideally use sane (or the only possible) defaults and work fine without any parameters at all. Generated by Libreoffice CI on lilith.documentfoundation.org Last updated: 2021-01-14 06:35:25 | Privacy Policy | Impressum (Legal Info)
https://docs.libreoffice.org/distro-configs.html
2021-01-15T21:35:49
CC-MAIN-2021-04
1610703496947.2
[]
docs.libreoffice.org
Elements of the Deployment Descriptor Files classloading-delegate clustered-attach-postconstruct clustered-bean clustered-detach-predestroy clustered-key-name clustered-lock-type whitelist-package enable-implicit-cdi scanning-excludeand scanning-include container-initializer-enabled default-role-mapping jaxrs-roles-allowed-enabled max-wait-time-in-millis webservice-default-login-config jsp-config This page is a reference for extra and changed elements added to the GlassFish proprietary deployment descriptors classloading-delegate With this option its possible to enable/disable class loading delegation. This allows deployed application to use libraries included on them, overriding the versions included on the server. For more information about how class delegation can be configured on Payara Server Enterprise, see the Enhanced Class loading section. clustered-attach-postconstruct Whether to call @PostConstruct each time the bean is created on a different node. Will result in multiple calls. Valid values are true or false. The default value is true. clustered-bean Whether this bean should be a Clustered Singleton. Can be applied only to singleton EJBs. Valid values are true or false. The default value is false. clustered-detach-predestroy Whether to call @PreDestroy when the singleton is destroyed on an instance while still being available on another. Will result in multiple calls. Valid values are true or false. The default value is true. clustered-key-name The key used for replication of clustered beans. Applies to singleton EJBs when clustered-bean is set to true. This element is optional. If not set, the default value is the value of the ejb-name element. clustered-lock-type The type of distributed locking to be performed. For EJB beans, only INHERIT and LOCK_NONE are valid. For CDI beans, valid values are LOCK and INHERIT, which is equivalent to using LOCK_NONE. Default value is INHERIT whitelist-package Used to whitelist packages on extreme class loading isolation. Whitelisted packages are taken into account by the server when scanning libraries. For more information about how extreme class loading isolation works on Payara Server Enterprise, see the Enhanced Classloading section. enable-implicit-cdi In a WAR file, it is possible to set the property bean-discovery-mode equal to none to turn off implicit scanning of the archive for bean defining annotations, as defined by the CDI 1.1 specification. The default value of this setting is defined as annotated in the specification, so the archive is scanned for any bean-defining annotations, which can cause unwanted side effects. In the glassfish-application.xml deployment descriptor for an EAR file, the property enable-implicit-cdi can be set to false to achieve the same goal for all modules inside the EAR assembly. The default value is true, in line with the default value for WAR files. If implicit CDI scanning causes problems for an EAR assembly, the value false will disable implicit CDI scanning for all CDI modules inside the EAR assembly: <glassfish-application> <enable-implicit-cdi>false</enable-implicit-cdi> </glassfish-application> The default behavior of the admin console is for the Implicit CDI checkbox to be enabled, but this will not override the application configuration. scanning-exclude and scanning-include Modern WAR and EAR files very often include a number of 3rd party JARs. 
In situations where some JARs require CDI scanning and others may break if scanned, these can now be explicitly included or excluded from such component scanning. Both the glassfish-application.xml and the glassfish-web.xml files support the following directives: <scanning-exclude>*</scanning-exclude> <scanning-include>ejb*</scanning-include> <scanning-include>conflicting-web-library</scanning-include> In the above example, all JARs will be excluded by default, then all JARs beginning with ejb will be scanned along with the JAR named conflicting-web-library. container-initializer-enabled This property configures whether to enable or disable the calling of ServletContainerInitializer component classes defined in JAR files bundled inside a WAR assembly. For performance considerations, you can explicitly disable the servlet container initializer by setting the container-initializer-enabled element to false. This can help solve the deployment of web applications that can suffer from conflicts with a custom bootstrapping process. The default value for this configuration element is true. default-role-mapping With this property, you can set whether to enable the default group to role mappings for your application’s security settings. This element is set up as a property element with a Boolean value attribute like this: <property name="default-role-mapping" value="true"> <description>Enable default group to role mapping</description> </property> Enabling the default group to role mappings will cause all named groups in the application’s linked security realm to be mapped to a role of the same name. This will save you the time of having to redefine the same roles and map them to the realm groups each time they are modified. This will have the same effect as executing the following asadmin command: asadmin set configs.config.server-config.security-service.activate-default-principal-to-role-mapping=true Except its effect will only limit itself to the application instead of all applications deployed on the server. This setting is configured by default to true on the production-ready-domain The default value of this property is false. This property can be set in the glassfish-web.xml, glassfish-ejb-jar.xml and glassfish-application.xml deployment descriptors. In an EAR assembly, only the property set in the glassfish-application.xml will take effect and if set in the glassfish-web.xml and glassfish-ejb-jar.xml, it will be ignored. Setting this configuration property in any of these files will always take precedence over any setting configured on the server. jaxrs-roles-allowed-enabled Since Payara Server 4.1.2.181; 5.181 Payara Server and Micro since versions 4.1.2.181 and 5.181 support @RolesAllowed out of the box to secure JAX-RS resources. In some cases this may clash with existing code that interprets the same annotation using custom code. The out-of-the-box support of @RolesAllowed for JAX-RS resources can be switched off by setting the <jaxrs-roles-allowed-enabled> tag in WEB-INF/glassfish-web.xml of a war archive to false. E.g. <jaxrs-roles-allowed-enabled>false</jaxrs-roles-allowed-enabled> max-wait-time-in-millis Since Payara Server 4.1.2.172 Payara Server has re-implemented a property of the glassfish-ejb-jar.xml descriptor that was available in GlassFish in versions prior to 4.0. The bean-pool element allows users to specify controls on a per-EJB basis for pooled stateless EJBs. 
Payara Server has reintroduced max-wait-time-in-millis to govern what happens when the number of requests for an EJB exceeds the number of beans available in the pool. A value of -1 disables the property and means that, when the pool is at maximum usage and another request is made, a new EJB instance is created immediately, with no upper bound. A value of 0 means the server will wait indefinitely for an existing EJB instance to be freed. A value between 1 and MAX_INTEGER means that the server will wait for the given amount of milliseconds for an EJB to be freed. Only after this max-wait-time-in-millis is exceeded will the server create a new instance of the requested EJB. For more detail, see the Enhanced EJB configuration section. webservice-default-login-config Since Payara Server 4.1.2.173 When declaring a secured Web Service based on an EJB using the glassfish-ejb-jar.xml deployment descriptor, it’s necessary to define the login configuration (authentication method, security realm name, etc.) for each EJB Web Service that is secured inside the assembly. For example, if an application contains 2 EJB web services called EJBWS1 and EJBWS2, and they need to be secured using BASIC authentication against the file security realm, the following configuration would be needed: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE glassfish-ejb-jar PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 EJB 3.1//EN" ""> <glassfish-ejb-jar> <ejb> <ejb-name>EJBWS1</ejb-name> <webservice-endpoint> <port-component-name>EJBWS1Port</port-component-name> <endpoint-address-uri>EJBWS1/EJBWebService</endpoint-address-uri> <login-config> <auth-method>BASIC</auth-method> <realm>file</realm> </login-config> </webservice-endpoint> </ejb> <ejb> <ejb-name>EJBWS2</ejb-name> <webservice-endpoint> <port-component-name>EJBWS2Port</port-component-name> <endpoint-address-uri>EJBWS2/EJBWebService</endpoint-address-uri> <login-config> <auth-method>BASIC</auth-method> <realm>file</realm> </login-config> </webservice-endpoint> </ejb> </glassfish-ejb-jar> Notice that the login-config element is repeated exactly like it is in the 2 EJB definitions. Not only that, but if these Web services are defined using annotations for each EJB component, then the JAX-WS information (Port Component Name, Endpoint Address, etc.) would be duplicated too, which is too cumbersome for cases when there are lots of EJB Web service definitions. For this scenario, the webservice-default-login-config has been introduced to simplify this configuration. When this element is declared, the login configuration inside it will apply to all of the EJB defined Web Services by default. The previous example can be simplified like this: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE glassfish-ejb-jar PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 EJB 3.1//EN" ""> <glassfish-ejb-jar> <webservice-default-login-config> <auth-method>BASIC</auth-method> <realm>file</realm> </webservice-default-login-config> </glassfish-ejb-jar>
https://docs.payara.fish/enterprise/docs/5.20.0/documentation/payara-server/app-deployment/descriptor-elements.html
2021-01-15T21:17:58
CC-MAIN-2021-04
1610703496947.2
[]
docs.payara.fish
public interface SecurityControl extends SecurityStoreFeature Access rights to read and write data and to perform actions on the server are controlled by a fixed set of permissions. When a session is opened, the server assigns it a set of roles based on the principal used to authenticate. The rules in the security store assign each role to a set of permissions. Each role can be assigned zero, one, or many permissions. The same permission can be assigned to multiple roles. Roles can also include other roles to form a role hierarchy, and so inherit permissions from the other roles. Roles are defined implicitly by specifying them in permission assignments or inclusion relationships; there is no need to explicitly create roles in the security store. Permissions either have 'path' or 'global' scope. Global permissions apply to actions that are server-wide and not specific to a particular path. Path permissions apply to hierarchical context, such as a branch of the topic tree or a branch of the message path hierarchy. Path permissions can be assigned to a role for a path. The permissions are inherited by all descendant paths for the role, except paths that have a separate permission assignment for the role or that are isolated and their descendant paths. Default path permissions can be assigned to a role to set permissions at the root of the path hierarchy. A default permission assignment applies to all paths without direct or inherited path permission assignments, except paths that are isolated and their descendant paths. The permissions a session has for a path are determined as follows: The path permissions model was changed in Diffusion 6.5 so the set of permissions granted to a session for a path is formed by independently evaluating the permissions for each of its roles. In addition, Diffusion 6.5 added the ability to isolate paths. To convert a Diffusion 6.4 security store to an equivalent Diffusion 6.5 store, for each path in a path permission assignment for a role, add a separate statement to isolate the path. This produces a strictly equivalent model, but in practice it is typical that many of these path isolation statements can be removed without affecting an application's security policy, resulting in a simpler configuration. VIEW_SECURITYpermission and to update the store it needs MODIFY_SECURITYpermission. sessionas follows: SecurityControl securityControl = session.feature(SecurityControl.class); SecurityStoreFeature.UpdateStoreCallback, SecurityStoreFeature.UpdateStoreContextCallback<C> updateStore, updateStore, updateStore getSession CompletableFuture<SecurityControl.SecurityConfiguration> getSecurity() If the request was successful, the CompletableFuture will complete successfully with a SecurityControl.SecurityConfiguration result. Otherwise, the CompletableFuture will complete exceptionally with a CompletionException. Common reasons for failure, listed by the exception reported as the cause, include: SessionSecurityException– if the session does not have VIEW_SECURITYpermission; SessionClosedException– if the session is closed. void getSecurity(SecurityControl.ConfigurationCallback callback) callback- the operation callback <C> void getSecurity(C context, SecurityControl.ConfigurationContextCallback<C> callback) C- the context type context- the context to pass to the callback, may be null callback- the operation callback getSecurity(ConfigurationCallback) SecurityControl.ScriptBuilder scriptBuilder() updateStore.
https://docs.pushtechnology.com/docs/6.5.2/java/com/pushtechnology/diffusion/client/features/control/clients/SecurityControl.html
2021-01-15T21:40:10
CC-MAIN-2021-04
1610703496947.2
[]
docs.pushtechnology.com
9.14. Problems with standard IP services on a Dynamic IP number PPP link

As noted in the introduction, dynamic IP numbers affect the ability of your Linux PC to act as a server on the Internet. Chapter 23 provides information on the (main) services affected and what you can do (if anything) to overcome this.
http://tldp.docs.sk/howto/linux-ppp/x577.html?lang=sk
2021-01-15T21:28:34
CC-MAIN-2021-04
1610703496947.2
[]
tldp.docs.sk
DataStax Community release notes Release notes for DataStax Community. New features, improvements, and notable changes are described in What's new in Apache Cassandra 2.2. The latest Cassandra version is 2.2.6. The CHANGES.txt describes the changes in detail. You can view all version changes by branch or tag in the drop-down list:
https://docs.datastax.com/en/cassandra-oss/2.2/cassandra/releaseNotes.html
2021-01-15T21:29:02
CC-MAIN-2021-04
1610703496947.2
[array(['images/screenshots/rn_c_changes_tag.png', None], dtype=object)]
docs.datastax.com
A newer version of this page is available. Switch to the current version. MenuScrollButtonImageSpriteProperties Class Contains settings that define different states (hottracked, pressed) of a scroll button image when it’s taken from a sprite image. Namespace: DevExpress.Web Assembly: DevExpress.Web.v20.2.dll Declaration public class MenuScrollButtonImageSpriteProperties : ButtonImageSpriteProperties Public Class MenuScrollButtonImageSpriteProperties Inherits ButtonImageSpriteProperties Related API Members The following members accept/return MenuScrollButtonImageSpriteProperties objects: Remarks To learn more, see the Using Custom CSS Sprites topic.
https://docs.devexpress.com/AspNet/DevExpress.Web.MenuScrollButtonImageSpriteProperties?v=20.2
2022-05-16T16:23:29
CC-MAIN-2022-21
1652662510138.6
[]
docs.devexpress.com
What is a PES Embroidery file?

The PES embroidery file is a design file that contains instructions for embroidery/sewing machines. It was developed by Brother Industries for their embroidery machines but was later formalized as a general file format. PES files are used by sewing machines to read instructions for stitching patterns on fabric. These files serve two purposes: first, providing design information for the PE-Design application developed by Brother Industries; and second, providing the design name, colors, and embroidery machine codes such as "stop", "jump", and "trim".

PES File Format - More Information

A PES file is saved to disk in a binary file format. It contains multiple sections that store sewing information using a predefined method. The PES file format is as follows.

- Version Data - Gives version information and can be any of the values #PES0001, #PES0020, #PES0030, #PES0040, #PES0050, #PES0055, #PES0060
- PEC Seek Value - A 4-byte little-endian integer immediately following the version data; it represents the seek value (offset) of the PEC section, which contains the design details
- PES Section - Contains design information relevant to Brother PE-Design and possibly other sewing applications
- PEC Section - Can be anywhere in the PES file but is referenced by the PEC Seek Value

Types of Sewing Machines Using PES Files

PES files are primarily created by the PE-Design software used with Brother sewing machines. Some other machines that may support PES files include Babylock and Bernina home embroidery machines.

How to Convert PES Files?

The PE-Design software application can convert a PES file to other formats such as .pes, .dst, .exp, .pcs, .hus, .vip, .shv, .jef, .sew, .csd, or .xxx. This can be done in the PE-Design application by opening the file and selecting the conversion format.
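To make the header layout above concrete, here is a small hypothetical Python sketch that reads just the two leading fields (the 8-character version magic and the 4-byte little-endian PEC seek value). The file name is a placeholder, and real-world files may need additional validation beyond what the description here covers.

# Hypothetical sketch of reading the PES header fields described above.
import struct

def read_pes_header(path):
    with open(path, "rb") as f:
        version = f.read(8).decode("ascii", errors="replace")  # e.g. "#PES0001"
        (pec_offset,) = struct.unpack("<I", f.read(4))          # PEC Seek Value
    return version, pec_offset

version, pec_offset = read_pes_header("design.pes")  # placeholder file name
print(f"Version: {version}, PEC section starts at byte {pec_offset}")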
https://docs.fileformat.com/misc/pes/
2022-05-16T14:57:48
CC-MAIN-2022-21
1652662510138.6
[]
docs.fileformat.com
This widget displays the total number of critical threat types detected on your network and the number of Important Users and Other Users affected by each threat type. For more information about defining important users or endpoints, see User or Endpoint Importance. Use the Range drop-down to select the time period for the data that displays. The table lists critical threat types in order of severity. Click a number in the Important Users or Other Users columns and then click the user you want to view. For more information, see Security Threats for Users. The Threat Type column displays the following threat types. Individual users may be affected by more than one critical threat type.
https://docs.trendmicro.com/en-us/enterprise/control-manager-70/getting-started/dashboard/summary-tab/critical-threats-wid.aspx
2022-05-16T14:27:39
CC-MAIN-2022-21
1652662510138.6
[]
docs.trendmicro.com
# FacilMap app On some devices, it is possible to add FacilMap as an app. # Chrome - Open FacilMap in Chrome - Press Chrome’s menu icon (3 dots) on the top right - Press “Add to Home Screen” # Firefox - Open FacilMap in Firefox - Press Firefox’s menu icon (3 dots) on the top right or bottom right - Press “Add to Home screen” # Safari - Open FacilMap in Safari - Press Safari’s share icon (rectangle with an up arrow) at the bottom in the middle - Press “Add to Home Screen” # Opera - Open FacilMap in Opera - Press Opera’s plus icon on the top left - Press “Add to home screen” ← Share a link Privacy →
https://docs.facilmap.org/users/app/
2022-05-16T16:10:01
CC-MAIN-2022-21
1652662510138.6
[]
docs.facilmap.org
Operations Guide¶ The MSR Operations Guide provides the detailed information you need to store and manage images on-premises or in a virtual private cloud, to meet security or regulatory compliance requirements. - Access MSR - Manage access tokens - Configure MSR - Manage applications - Manage images - Manage jobs - Manage users - Manage webhooks - Manage repository events - Promotion policies and monitoring - Use Helm charts - Tag pruning - Vulnerability scanning - Image enforcement policies and monitoring - Upgrade MSR - Monitor MSR - Troubleshoot MSR - Disaster recovery - Customer feedback
https://docs.mirantis.com/msr/2.9/ops.html
2022-05-16T14:49:13
CC-MAIN-2022-21
1652662510138.6
[]
docs.mirantis.com
AWS¶ Overview¶ AWS is the Amazon public cloud, offering a full range of services and features across the globe in various datacenters. AWS provides businesses with a flexible, highly scalable, and low-cost way to deliver a variety of services using open standard technologies as well as proprietary solutions. This section of documentation will help you get Morpheus and AWS connected to utilize the features below: Features¶ Instance, Service, Infrastructure Provisioning & Synchronization EKS Cluster Creation & Synchronization Morpheus Kubernetes, Docker & KVM Cluster Creation ELB Classic Load Balancer Creation & Synchronization ELB Application Load Balancer (ALB) Creation & Synchronization Security Group Creation & Synchronization Security Group Rule Creation & Synchronization Network Synchronization VPC Creation & Synchronization CloudFormation Provisioning & Resource Synchronization Terraform Provisioning & Resource Synchronization Pricing & Costing Synchronization MetaData Tag Creation & Synchronization S3 Bucket Creation & Synchronization Route53 Automation & Synchronization IAM Profile Synchronization and Assignment RDS Support Backups / Snapshots Migrations Auto Scaling Remote Console (SSH & RDP) Lifecycle Management and Resize Restore from Snapshots Elastic IP Assignment Network Pools Enhanced Invoice Costing Requirements¶ - AWS IAM Security Credentials Access Key Secret Key Sufficient User Privileges (see MinimumIAMPolicies section for more info) - Security Group Configuration for Agent Install, Script Execution, and Remote Console Access Typical Inbound ports open from Morpheus Appliance: 22, 5985, 3389 (22 & 3389 required for Console. 22 & 5985 required for agent-less comms) Typical Outbound to Morpheus Appliance: 443 (Required for Agent install & comms) Note These are required for Morpheus agent install, communication, and remote console access for windows and linux. Other configurations, such as docker instances, will need the appropriate ports opened as well. Cloud-init Agent Install mode does not require incoming access for port 22. - Network(s) IP assignment required for Agent install, Script Execution, and Console if the Morpheus Appliance is not able to communicate with AWS instances private ip’s. Note Each AWS Cloud in Morpheus is scoped to an AWS Region and VPC. Multiple AWS Clouds can be added and even grouped if different region and VPC combinations are needed. It’s also recommended you verify Security Groups are properly configured in all regions Morpheus Clouds will scope to. Adding an AWS Cloud¶ Navigate to Infrastructure -> Clouds Select + Create Cloud Select AWS Enter the following: - REGION - Select AWS Region for the Cloud - ACCESS KEY - Access Key ID from AWS IAM User Security Credentials. - SECRET KEY - Secret Access Key associated with the Access Key ID. - USE HOST IAM CREDENTIALS - Check to use use Host IAM Credentials - ROLE ARN - Supports security token service (STS) to AssumeRole by entering an AWS Role ARN - INVENTORY - - Basic - Morpheus will sync information on all EC2 Instances in the selected VPC the IAM user has access to, including Name, IP Addresses, Platform Type, Power Status, and overall resources sizing for Storage, CPU and RAM, every 5 minutes. Inventoried EC2 Instances will appear as Unmanaged VM’s. - Full - In addition to the information synced from Basic Inventory level, Morpheus will gather Resource Utilization metrics for Memory, Storage and CPU utilization per VM. 
- Off - Existing EC2 Instances will not be inventoried Note Cloud Watch must be configured in AWS for Morpheus to collect Memory and Storage utilization metrics on inventoried EC2 instances. - USE VPC - Specify if the target account is using EC2-VPC or EC2-Classic Platform. In almost all cases, VPC should be selected, and then select the target VPC from the synced available VPC’s list, or All VPC’s. The AWS cloud is ready to be added to a group and saved. Additional configuration options available: - IMAGE TRANSFER STORE S3 bucket for Image transfers, required for migrations into AWS. - EBS ENCRYPTION Enable or disable encrytion of EBS Volumes - COSTING KEY For Gov Cloud pricing only, key for standard managing cost account - COSTING SECRET For Gov Cloud pricing only, secret for standard managing cost account Command. Enhanced Invoice Costing Configuration¶ AWS cloud integrations in Morpheus will sync highly-granular costing data through the use of AWS Costing & Utilization Reports (CUR). If desired, users can turn on costing in the Morpheus Cloud configuration without linking a CUR to use AWS Cost Explorer instead. Morpheus version 4.2.3 also simplified the way CUR reports can be selected or created in order to sync costing data. The section below discusses setting up enhanced costing through CUR reports both in 4.2.3 and versions prior. For additional details on setting up costing with AWS GovCloud, see the next section. Note Even with a costing report configured in the Cloud integration as described below, the COSTING value must also be set to “Costing and Reservations” in order for enhanced invoice data to be brought into Morpheus. Confirm this setting by editing the Amazon Cloud integration, and checking the COSTING value in the Advanced Options panel before continuing. v4.2.3 and Above In Morpheus 4.2.3+, edit the Amazon cloud integration or create a new Amazon Cloud to get started. On the Create/Edit Cloud modal, open the advanced options section. The relevant fields for configuring invoice costing are shown below: In the example case above, a new report and a new S3 bucket are created but Morpheus will also sync in buckets and reports that meet the required parameters if they already exist. For reports to be synced they must meet the requirements listed below: Hourly time granularity Include resource IDs GZIP compression CSV format If you don’t currently have a report meeting those criteria, you can create one by selecting “New Report” from the REPORT NAME dropdown menu. A new S3 bucket can be created in similar fashion if needed. You may also want to review the section below on configuration for Morpheus 4.2.2 and below to note policies that will be applied to your selected bucket and Cost Explorer permissions required for the AWS cloud user associated with the Morpheus Cloud integration. In the end, the following fields must be filled in order to complete the process:. v4.2.2 and Below Begin by logging into the AWS Billing Console, then click Create report. Include a name for your report and mark the box to “Include resource IDs”. Morpheus uses these resource IDs to map costs to various resources. Click Next. On the following page, begin by identifying an S3 bucket to house reports. Click Configure near the top of the page and select an existing bucket or create a new one. After identifying the bucket, you must mark the box to accept the default policy being applied to the bucket. Click Save. 
The default policy applied to the bucket is below: { "Version": "2008-10-17", "Id": "SomeID", "Statement": [ { "Sid": "SomeStmtID", "Effect": "Allow", "Principal": { "Service": "billingreports.amazonaws.com" }, "Action": [ "s3:GetBucketAcl", "s3:GetBucketPolicy" ], "Resource": "arn:aws:s3:::bucket-name" }, { "Sid": "SomeStmtID", "Effect": "Allow", "Principal": { "Service": "billingreports.amazonaws.com" }, "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::bucket-name/*" } ] } After choosing a bucket, accepting the default policy, and saving the change, you’re brought back to the report delivery page. By default, CUR reports are saved to a folder at the path my-report-name/date-folder. If this bucket already contains CUR reports, you may want to specify a prefix path in the “Report path prefix” field. Outside of this field, use the default values as shown in the screenshot below, then click Next. On the following page, make your final review and click Review and Complete. Following this, you will see your newly configured report in the list of CUR report(s). In addition, the AWS cloud user associated with the integration in Morpheus needs IAM policy permission to access Cost Explorer. Attach a policy like the one below to this cloud user: { "Version": "2012-10-17", "Id": "SomeID", "Statement": [ { "Sid": "SomeStmtID", "Effect": "Allow", "Action": [ "ce:DescribeReportDefinitions", "ce:DescribeCostCategoryDefinition", "ce:ListCostCategoryDefinitions" ], "Resource": [ "*" ] } ] } Note If the Cost Explorer permissions are granted at the master account level, the user will see all costs for each member account; if granted at the member account, only the costs for that member account are available. With the AWS console configuration steps complete, we can move back into Morpheus. Keep in mind it is only necessary to set up one AWS cloud for Costing since we process all records in the CUR report. Once back in Morpheus, add or edit the relevant AWS cloud integration (Infrastructure > Clouds > + ADD OR click the pencil icon in the row for the chosen AWS integration). Expand the Advanced Options drawer and complete the following fields:. Save changes to your cloud integration. Important It may take as long as one hour for Morpheus to process the next CUR report. Costing and AWS GovCloud¶ AWS GovCloud delivers Amazon public cloud infrastructure and features in a way that complies with U.S. government standards for security. GovCloud accounts are applied for and must be associated with a pre-existing standard AWS account and the usage and billing data for the GovCloud account is rolled up into that of the standard AWS account. For that reason, Amazon recommends creating a new standard account solely to house the GovCloud account if usage and billing must be tracked separately. Since GovCloud accounts do not have access to billing data directly, Morpheus must be able to access it through the associated standard account. You could do this by creating the Morpheus cloud integration through the standard account itself or by integrating the GovCloud account and supplying an Access Key and Secret Key for the standard account when configuring costing. 
When needed, add the additional credentials for the standard commercial account as described below: Add a new AWS Cloud or edit an existing one Expand the Advanced Options section Complete the following fields in addition to other required fields needed to set up costing as described in the previous section: COSTING KEY: The AWS Key ID for an IAM user with sufficient access who is associated with the standard commercial account COSTING SECRET: The AWS Secret Key for an IAM user with sufficient access who is associated with the standard commercial account LINKED ACCOUNT ID: The AWS account number for the standard commercial account in which the IAM user referenced in the prior bullets resides Save the changes to the AWS Cloud integration When credentials are configured correctly, you will be able to select an existing Costing and Usage Report (CUR) from the appropriate S3 bucket if it already exists. If not, you can create one directly from the add/edit AWS Cloud modal in Morpheus. AWS Reserved Instances and Savings Plans¶ Amazon AWS public cloud offers Reserved Instances (RI) and Savings Plans, which allow organizations with consistent use patterns to reduce cloud spend significantly. Morpheus analyzes AWS cloud usage and spend, which allows it to make intelligent recommendations that can lead to significant savings. This data can be reviewed in the Reservation Recommendations and Savings Plan Recommendations tables on any AWS Cloud detail page (Infrastructure > Clouds > Selected Amazon Cloud). Savings Plans potentially offer greater than 70% savings in exchange for a commitment to consistent usage levels for a 1- or 3-year term. Morpheus provides Savings Plan guidance based on learned analytics; allowing you to analyze Savings Plans based on different term commitments and upfront costs to choose the best savings plan. Reserved Instances (RI) provide a discounted hourly rate and optional capacity reservation for EC2 instances. AWS billing automatically applies your RI-discounted rate when the attributes of EC2 instance usage match attributes of an active RI. Morpheus provides RI guidance based on learned analytics. Minimum AWS IAM Policies¶ Below are the AWS IAM Permissions covering the minimum access for Morpheus applying to all resources and services. See for more information. 
Morpheus Sample AWS IAM Policy¶ { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "ce:*", "cloudwatch:GetMetricStatistics", "ec2:AllocateAddress", "ec2:AssignPrivateIpAddresses", "ec2:AssociateAddress", "ec2:AttachInternetGateway", "ec2:AttachNetworkInterface", "ec2:AttachVolume", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CancelExportTask", "ec2:CancelImportTask", "ec2:CopyImage", "ec2:CopySnapshot", "ec2:CreateEgressOnlyInternetGateway", "ec2:CreateImage", "ec2:CreateInstanceExportTask", "ec2:CreateInternetGateway", "ec2:CreateKeyPair", "ec2:CreateNatGateway", "ec2:CreateNetworkAcl", "ec2:CreateNetworkAclEntry", "ec2:CreateNetworkInterface", "ec2:CreateSecurityGroup", "ec2:CreateSnapshot", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteEgressOnlyInternetGateway", "ec2:DeleteInternetGateway", "ec2:DeleteKeyPair", "ec2:DeleteNatGateway", "ec2:DeleteNetworkAcl", "ec2:DeleteNetworkAclEntry", "ec2:DeleteNetworkInterface", "ec2:DeleteSecurityGroup", "ec2:DeleteSnapshot", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DeregisterImage", "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses", "ec2:DescribeAvailabilityZones", "ec2:DescribeClassicLinkInstances", "ec2:DescribeConversionTasks", "ec2:DescribeEgressOnlyInternetGateways", "ec2:DescribeExportTasks", "ec2:DescribeImageAttribute", "ec2:DescribeImages", "ec2:DescribeImportImageTasks", "ec2:DescribeImportSnapshotTasks", "ec2:DescribeInstances", "ec2:DescribeInstanceStatus", "ec2:DescribeInternetGateways", "ec2:DescribeKeyPairs", "ec2:DescribeNatGateways", "ec2:DescribeNetworkAcls", "ec2:DescribeNetworkInterfaceAttribute", "ec2:DescribeNetworkInterfaces", "ec2:DescribeRegions", "ec2:DescribeSecurityGroupReferences", "ec2:DescribeSecurityGroups", "ec2:DescribeSnapshotAttribute", "ec2:DescribeSnapshots", "ec2:DescribeStaleSecurityGroups", "ec2:DescribeSubnets", "ec2:DescribeTags", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumes", "ec2:DescribeVolumeStatus", "ec2:DescribeVpcAttribute", "ec2:DescribeVpcClassicLink", "ec2:DescribeVpcClassicLinkDnsSupport", "ec2:DescribeVpcEndpoints", "ec2:DescribeVpcEndpointServices", "ec2:DescribeVpcPeeringConnections", "ec2:DescribeVpcs", "ec2:DetachInternetGateway", "ec2:DetachNetworkInterface", "ec2:DetachVolume", "ec2:DisassociateAddress", "ec2:GetPasswordData", "ec2:ImportImage", "ec2:ImportInstance", "ec2:ImportKeyPair", "ec2:ImportSnapshot", "ec2:ImportVolume", "ec2:ModifyImageAttribute", "ec2:ModifyInstanceAttribute", "ec2:ModifyNetworkInterfaceAttribute", "ec2:ModifySnapshotAttribute", "ec2:ModifyVolumeAttribute", "ec2:RebootInstances", "ec2:RegisterImage", "ec2:ReleaseAddress", "ec2:ReplaceNetworkAclAssociation", "ec2:ReplaceNetworkAclEntry", "ec2:ResetImageAttribute", "ec2:ResetInstanceAttribute", "ec2:ResetNetworkInterfaceAttribute", "ec2:ResetSnapshotAttribute", "ec2:RevokeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress", "ec2:RunInstances", "ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances", "ec2:UnassignPrivateIpAddresses", "ec2:UpdateSecurityGroupRuleDescriptionsEgress", "eks:*", "iam:ListGroups", "iam:ListInstanceProfiles", "iam:ListRoles", "rds:AddRoleToDBCluster", "rds:AddTagsToResource", "rds:ApplyPendingMaintenanceAction", "rds:AuthorizeDBSecurityGroupIngress", "rds:CopyDBClusterSnapshot", "rds:CopyDBParameterGroup", "rds:CopyDBSnapshot", "rds:CreateDBCluster", "rds:CreateDBClusterSnapshot", "rds:CreateDBInstance", 
"rds:CreateDBInstanceReadReplica", "rds:CreateDBSecurityGroup", "rds:CreateDBSnapshot", "rds:DeleteDBCluster", "rds:DeleteDBInstance", "rds:DeleteDBSecurityGroup", "rds:DeleteDBSnapshot", "rds:DescribeAccountAttributes", "rds:DescribeCertificates", "rds:DescribeDBClusterParameterGroups", "rds:DescribeDBClusterParameters", "rds:DescribeDBClusters", "rds:DescribeDBClusterSnapshotAttributes", "rds:DescribeDBClusterSnapshots", "rds:DescribeDBEngineVersions", "rds:DescribeDBInstances", "rds:DescribeDBLogFiles", "rds:DescribeDBParameterGroups", "rds:DescribeDBParameters", "rds:DescribeDBSecurityGroups", "rds:DescribeDBSnapshotAttributes", "rds:DescribeDBSnapshots", "rds:DescribeDBSubnetGroups", "rds:DescribeEngineDefaultClusterParameters", "rds:DescribeEngineDefaultParameters", "rds:DescribeEventCategories", "rds:DescribeEvents", "rds:DescribeOptionGroupOptions", "rds:DescribeOptionGroups", "rds:DescribeOrderableDBInstanceOptions", "rds:ListTagsForResource", "rds:ModifyDBCluster", "rds:ModifyDBClusterParameterGroup", "rds:ModifyDBClusterSnapshotAttribute", "rds:ModifyDBInstance", "rds:ModifyDBParameterGroup", "rds:ModifyDBSnapshotAttribute", "rds:PromoteReadReplica", "rds:RebootDBInstance", "rds:RemoveTagsFromResource", "rds:RestoreDBClusterFromSnapshot", "rds:RestoreDBClusterToPointInTime", "rds:RestoreDBInstanceFromDBSnapshot", "rds:RestoreDBInstanceToPointInTime", "rds:RevokeDBSecurityGroupIngress", "route53:GetHostedZone", "route53:ListHostedZones", "route53:ListResourceRecordSets", "s3:AbortMultipartUpload", "s3:CreateBucket", "s3:DeleteBucket", "s3:DeleteObject", "s3:DeleteObjectVersion", "s3:GetBucketLocation", "s3:GetObject", "s3:GetObjectVersion", "s3:ListAllMyBuckets", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:ListBucketVersions", "s3:ListMultipartUploadParts", "s3:PutObject" ], "Resource": "*" } ] } Resource Filter¶ If you need to limit actions based on filters you have to pull out the action and put it in a resource based policy since not all the actions support resource filters. See for more info on limiting resources by filter. Resource filter example: { "Effect": "Allow", "Action": [ "ec2:StopInstances", "ec2:StartInstances" ], "Resource": * }, { "Effect": "Allow", "Action": "ec2:TerminateInstances", "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*", "Condition": { "StringEquals": { "ec2:ResourceTag/purpose": "test" } } }
https://docs.morpheusdata.com/en/5.2.14/integration_guides/Clouds/aws/amazon.html
2022-05-16T16:25:45
CC-MAIN-2022-21
1652662510138.6
[]
docs.morpheusdata.com
Integrating with Amazon Kinesis (Python) Amazon Kinesis allows you to collect, process, and analyze real-time streaming data. In this tutorial, we will set up Nightfall DLP to scan Kinesis streams for sensitive data. An overview of what we are going to build is shown in the diagram below. We will send data to Kinesis using a simple producer written in Python. Next, we will use an AWS Lambda function to send data from Kinesis to Nightfall. Nightfall will scan the data for sensitive information. If there are any findings returned by Nightfall, the Lambda function will write the findings to a DynamoDB table. Prerequisites In order to complete this tutorial you will need the following: - An AWS Account with access to Kinesis, Lambda, and DynamoDB - The AWS CLI installed and configured on your local machine. - A Nightfall API Key - An existing Nightfall Detection Rule which contains at least one detector for email addresses. - Local copy of the companion repository for this tutorial. Before continuing, you should clone the companion repository locally. git clone Configuring AWS Services First, we will configure all of our required Services on AWS. Create Execution Role - Open the IAM roles page in the AWS console. - Choose Create role. - Create a role with the following properties: - Lambda as the trusted entity - Permissions - AWSLambdaKinesisExecutionRole - AmazonDynamoDBFullAccess - Role name: nightfall-kinesis-role Create Kinesis Data Stream - Open the Kinesis page and select Create Data Stream - Enter nightfall-demoas the Data stream name - Enter 1as the Number of open shards - Select Create data stream Create Lambda Function - Open the Lambda page and select Create function - Choose Author from scratch and add the following Basic information: nightfall-lambdaas the Function name - Python 3.8 as the Runtime - Select Change default execution role, Use an existing role and select the previously created nightfall-kinesis-role Once the function has been created, in the Code tab of the Lambda function select Upload from and choose .zip file. Select the local nightfall-lambda-package.zip file that you cloned earlier from the companion repository and upload it to AWS Lambda. You should now see the previous sample code replaced with our Nightfall-specific Lambda function. Next, we need to configure environment variables for the Lambda function. Within the same Lambda view, select the Configuration tab and then select Environment variables. Add the following environment variables that will be used during the Lambda function invocation. NIGHTFALL_API_KEY: your Nightfall API Key DETECTION_RULE_UUID: your Nightfall Detection Rule UUID. Detection Rule Requirements This tutorial uses a data set that contains a name, email, and random text. In order to see results, please make sure that the Nightfall Detection Rule you choose contains at least one detector for email addresses. Lastly, we need to create a trigger that connects our Lambda function to our Kinesis stream. - In the function overview screen on the top of the page, select Add trigger. - Choose Kinesis as the trigger. - Select the previously created nightfall-demoKinesis stream. - Select Add Create DynamoDB Table The last step in creating our demo environment is to create a DynamoDB table. 
- Open the DynamoDB page and select Create table
- Enter nightfall-findings as the Table Name
- Enter KinesisEventID as the Primary Key

Be sure to also run the following before the Lambda function is created, to ensure that the required version of the Nightfall Python SDK is installed. We also need to install boto3.

pip install nightfall==1.2.0
pip install boto3

Lambda Function Overview
Before we start processing the Kinesis stream data with Nightfall, we will provide a brief overview of how the Lambda function code works. The entire function is shown below:

import os
import base64

import boto3
from nightfall import Nightfall


def lambda_handler(event, context):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('nightfall-findings')

    records = []
    for record in event['Records']:
        # Kinesis data is base64 encoded so decode here
        payload = base64.b64decode(record["kinesis"]["data"])
        records.append(payload.decode("utf-8"))

    nightfall = Nightfall(
        os.environ.get('NIGHTFALL_API_KEY')
    )

    findings, redactions = nightfall.scan_text(
        records,
        detection_rule_uuids=[os.environ.get('DETECTION_RULE_UUID')]
    )

    # Write every record that has findings to DynamoDB. The Finding attribute
    # names used below (finding, detector_name, confidence) are assumptions
    # based on the Nightfall Python SDK and may need adjusting for your version.
    for kinesis_record, payload, record_findings in zip(event['Records'], records, findings):
        if record_findings:
            table.put_item(
                Item={
                    'KinesisEventID': kinesis_record['eventID'],
                    'Record': payload,
                    'Findings': [
                        {
                            'finding': f.finding,
                            'detector': f.detector_name,
                            'confidence': str(f.confidence),
                        }
                        for f in record_findings
                    ],
                }
            )

This is a relatively simple function that does four things.
- Create a DynamoDB client using the boto3 library.
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('nightfall-findings')
- Extract and decode data from the Kinesis stream and add it to a single list of strings.
records = []
for record in event['Records']:
    # Kinesis data is base64 encoded so decode here
    payload = base64.b64decode(record["kinesis"]["data"])
    records.append(payload.decode("utf-8"))
- Create a Nightfall client using the nightfall library and scan the records that were extracted in the previous step.
nightfall = Nightfall(
    os.environ.get('NIGHTFALL_API_KEY')
)
findings, redactions = nightfall.scan_text(
    records,
    detection_rule_uuids=[os.environ.get('DETECTION_RULE_UUID')]
)
- Iterate through the response from Nightfall; if there are findings for a record, copy the record and findings metadata into the DynamoDB table. The list of Finding objects needs to be processed into a list of dicts before being passed to DynamoDB (see the final loop in the function above).

Sending Data to Kinesis
Now that you've configured all of the required AWS services, and understand how the Lambda function works, you're ready to start sending data to Kinesis and scanning it with Nightfall. We've included a sample script in the companion repository that allows you to send fake data to Kinesis. The data that we are going to be sending looks like this:

'id': fake.uuid4(),
'name': fake.name(),
'email': fake.email(),
'message': fake.paragraph()

The script will send one record with the data shown above every 10 seconds.

Sample Data Script Usage Instructions
Before running the script, make sure that you have the AWS CLI installed and configured locally. The user that you are logged in with should have the appropriate permissions to add records to the Kinesis stream. This script uses the Boto3 library, which handles authentication based on the credentials file that is created with the AWS CLI. You can start sending data with the following steps:
- Open the companion repo that you cloned earlier in a terminal.
- Create and Activate a new Python Virutalenv python3 -m venv venv source venv/bin/activate - Install Dependencies pip install -r requirements.txt - Start sending data python producer.py If everything worked, you should see output similar to this in your terminal: SENT TO KINESIS: {'id': '8a69f3f5-432e-4ec1-8295-e8b79236e36e', 'name': 'Jessica Henderson', 'email': '[email protected]', 'message': 'Eye evening ahead field. With energy all personal soon sense. Method decision TV that.'} SENT TO KINESIS: {'id': 'd4a90b48-cbcd-45ca-a231-3edbbc0c4792', 'name': 'Thomas Cuevas', 'email': '[email protected]', 'message': 'People write from season. Upon drive before summer exactly tonight practice expert. Actually news reason particularly in should.'} SENT TO KINESIS: {'id': '084083bc-114a-4cc5-8cd6-2e15fd26b6db', 'name': 'Nathan Ward', 'email': '[email protected]', 'message': 'Add school air visit physical range. Child that company late. Boy than remain. Early ability economy thought event option.'} View Nightfall Findings in DynamoDB As the data starts to get sent to Kinesis, the Lambda function that we created earlier will begin to process each record and check for sensitive data using the Nightfall Detection Rule that we specified in the configuration. If Nightfall detects a record with sensitive data, the Lambda function will copy that record and additional metadata from Nightfall to the DynamoDB table that we created previously. Conclusion Congrats! You've successfully integrated Nightfall with Amazon Kinesis, Lambda, and DynamoDB. If you have an existing Kinesis Stream, you should be able to take the same Lambda Function that we used in this tutorial and start scanning that data without any additional changes. Clean Up If you'd like to clean up the created resources in AWS after completing this tutorial you should remove the following resources: nightfall-kinesis-roleIAM Role nightfall-demoKinesis data stream nightfall-lambdaLambda Function nightfall-findingsDynamoDB Table Using Redaction to Mask Findings With the Nightfall API, you are also able to redact and mask your Kinesis findings. You can add a Redaction Config, as part of your Detection Rule, as a section within the lambda function. For more information on how to use redaction with the Nightfall API, and its specific options, please refer to the guide here. Updated 4 months ago
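To complement the View Nightfall Findings in DynamoDB step above, here is a small, illustrative way to pull the stored findings back out of the table from your local machine with boto3. It assumes the same nightfall-findings table name, the default AWS CLI credentials, and attribute names that mirror whatever your Lambda function wrote.

import boto3

# Read back the findings written by the Lambda function.
table = boto3.resource("dynamodb").Table("nightfall-findings")

items = table.scan()["Items"]
print(f"{len(items)} records with findings")
for item in items[:5]:
    print(item["KinesisEventID"], item.get("Findings"))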
https://docs.nightfall.ai/docs/integrating-with-amazon-kinesis-python-sdk
2022-05-16T16:11:05
CC-MAIN-2022-21
1652662510138.6
[]
docs.nightfall.ai
What is MessengerX.io?¶
MessengerX.io is a developer marketplace for everyday chat apps, also known as chatbots. It is a joint venture between AppyHigh and MACHAAO that aims to help developers build and monetize deeply personalized chat experiences.

Why MessengerX.io?¶
With our unique learnings in the conversational space from processing over 2.5B+ messages to and from end users, we would like to claim that we know a thing or two about making good chatbots :) We also see an evident gap between chatbots and end users due to the operating nature of current messaging eco-systems. We aim to bridge this gap between “great” chatbots and end users!

How it Works?¶
A user sends a message to your bot through the Platform SDKs integrated inside publisher or partner app(s). The following sequence of events takes place when an incoming message destined for your bot is received:
- Our servers route the incoming message to your Webhook or CHAT BOT URL, where your messaging app or chatbot is hosted.
- Using the Send Message API, the mini app or chatbot can then respond to the person directly on the partner app via our Platform APIs.
The MessengerX Platform is FREE for developers looking to build highly engaging consumer-based chatbots.

What are Chatbots?¶
Chat apps, also called chatbots, are conversational programs that one can chat with to carry out a series of complicated tasks quickly.

Setup your Chatbot Server¶
Understanding Webhooks¶
A webhook is a REST endpoint URL that points to your custom functions / callbacks. This is the endpoint that will receive any incoming messages destined for your bot.

Understanding Message Payloads¶
A message payload is a JSON representation of an incoming message sent by the end user.

Encrypted Incoming Message Payload¶
The incoming messages that your webhook receives are encoded as a JWT and signed using the HS512 algorithm with your token as the key. Below is the JSON representation of the raw incoming message.
{"raw":"eyJhbGciOiJIUziJ9.===jsY8eeeru2i1vcsJQ....."}

Decrypting Incoming Message Payload¶
Decode the incoming payload with the secret_key provided. You can use the open-source JWT.io libraries available for all major programming languages (a short decoding sketch is included at the end of this page). Let’s quickly go through the sample representation of the decoded incoming message object payload:
- messaging: list of incoming message actions
- message_data: details about the nature of the data sent
- user: basic info about the user associated with the message
- sender: the unique device id of the user
- client: the partner app client id
- version: android / web sdk client version
- silent: if silent is true, don’t reply back to the request
{ "sub":{ "messaging":[ { "message_data":{ "text":"hi", "action_type":"get_started" }, "user":{ "userId":"<!-- USER_ID -->", "creation_time":1589518339556, "device_id":"311b145ed6a96d6", "email":"<c11b145ed6a96d6>@machaao.com", "timezone":"-7.0" }, "sender":"dWnjre9rTr65ZeiOmrY1oU", "silent":false, "client":"messenger.---.debug", "version":"0.838" } ] } }

Sending On-Demand Responses¶
Sending an outgoing message in response to a user’s input can be done with our On Demand Messaging APIs, as demonstrated below.

Sending a Text Message with Quick Replies¶
Below is an example CURL request to send a message / response to a particular user id using our Core Messaging APIs.
curl --location --request POST ' \ --header 'api_token: API_TOKEN' \ --header 'Content-Type: application/json' \ --data-raw '{ "users":["<!--- UNIQUE_USER_ID -->"], "message":{ "text": "I am a good bot", "quick_replies": [{ "content_type": "text", "title": "Hi", "payload": "hi" }] } }' Sending a Media Element with Quick Replies¶ Below is an example CURL request to send a message attachment to a particular user id. curl --location --request POST ' \ --header 'api_token: API_TOKEN' \ --header 'Content-Type: application/json' \ --data-raw '{ "users":["<!-- UNIQUE_USER_ID -->"], "message":{ "attachment":{ "type":"template", "payload":{ "template_type":"generic", "elements":[ { "title": "Test #786 - Duffle Bag + 200 Machaao Credits", "subtitle":"Only Pay Shipping & Handling Charges. Combo Offer for Machaao Users only.", "image_url":" } ] } }, "quick_replies": [{ "content_type": "text", "title": "Hi", "payload": "hi" }] } }' Personalization, Tagging & User Engagement¶ The personalization and engagement api is the core base to build sophisticated re-engaging bots. The process starts with tagging a user, Tagging a user allows you to open up multiple re-targeting or re-engagement use cases such as sending daily news, personalized responses, etc. This opens up your chat bot to support variety of deeply personalized use cases without a need for a huge dev ops team. Tag a User¶ Annotate or Tag a user with values for deeper personalization. curl --location --request POST ' \ --header 'api_token: <API_TOKEN>' \ --header 'Content-Type: application/json' \ --data-raw '{ "tag": "preferred_languages", "status": 1, "values": ["en", "fr"], "displayName": "Languages" }' Announcements (Subscription Messaging)¶ Sending announcements in order to re-engage your bot user (rate limited to a max of 1 per hour per user) Sample CURL Command¶ Below is an example CURL request to send an announcement to a particular tag or list of tags using our Announcement APIs. curl --location --request POST ' \ --header 'api_token: API_TOKEN' \ --header 'Content-Type: application/json' \ --data-raw '{ "tags":["india", "pakistan", "usa"], "message":{ "text": "I am a good bot", "quick_replies": [{ "content_type": "text", "title": "Hi", "payload": "hi" }] } }' Headless CMS¶ Tagging a user allows you to open up multiple re-targeting or re-engagement use cases such as sending daily news, personalized responses, etc. Insert new content¶ Auto-Annotate and insert content for your chat app. curl --location --request POST ' \ --header 'api_token: <API_TOKEN>' \ --header 'Content-Type: application/json' \ --data-raw '{ "url": " "tags": ["india", "pakistan", "bangladesh"] }' Open Source Samples¶ Below are some samples which will help get you started RASA (Python Sample)¶ Client Integration via SDKs¶ Integrate your bot on your website¶ Step_1: Below is a sample script you need to paste into your website to install webchat for your chatbot. <script id="webchat" src=" type="text/javascript" themecolor="#2196f3" chathost=" botname="<!-- your bot name -->" machaaokey="<!-- your api token -->" avatarurl="<!-- your bot logo url -->" ></script> Step_2: You will need to update variables above as shown below: themecolor : Put the desired color in hex or rgb format which will be applied to the chat header background, buttons and message bubble background. botname : The name of the bot that will appear on the chat header avatarurl : The url of the image that is shown on bot launcher icon. chathost : Url where the static assets for the webchat are hosted. 
machaaokey : The API token for your bot provided by Machaao

Integrate your bot in your Android App¶
Add the following to your app gradle file.
maven { url " }

Add Gradle Dependency¶
debugImplementation('com.machaao.android:machaao-sdk:0.874-SNAPSHOT') { transitive = true }
releaseImplementation('com.machaao.android:machaao-sdk:0.874') { transitive = true } // [contact [email protected] for access]

Modify Manifest (Add Token)¶
<meta-data android:

Add SingleBotActivity Reference to Manifest (Bot Developers)¶
<activity android: <intent-filter> <action android: <category android: <action android: </intent-filter> </activity>

Launch Your Bot / Chat App via our SDK [For Bot Developers / Partners]¶
Intent intent = new Intent(this, SingleBotActivity.class);
intent.putExtra("botToken", botToken);
startActivity(intent);

Sample Android Chat App @

Bonus for Developers¶
In addition to the massive savings on marketing and infrastructure costs, the platform also offers multiple other REST APIs for developers looking to build deeply personalized chatbots:
- Rich Messaging Support via On Demand Messaging API
- Deep Personalization via Tagging API
- Auto ML based Engagement via Announcement API
- Data Capture API (Subject to Approval)
- Transactional Wallet API (Subject to Approval)
- FREE Hosting for your chat bot (Subject to Approval)
- Guaranteed Message Processing (Subject to Approval)
- Admin Dashboard (Premium)

Resources / Tutorials¶
RASA + MessengerX.io Chat Bot Tutorial (YouTube Video)¶
This is a beginner’s guide on how to build a Python & RASA chatbot from scratch. The video helps you build your first sample bot using RASA’s open-source NLU template and lets you train your bot by writing logic in Python.

Chatbot Showcase¶
MedBot - Your Pocket Medical Guide
Built using Python + RASA using the MessengerX API

Other Resources¶
Partner Deck: Small Businesses / Enterprise¶
- Do you have an existing Facebook Messenger chatbot?
- Make your existing chatbot / platform work inside your client android app or website within hours (iOS coming soon).
- Conversational Bot Designer (Premium)

Join our Gitter Community¶
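As a companion to the Decrypting Incoming Message Payload section above, here is a minimal, illustrative sketch of decoding the raw token with the PyJWT library. The environment variable name and helper function are assumptions for this example, not part of the MessengerX.io API; the only facts taken from the docs are that the payload arrives as {"raw": "<token>"} and is signed with HS512 using your API token.

import os
import jwt  # pip install PyJWT

API_TOKEN = os.environ["MACHAAO_API_TOKEN"]  # assumed variable name for your bot token

def decode_incoming(body: dict) -> dict:
    """Decode the {"raw": "<jwt>"} body that the webhook receives."""
    # The payload is signed with HS512 using your API token as the key.
    # Depending on your PyJWT version you may need to relax registered-claim
    # validation, since the interesting data lives under the "sub" claim.
    return jwt.decode(body["raw"], API_TOKEN, algorithms=["HS512"])

# Example with the sample payload shape from the docs:
# decoded = decode_incoming({"raw": "eyJhbGciOiJI..."})
# for event in decoded["sub"]["messaging"]:
#     print(event["message_data"]["text"], event["sender"])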
https://messengerx.readthedocs.io/en/latest/
2022-05-16T14:11:20
CC-MAIN-2022-21
1652662510138.6
[]
messengerx.readthedocs.io
SetOcsAnnouncementID schedules configurable announcements to the subscriber based on Credit-Control-Answers The feature schedules charging announcements as defined by TS 32.281 as well as announcements for out-of-credit (4012 CCA result code), low balance (Low-Balance-Indicator AVP), and for custom OC-Play-Announcement-Id AVPs. The feature runs on every successful and 4012 credit check result and analyses the CCA for announcement triggers: Multiple-Services-Credit-ControlAVP containing Announcement-InformationAVP; OC-Play-Announcement-IdAVP; low balance AVP; or result code 4012. If the CCA contains such a trigger then the feature schedules the relevant announcement either immediately or on a timer relative to quota exhaustion. The feature is able to schedule multiple announcements to be played. The feature also runs on each party request for a terminating call to determine whether the session has been established and so if a reauthorization request should be triggered or whether any timers should be set relative to quota exhaustion to start announcements. The feature runs on timer expiry to start any announcements that were scheduled on timers. Behaviour Announcements requested in the Announcement-Information AVP are played in preference to other announcements. Announcements requested in the OC-Play-Announcement-Id AVP are played if no Announcement-Information announcements are present and configured announcement IDs for out-of-credit (4012 CCA result code) or low balance (Low-Balance-Indicator AVP) are only played if no Announcement-Information or OC-Play-Announcement-Id AVPs were present. Announcements for terminating user It is not possible to play announcements to the terminating subscriber after the initial credit check as the session has not yet been established. Announcements requested by Announcement-Information AVPs to be played to the terminating party in the initial credit check are not played. If other announcements were requested when running as a terminating instance then a reauthorization request will be sent once the session is established. Any announcements in the response to this subsequent reauthorization request will be played. The reauthorization request can be delayed for a number of milliseconds using the ChargingReauthDelayMillis configuration field. Mid call announcements to the terminating subscriber will be played immediately. Charging Announcements The Announcement-Information AVP and its child AVPs were introduced in 3GPP Rel. 13 but Sentinel does support receiving them in Rel. 12. Low Balance Announcements If the LowBalanceIndicator AVP is set in a successful CCA-I or CCA-U then a low balance announcement will be played (either configured or via the OC-Play-Announcement-Id-AVP) and a session state field is set to mark that a low balance announcement has been played. Subsequent credit checks will not trigger another low balance announcement unless the previous credit check response did not have the LowBalanceIndicator AVP set. Out of Credit Announcements If a credit check response contains a 4012 result code then: For early originating sessions, the feature will schedule the configured early dialog announcement if an announcement was not supplied in the CCA and then end the session with the appropriate sip error response according to the CCA result code. For early terminating and forwarding sessions, the feature will not schedule an announcement and will end the session with a 480 Temporarily Unavailable response. 
For confirmed originating and terminating sessions, the feature will schedule the configured mid session announcement if an announcement was not supplied in the CCA and then end the session. Configuration SetOcsAnnouncementID uses two JSLEE configuration profile tables: LowBalanceAnnouncementConfigProfileTable and SetOutOfCreditAnnouncementIDConfigProfileTable. Statistics SetOcsAnnouncementID statistics are tracked by the SetOcsAnnouncementID feature and can be found under the following parameter set: SLEE-Usage ▶ sentinel.volte.sip service ID ▶ sentinel.volte.sip SBB ID ▶ SetOcsAnnouncementID.
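The precedence rules in the Behaviour section can be easier to see as a short sketch. The Python below is purely illustrative (the function and field names are invented for this example and are not Sentinel VoLTE APIs); it simply restates the selection order: Announcement-Information first, then OC-Play-Announcement-Id, then the configured out-of-credit and low-balance announcement IDs.

def select_announcements(cca, config):
    """Illustrative only: choose which announcement IDs to schedule for one CCA."""
    if cca.announcement_information_ids:      # Announcement-Information AVPs win
        return cca.announcement_information_ids
    if cca.oc_play_announcement_ids:          # then any OC-Play-Announcement-Id AVPs
        return cca.oc_play_announcement_ids
    ids = []
    if cca.result_code == 4012:               # out of credit
        ids.append(config.out_of_credit_announcement_id)
    if cca.low_balance_indicator:             # low balance (played at most once per session)
        ids.append(config.low_balance_announcement_id)
    return ids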
https://docs.rhino.metaswitch.com/ocdoc/books/sentinel-volte-documentation/3.1.0/sentinel-volte-administration-guide/features/general-volte-features/set-ocs-announcement-id.html
2022-05-16T14:48:41
CC-MAIN-2022-21
1652662510138.6
[]
docs.rhino.metaswitch.com
Azure Data Explorer API Overview The Azure Data Explorer service supports the following communication endpoints: - A REST API endpoint, through which you can query and manage the data in Azure Data Explorer. This endpoint supports the Kusto Query Language for queries and control commands. - An MS-TDS endpoint that implements a subset of the Microsoft Tabular Data Stream (TDS) protocol, used by the Microsoft SQL Server products. This endpoint is useful for tools that know how to communicate with a SQL Server endpoint for queries. - An Azure Resource Manager (ARM) endpoint that is the standard means for Azure services. The endpoint is used to manage resources, such as Azure Data Explorer clusters. REST API The primary means of communicating with any Azure Data Explorer service, is by using the service's REST API. With this fully documented endpoint, callers can: - Query data - Query and modify metadata - Ingest data - Query the service health status - Manage resources The different Azure Data Explorer services communicate among themselves, via the same publicly available REST API. A number of client libraries are also available to use the service, without dealing with the REST API protocol. MS-TDS Azure Data Explorer also supports the Microsoft SQL Server communication protocol (MS-TDS), and includes a limited support for running T-SQL queries. This protocol enables users to run queries on Azure Data Explorer using a well-known query syntax (T-SQL) and database client tools such as LINQPad, sqlcmd, Tableau, Excel, and Power BI. For more information, see MS-TDS. Client libraries Azure Data Explorer provides a number of client libraries that make use of the above endpoints, to make programmatic access easy. - .NET SDK - Python SDK - R - Java SDK - Node SDK - Go SDK - PowerShell .NET Framework Libraries .NET Framework Libraries are the recommended way to invoke Azure Data Explorer functionality programmatically. A number of different libraries are available. - Kusto.Data (Kusto Client Library): Can be used to query data, query metadata, and alter it. It's built on top of the Kusto REST API, and sends HTTPS requests to the target Kusto cluster. - Kusto.Ingest (Kusto Ingestion Library): Uses Kusto.Dataand extends it to ease data ingestion. The above libraries use Azure APIs, such as Azure Storage API and Azure Active Directory API. Python Libraries Azure Data Explorer provides a Python client library that permits callers to send data queries and control commands. For more information, see Azure Data Explorer Python SDK. R Library Azure Data Explorer provides an R client library that permits callers to send data queries and control commands. For more information, see Azure Data Explorer R SDK. Java SDK The Java client library provides the capability to query Azure Data Explorer clusters using Java. For more information, see Azure Data Explorer Java SDK. Node SDK Azure Data Explorer Node SDK is compatible with Node LTS (currently v6.14) and built with ES6. For more information, see Azure Data Explorer Node SDK. Go SDK Azure Data Explorer Go Client library provides the capability to query, control and ingest into Azure Data Explorer clusters using Go. For more information, see Azure Data Explorer Golang SDK. PowerShell Azure Data Explorer .NET Framework Libraries can be used by PowerShell scripts. For more information, see Calling Azure Data Explorer from PowerShell. Monaco IDE integration The monaco-kusto package supports integration with the Monaco web editor. 
The Monaco Editor, developed by Microsoft, is the basis for Visual Studio Code. For more information, see monaco-kusto package.
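For readers who want to exercise the query endpoint without hand-crafting REST calls, here is a small sketch using the azure-kusto-data Python package mentioned above. The cluster URL, database, and table names are placeholders, and the device-code authentication shown is just one of several methods the SDK supports.

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder cluster, database, and table names.
cluster = "https://<clustername>.<region>.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(cluster)

client = KustoClient(kcsb)

# Run a KQL query against the query endpoint.
response = client.execute("MyDatabase", "MyTable | take 10")
for row in response.primary_results[0]:
    print(row)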
https://docs.azure.cn/en-us/data-explorer/kusto/api/
2022-05-16T16:34:35
CC-MAIN-2022-21
1652662510138.6
[]
docs.azure.cn
AWS firewall restrictions
In AWS, Databricks launches the cluster in a VPC created and managed by Databricks in the customer’s account. For additional security, workers that belong to a cluster can only communicate with other workers that belong to the same cluster. Workers cannot talk to any other EC2 instances or other AWS services running in the Databricks VPC. If you have an AWS service running in the same VPC as the Databricks cluster, you may not be able to talk to that service because of this firewall restriction. Databricks recommends running such services outside of the Databricks VPC and peering the two VPCs so that clusters can connect to those services.
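A peering connection like the one recommended above can be set up in the AWS console, with CloudFormation, or with a few API calls. The boto3 sketch below is illustrative only: the region and VPC IDs are placeholders, and you still need to update route tables and security groups on both sides so the Databricks workers can actually reach the service.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder region

# Placeholder IDs: the Databricks-managed VPC and the VPC hosting your service.
databricks_vpc_id = "vpc-0123456789abcdef0"
service_vpc_id = "vpc-0fedcba9876543210"

peering = ec2.create_vpc_peering_connection(
    VpcId=databricks_vpc_id,
    PeerVpcId=service_vpc_id,
)["VpcPeeringConnection"]

# Accept the request (same-account, same-region peering).
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)

# Remember to add routes for the peer CIDR to both VPCs' route tables
# and open the relevant ports in the security groups.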
https://docs.databricks.com/administration-guide/cloud-configurations/aws/aws-firewall-restrictions.html
2022-05-16T15:59:04
CC-MAIN-2022-21
1652662510138.6
[]
docs.databricks.com
Jumps
Conditional logic on your form(s). Jumps allow a questionnaire to follow a conditional flow based on the user's answers. It is only possible to jump forward, to one of the next questions, not backward.

When
Four conditions can be set:
- answer IS: for example, within question one, jump to question five if the user selects value two.
- answer IS NOT: for example, within question one, jump to question five if the user does not select value two.
- no answer is given: for example, if a user does not select a value for question one, jump to question five.
- always: for example, if a user selects any answer, jump to question five. Useful when you want to skip questions regardless of the user input to keep the flow going.

Answer
Pick the answer that will trigger the jump from the list of possible answers. You CANNOT have more jumps than the total number of possible answers you have.

Go to
Select the jump destination, i.e. the question to go to when the condition is met. By design, you cannot jump to the immediately next question (it would not be a jump, technically); you need to jump over at least one question. So if you have questions A, B, C, etc., setting a jump on A will list the next questions starting from C. Question B will not be listed, as it is just next to A. To reach the end of the questionnaire, just select "End of form".

Multiple jumps
You can of course add more than one jump clause. However, please be careful to make sure you do not add conflicting jump clauses to a field. If you do, the jumps are evaluated in order from top to bottom, i.e. when more than one condition is met, the first jump from the top wins. For example, you may have a drop-down with five items, and you would like to specify that if a user selects item one, they jump to question five, but if they select item three they jump to question ten. All other choices would proceed normally within the form. In every case, you cannot add more jumps than the total number of possible answers you have.
Jumps can be used in many circumstances to give great flexibility when defining a single form. In many cases, multiple forms that would normally be given to a user can be combined into a single form with logic defined by jumps.

Questions without possible answers to choose from
The logic described above applies to RADIO, DROPDOWN, and CHECKBOX questions. All the other question types only allow a single, simple "always" jump if needed. The reason behind that is to make it possible to jump from one question to another regardless of the user input. This is useful when a question is the destination of a jump, but all the following questions are the destinations of other previous jumps the user might have skipped. This situation is quite common when many jumps are set and you want to keep the proper questionnaire flow.

Jumps and Groups
By design, when using Groups, it is possible to apply only an "always" jump on the whole group, not jumps on any questions nested in that group. This makes a lot of sense: a group is a set of related questions displayed at the same time, as they would appear on a paper-based form. Jumps are a tool to hide questions from users when they are not supposed to answer them. If there is a need for conditional questions within a group, the paper-based approach will work. Something like: "If you answered B to the previous question, tap Next to go to the next section." Also, you could use README question types to offer even more instructions to the user.
If your logic is more complex, try to restructure your form so you will not need to use a jump within a group.
https://docs.epicollect.net/formbuilder/jumps
2022-05-16T15:26:17
CC-MAIN-2022-21
1652662510138.6
[]
docs.epicollect.net
What is a PYX file?
A PYX file is a source code file written in the Python-like language Pyrex. It may contain code that references existing C modules. Pyrex is a programming language used to create compiled Python extension modules (PYD files); it combines Python-like syntax with C data types. Using PYX files, users can write extension modules that directly access external C code. PYX files can be opened with Pyrex, which is available on Windows, macOS, and Linux.

PYX File Format - More Information
PYX files are plain text files and their syntax is very close to Python. Python itself provides only a C API for writing extension modules; Pyrex speeds up the execution of Python code and provides a Python-level interface to existing C modules/libraries.
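To make the format concrete, here is a small, hypothetical example of what a .pyx source file can look like. It uses the characteristic Pyrex pattern of mixing Python-style code with C type declarations (cdef); the function name and logic are invented purely for illustration.

# example.pyx: a hypothetical Pyrex-style source file
def sum_of_squares(int n):
    """Return 0^2 + 1^2 + ... + (n-1)^2 using C integer arithmetic."""
    cdef int i
    cdef long long total = 0
    for i in range(n):
        total += i * i
    return total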
https://docs.fileformat.com/programming/pyx/
2022-05-16T14:18:36
CC-MAIN-2022-21
1652662510138.6
[]
docs.fileformat.com
To configure device trust and access policies for desktop devices, you configure identity provider routing rules in Okta and conditional access policies in Workspace ONE Access. The new, simplified Okta device trust solution that is available for iOS and Android devices is not yet available for desktop devices. To configure device trust for desktop devices, you can use the Certificate (Cloud Deployment) and Device Compliance authentication methods in Workspace ONE Access policies. Important: Do not use the Device Compliance (with AirWatch) authentication method for apps that are configured with Device Trust in Okta. The Device Compliance authentication method is not compatible with apps using Okta Device Trust. Make sure that you follow the preliminary procedures listed for the Device Trust use case in Main Use Cases before proceeding with the tasks in this section.
https://docs.vmware.com/en/VMware-Workspace-ONE-Access/services/workspaceone_okta_integration/GUID-02C8DB3D-977F-459B-9A50-ED57F190DFC7.html
2022-05-16T14:30:11
CC-MAIN-2022-21
1652662510138.6
[]
docs.vmware.com
After creating the OAuth 2.0 client in Workspace ONE Access, generate an OAuth bearer token.

Prerequisites
Download and install the Postman app. You can download Postman from the Postman website.

Procedure
- Open a new tab in the Postman app.
- For the HTTP method, select POST.
- For the URL, enter the token request URL, replacing tenanturl with your Workspace ONE Access URL.
- Click the Authorization tab and select OAuth 2.0 as the type.
- In the Configure New Token section, enter the required information. For example:
- For Token Name, enter a name, such as WorkspaceONE.
- For Grant Type, select Client Credentials.
- For Access Token URL, enter the token endpoint URL, where tenantURL is your Workspace ONE Access tenant URL. Workspace ONE Access was formerly called VMware Identity Manager. Old tenants have the domain name vmwareidentity.com while new tenants have the domain name workspaceoneaccess.com.
- For Client ID, enter the Client ID that you set in Create OAuth 2.0 Client.
- For Client Secret, enter the secret that was generated in Create OAuth 2.0 Client. Note: If you did not copy the secret while creating the client, you can regenerate it. To regenerate the secret, go to the page in the Workspace ONE Access console, select the client, and click Regenerate Secret on the client page.
- For Scope, enter admin.
- Click Get New Access Token. A token is generated and displayed.
- To verify that the bearer token was added, click the Headers tab and click hidden headers. The bearer token appears.
- If the bearer token was not added, return to the Authorization tab, select your token from the Available Tokens drop-down menu, and check again.
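If you prefer to script this instead of using Postman, the same client_credentials grant can be exercised with a few lines of Python. This is an illustrative sketch only: the token endpoint path is left as a placeholder that you should take from the procedure above, and the client ID and secret are the values created in Create OAuth 2.0 Client.

import requests

# Placeholder: use the Access Token URL from the procedure above,
# with tenanturl replaced by your Workspace ONE Access tenant URL.
token_url = "https://<tenanturl>/<access-token-path>"

resp = requests.post(
    token_url,
    data={
        "grant_type": "client_credentials",
        "client_id": "<your-client-id>",
        "client_secret": "<your-client-secret>",
        "scope": "admin",
    },
    timeout=30,
)
resp.raise_for_status()
bearer_token = resp.json()["access_token"]
print(bearer_token[:20], "...")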
https://docs.vmware.com/en/VMware-Workspace-ONE-Access/services/workspaceone_okta_scim_provisioning/GUID-CA84C36A-CE5B-417C-8157-9E145E114644.html
2022-05-16T16:11:19
CC-MAIN-2022-21
1652662510138.6
[]
docs.vmware.com
TensorFlow BYOM: Train locally and deploy on SageMaker. - Prerequisites and Preprocessing Training the network locally Set up hosting for the model Validate the endpoint for use Note: Compare this with the tensorflow bring your own model example Thie notebook was last tested on a ml.m5.xlarge instance running the Python 3 (TensorFlow 2.3 Python 3.7 CPU Optimized) kernel in SageMaker Studio. Introduction We will do a classification task, training locally in the box from where this notebook is being run. We then set up a real-time hosted endpoint in SageMaker. Consider the following model definition for IRIS classification. This mode uses the tensorflow.estimator.DNNClassifier which is a pre-defined estimator module for its model definition. Prequisites and Preprocessing Permissions and environment variables Here we set up the linkage and authentication to AWS services. In this notebook we only need the roles used to give learning and hosting access to your data. The Sagemaker SDK will use S3 defualt buckets when needed. If the get_execution_role does not return a role with the appropriate permissions, you’ll need to specify an IAM role arn that does. [ ]: !pip install --upgrade tensorflow sagemaker [ ]: import boto3 import numpy as np import os import pandas as pd import re import sagemaker from sagemaker.tensorflow import TensorFlowModel from sagemaker.utils import S3DataConfig import shutil import tarfile import tensorflow as tf from tensorflow.python.keras.utils.np_utils import to_categorical from tensorflow.keras.layers import Input, Dense role = sagemaker.get_execution_role() sm_session = sagemaker.Session() bucket_name = sm_session.default_bucket() Model Definitions For this example, we’ll use a very simple network architecture, with three densely-connected layers. [ ]: def iris_mlp(metrics): ### Setup loss and output node activation output_activation = "softmax" loss = "sparse_categorical_crossentropy" input = Input(shape=(4,), name="input") x = Dense( units=10, kernel_regularizer=tf.keras.regularizers.l2(0.001), activation="relu", name="dense_layer1", )(input) x = Dense( units=20, kernel_regularizer=tf.keras.regularizers.l2(0.001), activation="relu", name="dense_layer2", )(x) x = Dense( units=10, activation="relu", kernel_regularizer=tf.keras.regularizers.l2(0.001), name="dense_layer3", )(x) output = Dense(units=3, activation=output_activation)(x) ### Compile the model model = tf.keras.Model(input, output) model.compile(optimizer="adam", loss=loss, metrics=metrics) return model Data Setup We’ll use the pre-processed iris training and test data stored in a public S3 bucket for this example. [ ]: data_bucket = S3DataConfig( sm_session, "example-notebooks-data-config", "config/data_config.json" ).get_data_bucket() print(f"Using data from {data_bucket}") [ ]: # Download iris test and train data sets from S3 SOURCE_DATA_BUCKET = data_bucket SOURCE_DATA_PREFIX = "datasets/tabular/iris" sm_session.download_data(".", bucket=SOURCE_DATA_BUCKET, key_prefix=SOURCE_DATA_PREFIX) # Load the training and test data from .csv to a Pandas data frame. 
train_df = pd.read_csv( "iris_train.csv", header=0, names=["sepal_length", "sepal_width", "petal_length", "petal_width", "class"], ) test_df = pd.read_csv( "iris_test.csv", header=0, names=["sepal_length", "sepal_width", "petal_length", "petal_width", "class"], ) # Pop the record labels into N x 1 Numpy arrays train_labels = np.array(train_df.pop("class")) test_labels = np.array(test_df.pop("class")) # Save the remaining features as Numpy arrays train_np = np.array(train_df) test_np = np.array(test_df) Training the Network Locally Here, we train the network using the Tensorflow .fit method, just like if we were using our local computers. This should only take a few seconds because the model is so simple. [ ]: EPOCHS = 50 BATCH_SIZE = 32 EARLY_STOPPING = tf.keras.callbacks.EarlyStopping( monitor="val_loss", mode="auto", restore_best_weights=True ) # Instantiate classifier classifier = iris_mlp(metrics=["accuracy", "binary_accuracy"]) # Fit classifier history = classifier.fit( x=train_np, y=train_labels, validation_data=(test_np, test_labels), callbacks=[EARLY_STOPPING], batch_size=BATCH_SIZE, epochs=EPOCHS, ) Set up hosting for the model Export the model from tensorflow In order to set up hosting, we have to import the model from training to hosting. We will begin by exporting the model from TensorFlow and saving it to our file system. We also need to convert the model into a form that is readable by sagemaker.tensorflow.model.TensorFlowModel. There is a small difference between a SageMaker model and a TensorFlow model. The conversion is easy and fairly trivial. Simply move the tensorflow exported model into a directory export\Servo\ and tar the entire directory. SageMaker will recognize this as a loadable TensorFlow model. [ ]: classifier.save("export/Servo/1") with tarfile.open("model.tar.gz", "w:gz") as tar: tar.add("export") Open a new sagemaker session and upload the model on to the default S3 bucket. We can use the sagemaker.Session.upload_data method to do this. We need the location of where we exported the model from TensorFlow and where in our default bucket we want to store the model( /model). The default S3 bucket can be found using the sagemaker.Session.default_bucket method. Here, we upload the model to S3 [ ]: s3_response = sm_session.upload_data("model.tar.gz", bucket=bucket_name, key_prefix="model") Import model into SageMaker Use the sagemaker.tensorflow.model.TensorFlowModel to import the model into SageMaker that can be deployed. We need the location of the S3 bucket where we have the model and the role for authentication. [ ]: sagemaker_model = TensorFlowModel( model_data=f"s3://{bucket_name}/model/model.tar.gz", role=role, framework_version="2.3", ) Create endpoint Now the model is ready to be deployed at a SageMaker endpoint. We can use the sagemaker.tensorflow.model.TensorFlowModel.deploy method to do this. Unless you have created or prefer other instances, we recommend using a single 'ml.m5.2xlarge' instance for this example. These are supplied as arguments. [ ]: %%time predictor = sagemaker_model.deploy(initial_instance_count=1, instance_type="ml.m5.2xlarge") Validate the endpoint for use We can now use this endpoint to classify an example to ensure that it works. The output from predict will be an array of probabilities for each of the 3 classes. [ ]: sample = [6.4, 3.2, 4.5, 1.5] predictor.predict(sample) Delete all temporary directories so that we are not affecting the next run. Also, optionally delete the end points. 
[ ]: os.remove("model.tar.gz") os.remove("iris_test.csv") os.remove("iris_train.csv") os.remove("iris.data") shutil.rmtree("export") If you do not want to continue using the endpoint, you can remove it. Remember, open endpoints are charged. If this is a simple test or practice, it is recommended to delete them. [ ]: predictor.delete_endpoint() [ ]: [ ]:
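After cleaning up, you can double-check that nothing chargeable is left behind. The short boto3 sketch below simply lists the SageMaker endpoints remaining in the current account and region; an empty result means the endpoint from this example was deleted successfully.

import boto3

sm = boto3.client("sagemaker")

# List any endpoints still running in this account/region.
for ep in sm.list_endpoints()["Endpoints"]:
    print(ep["EndpointName"], ep["EndpointStatus"])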
https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/tensorflow_iris_byom/tensorflow_BYOM_iris.html
2022-05-16T14:24:55
CC-MAIN-2022-21
1652662510138.6
[]
sagemaker-examples.readthedocs.io
GraphFrames
GraphFrames is a package for Apache Spark that provides DataFrame-based graphs. It provides high-level APIs in Java, Python, and Scala. It aims to provide both the functionality of GraphX and extended functionality that takes advantage of Spark DataFrames in Python and Scala. This extended functionality includes motif finding, DataFrame-based serialization, and highly expressive graph queries. The GraphFrames package is included in Databricks Runtime for Machine Learning.
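As a quick illustration of the DataFrame-based API, here is a minimal PySpark sketch that builds a tiny graph and runs two of the queries mentioned above. It assumes a Databricks notebook on Databricks Runtime for ML (or any Spark session with the GraphFrames package attached) where the spark session object is predefined; the vertex and edge data are made up for the example.

from graphframes import GraphFrame

# Toy data: vertices need an "id" column, edges need "src" and "dst" columns.
vertices = spark.createDataFrame(
    [("a", "Alice"), ("b", "Bob"), ("c", "Carol")], ["id", "name"]
)
edges = spark.createDataFrame(
    [("a", "b", "follows"), ("b", "c", "follows"), ("c", "a", "follows")],
    ["src", "dst", "relationship"],
)

g = GraphFrame(vertices, edges)

g.inDegrees.show()                          # simple graph query
g.find("(x)-[]->(y); (y)-[]->(z)").show()   # motif finding: two-hop chains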
https://docs.databricks.com/spark/latest/graph-analysis/graphframes/index.html
2022-05-16T16:27:07
CC-MAIN-2022-21
1652662510138.6
[]
docs.databricks.com
Publishing a PostGIS table¶
This tutorial walks through the steps of publishing a PostGIS table with GeoServer.

Note: This tutorial assumes that PostgreSQL/PostGIS has been previously installed on the system and is responding on localhost on port 5432, and also that GeoServer is running.

Data preparation¶
First let's gather the data that we'll be publishing.
- Download the file nyc_buildings.zip. It contains a PostGIS dump of a dataset of buildings from New York City.
- Create a PostGIS database called nyc. This can be done with the following commands:
createdb nyc
psql -d nyc -c 'CREATE EXTENSION postgis'
Note: You may need to supply a user name and password with these commands.
- Extract nyc_buildings.sql from nyc_buildings.zip.
- Import nyc_buildings.sql into the nyc database:
psql -f nyc_buildings.sql nyc

Creating a new workspace¶
The next step is to create a workspace for the data.

Creating a store¶
Create a new store by clicking the PostGIS link. Enter the Basic Store Info:
- Select the nyc Workspace
- Enter the Data Source Name as nyc_buildings
- Add a brief Description
- Specify the PostGIS database Connection Parameters:
Note: Leave all other fields at their default values. Click Save.

Creating a layer¶
Now that the store is loaded, we can publish the layer. Navigate to the Layers section. Click Add a new resource. From the New Layer chooser menu, select nyc:nyc_buildings. On the resulting layer row, select the layer name nyc_buildings. The Edit Layer page defines the data and publishing parameters for a layer. Enter a short Title and an Abstract for the nyc_buildings layer. Make sure the default style is set to polygon. Finalize the layer configuration by scrolling to the bottom of the page and clicking Save.

Previewing the layer¶
In order to verify that the nyc_buildings layer is published correctly, we can preview the layer. Navigate to the Layer Preview screen and find the nyc:nyc_buildings layer.
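Once the layer is published, you can also verify it programmatically rather than through the Layer Preview screen. The sketch below is illustrative: it uses Python with the requests library to ask GeoServer's WFS endpoint for a single feature from the new layer, and it assumes the default local GeoServer base URL http://localhost:8080/geoserver, which may differ in your installation.

import requests

# Assumed local GeoServer base URL; adjust to match your installation.
base = "http://localhost:8080/geoserver"

resp = requests.get(
    f"{base}/nyc/ows",
    params={
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "nyc:nyc_buildings",
        "count": 1,
        "outputFormat": "application/json",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["features"][0]["properties"])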
https://docs.geoserver.org/stable/en/user/gettingstarted/postgis-quickstart/index.html
2022-05-16T15:15:06
CC-MAIN-2022-21
1652662510138.6
[]
docs.geoserver.org
Hackuarium documentation
Welcome to this site, which contains the documentation for some open-hardware projects led at the Hackuarium bio-hacking space located in Écublens, Switzerland. You can navigate the sidebar on the left of your screen to see the docs written for our projects. If you are a little bit familiar with Markdown and GitHub, please feel free to correct and/or contribute to the documentation. There is an 'edit' button at the bottom of every page.
https://docs.hackuarium.org/docs/intro/
2022-05-16T16:27:37
CC-MAIN-2022-21
1652662510138.6
[]
docs.hackuarium.org
Frees the Array object and releases the resources that it holds. Both 1-D and N-D arrays are supported for this method. Syntax void free() throws SQLException Exceptions Throws SQLException if an error occurs while attempting to free the array and release its resources. A database specific code “9743 (ERRUDFJAVARRAY) <Failed to free Array object>” is returned.
https://docs.teradata.com/r/Teradata-VantageTM-SQL-External-Routine-Programming/July-2021/Java-Application-Classes/com.teradata.fnc.Array/free
2022-05-16T16:13:14
CC-MAIN-2022-21
1652662510138.6
[]
docs.teradata.com
Manage Azure Data Explorer database permissions Azure Data Explorer enables you to control access to databases and tables, using a role-based access control model. Under this model, principals (users, groups, and apps) are mapped to roles. Principals can access resources according to the roles they're assigned. For a list of available roles, see Role-based Authorization This article describes the available roles and how to assign principals to those roles using the Azure portal and Azure Data Explorer management commands. Manage permissions in the Azure portal Navigate to your Azure Data Explorer cluster. In the Overview section, select the database where you want to manage permissions. For roles that apply to all databases, skip this phase and go directly to the next step. Select Permissions then Add. Look up the principal, select it, then Select. Manage permissions with management commands Sign-in to and add your cluster if it's not already available. In the left pane, select the appropriate database. Use the .addcommand to assign principals to roles: .add database databasename rolename ('aaduser | [email protected]'). To add a user to the Database user role, run the following command, substituting your database name and user. .add database <TestDatabase> users ('aaduser=<[email protected]>') The output of the command shows the list of existing users and the roles they're assigned to in the database. For examples pertaining to Azure Active Directory and the Kusto authorization model, please see Principals and Identity Providers
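The same management commands shown above can also be run programmatically, which is handy when role assignments are part of an automated deployment. The sketch below is illustrative and uses the azure-kusto-data Python package: the cluster URI, database name, and user principal are placeholders, and the caller must already hold a role that allows adding principals (for example, database admin).

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://<clustername>.<region>.kusto.windows.net"  # placeholder
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# Assign a principal to the Database user role, mirroring the .add command above.
cmd = ".add database TestDatabase users ('aaduser=user@contoso.com')"
response = client.execute_mgmt("TestDatabase", cmd)
for row in response.primary_results[0]:
    print(row)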
https://docs.azure.cn/en-us/data-explorer/manage-database-permissions
2022-05-16T14:47:02
CC-MAIN-2022-21
1652662510138.6
[]
docs.azure.cn