Dataset columns:
- content: string (length 0 to 557k)
- url: string (length 16 to 1.78k)
- timestamp: timestamp[ms]
- dump: string (length 9 to 15)
- segment: string (length 13 to 17)
- image_urls: string (length 2 to 55.5k)
- netloc: string (length 7 to 77)
THYBaseMenuController

Overview

The THYBaseMenuController base class provides methods for dealing with menu objects. The Hydra framework uses controller classes to establish a connection with a specific menu object. You will not have to use this class directly; instead, you can use one of its descendants to specify the menu object that the Hydra framework will use for menu merging.

Note: You can read about menu and toolbar usage in the Hydra framework in this article. Please also look at the Actions sample shipped with Hydra to see menu and toolbar merging in action.

Location

- Unit: Hydra.VCL.UserInterface.pas
- Ancestry: TComponent | THYUpdateableController | THYBaseController | THYBaseMenuController

Instance Methods

constructor Create  override
Creates a new instance of the class.
constructor Create(aOwner: TComponent)
Parameters:
- aOwner: Reference to the owner object.

AddReference  protected  (declared in THYUpdateableController)
procedure AddReference(const anItem: IHYVCLObjectReference)
Parameters:
- anItem: Reference to the item to add.

BeginUpdate  protected virtual  (declared in THYUpdateableController)
Call BeginUpdate prior to call(s) to AddReference. Each BeginUpdate must have an associated EndUpdate. Items added by AddReference can subsequently be removed by a single call to DeleteUpdates.
procedure BeginUpdate(const aGUID: TGUID)
Parameters:
- aGUID: Unique identifier of the session.

DeleteUpdates  protected virtual  (declared in THYUpdateableController)
DeleteUpdates removes all items added to the host's menu and toolbars via AddReference calls.
procedure DeleteUpdates(const aGUID: TGUID)
Parameters:
- aGUID: Unique identifier of the session.

DoGetItems  protected virtual abstract
function DoGetItems: IHYVCLMenuItem

EndUpdate  protected virtual  (declared in THYUpdateableController)
Call EndUpdate following call(s) to AddReference preceded by BeginUpdate.
procedure EndUpdate(const aGUID: TGUID)
Parameters:
- aGUID: Unique identifier of the session.

GetIsUpdating  protected  (declared in THYUpdateableController)
Returns a value indicating whether the controller is being updated.
function GetIsUpdating: Boolean

GetItems  protected
function GetItems: IHYVCLMenuItem

GetUpdateCount  protected  (declared in THYUpdateableController)
Returns the number of processed updates.
function GetUpdateCount(const aGUID: TGUID): Integer
Parameters:
- aGUID: Unique identifier of the session.

Implements

- this article
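As a rough sketch of the update pattern these methods imply (BeginUpdate, AddReference, EndUpdate, and later DeleteUpdates), a hypothetical descendant might batch its changes as follows. The class name THYMyMenuController, the PublishItem method and the aItem parameter are illustrative assumptions; only the controller methods and the TGUID session identifier come from this page.

procedure THYMyMenuController.PublishItem(const aItem: IHYVCLObjectReference);
var
  lSession: TGUID;
begin
  CreateGUID(lSession);      // unique identifier for this update session
  BeginUpdate(lSession);     // every BeginUpdate needs a matching EndUpdate
  try
    AddReference(aItem);     // register the menu item with the host
  finally
    EndUpdate(lSession);
  end;
  // Later, a single DeleteUpdates(lSession) call removes everything
  // that was added during this session.
end;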
https://docs.hydra4.com/API/Delphi/Classes/THYBaseMenuController/
2022-05-16T12:52:39
CC-MAIN-2022-21
1652662510117.12
[]
docs.hydra4.com
Phylum Package Score

Numeric risk score for your open source packages.

Introduction

The Phylum Package Score is an easy-to-understand score representing the overall reputation of an open source package. The objective is to help you quickly triage and act on issues. Similar to a credit score that captures your overall credit rating, the Phylum Package Score captures analytics, heuristics, and machine learning models applied to open source software dependencies. The Package Score is measured on a scale of 0-100. Higher values are "better" or "safer" compared to packages with lower scores. Phylum's big data technology adjusts the score higher or lower based upon characteristics identified in the package under test.

Why a single value score?

A single value score provides sufficient fidelity for relating complex information in a way that can be easily used in modern development practices. Over time, Phylum will apply thousands of analytics to modify the score assigned to a package. Some of these analytics, like remote code execution vulnerabilities or active malware, will have a huge impact on the Phylum Package Score. Other analytics will have a much smaller impact. The 0-100 score allows nuanced mapping of large amounts of data. We favor this classification approach over a common severity scale of high/medium/low. A 0-100 score is also more easily used in automation because there is more detail to build successful security policies around.

Risk Domains

The Phylum Package Score is made up of five key domains of risk: malicious code, technical debt, license, author, and software vulnerability.

- Malicious Code Score - captures malware, backdoors, and other types of malicious code. Examples of risk analytics that modify the Malicious Code Score:
  - High-entropy data blobs or strings ending in an evaluation function
  - Download, decrypt, execute call patterns
  - Dynamic function resolution behaviors
  - Dynamic module loading behaviors
- Technical Debt Score - encompasses engineering risk and technical debt. Examples of risk analytics that modify the Technical Debt Score:
  - Abandoned packages
  - Packages with 1 author or maintainer
  - Packages without tests or sufficient test coverage
- License Score - evaluates the commercial friendliness of software licenses and how the package's licenses change over time. Examples of risk analytics that modify the License Score:
  - Presence of non-commercial friendly licenses in the package or dependencies
  - How frequently licenses change in the package and its dependency graph
  - Likelihood of future changes to licenses in the package and its dependency graph
- Author Score - assesses author behavior, reputation, and risk to the package. Examples of risk analytics that modify the Author Score:
  - Has the author previously committed vulnerabilities to other software
  - Has the author previously committed malicious code to other software
  - Age of author account
  - Overall open source contributions
  - Does the author's identity map to other online identities (Twitter, Stack Overflow, Quora, etc.)
- Software Vulnerability Score - encapsulates the domain of software vulnerabilities. Examples of risk analytics that modify the Software Vulnerability Score:
  - Severity and impact of the vulnerability
  - Difficulty of exploiting the vulnerability
  - Age of the vulnerability
  - Presence of a patch for the vulnerability

How is the Phylum Package Score calculated?
First, the Phylum system ingests and processes massive amounts of information about a package and the dependencies of that package. Next, analysis occurs on the dataset using analytics, heuristics and machine learning models. The ingested dataset includes:

- Static analysis of package source code
- File analysis of all files in the package
- Commit history analysis of any attached source code repositories
- Metadata analysis of all artifacts captured from the package manager and hosting repository
- Known vulnerabilities for a package-version iteration
- Commit analysis of prior and new authors
- Author reputation from previous activities and behaviors
- Full composition analysis of all dependencies required for package use

This data set is maintained and curated over the lifetime of the package. As authors, source code, files, and other artifacts are added or removed over time, new data triggers updates to the Phylum Package Score.

Analytics, Heuristics and Machine Learning

The analysis layer combs over the package data to identify low indicators of risk and combines them with other associated information to extract high indicators of risk. The techniques that are used to extract this information vary, but can be loosely grouped into analytics, heuristics and machine learning. These techniques operate on the Phylum platform continuously to extract meaningful indicators to better understand the risk in using an open source package. Once these indicators have been identified, they are weighted and combined with other indicators to create the Phylum Package Score.

Example

An example highlighting how low indicators of risk can be combined into high indicators: Using time-series analysis, we can understand how a package author typically commits source code. We can observe times of day, sizes of commits, how comments are used, variable names and more to enumerate a fingerprint that is representative of that author. These features can be used to model the author's behavior using machine learning. If we observe the author's identifier (e.g. GitHub email address) in a password breach dataset and notice a divergence from the normal fingerprint, we may have an indication of malicious activity. This combination of analytics, heuristics, and machine learning can identify when an attacker may have recovered or stolen an author's credentials and used them to perform unauthorized activity on source code that others rely upon.

What are some ways to use the Phylum Package Score?

Define and enforce policies for use of open source software as dependencies. By setting thresholds using either the Phylum CLI tool or the Phylum User Interface, a user can define the policies by which dependencies with risk attributes are controlled. A user can get started by simply disallowing any packages with a Phylum Package Score under 50. This can be done easily in Phylum's UI or CLI tool and can be integrated into a variety of places in the developer and devops automation systems in use today. More mature policies might define:

- Packages with scores below 50 block builds during test execution
- Packages with scores between 51 and 65 send a warning message to the security team and developer
- Packages with scores that have dropped more than 15 points in the past 30 days send a warning message to the security team and developer
- Packages that are severely abandoned or depend on abandoned packages will be alerted for 90 days, but will block builds after 90 days
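To make the policy idea concrete, here is a small illustrative sketch in Python that maps a score (and its 30-day change) to the example actions listed above. The thresholds come from this page; the function name, signature and return values are hypothetical and are not part of the Phylum CLI or API.

def evaluate_package(score: int, score_30_days_ago: int) -> str:
    """Map a Phylum Package Score (0-100) to an example policy action."""
    if score < 50:
        return "block-build"
    if 51 <= score <= 65:
        return "warn-security-team-and-developer"
    if score_30_days_ago - score > 15:
        return "warn-security-team-and-developer"
    return "allow"

print(evaluate_package(score=42, score_30_days_ago=70))  # block-build
print(evaluate_package(score=80, score_30_days_ago=99))  # warn: dropped more than 15 points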
https://docs.phylum.io/docs/phylum-package-score
2022-05-16T12:54:40
CC-MAIN-2022-21
1652662510117.12
[]
docs.phylum.io
Concepts

A process in AiiDA is executed by a runner: either the local runner of the current interpreter, or one of the daemon runners in case the daemon runs the process. In addition to those run instructions, any Process that has been executed is represented by the ProcessNode class. This ProcessNode class is a sub class of Node and serves as the record of the process' execution in the database and, by extension, the provenance graph. It is very important to understand this division of labor. A Process describes how something should be run, and the ProcessNode serves as a mere record in the database of what actually happened during execution. A good thing to remember is that while it is running, we are dealing with the Process, and when it is finished we interact with the ProcessNode.

Process types

Processes in AiiDA come in two flavors:

- Calculation-like
- Workflow-like

The calculation-like processes have the capability to create data, whereas the workflow-like processes orchestrate other processes and have the ability to return data produced by calculations. Again, this is a distinction that plays a big role in AiiDA and is crucial to understand. For this reason, these different types of processes also get a different sub class of the ProcessNode class. The hierarchy of these node classes and the link types that are allowed between them and Data nodes is explained in detail in the provenance implementation documentation. Currently, there are four types of processes in aiida-core; the following table shows with which node class each is represented in the provenance graph and what the process is used for. For basic information on the concept of a CalcJob or calcfunction, refer to the calculations concept. The WorkChain and workfunction are described in the workflows concept. After having read and understood the basic concept of calculation and workflow processes, detailed information on how to implement and use them can be found in the dedicated developing sections for calculations and workflows, respectively.

Note: A FunctionProcess is never explicitly implemented but will be generated dynamically by the engine when a python function decorated with a calcfunction() or workfunction() is run.

Process state

Each instance of a Process class that is being executed has a process state. This property tells you about the current status of the process. It is stored in the instance of the Process itself, and the workflow engine, the plumpy library, operates only on that value. However, the Process instance 'dies' as soon as it is terminated, therefore the process state is also written to the calculation node that the process uses as its database record, under the process_state attribute. The process can be in one of six states. The three states in the left column are the 'active' states, whereas the right column displays the three 'terminal' states; once a process reaches a terminal state it will never leave it. Note that even a process that ends up in the Finished state, rather than Excepted or Killed, cannot automatically be considered successful; it was just executed without any problems. To distinguish between a successful and a failed execution, there is the exit status. This is another attribute that is stored in the node of the process and is an integer that can be set by the process. A 0 (zero) means that the result of the process was successful, and a non-zero value indicates a failure. All the process nodes used by the various processes are sub-classes of ProcessNode, which defines handy properties to query the process state and exit status. When you load a calculation node from the database, you can use these property methods to inquire about its state and exit status.
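For example, once a process has terminated you can inspect its recorded state and exit status through these node properties. A minimal sketch, assuming an AiiDA profile is configured and a process node with primary key 1234 exists:

from aiida import load_profile
from aiida.orm import load_node

load_profile()                  # load the default AiiDA profile

node = load_node(1234)          # the ProcessNode recording the execution
print(node.process_state)       # e.g. ProcessState.FINISHED
print(node.exit_status)         # 0 means success, non-zero means failure
print(node.is_finished_ok)      # True only if finished with exit status 0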
Process exit codes

The previous section about the process state showed that a process that is Finished does not say anything about whether the result is 'successful' or 'failed'. The Finished state means nothing more than that the engine succeeded in running the process to the end of execution, without it encountering exceptions or being killed. To distinguish between a 'successful' and a 'failed' process, an 'exit status' can be defined. The exit status is a common concept in programming and is a small integer, where zero means that the result of the process was successful, and a non-zero value indicates a failure. By default a process that terminates nominally will get a 0 (zero) exit status. To mark a process as failed, one can return an instance of the ExitCode named tuple, which allows setting an integer exit_status and a string message as exit_message. When the engine receives such an ExitCode as the return value from a process, it will set the exit status and message on the corresponding attributes of the process node representing the process in the provenance graph.

See also: For how exit codes can be defined and returned, see the exit code usage section.

Process lifetime

The lifetime of a process is defined as the time from the moment it is launched until it reaches a terminal state.

Process and node distinction

As explained in the introduction of this section, there is a clear and important distinction between the 'process' and the 'node' that represents its execution in the provenance graph. When a process is launched, an instance of the Process class is created in memory, which will be propagated to completion by the responsible runner. This 'process' instance only exists in the memory of the python interpreter that it is running in, for example that of a daemon runner, and so we cannot directly inspect its state. That is why the process will write any of its state changes to the corresponding node representing it in the provenance graph. In this way, the node acts as a 'proxy' or a mirror image that reflects the state of the process in memory. This means that the output of many of the verdi commands, such as verdi process list, does not actually show the state of the process instances, but rather the state of the node to which they have last written their state.

Process tasks

The previous section explained how launching a process means creating an instance of the Process class in memory. When the process is being 'run' (see the section on launching processes for more details), that is to say in a local interpreter, the particular process instance will die as soon as the interpreter dies. This is what often makes 'submitting' the preferred method of launching a process. When a process is 'submitted', an instance of the Process is created, along with the node that represents it in the database, and its state is then persisted (stored) in the database. This is called a 'process checkpoint', more information on which will follow later. Subsequently, the process instance is shut down and a 'continuation task' is sent to the process queue of RabbitMQ. This task is simply a small message that just contains an identifier for the process. In order to reconstruct the process from a checkpoint, the process needs to be importable in the daemon environment by a) giving it an associated entry point or b) including its module path in the PYTHONPATH that the daemon workers will have.
All the daemon runners, when they are launched, subscribe to the process queue, and RabbitMQ will distribute the continuation tasks to them as they come in, making sure that each task is only sent to one runner at a time. The receiving daemon runner can restore the process instance in memory from the checkpoint that was stored in the database and continue the execution. As soon as the process reaches a terminal state, the daemon runner will acknowledge to RabbitMQ that the task has been completed. Until the runner has confirmed that a task is completed, RabbitMQ will consider the task as incomplete. If a daemon runner is shut down or dies before it got the chance to finish running a process, the task will automatically be requeued by RabbitMQ and sent to another daemon runner. Together with the fact that all the tasks in the process queue are persisted to disk by RabbitMQ, this guarantees that once a continuation task has been sent to RabbitMQ, it will at some point be finished, even if the machine is shut down in the meantime. Each daemon runner has a maximum number of tasks that it can run concurrently, which means that if there are more active tasks than available slots, some of the tasks will remain queued. Processes whose task is in the queue and not with any runner are, though technically 'active' as they are not terminated, not actually being run at the moment. While a process is not actually being run, i.e. it is not in memory with a runner, one cannot interact with it. Similarly, as soon as the task disappears, whether the process was terminated intentionally or unintentionally, the process will never continue running again.

Process checkpoints

A process checkpoint is a complete representation of a Process instance in memory that can be stored in the database. Since it is a complete representation, the Process instance can also be fully reconstructed from such a checkpoint. At every state transition of a process, a checkpoint will be created by serializing the process instance and storing it as an attribute on the corresponding process node. This mechanism is the final cog in the machine, together with the persisted process queue of RabbitMQ as explained in the previous section, that allows processes to continue after the machine they were running on has been shut down and restarted.

Process sealing

One of the cardinal rules of AiiDA is that once a node is stored, it is immutable, which means that its attributes can no longer be changed. This rule is a problem for processes, however, since in order to be able to start running, a process's corresponding node first has to be stored. At that point its attributes, such as the process state or other mutable attributes, could then no longer be changed by the engine throughout the lifetime of the corresponding process. To overcome this limitation, the concept of updatable attributes is introduced. These are special attributes that are allowed to be changed even when the process node is already stored and the corresponding process is still active. To mark the point where a process is terminated and even the updatable attributes on the process node are to be considered immutable, the node is sealed. A sealed process node behaves exactly like a normal stored node, in that all of its attributes are immutable. In addition, once a process node is sealed, no more incoming or outgoing links can be attached to it. Unsealed process nodes can also not be exported, because they belong to processes that are still active.
Note that the sealing concept does not apply to data nodes and they are exportable as soon as they are stored. To determine whether a process node is sealed, one can use the property is_sealed.
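To check the sealing status from a script or the verdi shell, one can use that property directly; a minimal sketch, assuming a process node with primary key 1234 exists in your profile:

from aiida.orm import load_node

node = load_node(1234)
if node.is_sealed:
    print('terminated: all attributes, including the updatable ones, are now immutable')
else:
    print('still active: updatable attributes such as process_state may still change')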
https://aiida.readthedocs.io/projects/aiida-core/zh_CN/latest/topics/processes/concepts.html
2022-05-16T11:43:45
CC-MAIN-2022-21
1652662510117.12
[]
aiida.readthedocs.io
THYToolbarItemDef

Overview

Location

- Unit: Hydra.VCL.PluginControlsRepository.pas
- Ancestry: TCollectionItem | THYRepositoryCollectionItem | THYActionItemDef | THYToolbarItemDef

Properties

Action  (declared in THYActionItemDef)
Gets or sets the link to the action that will be linked to the control created in the host form by a toolbar or menu controller. Properties such as caption, shortcut, image bitmap, etc. must be properly assigned to the action.
property Action: TBasicAction read write

Collection  (declared in THYRepositoryCollectionItem)
Gets a reference to the THYRepositoryCollection that holds this item.
property Collection: THYRepositoryCollection read write

Instance Methods

constructor Create  override  (declared in THYRepositoryCollectionItem)
Creates a new instance of the class.
constructor Create(aCollection: TCollection)
Parameters:
- aCollection: Reference to a parent collection.

Assign  override  (declared in THYActionItemDef)
Assigns an action from the source object to this object.
procedure Assign(Source: TPersistent)
Parameters:
- Source: Reference to the source object.

SetAction  protected virtual  (declared in THYActionItemDef)
Sets the link to an action.
procedure SetAction(const Value: TBasicAction)
Parameters:
- Value: Reference to an action.
https://docs.hydra4.com/API/Delphi/Classes/THYToolbarItemDef/
2022-05-16T11:23:06
CC-MAIN-2022-21
1652662510117.12
[]
docs.hydra4.com
$ oc create secret tls <secret_name> --key=key.pem --cert=cert.pem

Identity providers use OKD ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.

See Identity provider parameters for information on parameters, such as mappingMethod, that are common to all identity providers.

After you install your cluster, add an identity provider to it so your users can authenticate.
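For reference, the certificate authority ConfigMap mentioned above is typically created with a command of the following shape; the ConfigMap name and file path here are placeholders rather than values from this page.

$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca.crt -n openshift-config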
https://docs.okd.io/4.10/authentication/identity_providers/configuring-keystone-identity-provider.html
2022-05-16T11:15:13
CC-MAIN-2022-21
1652662510117.12
[]
docs.okd.io
Generate new Auth Key

You can create a new Base64 Auth Key by running the code snippet below locally: var base64Key = System.Convert.ToBase64String(ServiceStack.AesUtils.CreateKey()); The key can then be configured in appsettings.json or in Web.config <appSettings/> following the jwt.{PropertyName} format:

appsettings.json { "jwt.AuthKeyBase64": "{Base64AuthKey}" }

Web.config

Upgrade to v5.9.2

If you're using JWT Auth please upgrade to v5.9.2 when possible to resolve a JWT signature verification issue comparing different lengthed signatures. If you're not able to upgrade, older versions should ensure a minimum length signature with a custom ValidateToken, e.g: new JwtAuthProvider(...) { ValidateToken = (js,req) => req.GetJwtToken().LastRightPart('.').FromBase64UrlSafe().Length >= 32, }

Enable Server Cookies

A popular way of maintaining JWT Tokens on clients is via Secure HttpOnly Cookies; this default behavior can be configured on the server to return Authenticated Sessions in a stateless JWT Token with UseTokenCookie: new JwtAuthProvider(appSettings) { UseTokenCookie = true } JWT Token Cookies are supported for most built-in Auth Providers including Authenticate Requests as well as OAuth Web Flow Sign Ins. The alternative to configuring on the server is for clients to request it with UseTokenCookie on the Authenticate Request or in a hidden FORM Input.

Sending JWT in Request DTOs

Similar to the IHasSessionId interface, Request DTOs can also implement IHasBearerToken to send Bearer Tokens as an alternative to sending them in HTTP Headers or Cookies, e.g: public class Secure : IHasBearerToken { public string BearerToken { get; set; } public string Name { get; set; } } var response = client.Get(new Secure { BearerToken = jwtToken, Name = "World" }); Alternatively you can set the BearerToken property on the Service Client once, where it will automatically populate all Request DTOs that implement IHasBearerToken, e.g: client.BearerToken = jwtToken; var response = client.Get(new Secure { Name = "World" });

Limit to Essential Info

Only the above partial information is included in JWT payloads. As JWTs are typically resent with every request, which adds overhead to each HTTP Request, special consideration should be given to limiting the payload to only essential information identifying the User, any authorization info, or other info that needs to be accessed by most requests, e.g. TenantId for usage in partitioned queries or Display Info shown on each server generated page, etc. Any other info is recommended to not be included in JWTs; instead it should be sourced from the App's data sources using the identifying user info stored in the JWT when needed. You can add any additional properties you want included in JWTs and authenticated User Infos by using the CreatePayloadFilter and PopulateSessionFilter filters below; be mindful to include only minimal essential info and keep the property names small to reduce the size (and request overhead) of JWTs.

Retrieve Token from Central Auth Server using Credentials Auth

var authClient = new JsonServiceClient(centralAuthBaseUrl); var authResponse = authClient.Post(new Authenticate { provider = "credentials", UserName = "user", Password = "pass", RememberMe = true, }); var client = new JsonServiceClient(BaseUrl) { BearerToken = authResponse.BearerToken //Send JWT in HTTP Authorization Request Header }; var response = client.Get(new Secured { ...
}); Once the ServiceClient is configured it can also optionally be converted to send the JWT Token using the ss-tok Cookie instead by calling ConvertSessionToToken, e.g: client.Send(new ConvertSessionToToken()); client.BearerToken = null; // No longer needed as JWT is automatically sent in ss-tok Cookie var response = client.Get(new Secured { ... });

Retrieve Token from Central Auth Server using API Key

You can also choose to Authenticate with any AuthProvider and the Authenticate Service will return the JWT Token if Authentication was successful. The example below retrieves the JWT Token by authenticating with the central Auth Server via its configured API Key Auth Provider. If successful, the generated JWT can be populated in any of your Service Clients as normal.

Refresh Tokens

Just like JWT Tokens, Refresh Tokens are populated on the AuthenticateResponse DTO after successfully authenticating via any registered Auth Provider, e.g: var response = client.Post(new Authenticate { provider = "credentials", UserName = userName, Password = password, }); var jwtToken = response.BearerToken; var refreshToken = response.RefreshToken;

Using an alternative JWT Server

By default Service Clients will assume they should call the same ServiceStack Instance at the BaseUrl they're configured with to fetch new JWT Tokens. If instead refresh tokens need to be sent to a different server, it can be specified using the RefreshTokenUri property, e.g: var client = new JsonServiceClient(baseUrl) { RefreshToken = refreshToken, RefreshTokenUri = authBaseUrl + "/access-token" };

Handling Refresh Tokens Expiring

For the case when Refresh Tokens themselves expire, the WebServiceException is wrapped in a typed RefreshTokenException to make it easier to handle initiating the flow to re-authenticate the User, e.g: try { var response = client.Send(new Secured()); } catch (RefreshTokenException ex) { // re-authenticate to get new RefreshToken }

Lifetimes of tokens

The default expiry time of JWT and Refresh Tokens below can be overridden when registering the JwtAuthProvider: new JwtAuthProvider { ExpireTokensIn = TimeSpan.FromDays(14), // JWT Token Expiry ExpireRefreshTokensIn = TimeSpan.FromDays(365), // Refresh Token Expiry } These expiry times are use-case specific, so you'll want to check what values are appropriate for your System. The ExpireTokensIn property controls how long a client is allowed to make Authenticated Requests with the same JWT Token, whilst the ExpireRefreshTokensIn property controls how long the client can keep requesting new JWT Tokens using the same Refresh Token before needing to re-authenticate and generate a new one.

Requires User Auth Repository or IUserSessionSourceAsync

The IUserSessionSourceAsync interface: public interface IUserSessionSourceAsync { Task<IAuthSession> GetUserSessionAsync(string userAuthId, CancellationToken token=default); }

Server Token Cookies

In most cases the easiest way to utilize JWT with your other Auth Providers is to configure JwtAuthProvider to use UseTokenCookie to automatically return a JWT Token Cookie for all Auth Providers authenticating via Authenticate requests or after a successful OAuth Web Flow from an OAuth Provider.
This is what techstacks.io uses to maintain Authentication via a JWT Token after Signing in with Twitter or GitHub: Plugins.Add(new AuthFeature(() => new CustomUserSession(), new IAuthProvider[] { new TwitterAuthProvider(AppSettings), new GithubAuthProvider(AppSettings), new JwtAuthProvider(AppSettings) { UseTokenCookie = true, } })); Clients can then detect whether a user is authenticated by sending an empty Authenticate request which either returns a AuthenticateResponse DTO containing basic Session Info for authenticated requests otherwise throws a 401 Unauthorized response. So clients will be able to detect whether a user is authenticated with something like: const client = new JsonServiceClient(BaseUrl); async function getSession() { try { return await client.get(new Authenticate()); } catch (e) { return null; } } const isAuthenticated = async () => await getSession() != null; //... if (await isAuthenticated()) { // User is authenticated } }); For cases where you don't have access to HTTP Client Cookies you can use the new opt-in IncludeJwtInConvertSessionToTokenResponse option on JwtAuthProvider to also include the JWT in AccessToken property of ConvertSessionToTokenResponse Responses which are otherwise only available in the ss-tok Cookie. Existing sites that already have an Authenticated Session can convert their current server Session into a JWT Token by sending a ConvertSessionToToken Request DTO or an empty POST request to its /session-to-token user-defined route: const authResponse = await client.post(new ConvertSessionToToken()); E.g. Single Page App can call this when their Web App is first loaded, which is ignored if the User isn't authenticated but if the Web App is loaded after Signing In via an OAuth Provider it will convert their OAuth Authenticated Session into a stateless client JWT Token Cookie. This approach is also used by the old Angular TechStacks angular.techstacks.io after signing in via Twitter and Github OAuth to use JWT with a single jQuery Ajax call: $.post("/session-to-token"); Whilst Gistlyn uses the(); Setting the JWT Token Cookie Multiple Audiences With the JWT support for issuing and validating JWT's with multiple audiences, = AuthenticateService.GetJwtAuthProvider();: new[]{ { ... }); Validating JWT Manually The"]; Refresh Token Cookies supported in all Service Clients JWT first-class support for Refresh Token Cookies is implicitly enabled when configuring the JwtAuthProvider to use Cookies: Plugins.Add(new AuthFeature(() => new AuthUserSession(), new IAuthProvider[] { new JwtAuthProvider { UseTokenCookie = true, }, })); Which upon authentication will return the Refresh Token in a ss-reftok Secure, HttpOnly Cookie alongside the Users stateless Authenticated UserSession in the JWT ss-tok Cookie. The benefit of maintaining smart, generic Service Clients for all Add ServiceStack Reference languages is being able to provide a nicer (i.e. 
maintenance-free) development experience with all Service Clients now including built-in support for Refresh Token Cookies where they’ll automatically fetch new JWT Bearer Tokens & transparently Auto Retry Requests on 401 Unauthorized responses: C#, F# & VB .NET Service Clients var client = new JsonServiceClient(baseUrl); var authRequest = new Authenticate { provider = "credentials", UserName = userName, Password = password, RememberMe = true }; var authResponse = client.Post(authRequest); //client.GetTokenCookie(); // JWT Bearer Token //client.GetRefreshTokenCookie(); // JWT Refresh Token // When no longer valid, Auto Refreshes JWT Bearer Token using Refresh Token Cookie var response = client.Post(new SecureRequest { Name = "World" }); Inspect.printDump(response); // print API Response into human-readable format (alias: `response.PrintDump()`) TypeScript & JS Service Client let client = new JsonServiceClient(baseUrl); let authRequest = new Authenticate({provider:"credentials",userName,password,rememberMe}); let authResponse = await client.post(authRequest); // In Browser can't read "HttpOnly" Token Cookies by design, In Node.js can access client.cookies // When no longer valid, Auto Refreshes JWT Bearer Token using Refresh Token Cookie let response = await client.post(new SecureRequest({ name: "World" })); Inspect.printDump(response); // print API Response into human-readable format Python Service Client client = JsonServiceClient(baseUrl) authRequest = Authenticate( provider="credentials", user_name=user_name, password=password, rememberMe=true) authResponse = client.post(authRequest) # When no longer valid, Auto Refreshes JWT Bearer Token using Refresh Token Cookie response = client.post(SecureRequest(name="World")) #client.token_cookie # JWT Bearer Token #client.refresh_token_cookie # JWT Refresh Token printdump(response) # print API Response into human-readable format Dart Service Clients var client = ClientFactory.create(baseUrl); var authRequest = Authenticate(provider:"credentials", userName:userName, password:password); var authResponse = await client.post(authRequest) //client.getTokenCookie() // JWT Bearer Token //client.getRefreshTokenCookie() // JWT Refresh Token // When no longer valid, Auto Refreshes JWT Bearer Token using Refresh Token Cookie var response = await client.post(SecureRequest(name:"World")); Inspect.printDump(response); // print API Response into human-readable format Java Service Clients JsonServiceClient client = new JsonServiceClient(baseUrl); Authenticate authRequest = new Authenticate() .setProvider("credentials") .setUserName(userName) .setPassword(password) .setRememberMe(true)); AuthenticateResponse authResponse = client.post(authRequest); //client.getTokenCookie(); // JWT Bearer Token //client.getRefreshTokenCookie(); // JWT Refresh Token // When no longer valid, Auto Refreshes JWT Bearer Token using Refresh Token Cookie SecureResponse response = client.post(new SecureRequest().setName("World")); Inspect.printDump(response); // print API Response into human-readable format Kotlin Service Clients val client = new JsonServiceClient(baseUrl) val authResponse = client.post(Authenticate().apply { provider = "credentials" userName = userName password = password rememberMe = true }) //client.tokenCookie // JWT Bearer Token //client.refreshTokenCookie // JWT Refresh Token // When no longer valid, Auto Refreshes JWT Bearer Token using Refresh Token Cookie val response = client.post(SecureRequest().apply { name = "World" }) Inspect.printDump(response) // print 
API Response into human-readable format Swift Service Client let client = JsonServiceClient(baseUrl: baseUrl); let authRequest = Authenticate() authRequest.provider = "credentials" authRequest.userName = userName authRequest.password = password authRequest.rememberMe = true let authResponse = try client.post(authRequest) //client.getTokenCookie() // JWT Bearer Token //client.getRefreshTokenCookie() // JWT Refresh Token // When no longer valid, Auto Refreshes JWT Bearer Token using Refresh Token Cookie let request = SecureRequest() request.name = "World" let response = try client.post(request) Inspect.printDump(response) // print API Response into human-readable format initialize the Private Key via exported XML string PrivateKeyXml // The RSA Public Key used to Verify the JWT Token when RSA is used RSAParameters? PublicKey // Convenient overload to initialize.
https://docs.servicestack.net/jwt-authprovider
2022-05-16T11:43:03
CC-MAIN-2022-21
1652662510117.12
[]
docs.servicestack.net
0211 Feb Demo: Pangaea [Shahidh K Muhammed, Tanmai Gopal, and Akshaya Acharya] - Microservices packages - Focused on Application developers - Demo at recording +4 minutes - Single node kubernetes cluster — runs locally using Vagrant CoreOS image - Single user/system cluster allows use of DNS integration (unlike Compose) - Can run locally or in cloud - SIG Report: - Release Automation and an introduction to David McMahon - Docs and k8s website redesign proposal and an introduction to John Mulhausen - * no major features or refactors accepted - discussion about release criteria: we will hold release date for bugs Testing flake surge is over (one time event and then maintain test stability).
https://v1-20.docs.kubernetes.io/blog/2016/02/kubernetes-community-meeting-notes-20160211/
2022-05-16T12:05:42
CC-MAIN-2022-21
1652662510117.12
[]
v1-20.docs.kubernetes.io
UntagResource Remove tags from an App Runner resource. Request Syntax { "ResourceArn": " string", "TagKeys": [ " string" ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - ResourceArn The Amazon Resource Name (ARN) of the resource that you want to remove tags from. It must be the ARN of an App Runner resource. Type: String Length Constraints: Minimum length of 1. Maximum length of 1011. Pattern: arn:aws(-[\w]+)*:[a-z0-9-\\.]{0,63}:[a-z0-9-\\.]{0,63}:[0-9]{12}:(\w|\/|-){1,1011} Required: Yes - TagKeys A list of tag keys that you want to remove. Type: Array of strings Length Constraints: Minimum length of 1. Maximum length of 128. Pattern: ^(?!aws:).+ Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. - InternalServiceErrorException An unexpected service exception occurred. HTTP Status Code: 500 - InvalidRequestException One or more input parameters aren't valid. Refer to the API action's document page, correct the input parameters, and try the action again. HTTP Status Code: 400 - InvalidStateException You can't perform this action when the resource is in its current state. HTTP Status Code: 400 - ResourceNotFoundException A resource doesn't exist for the specified Amazon Resource Name (ARN) in your AWS account. HTTP Status Code: 400 Examples Remove tags from an App Runner service This example illustrates how to remove two tags from an App Runner service. Sample Request $ aws apprunner untag-resource --cli-input-json "`cat`" { "ResourceArn": "arn:aws:apprunner:us-east-1:123456789012:service/python-app/8fe1e10304f84fd2b0df550fe98a71fa", "TagKeys": [ "Department", "CustomerId" ] } Sample Response { } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/apprunner/latest/api/API_UntagResource.html
2022-05-16T13:01:50
CC-MAIN-2022-21
1652662510117.12
[]
docs.aws.amazon.com
Debounce

In the previous example, you created a light switch using a button. However, you may notice that when you press or release the button slowly, the LED doesn't respond as expected. For example, it may turn on and off several times with just one press. This example demonstrates how to deal with this problem.

What you need

- SwiftIO Feather (or SwiftIO board)
- Breadboard
- Button
- Jumper wires

Circuit

- Plug the button on the breadboard.
- Connect one leg on the left to the pin GND. Connect the other on the right to the pin D1.

Example code

You can find the example code at the bottom left corner of IDE: / SimpleIO / Debounce.

// Import the SwiftIO library to use everything in it.
import SwiftIO
// Import the board library to use the Id of the specific board.
import MadBoard

// Initialize the red onboard LED.
let red = DigitalOut(Id.RED)
// Check if the button is pressed.
let button = DigitalIn(Id.D1, mode: .pullUp)

// Declare the values in order to record and judge the button state.
var count = 0
var triggered = false

while true {
    // Read from the input pin.
    let value = button.read()

    // Ignore the change due to the noise.
    if value == false {
        count += 1
    } else {
        count = 0
        triggered = false
    }

    // Wait a certain period to check if the button is definitely pressed.
    // Toggle the LED and then reset the value for the next press.
    if count > 50 && !triggered {
        red.toggle()
        triggered = true
        count = 0
    }

    // Wait a millisecond and then read again to ensure the current state lasts for enough time.
    sleep(ms: 1)
}

Background

Debounce

When you press or release the button, you may think the button will immediately come to a stable state, closed or open. However, there will be several bounces inside the button before it finally comes to a stable state. That's because of its mechanical structure. Once pressed, the contacts inside it switch several times between two states, making the connection and breaking the circuit, until the button settles into a stable connection. The button bounce isn't visible to your eye, but you can observe it on an oscilloscope. If you directly determine the button state according to the input values, this noise may be regarded as multiple presses. So you will need a debounce method. There are many methods, including hardware and software debounce.

- The hardware solution is to perfect the circuit to eliminate this problem. For example, add a capacitor to smooth the signal and filter instant changes.
- Here you will use software debounce, which works by checking again after a short time to make sure the button is really closed or open.

Pull-up and pull-down resistor

As you know, the input will always be either high or low. But if the input pin connects to nothing, what will the input value be? High or low? That is hard to say. The state of that pin will be uncertain. It will change randomly between high and low states, which is called floating. So a pull-up or pull-down resistor is needed to ensure a stable state.

Pull-up resistor

A pull-up resistor connects the pin to power. In this case, the button should connect to the input pin and ground. By default, when the button is not pressed, the current flows from power to the input pin, so it reads a high level. If the button is pressed, the current flows from power directly to the ground, so the pin reads a low level.

Pull-down resistor

A pull-down resistor connects the pin to the ground. If so, the button should connect to the power and the input pin. By default, the pin connects directly to the ground, so the pin stays in a low state. And if you press the button, the current flows from power to the input pin, and the pin reads a high level.

You usually need them when you use a button. On this board, there are already internal pull-up and pull-down resistors.
By default, the pull-down resistor is connected. You can also change it when initializing the pin. Code analysis let button = DigitalIn(Id.D1, mode: .pullUp) Initialize the digital input pin. The default mode is pullDown. And you will use the pull-up resistor here, so the mode is set to pullUp. In this mode, the pin reads low when you press the button, and the onboard LEDs need low voltage to turn on, so it's more straightforward. Of course, you can keep the default mode, and you will need to change the circuit and code accordingly. var count = 0 var triggered = false if value == false { count += 1 } else { count = 0 triggered = false } if count > 50 && !triggered { red.toggle() triggered = true count = 0 } sleep(ms: 1) These lines of code can eliminate the noises from the button. You can look at the image below to have a better understanding. When the value is false, there are two cases: the button is pressed or it's the noise signal. The noise usually doesn't last long. When the low level lasts for a period, you can be sure the button is pressed. The variable count is used to store the time. Your board does calculations extremely quickly, so you can ignore it. And the period is about 50ms here. At that time, the LED turns on. Reference DigitalOut - set whether the pin output a high or low voltage. DigitalIn - read the input value from a digital pin. MadBoard - find the corresponding pin id of your board.
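As a footnote to the code analysis above, which notes that you can keep the default pullDown mode if you change the circuit and code accordingly, here is a minimal sketch of what that variant might look like, assuming the button is rewired between the 3V3 pin and D1; it is an illustration, not part of the original example.

import SwiftIO
import MadBoard

let red = DigitalOut(Id.RED)
// With the internal pull-down resistor, the pin reads low until the button
// connects it to 3V3, so a press now reads as true.
let button = DigitalIn(Id.D1, mode: .pullDown)

while true {
    // The same debounce idea as above still applies; this only shows the inverted read.
    if button.read() == true {
        red.toggle()
        // Crude wait so a single press is not counted many times while held.
        sleep(ms: 200)
    }
    sleep(ms: 1)
}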
https://docs.madmachine.io/tutorials/general/simpleio/debounce
2022-05-16T12:22:28
CC-MAIN-2022-21
1652662510117.12
[]
docs.madmachine.io
Mission1_Blink

When you get a new board, if you don't have some previous knowledge, you might not be able to get it to work out of the box, which can be discouraging. So this first project aims to get everyone started with electronics and the Swift language. You will start with the hello world project: blink the LED. You will turn the LED on and off alternately to get it to blink. Let's break everything down to see how it works.

What you need

- SwiftIO board

You can notice there is an onboard LED (marked with a red box above). You will only deal with it in this project, and there is no need for other components.

Circuit

Just connect the SwiftIO board to your computer through the download port using a USB cable. There are two ports on the board. The one beside the golden ring is the download port.

Example code

// Import the SwiftIO library to use everything in it.
import SwiftIO
// Import the board library to use the Id of the specific board.
import MadBoard

// Initialize the blue LED
let led = DigitalOut(Id.BLUE)

// The code here will run all the time.
while true {
    // Set Blue LED off.
    led.write(true)
    // Interval of LED blink (milliseconds).
    sleep(ms: 1000)
    // Set Blue LED on.
    led.write(false)
    sleep(ms: 1000)
}

Background

Digital signal

The digital signal usually has two states. Its value is either 1 or 0. For the SwiftIO board, 1 represents 3.3V, and 0 represents 0V. There are also other ways to express the same meaning: high or low, true or false. In this project, the digital signal is used to control an LED. The current through an LED can only flow in one direction, from positive to negative, so you need to connect the positive leg toward the current source. Only when it is connected in the right direction can the current flow.

There are two ways to connect the LED:

- Connect the LED to the power and a digital pin. Since the current always flows from high to low voltage, if the pin outputs a high voltage, there is no voltage difference between the two ends of the LED, so the LED is off. When the pin outputs a low voltage, the current can flow from the power to that pin, and the LED will be on. This is how the onboard LED works.
- Connect the LED to the digital pin and ground. If the pin outputs a high voltage, the current flows from that pin to the ground, and the LED will be on. If it outputs a low voltage, the LED is off.

You can find an RGB LED on your board. It is a different type from the images above for easier soldering. It has three colors: red, green and blue. As you download the code, it serves as a status indicator. Besides, you can also control its color and state by setting the output voltage. You can light any one of them, so it will appear red, green, or blue. You can also turn on two of them: if you turn on red and blue, you can notice it appears magenta. If all three are on, the LED appears white. The onboard LED is connected to 3.3V internally. If you set it to high voltage, there will be no current, so it will be lighted when you apply a low voltage.

Code analysis

Let's look into the code in detail:

import SwiftIO
import MadBoard

SwiftIO consists of all the functionalities to control your board. All programs must first reference it so you can use everything in it, like classes and functions. SwiftIOBoard defines the corresponding pin id of the SwiftIO board. The pins of different boards are different, so this library tells the IDE you are dealing with the SwiftIO board, not any others. Then you can use the id in it.

let led = DigitalOut(Id.BLUE)

Before you set a specific pin, you need to initialize it.
- First, declare a constant: use the keyword let followed by a constant name, led.
- Then make it an instance of the DigitalOut class and initialize that pin.
- To initialize the pin, you need to indicate its id. All ids are in an enumeration, and the built-in RGB LEDs use the id RED, GREEN, or BLUE. Thus the id of the blue LED here is written as Id.BLUE using dot syntax.

while true {
    led.write(true)
    sleep(ms: 1000)
    led.write(false)
    sleep(ms: 1000)
}

In the dead loop while true, all code in the brackets will run over and over again unless you power off the board. The method write(_:) is used to set the pin to output high or low voltage. Its parameter is a boolean type: true corresponds to a high level, and false corresponds to a low level. And as mentioned above, you need to set a low voltage to turn on the LED. The function sleep(ms:) will stop the microcontroller's work for a specified period. It needs a period in milliseconds as its parameter. It is a global function in the library, so you can directly use it in your code. So the code above makes the LED alternate between off and on every second.

Reference

DigitalOut - set whether the pin outputs a high or low voltage.

sleep(ms:) - suspend the microcontroller's work and thus make the current state last for a specified time, measured in milliseconds.

SwiftIOBoard - find the corresponding pin id of the SwiftIO board.
https://docs.madmachine.io/tutorials/swiftio-maker-kit/mission1
2022-05-16T12:15:02
CC-MAIN-2022-21
1652662510117.12
[]
docs.madmachine.io
2. Software Installation 2.1. Introduction This chapter describes how to download and set up METplus Wrappers. 2.2. Supported architectures METplus Wrappers was developed on Debian Linux and is supported on this platform. Each release listed on the METplus Downloads page includes a link to the Existing Builds and Docker for that version. The METplus team supports the installation of the METplus components on several operational and research high performance computing platforms, including those at NCAR, NOAA, and other community machines. Pre-built METplus images on DockerHub are also provided. 2.3. Programming/scripting languages METplus Wrappers is written in Python 3.6.3. It is intended to be a tool for the modeling community to use and adapt. As users make upgrades and improvements to the tools, they are encouraged to offer those upgrades to the broader community by offering feedback to the developers or coordinating for a GitHub pull. For more information on contributing code to METplus Wrappers, please create a post in the METplus GitHub Discussions Forum. 2.4. Requirements 2.4.1. Software Requirements Minimum Requirements The following software is required to run METplus Wrappers: Python 3.6.3 or above MET version 10.0.0 or above - For information on installing MET please see the Software Installation/Getting Started section of the MET User’s Guide. Wrapper Specific Requirements TCMPRPlotter wrapper R version 3.2.5 SeriesAnalysis wrapper convert (ImageMagick) utility - if generating plots and/or animated images from the output PlotDataPlane wrapper convert (ImageMagick) utility - if generating images from the Postscript output 2.4.2. Python Package Requirements The version number listed next to any Python package corresponds to the version that was used for testing purposes. Other versions of the packages may still work but it is not guaranteed. Please install these packages using pip or conda. Minimum Requirements To run most of the METplus wrappers, the following packages are required: dateutil (2.8) Using pip: pip3 install python-dateutil==2.8 Using conda: conda install -c conda-forge python-dateutil=2.8 MET Python Embedding Requirements If running use cases that use Python embedding, the MET executables must be installed with Python enabled and the following Python packages installed: xarray (0.17.0) numpy (1.19.2) pandas (1.0.5) netCDF4 (1.5.4) See Appendix F Python Embedding section in the MET User’s Guide for more information. Wrapper Specific Requirements The following wrappers require that additional Python packages be installed to run. SeriesAnalysis wrapper netCDF4 (1.5.4) MakePlots wrapper cartopy (0.18.0) pandas (1.0.5) CyclonePlotter wrapper cartopy (0.18.0) matplotlib (3.3.4) Cartopy, one of the dependencies of CyclonePlotter, attempts to download shapefiles from the internet to complete successfully. So if CyclonePlotter is run on a closed system (i.e. no internet), additional steps need to be taken. First, go to the Natural Earth Data webpage and download the small scale (1:110m) cultural and physical files that will have multiple extensions (e.g. .dbf, .shp, .shx). Untar these files in a noted location. Finally, create an environment variable in the user-specific system configuration file for CARTOPY_DIR, setting it to the location where the shapefiles are located. 2.5. Getting the METplus Wrappers source code The METplus Wrappers source code is available for download from the public GitHub repository. 
The source code can be retrieved either through a web browser or the command line. 2.5.1. Get the source code via Web Browser Create a directory where the METplus Wrappers will be installed Open a web browser and go to the latest stable METplus release. Click on the ‘Source code’ link (either the zip or tar.gz) under Assets and when prompted, save it to the directory. Uncompress the source code (on Linux/Unix: gunzip for zip file or tar xvfz for the tar.gz file) 2.5.2. Get the source code via Command Line Open a shell terminal Clone the DTCenter/METplus GitHub repository: SSH: git clone [email protected]:dtcenter/metplus HTTPS: git clone 2.6. Obtain sample input data The use cases provided with the METplus release have sample input data associated with them. This step is optional but is required to be able to run the example use cases, which illustrate how the wrappers work. Create a directory to put the sample input data. This will be the directory to set for the value of INPUT_BASE in the METplus Configuration. Go to the web page with the sample input data. Click on the vX.Y version directory that corresponds to the release to install, i.e. v4.0 directory for the v4.0.0 release. Click on the sample data tgz file for the desired use case category or categories run and when prompted, save the file to the directory created above. Note Files with the version number in the name, i.e. sample_data-data_assimilation-4.0.tgz, have been updated since the last major release. Files without the version number in the file name have not changed since the last major release and can be skipped if the data have already been obtained with a previous release. 2.7. METplus Wrappers directory structure The METplus Wrappers source code contains the following directory structure: METplus/ build_components/ docs/ environment.yml internal_tests/ manage_exernals/ metplus/ parm/ produtil/ README.md requirements.txt scripts/ setup.py ush/ The top-level METplus Wrappers directory consists of a README.md file and several subdirectories. The build_components/ directory contains scripts that use manage_externals and files available on dtcenter.org to download MET and start the build process. The docs/ directory contains documentation for users and contributors (HTML) and Doxygen files that are used to create the METplus wrapper API documentation. The Doxygen documentation can be created and viewed via web browser if the developer has Doxygen installed on the host. The Doxygen documentation is useful to contributors and is not necessary for METplus end-users. The internal_tests/ directory contains test scripts that are only relevant to METplus developers and contributors. The manage_externals/ directory contains scripts used to facilitate the downloading and management of components that METplus interacts with such as MET and METviewer. The metplus/ directory contains the wrapper scripts and utilities. The parm/ directory contains all the configuration files for MET and METplus Wrappers. The produtil/ directory contains part of the external utility produtil. The scripts/ directory contains scripts that are used for creating Docker images. The ush/ directory contains the run_metplus.py script that is executed to run use cases. 2.8. External Components 2.8.1. 
GFDL Tracker (optional) The standalone Geophysical Fluid Dynamics Laboratory (GFDL) vortex tracker is a program that objectively analyzes forecast data to provide an estimate of the vortex center position (latitude and longitude), and track the storm for the duration of the forecast. Visit for more information See the manage externals section of this documentation to download the GFDL vortex tracker automatically as part of the system. To download and install the tracker locally, get and follow the instructions listed in that archive to build on a local system. Instructions on how to configure and use the GFDL tracker are found here 2.9. Disable UserScript wrapper (optional) The UserScript wrapper allows any shell command or script to be run as part of a METplus use case. It is used to preprocess/postprocess data or to run intermediate commands between other wrappers. If desired, this wrapper can be disabled upon installation to prevent security risks. To disable the UserScript wrapper, simply remove the following file from the installation location: METplus/metplus/wrapper/user_script_wrapper.py Please note that use cases provided with the METplus repository that utilize the UserScript wrapper will fail if attempted to run after it has been disabled. 2.10. Add ush directory to shell path (optional) To call the run_metplus.py script from any directory, add the ush directory to the path. The following commands can be run in a terminal. They can also be added to the shell run commands file (.cshrc for csh/tcsh or .bashrc for bash). For the following commands, change /path/to to the actual path to the METplus directory on the local file system. csh/tcsh: # Add METplus to path set path = (/path/to/METplus/ush $path) bash/ksh: # Add METplus to path export PATH=/path/to/METplus/ush:$PATH
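Once the ush directory is on the PATH, a quick way to confirm the installation is to run one of the bundled use cases with run_metplus.py. The paths below are illustrative; they depend on where METplus was installed and on a user configuration file that sets values such as INPUT_BASE, OUTPUT_BASE, and MET_INSTALL_DIR.

run_metplus.py \
    -c /path/to/METplus/parm/use_cases/met_tool_wrapper/Example/Example.conf \
    -c /path/to/user_system.conf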
https://metplus.readthedocs.io/en/latest/Users_Guide/installation.html
2022-05-16T11:55:53
CC-MAIN-2022-21
1652662510117.12
[]
metplus.readthedocs.io
scripts/ folder for the hardhat tutorial, or your home directory for the Truffle tutorial, create a new file named interact.js and add the following lines of code:
.env file and make sure that the dotenv module is loading these variables.
API_KEY and the CONTRACT_ADDRESS where your smart contract was deployed.
.env file should look something like this:
contract-interact.js file:
interact.js and see your ABI printed to the console, navigate to your terminal and run
initMessage = "Hello world!"? We are now going to read that message stored in our smart contract and print it to the console.
message function in our smart contract and read the init message:
npx hardhat run scripts/interact.js in the terminal, we should see this response:
update function! Pretty cool, right?
update function on our instantiated Contract object, like so:
.wait() on the returned transaction object. This ensures that our script waits for the transaction to be mined on the blockchain before proceeding onwards. If you were to leave this line out, your script may not be able to see the updated message value in your contract.
message value. Take a moment and see if you can make the changes necessary to print out that new value!
interact.js file should look like at this point:
npx hardhat run scripts/interact.js --network ropsten
Updating the message... step takes a while to load before the new message is set. That is due to the mining process! If you are curious about how to track transactions while they are being mined, visit the Alchemy mempool to see the status of your transaction (whether it's pending, mined, or got dropped by the network). If your transaction got dropped, it's also helpful to check Ropsten Etherscan and search for your transaction hash.
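For readers reconstructing the steps above, a minimal interact.js along these lines might look like the following sketch. It is not the tutorial's exact code: the artifact path, environment variable names, and the message strings are assumptions based on the fragments above, and it uses the ethers.js v5 API.

// interact.js (illustrative sketch, not the tutorial's exact code)
require("dotenv").config();
const { ethers } = require("ethers");
// Assumed artifact path from `npx hardhat compile`; adjust to your project layout.
const contract = require("../artifacts/contracts/HelloWorld.sol/HelloWorld.json");

const { API_URL, PRIVATE_KEY, CONTRACT_ADDRESS } = process.env;

async function main() {
    const provider = new ethers.providers.JsonRpcProvider(API_URL);
    const signer = new ethers.Wallet(PRIVATE_KEY, provider);
    const helloWorld = new ethers.Contract(CONTRACT_ADDRESS, contract.abi, signer);

    // Read the message stored by the constructor (initMessage).
    const message = await helloWorld.message();
    console.log("The message is: " + message);

    // Call the update function and wait for the transaction to be mined.
    console.log("Updating the message...");
    const tx = await helloWorld.update("This is the new message.");
    await tx.wait();

    const newMessage = await helloWorld.message();
    console.log("The new message is: " + newMessage);
}

main().catch(console.error);

Running it with npx hardhat run scripts/interact.js --network ropsten should print the old message, pause while the update transaction is mined, and then print the new one.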
https://docs.alchemy.com/alchemy/tutorials/hello-world-smart-contract/interacting-with-a-smart-contract
2022-05-16T12:21:40
CC-MAIN-2022-21
1652662510117.12
[]
docs.alchemy.com
Using multiple network interfaces
Steps for configuring DataStax Enterprise for multiple network interfaces or when using different regions in cloud implementations.
cassandra.yaml: The location of the cassandra.yaml file depends on the type of installation.
cassandra-rackdc.properties: The location of the cassandra-rackdc.properties file also depends on the type of installation. The dc and rack entries in this file specify the datacenter name and the rack location. (Racks are important for distributing replicas, but not for datacenter naming.) In the example below, there are two DataStax Enterprise datacenters.
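As an illustrative sketch (all names and addresses below are placeholders, not values from this page), the per-node settings involved typically look like this:

# cassandra-rackdc.properties on a node in the first datacenter
dc=DC1
rack=RAC1

# cassandra.yaml excerpts for a node with a private and a public interface
listen_address: 10.0.0.5            # private IP for intra-datacenter traffic
broadcast_address: 203.0.113.5      # public IP advertised to nodes in other datacenters/regions
listen_on_broadcast_address: true
rpc_address: 0.0.0.0
broadcast_rpc_address: 203.0.113.5

Nodes in the second datacenter would use dc=DC2 with their own rack and address values.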
https://docs.datastax.com/en/dse/6.8/dse-admin/datastax_enterprise/config/configMultiNetworks.html
2022-05-16T11:04:50
CC-MAIN-2022-21
1652662510117.12
[]
docs.datastax.com
Mission2_RGB_LED
After successfully getting an LED to work, why not try more LEDs? In this project, let's build a circuit for the first time and make the LEDs blink one after another repeatedly.
What you need
The parts you will need are all included in the Maker kit.
- SwiftIO board
- Breadboard
- Red, green, and blue LEDs
- Resistor
- Jumper wires
Circuit
Let's get to know the breadboard first. The one in the kit is a tiny, simplified version with many holes in it. Each group of five sockets in a vertical column, above or below the gap in the middle, is connected internally as shown above. This makes the breadboard very convenient for prototyping.
- Place three LEDs on different columns.
- The long leg of each LED connects to a digital pin: red LED connects to D16, green LED connects to D17, blue LED connects to D18.
- The short leg is connected to a 1k ohm resistor and goes to the pin GND.
By the way, red jumper wires are usually used for power, and black ones for ground.
note
The resistance of the resistor is not critical, as long as it is larger than the minimum required to limit the current. The resistor also influences the brightness of the LED: the higher its resistance, the dimmer the LED.
Example code
// Import the SwiftIO library to use everything in it.
import SwiftIO
// Import the board library to use the Id of the specific board.
import MadBoard
// Initialize three digital pins used for the LEDs.
let red = DigitalOut(Id.D16)
let green = DigitalOut(Id.D17)
let blue = DigitalOut(Id.D18)
while true {
    // Turn on red LED for 1 second, then off.
    red.write(true)
    sleep(ms: 1000)
    red.write(false)
    // Turn on green LED for 1 second, then off.
    green.write(true)
    sleep(ms: 1000)
    green.write(false)
    // Turn on blue LED for 1 second, then off.
    blue.high()
    sleep(ms: 1000)
    blue.low()
}
let red = DigitalOut(Id.D16)
let green = DigitalOut(Id.D17)
let blue = DigitalOut(Id.D18)
The class DigitalOut allows you to set a pin to output a high or low voltage. You need to initialize the three output pins, D16, D17, and D18, that the LEDs connect to. Only after initialization can the pins output the designated levels.
while true {
}
To make the LEDs blink repeatedly, you need to write the code in the dead loop while true. The code inside it runs over and over unless you power off the board.
red.write(true)
sleep(ms: 1000)
red.write(false)
In the loop, you set the three LEDs separately, and the operations are similar. Let's look at the red LED. At first, the pin outputs a high voltage to light the LED. Since each of the three LEDs connects to a digital pin and ground, an LED turns on as you apply a high voltage. After 1s, turn off the LED with a low voltage. So the LED is on for 1s and then turned off. The following LED turns on immediately and repeats the process above. Thus the three LEDs blink in turn.
Reference
DigitalOut - set whether the pin outputs a high or low voltage.
sleep(ms:) - suspend the microcontroller's work and thus make the current state last for a specified time, measured in milliseconds.
SwiftIOBoard - find the corresponding pin id of the SwiftIO board.
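As a small variation on the example above (same wiring, same APIs, not part of the original mission), the three pins can also be kept in an array so the blink logic is written only once:

// A compact variation: store the LED pins in an array and loop over them.
import SwiftIO
import MadBoard

// Initialize the three digital pins used for the LEDs.
let leds = [DigitalOut(Id.D16), DigitalOut(Id.D17), DigitalOut(Id.D18)]

while true {
    // Light each LED for one second in turn, then move to the next one.
    for led in leds {
        led.write(true)
        sleep(ms: 1000)
        led.write(false)
    }
}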
https://docs.madmachine.io/tutorials/swiftio-maker-kit/mission2
2022-05-16T12:07:41
CC-MAIN-2022-21
1652662510117.12
[]
docs.madmachine.io
Hardware Guide¶ This guide describes the SQream reference architecture, emphasizing the benefits to the technical audience, and provides guidance for end-users on selecting the right configuration for a SQream installation. Need help? This page is intended as a “reference” to suggested hardware. However, different workloads require different solution sizes. SQream’s experienced customer support has the experience to advise on these matters to ensure the best experience. Visit SQream’s support portal for additional support. A SQream Cluster¶ SQream recommends rackmount servers by server manufacturers Dell, Lenovo, HP, Cisco, Supermicro, IBM, and others. A typical SQream cluster includes one or more nodes, consisting of Two-socket enterprise processors, like the Intel® Xeon® Gold processor family or an IBM® POWER9 processors, providing the high performance required for compute-bound database workloads. NVIDIA Tesla GPU accelerators, with up to 5,120 CUDA and Tensor cores, running on PCIe or fast NVLINK busses, delivering high core count, and high-throughput performance on massive datasets High density chassis design, offering between 2 and 4 GPUs in a 1U, 2U, or 3U package, for best-in-class performance per cm2. Single-Node Cluster Example¶ A single-node SQream cluster can handle between 1 and 8 concurrent users, with up to 1PB of data storage (when connected via NAS). An average single-node cluster can be a rackmount server or workstation, containing the following components: Note If you are using internal storage, your volumes must be formatted as xfs. In this system configuration, SQream can store about 200TB of raw data (assuming average compression ratio and ~50TB of usable raw storage). If a NAS is used, the 14x SSD drives can be omitted, but SQream recommends 2TB of local spool space on SSD or NVMe drives. Multi-Node Cluster Example¶ Multi-node clusters can handle any number of concurrent users. A typical SQream cluster relies on several GPU-enabled servers and shared storage connected over a network fabric, such as InfiniBand EDR, 40GbE, or 100GbE. The following table shows SQream’s recommended hardware specifications: Note With a NAS connected over GPFS, Lustre, or NFS, each SQream worker can read data at up to 5GB/s. Cluster Design Considerations¶ In a SQream installation, the storage and compute are logically separated. While they may reside on the same machine in a standalone installation, they may also reside on different hosts, providing additional flexibility and scalability. SQream uses all resources in a machine, including CPU, RAM, and GPU to deliver the best performance. At least 256GB of RAM per physical GPU is recommended. Local disk space is required for good temporary spooling performance, particularly when performing intensive operations exceeding the available RAM, such as sorting. SQream recommends an SSD or NVMe drive in RAID 1 configuration with about twice the RAM size available for temporary storage. This can be shared with the operating system drive if necessary. When using SAN or NAS devices, SQream recommends approximately 5GB/s of burst throughput from storage per GPU. Balancing Cost and Performance¶ Prior to designing and deploying a SQream cluster, a number of important factors must be considered. The Balancing Cost and Performance section provides a breakdown of deployment details to ensure that this installation exceeds or meets the stated requirements. 
The rationale provided includes the necessary information for modifying configurations to suit the customer use-case scenario, as shown in the following table: CPU Compute¶ SQream relies on multi-core Intel Gold Xeon processors or IBM POWER9 processors, and recommends a dual-socket machine populated with CPUs with 18C/36HT or better. While a higher core count may not necessarily affect query performance, more cores will enable higher concurrency and better load performance. GPU Compute and RAM¶ The NVIDIA Tesla range of high-throughput GPU accelerators provides the best performance for enterprise environments. Most cards have ECC memory, which is crucial for delivering correct results every time. SQream recommends the NVIDIA Tesla V100 32GB or NVIDIA Tesla A100 40GB GPU for best performance and highest concurrent user support. GPU RAM, sometimes called GRAM or VRAM, is used for processing queries. It is possible to select GPUs with less RAM, like the NVIDIA Tesla V100 16GB or P100 16GB, or T4 16GB. However, the smaller GPU RAM results in reduced concurrency, as the GPU RAM is used extensively in operations like JOINs, ORDER BY, GROUP BY, and all SQL transforms. RAM¶ SQream requires using Error-Correcting Code memory (ECC), standard on most enterprise servers. Large amounts of memory are required for improved performance for heavy external operations, such as sorting and joining. SQream recommends at least 256GB of RAM per GPU on your machine. Operating System¶ SQream can run on the following 64-bit Linux operating systems: - Red Hat Enterprise Linux (RHEL) v7 - CentOS v7 - Amazon Linux 2018.03 - Ubuntu v16.04 LTS, v18.04 LTS - Other Linux distributions may be supported via nvidia-docker Storage¶ For clustered scale-out installations, SQream relies on NAS/SAN storage. For stand-alone installations, SQream relies on redundant disk configurations, such as RAID 5, 6, or 10. These RAID configurations replicate blocks of data between disks to avoid data loss or system unavailability. SQream recommends using enterprise-grade SAS SSD or NVMe drives. For a 32-user configuration, the number of GPUs should roughly match the number of users. SQream recommends 1 Tesla V100 or A100 GPU per 2 users, for full, uninterrupted dedicated access. Download the full SQream Reference Architecture document.
https://docs.sqream.com/en/v2021.2/operational_guides/hardware_guide.html
2022-05-16T12:39:01
CC-MAIN-2022-21
1652662510117.12
[]
docs.sqream.com
Device ID and Description slots as required.
Callback / Settings tab, check the "authorization" box. A bearer token will appear in this section.
Callback / Overview tab, a specification of the REST API that provides access to this device will be shown, ready to be copied into the program or HTTP request entry.
Callback / Settings tab. The next section shows a complete specification of the features that can be exploited:
REST API + ?authorization= + Token
Callback / Curl tab, ready to copy and modify:
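For reference, a request produced from the Callback / Curl tab generally takes one of the two forms below. This is only a sketch: the host and device resource path are placeholders that must be copied from the Callback / Overview tab of your own device.

# Using the Authorization header with the bearer token:
curl -H "Authorization: Bearer <DEVICE_TOKEN>" "https://<your-thinger-host>/<device-resource-path>"

# Or appending the token as a query parameter, as described above:
curl "https://<your-thinger-host>/<device-resource-path>?authorization=<DEVICE_TOKEN>"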
https://docs.thinger.io/http-devices
2022-05-16T11:23:57
CC-MAIN-2022-21
1652662510117.12
[]
docs.thinger.io
We understand that businesses have sales teams, marketing teams, support teams, and many other roles, but you really don't want to give access to sensitive website functionality to someone who "doesn't know what they're doing" or perhaps even has malicious intent. Groundhogg has added several new user roles to help you give your teams access to the correct information and nothing more.
Quick refresher on user roles…
WordPress has a versatile built-in user management system. You can assign different roles to different users, and every role has its own user permissions. Groundhogg has many capabilities, several for each of its modules. You can use plugins to modify the roles we've added, or apply capabilities to existing roles.
The New Roles…
By default all the capabilities for Groundhogg's modules are added to the Administrator role. In addition Groundhogg adds two new roles.
- Marketer
- Sales Manager
The Marketer Role
The new Marketer role means exactly that: anyone who is a marketer in your company, or a 3rd party supplier. The Marketer role is similar to the Editor role in that it has editing access across most of the website content while protecting key areas such as plugins and settings. The Marketer role is given total access to Groundhogg EXCEPT for the ability to edit any options.
The Sales Manager Role
This role is very limited and essentially only has access to the contacts database and controlling funnel events. The Sales Manager can cancel or schedule funnel events for contacts which they are the owner of. They can add, delete, view and edit contacts as well, but they do not have exporting capabilities. A unique feature of the Sales Manager role is that they can only see contacts that are assigned to them. This is important when segmenting and assigning leads among your sales team.
Pro Tip…
Use this plugin ==> to edit permissions for your other WordPress roles if you want to grant them specific access to Groundhogg.
https://docs.groundhogg.io/docs/settings/user-roles/
2019-04-18T17:22:46
CC-MAIN-2019-18
1555578517745.15
[]
docs.groundhogg.io
Fatal Error C1010
unexpected end of file while looking for precompiled header. Did you forget to add '#include name' to your source?
An include file specified with /Yu is not listed in the source file. This option is enabled by default in most Visual C++ project types, and "stdafx.h" is the default include file specified by this option.
In the Visual Studio environment, use one of the following methods to resolve this error: either disable the use of precompiled headers for the project, or make sure you have not removed the precompiled header file (by default, stdafx.h) from the current project. This file also needs to be included before any other code in your source files using #include "stdafx.h". (This header file is specified in the Create/Use PCH Through File project property.)
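For illustration, a source file in a project compiled with /Yu"stdafx.h" should begin like this minimal sketch:

// Every .cpp file compiled with /Yu"stdafx.h" must include the precompiled
// header before any other code.
#include "stdafx.h"   // first non-comment line in the file

#include <iostream>   // other includes come after the precompiled header

int main()
{
    std::cout << "C1010 no longer occurs once stdafx.h is included first.\n";
    return 0;
}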
https://docs.microsoft.com/en-us/cpp/error-messages/compiler-errors-1/fatal-error-c1010?view=vs-2019
2019-04-18T16:35:46
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
Securing WMI Namespaces Access to WMI namespaces and their data is controlled by security descriptors. You can protect data in your namespaces by adjusting the namespace security descriptor to control who has access to the data and methods. For more information, see Access to WMI Securable Objects. The following topics describe WMI namespace security and how to control access to namespaces. - WMI namespace security relies on standard Windows user security identifiers (SIDs) and access control lists. Administrators and users have different default permissions. Setting Namespace Security Descriptors After a namespace exists in the WMI repository, you can change the security on the namespace by using the WMI Control or by calling the methods of __SystemSecurity. Requiring an Encrypted Connection to a Namespace The RequiresEncryption qualifier on a namespace requires the WMI client application or script to use the authentication level which encrypts remote procedure calls. Both incoming data requests and asynchronous callbacks must be encrypted. Establishing Inheritance of Namespace Security You can control whether a child namespace inherits the security descriptor of the parent namespace. Related topics - Connecting to WMI on a Remote Computer Creating a Namespace with the WMI API WMI Security Descriptor Objects
https://docs.microsoft.com/en-us/windows/desktop/WmiSdk/securing-wmi-namespaces
2019-04-18T16:23:02
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
This page provides information on the PlainForce component. Page Contents Overview The PlainForce helper object is a force that pushes a fluid according to the helper's orientation. PlainForce is used to produce a gravity or a wind effect. The dynamics are controlled by two parameters that balance between predictability and natural behavior: force magnitude (Strength) and Drag. Parameters Strength | strength – Specifies the acceleration force measured in scene units/sec^2. Drag | drag – Specifies a value between 0 and 1 to determine how much of the existing velocity will be negated over the period of 1 second. The bigger the drag, the more the fluid behavior is suppressed and the less the fluid features are pronounced. A value of 1.0 stops the fluid's motion. Max Distance | maxdist – Specifies the distance where the force disappears. Fade Start | fadestart – Specifies the relative distance (as part of the Max Distance) where the force starts to gradually decline. Affect | affect – Specifies the affected components of the simulation separated by commas. Note that if in a Fire/Smoke simulation all voxels will be affected uniformly by the Plain Force, while in a Liquid simulation you can choose which kinds of particles will be influenced.. Affect Names are not case sensitive and any unknown element found in the list is ignored. Apply Force Behind Icon | applyforcebehind – When enabled, the force will be applied behind the helper icon. Terminal velocity – The velocity at which the acceleration caused by the force (Strength) is equal to the deceleration caused by the drag.
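As a rough, illustrative estimate (not a formula from the official documentation): since Strength is an acceleration in scene units/sec^2 and Drag removes approximately the given fraction of the current velocity each second, the two balance out at roughly

v_terminal ≈ Strength / Drag

For example, Strength = 9.8 with Drag = 0.2 would level off around 49 scene units per second, while raising Drag toward 1.0 suppresses the motion almost completely, as described above.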
https://docs.chaosgroup.com/display/PHX3MAX/Plain+Force+%7C+PlainForce
2019-04-18T16:46:48
CC-MAIN-2019-18
1555578517745.15
[]
docs.chaosgroup.com
Azure Active Directory Graph API Important As of February 2019, we started the process to deprecate some earlier versions of Azure Active Directory Graph API in favor of the Microsoft Graph API. For details, updates, and time frames, see Microsoft Graph or the Azure AD Graph in the Office Dev Center. Moving forward, applications should use the Microsoft Graph API. This article applies to Azure AD Graph API. For similar info related to Microsoft Graph API, see Use the Microsoft Graph API. The Azure Active Directory Graph API provides programmatic access to Azure AD through REST API endpoints. Applications can use Azure AD Graph API to perform create, read, update, and delete (CRUD) operations on directory data and objects. For example, Azure AD Additionally, you can perform similar operations on other objects such as groups and applications. To call Azure AD Graph API on a directory, your application must be registered with Azure AD. Your application must also be granted access to Azure AD Graph API. This access is normally achieved through a user or admin consent flow. To begin using the Azure Active Directory Graph API, see the Azure AD Graph API quickstart guide, or view the interactive Azure AD Graph API reference documentation. Features Azure AD Graph API provides the following features: REST API Endpoints: Azure AD Graph API is a RESTful service comprised of endpoints that are accessed using standard HTTP requests. Azure AD Graph API supports XML or Javascript Object Notation (JSON) content types for requests and responses. For more information, see Azure AD Graph REST API reference. Authentication with Azure AD: Every request to Azure AD Graph API must be authenticated by appending a JSON Web Token (JWT) in the Authorization header of the request. This token is acquired by making a request to Azure AD’s token endpoint and providing valid credentials. You can use the OAuth 2.0 client credentials flow or the authorization code grant flow to acquire a token to call the Graph. For more information, OAuth 2.0 in Azure AD. Role-Based Authorization (RBAC): Security groups are used to perform RBAC in Azure AD Graph API. For example, if you want to determine whether a user has access to a specific resource, the application can call the Check group membership (transitive) operation, which returns true or false. Differential Query: Differential query allows you to track changes in a directory between two time periods without having to make frequent queries to Azure AD Graph API. This type of request will return only the changes made between the previous differential query request and the current request. For more information, see Azure AD Graph API differential query. Directory Extensions: You can add custom properties to directory objects without requiring an external data store. For example, if your application requires a Skype ID property for each user, you can register the new property in the directory and it will be available for use on every user object. For more information, see Azure AD Graph API directory schema extensions. Secured by permission scopes: Azure AD Graph API exposes permission scopes that enable secure access to Azure AD data using OAuth 2.0. 
It supports a variety of client app types, including:
- user interfaces that are given delegated access to data via authorization from the signed-in user (delegated)
- service/daemon applications that operate in the background without a signed-in user being present and use application-defined role-based access control
Both delegated and application permissions represent a privilege exposed by the Azure AD Graph API and can be requested by client applications through application registration permissions features in the Azure portal. Azure AD Graph API permission scopes provides information on what's available for use by your client application.
Scenarios
Azure AD Graph API enables many application scenarios. The following scenarios are the most common:
- Line of Business (Single Tenant) Application: In this scenario, an enterprise developer works for an organization that has an Office 365 subscription. The developer is building a web application that interacts with Azure AD to perform tasks such as assigning a license to a user. This task requires access to the Azure AD Graph API, so the developer registers the single tenant application in Azure AD and configures read and write permissions for Azure AD Graph API. Then the application is configured to use either its own credentials or those of the currently signed-in user to acquire a token to call the Azure AD Graph API.
- Software as a Service Application (Multi-Tenant): In this scenario, an independent software vendor (ISV) is developing a hosted multi-tenant web application that provides user management features for other organizations that use Azure AD. These features require access to directory objects, so the application needs to call the Azure AD Graph API. The developer registers the application in Azure AD, configures it to require read and write permissions for Azure AD Graph API, and then enables external access so that other organizations can consent to use the application in their directory. When a user in another organization authenticates to the application for the first time, they are shown a consent dialog with the permissions the application is requesting. Granting consent will then give the application those requested permissions to Azure AD Graph API in the user's directory. For more information on the consent framework, see Overview of the consent framework.
Next steps
To begin using the Azure Active Directory Graph API, see the following topics:
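As a concrete illustration of the REST endpoints described above, a typical Azure AD Graph API call looks roughly like the following. The tenant identifier and access token are placeholders, and the token must first be acquired from the Azure AD token endpoint using one of the OAuth 2.0 flows mentioned earlier.

GET https://graph.windows.net/<tenant-id-or-domain>/users?api-version=1.6
Authorization: Bearer <access-token>
Accept: application/json

The same request expressed with curl:

curl -H "Authorization: Bearer <access-token>" -H "Accept: application/json" "https://graph.windows.net/<tenant-id-or-domain>/users?api-version=1.6"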
https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-graph-api
2019-04-18T16:59:13
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
Configure region shortcut settings. For non-replicated regions, decide whether you want to receive all entry events from the distributed cache or only events for the data you have stored locally. To configure:
To receive all events, set the subscription-attributes interest-policy to all:
<region-attributes>
  <subscription-attributes interest-policy="all"/>
</region-attributes>
To receive events just for the data you have stored locally, set the subscription-attributes interest-policy to cache-content, or do not set it (cache-content is the default):
<region-attributes>
  <subscription-attributes interest-policy="cache-content"/>
</region-attributes>
For partitioned regions, this only affects the receipt of events, as the data is stored according to the region partitioning. Partitioned regions with an interest policy of all can create network bottlenecks, so if you can, run listeners in every member that hosts the partitioned region data and use the cache-content interest policy.
Note: You can also configure Regions using the gfsh command-line interface. See Region Commands.
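For context, a fuller (illustrative) cache.xml fragment showing where the subscription-attributes element sits might look like this; the region name and refid are example values only:

<cache>
  <region name="example-region">
    <region-attributes refid="PARTITION">
      <subscription-attributes interest-policy="cache-content"/>
    </region-attributes>
  </region>
</cache>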
http://gemfire.docs.pivotal.io/94/geode/developing/events/configure_p2p_event_messaging.html
2019-04-18T17:15:32
CC-MAIN-2019-18
1555578517745.15
[]
gemfire.docs.pivotal.io
SqlPersonalizationProvider Class
Definition
Implements a personalization provider that uses Microsoft SQL Server.
public ref class SqlPersonalizationProvider : System::Web::UI::WebControls::WebParts::PersonalizationProvider
public class SqlPersonalizationProvider : System.Web.UI.WebControls.WebParts.PersonalizationProvider
type SqlPersonalizationProvider = class inherit PersonalizationProvider
Public Class SqlPersonalizationProvider Inherits PersonalizationProvider
- Inheritance
- SqlPersonalizationProvider
Remarks.
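As a hedged sketch of how this provider is typically registered in web.config (the provider name, connection string name, and application name below are placeholders):

<system.web>
  <webParts>
    <personalization defaultProvider="MySqlPersonalizationProvider">
      <providers>
        <add name="MySqlPersonalizationProvider"
             type="System.Web.UI.WebControls.WebParts.SqlPersonalizationProvider"
             connectionStringName="MyAspNetDb"
             applicationName="/" />
      </providers>
    </personalization>
  </webParts>
</system.web>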
https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.webparts.sqlpersonalizationprovider?redirectedfrom=MSDN&view=netframework-4.7.2
2019-04-18T16:20:27
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
How to: Create the User Control and Host in a Dialog Box
The procedure in this topic assumes you are creating a new dialog-based MFC application (see CDialog Class). In the Properties window, change the TabStop property to True.
Configure the project. In Solution Explorer, right-click the MFC01 project and add a reference to the user control library, then copy its assembly to the MFC01 project directory so that the program will run.
In stdafx.h, find this line:
#endif // _AFX_NO_AFXCMN_SUPPORT
Add these lines above it:
#include <afxwinforms.h> // MFC Windows Forms support
Add code to create the managed control. First, declare the managed control. In MFC01Dlg.h, go to the declaration of the dialog class, and add a data member for the user control in Protected scope as follows:
class CMFC01Dlg : public CDialog
{
    // ...
    // Data member for the .NET User Control:
    CWinFormsControl<WindowsControlLibrary1::UserControl1> m_ctrl1;
};
Next, provide an implementation for the managed control. In MFC01Dlg.cpp, in the dialog override of CMFC01Dlg::DoDataExchange generated by the MFC Application wizard (not CAboutDlg::DoDataExchange, which is in the same file), add the following code to create the managed control and associate it with the static place holder IDC_CTRL1:
void CMFC01Dlg::DoDataExchange(CDataExchange* pDX)
{
    CDialog::DoDataExchange(pDX);
    DDX_ManagedControl(pDX, IDC_CTRL1, m_ctrl1);
}
See Also
Other Resources
Hosting a Windows Forms User Control in an MFC Dialog Box
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/b1kyh79x%28v%3Dvs.90%29
2019-04-18T17:39:48
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
Kubernetes 1.9 StorageOS requires mount propagation in order to present devices as volumes to containers. In Kubernetes 1.8 and 1.9 you need to enable this alpha feature. - Set --feature-gates MountPropagation=truein the kube-apiserver, usually found in the master nodes under /etc/kubernetes/manifests/kube-apiserver.manifest. - Set KUBELET_EXTRA_ARGS=--feature-gates=MountPropagation=truein the kubelet service config. For systemd, this usually is located in /etc/systemd/system/. If the kubelets run as containers, you also need to share the StorageOS data directory into each of the kubelets by adding --volume=/var/lib/storageos:/var/lib/storageos:rshared to each of the kubelets. # Install StorageOS as a daemonset with RBAC support git clone storageos cd storageos/k8s/deploy-storageos/standard ./deploy-storageos.sh or using the Helm chart: git clone storageos cd storageos # Set cluster.join to hostnames or ip addresses of at least one node helm install . --name my-release --set cluster.join=node01,node02,node03 # Follow the instructions printed by helm install to update the link between Kubernetes and StorageOS. They look like: $ ClusterIP=$(kubectl get svc/storageos --namespace storageos -o custom-columns=IP:spec.clusterIP --no-headers=true) $ ApiAddress=$(echo -n "tcp://$ClusterIP:5705" | base64) $ kubectl patch secret/storageos-api --namespace storageos --patch "{\"data\":{\"apiAddress\": \"$ApiAddress\"}}" If this is your first installation you may wish to follow the StorageOS First use guide for an example of how to mount a StorageOS volume in a Pod.
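As a first-use sketch (not taken from the guide itself), a volume can be requested through a PersistentVolumeClaim and mounted into a pod roughly as follows; the storage class name "fast" follows common StorageOS examples and should be replaced with the class defined in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast      # assumption: a StorageOS-backed StorageClass
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
    - name: redis
      image: redis:alpine
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: redis-data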
https://docs.storageos.com/docs/platforms/kubernetes/install/1.9
2019-04-18T17:08:35
CC-MAIN-2019-18
1555578517745.15
[]
docs.storageos.com
All Files Jumping to Keyframes You can jump between the selected layer's keyframes in the Timeline view. How to jump to keyframes In the Camera or Timeline view, select the layer that contains the keyframes you want to flip through. From the top menu, select Animation > Go to Previous Keyframe or Go to Next Keyframe or press semicolon (;) and single quote (').
https://docs.toonboom.com/help/harmony-15/advanced/motion-path/jump-keyframe.html
2019-04-18T16:28:09
CC-MAIN-2019-18
1555578517745.15
[]
docs.toonboom.com
To develop sections in your course, you must first understand the following topics. A section is the topmost category in your course. A section can represent a time period in your course, a chapter, or another organizing principle. A section contains one or more subsections. scheduled date must pass for the section to be visible to learners. A section that is released is visible to learners; however, learners see only subsections within the section that are also released, and units that are published. If you change a unit in a released section but do not publish the changes, learners see the last published version of the modified unit. You must publish the unit for learners to see the updates. A section can contain a unit that is hidden from learners and available to members of the course team only. That unit is not visible to learners, regardless of the release date of the section or subsection. improve learner engagement, edX can send an automatic weekly email message to learners who enroll in self-paced courses. These weekly messages correspond to course sections in Studio, and contain three to five “highlights” for each upcoming course section. A highlight is a brief description of an important concept, idea, or activity. EdX provides most of the text for this weekly course highlight email in a template, and you enter the highlights for the email in Studio. For an example email, see Weekly Highlight Email Text. For more information about email messages that edX sends to learners automatically, see Automatic Email Messages from edX. When you add highlights for a section, keep the following information in mind. EdX sends the first highlight email seven days after the learner enrolls in a course, and sends additional highlight emails every seven days. Each highlight has a limit of 250 characters. If you include a hyperlink in your highlights, we recommend that you use a URL shortener to shorten any long URLs, and then enter the shortened URL in the highlight. Most HTML email renderers automatically convert URLs into hyperlinks. If you do not add highlights for a section, edX does not send learners a message for that section. We strongly encourage you to add highlights for all course sections. Additionally, edX uses consecutive numbers for each message, even if some sections do not have highlights. For example, if you add highlights for section 1 and section 3, but you do not add highlights for section 2, learners receive a message on day 14 that contains the highlights for section 3. Learners who enroll in the course before you enable highlights do not receive any course highlight messages for the duration of the course. To make sure that all of your learners receive weekly course highlight messages, enable highlights for each section before any learners enroll in your course. If you update a highlight for a section, the change takes effect immediately for all subsequent messages that contain that highlight. Note The highlights that you specify persist when you re-run your course. The following example shows the edX email template with three example highlights. Sender: Creating a Course with edX Studio Subject: Welcome to week 1 We hope you're enjoying Creating a Course with edX Studio! We want to let you know what you can look forward to this week: * Learn how to take StudioX and pass the course. * Get access to edX Studio on Edge or Open edX. * Meet the rest of the course author community. With self-paced courses, you learn on your own schedule. 
We encourage you to spend time with the course each week. Your focused attention will pay off in the end! In addition, edX appends the following message to the end of the weekly course highlight message for weeks 2 and 3 if the learner hasn’t upgraded to the verified enrollment track. Don't miss the opportunity to highlight your new knowledge and skills by earning a verified certificate. Upgrade to the verified track by September 10, 2017. To send weekly highlight emails to your learners, you must first set highlights for each section. When you have set section highlights, you then enable the Weekly Highlight Emails setting. The number of highlights that you have set for a section is visible in the course outline, below the name of the section. If you do not enter highlights for a section, the edX platform does not send an email message for that section. Instead, edX sends an email message for the next section that has highlights. To set highlights for a course section, follow these steps. Note You can also enter course highlights in OLX. After you have set and reviewed the highlights for each course section, you enable weekly highlight emails. To enable weekly highlight emails, follow these steps. Note If you do not enable highlights, the edX platform does not send weekly course highlight emails, even if you enter highlights for one or more sections. You cannot disable weekly highlight emails after you enable them. If you do not want to send weekly highlight emails after you enable them, you can delete highlights in all sections..
https://edx.readthedocs.io/projects/edx-partner-course-staff/en/latest/developing_course/course_sections.html
2019-04-18T17:14:55
CC-MAIN-2019-18
1555578517745.15
[]
edx.readthedocs.io
Walkthrough: Adding multiple target servers to the environment This topic walks you through the process of adding target servers to your BMC Server Automation (BSA) 8.7 environment and installing RSCD agents on them using the unified agent installer. This topic includes the following sections: Introduction This topic is intended for system administrators preparing to add target servers to the BSA environment after successfully setting up the default application server node. We will add target servers in the following two phases: - Enrolling multiple target servers to the BSA environment using the Import Servers wizard - Installing RSCD agents on the target servers using the unified agent installer What is the Import Servers wizard? The Import Servers wizard allows you to add multiple servers to a server hierarchy by specifying a text file that contains a list of server names and properties assigned to each server. You can import servers to the Servers node (the top node in the Servers folder) or a server group. When you import servers to the Servers node, the system adds those servers to its internal list of servers being managed. What is What does this walkthrough show? In this walkthrough, we will use the quick start page to add the following two servers into the BSA environment: - clm-pun-016803 – Windows (64 bit) - clm-pun-016809 – Linux (64 bit) What do I need to do before I get started? Perform the following prerequisite steps before executing this walkthrough: Install and set up an Application Server, console, database and file server in your BSA environment. If you are using the unified product installer to install BSA, you must install the default Application Server node successfully. If you want the unified agent installer to automatically install the RSCD agent on your Windows target, you need to download the Microsoft Sysinternals Suite from the Microsoft tech support site and copy the PsExec file to the %PATH% variable (typically C:\Windows\System32\) on any Windows machine that you plan to use as a PsExec server. How to add target servers and install agents Wrapping it up Congratulations! You have successfully added your target servers to your BSA environment. Where to go from here For more information about the various tools, processes, and UIs that an administrator uses to manage the BMC Server Automation environment, see Administering. You can also reference the Managing servers section, for additional server management tasks.
https://docs.bmc.com/docs/ServerAutomation/87/configuring-after-installation/walkthrough-adding-multiple-target-servers-to-the-environment
2019-12-05T23:47:38
CC-MAIN-2019-51
1575540482284.9
[]
docs.bmc.com
GitHub Collect Slice
Overview
GitHub is a web-based version-control and collaboration platform for software developers. GitHub, which is delivered through a software-as-a-service (SaaS) business model, was started in 2008 and was founded on Git to make software builds faster. The Datacoral GitHub slice collects data from a GitHub account and enables data flow of repo statistics into a data warehouse, such as Redshift.
Steps to add this slice to your installation
The steps to launch your slice are:
- Generate GitHub API keys
- Specify the slice config
- Add the GitHub slice
1. Generate GitHub API keys
Setup requirements
Before getting started please make sure to have the following information:
- Admin access in your GitHub account
Setup instructions
You can generate your access auth_token using the following steps:
- In your GitHub account, click your account name in the top right corner, then click Settings.
- In the left sidebar menu, navigate to Developer settings > Personal access tokens.
- If a key has never been generated for your account, click "Generate a personal access token".
- Once a token has been created for your account, the token will appear. Click Copy to copy the auth token to your clipboard.
2. Specify the slice config
To get the starting template, save the output of the describe --input-parameters command as follows:
datacoral collect describe --slice-type github \
    --input-parameters > github_parameters_file.json
Necessary input parameters:
auth_token - your auth_token from step 4 above
user_agent - username or application name
Example templates:
{ "auth_token": "test", "user_agent": "test_username" }
3. Add the Slice
datacoral collect add --slice-type github --slice-name <slice-name> --parameters-file <params-file>
slice-name: Name of your slice. A schema with your slice-name is automatically created in your warehouse
params-file: File path to your input parameters file. Ex. github_parameters_file.json
Supported load units
repositories: captures all the attributes for Repositories which are associated with your account
milestones: captures all the attributes for Milestones which are associated with your account
commits: captures all the attributes for Commits which are associated with Repositories
issues: captures all the attributes for Issues which are associated with your account
pulls: captures all the attributes for Pulls which are associated with your account
organizations: captures all the attributes for Organizations which are associated with your account
members: captures all the attributes for Members which are associated with your organizations
schema.repositories - schema.milestones - schema.commits - schema.issues - schema.organizations - schema.members - schema.pulls
Questions? Interested?
If you have questions or feedback, feel free to reach out at [email protected] or Request a demo
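Once the first load completes, a quick sanity check in the warehouse might look like the following sketch. The schema name matches the slice-name chosen above (shown here as github), and the exact columns depend on what the GitHub API returns, so adjust the queries to what actually lands in your warehouse.

-- Count the repositories that were collected (schema name = your slice name).
SELECT COUNT(*) AS repo_count FROM github.repositories;

-- Peek at a few collected commits.
SELECT * FROM github.commits LIMIT 10;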
https://docs.datacoral.com/collect/api/github/
2019-12-05T22:19:51
CC-MAIN-2019-51
1575540482284.9
[]
docs.datacoral.com
Use the XRChart control to add a chart to a report. Use Charts in Reports The XRChart control is implemented the same way as DevExpress WinForms/ASP.NET/WPF chart controls. Use these controls' documentation for information about various chart configurations and the XRChart class's description for instructions on how to apply these configurations in reports.
https://docs.devexpress.com/XtraReports/15039/detailed-guide-to-devexpress-reporting/use-report-controls/use-charts
2019-12-05T22:22:25
CC-MAIN-2019-51
1575540482284.9
[]
docs.devexpress.com
Carrier Mapping: Add Support for all Tracktor Carriers Overview Shopify only supports around 40 carriers. Tracktor supports more than 800! Carrier mapping is a feature that allows you to support carriers that are not supported by Shopify. Carrier Mapping does so by overriding Shopify supported carriers with carriers that Tracktor supports. We call this Carrier Mapping. You must set up Carrier Mapping if you are using a carrier that is not supported by Shopify. If you are using a carrier that Shopify does NOT support, like Mexico Post, you will need to use Carrier Mapping to display tracking results. In the example below Mexico Post will be connected to Bluedart using carrier mapping. By connecting Mexico Post to Bluedart, you will display Mexico Post tracking information on packages fulfilled with the Bluedart carrier. Getting Started 1. Starting from the Tracktor dashboard, click on the Settings tab. 2. Set the Actual Carrier Name to the carrier you are using to ship orders. For this example the actual carrier will be Mexico Post. 3. Set the Shopify Carrier Name to the carrier you wish to override. For this example the Shopify carrier will be Bluedart. 4. Make sure that your orders that are shipped with Mexico Post are configured with the Bluedart carrier on your Shopify Admin Orders page. 5. Save your changes. 6. Test this out by clicking on More Actions and then Tractor: Track order. This should now show tracking for the Mexico Post package. Don't see tracking information? Our support team is here to help. Contact Us Using Dropshipping? Tracktor supports carrier mapping for dropshipping fulfillment apps such as Dropify, Dsers and Oberlo. To set this up, set the Actual Carrier Name to Drop Shipping (Dropify, Oberlo, Dsers) and the Shopify Carrier Name to Other. Fulfillment Service Fulfilling Orders with Unrecognized Carriers Sometimes a fulfillment service will fulfill orders with carrier names that are not supported by Shopify or Tracktor. Most of the time, this means that your fulfillment service is inputting the carrier name incorrectly. For example, the fulfillment service may be inputting the carrier as USPS Ground Shipment: USPS Ground Shipment is not a valid carrier name, but the actual carrier is USPS, and USPS is a valid carrier. To show tracking for this package you should manually change USPS Ground Shipment to USPS, but this would take a lot time. To save time you should utilize carrier mapping. To set up this carrier mapping you must map the incorrect carrier name to the actual carrier. In this case you would map USPS Ground Shipment to USPS. To do so, add USPS as the actual carrier name and USPS Ground Shipment as the Shopify carrier name. You will have to type in USPS Ground Shipment into the Shopify Carrier Name box and then click on the result that starts with "Create option": Now packages fulfilled with USPS Ground Shipment will show tracking information as if they are USPS packages.
https://docs.theshoppad.com/article/268-carrier-mapping
2019-12-05T22:01:07
CC-MAIN-2019-51
1575540482284.9
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e25e4e4b027e1978e1c9a/images/5da0d30f2c7d3a7e9ae26ac3/file-yrW0jLU00Q.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e25e4e4b027e1978e1c9a/images/5d7ac77404286364bc8f1131/file-yf059TxM5S.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e25e4e4b027e1978e1c9a/images/5d7990b62c7d3a7e9ae10d39/file-JFhvlQsjLn.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e25e4e4b027e1978e1c9a/images/5d7a663804286364bc8f0bdd/file-llvEl1dMjC.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e25e4e4b027e1978e1c9a/images/5d7abef82c7d3a7e9ae11a86/file-N1PElCoa7H.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e25e4e4b027e1978e1c9a/images/5d7ac0a304286364bc8f10f2/file-amyb4Bzs77.png', None], dtype=object) ]
docs.theshoppad.com
FullStory Collect Slice
Overview
FullStory is an app that captures all customer experience data in one powerful, easy-to-use platform. This datasource slice retrieves Data Export extracts and writes them to S3 and Redshift.
Steps to add this slice to your installation
The steps to launch your slice are:
- Generate FullStory API key
- Specify the slice config
- Add the FullStory slice
1. Generate FullStory API key
Setup requirements
Before getting started please make sure to have the following information:
- Access to an active FullStory account
Setup instructions
- Click the user menu (three dots, upper right corner) > Settings.
- Click Integrations & API Keys in the menu on the left side of the page.
- Click API Key
- Your API token will display on the page. Copy the API token.
2. Specify the slice config
To get the input params required to deploy the FullStory slice, run the describe command below and save the output of the command to a file.
datacoral collect describe --slice-type fullstory \
    --input-parameters > fullstory_parameters_file.json
Necessary input parameters:
api_key - your FullStory API token
Example templates:
{ "api_key": "test" }
3. Add the Slice
Add the token in the above params file and add the slice using the following command:
datacoral collect add --slice-type fullstory --slice-name <slice-name> \
    [--parameters-file <params-file>]
slice-name: Name of your slice. A schema with your slice-name is automatically created in your warehouse
params-file: File path to your input parameters file. Ex. fullstory_parameters_file.json
Supported load units
- events
- lists
Loadunit
events: This is the only load unit in the slice that has data of all the recorded events.
schema.events - schema.lists
Notes
What is the Data Export pack?
The Data Export Pack provides a periodic, raw data extract of events that have been recorded for your organization and an API endpoint to retrieve the data extracts.
How often are the data export files updated?
The data is provided in the form of bundles. By default, a bundle contains data about events that occurred during a period of 24 hours. This period can be changed to anywhere between 30 minutes and 24 hours. This bundle will be available to download 24 hours after the last event in this bundle occurred. For example, if your bundle period was set to 6 hours, a data export bundle corresponding to events that happened on Jul 11 between 12:00 PM - 6:00 PM will be available to download on Jul 12 at 6:00 PM. The preferred option would be to set this to hourly so that data does not become too large.
How far back in time can I export data?
Data export availability matches the session retention length that you currently subscribe to. This means that if your account is configured for 2 months of session retention, you will be able to export data for sessions that are up to two months old. It is important to note that once sessions expire and are deleted, they are truly not recoverable.
Note: the timestamps in the slice are in UTC. However, directly searching through the FullStory app uses your local time. Please keep this in mind if you see differing results.
Questions? Interested?
If you have questions or feedback, feel free to reach out at [email protected] or Request a demo
https://docs.datacoral.com/collect/api/fullstory/
2019-12-05T22:50:32
CC-MAIN-2019-51
1575540482284.9
[]
docs.datacoral.com
Install Command Line Interface
Our CLI is the main way you manage your Datacoral stack. You can use it to find out more information about the slices we offer, to deploy a new slice, update an existing slice, or remove a slice. You can also use it to manage, create, and delete materialized views.
Pre-requisites
Install the Datacoral CLI on your computer by downloading and running the Datacoral installer. You should have received an email with the values for the TEAM-KEY and USER-KEY parameters in the command. Please contact [email protected] if you have not.
Installation
The installer performs the following operations:
- sets up the right version of Node
- verifies that the computer has AWS credentials already set up
- sets up local configuration that is used for creating Datacoral services in your AWS account
- installs the CLI
- authenticates the CLI with your team key and user key
curl -Lo installer.sh; chmod a+x installer.sh
In the command below replace TEAM-KEY and USER-KEY with the corresponding values in the email you received.
./installer.sh --team-key TEAM-KEY --user-key USER-KEY
When you run the installer, it will ask you for the AWS Region and Availability Zone where you want your installation. In addition, you can specify a name for your AWS account and your Datacoral Installation. See below for details about each.
Supported AWS Regions and Zones
You'll be asked to input your region and availability zone. We support the following regions and zones because they contain all of the resources your installation will need. Pick the ones that are closest to you:
Account name and installation name
You can specify these if you'd like, or leave them be. If you do change either one, keep to lowercase alphanumeric characters, starting with a letter.
Once you have installed the CLI, continue to creating your Datacoral Installation.
Verification
If you already have an active installation, you can confirm that your CLI is working properly by running the following command:
datacoral collect list-slice-types
https://docs.datacoral.com/install_cli/
2019-12-05T22:29:37
CC-MAIN-2019-51
1575540482284.9
[]
docs.datacoral.com
Timelabels
What is timelabel?
Datacoral does micro-batch processing and each micro-batch has a label that is associated approximately with the data time within that batch. There are several caveats to this statement, and they will be discussed separately.
Timelabel is a tag (format: YYYYMMDDHHmm00) associated with operations that would result in data changes. These changes could be one of the following:
- collect data sync to s3
- collect data load to redshift
- collect data partition creation in glue
- rotation of a timeseries table
- materialized view refresh
- harness (publisher) slice data sync to external systems
Timelabel does NOT reflect the wall clock time of the data change operation - instead, it represents the state of underlying data at a given time (barring the caveats). Specifically, timelabel is represented as the top of the schedule of repeated operations. The following scenarios provide more clarity…
A collect datasource loadunit with a schedule of 10 * * * * (10 mins past every hour) which gets triggered at 2018-03-05 09:10 +00:00 would be represented by timelabel 20180305080000 instead of 20180305091000. So, data with timelabel 20180305080000 corresponds to the hour from 2018-03-05 08:00:00.00 UTC through 2018-03-05 08:59:59.99 UTC. And the process starts AFTER the hour ends, i.e., at 2018-03-05 09:00 +00:00 UTC.
A collect datasource loadunit with a schedule of 10 0 * * * when backfilled from the beginning of the year will have a timelabel corresponding to each day, with value 20180101000000 for data from 2018-01-01 00:00:00.00 UTC through 2018-01-01 23:59:59.99 UTC. The process will be kicked off at 2018-01-02 00:10 UTC with timelabel 20180102000000 for data from 2018-01-02 00:00:00.00 UTC through 2018-01-02 23:59:59.99 UTC. The process will be kicked off at 2018-01-03 00:10 UTC, and so on.
Table types and timelabels
Redshift table types and timelabels
Types of tables supported in redshift:
- Regular - A normal table in redshift
- Timeseries - A collection table in redshift created over a set of partition tables, one partition per timelabel. The collection is represented as a UNION ALL view over the set of partition tables.
  - Partition table is named _<timeseries-table>_<timelabel>
  - UNION ALL view is named <timeseries-table>_view
Timeseries tables have different conventions depending on which part of the system is creating them.
- Events timeseries tables
  - schemaname.tablename - table with latest data that firehose writes to, which will then get archived into a partition table
  - schemaname._tablename_<timelabel> - the partition table of historical data
  - schemaname.tablename_view - the union all view
- Timeseries materialized views
  - schemaname.mv_<viewname> - always empty
  - schemaname.mv_<viewname>_view - UNION ALL view
  - schemaname._mv_<viewname>_<timelabel> - partition table
- (Future) Loader timeseries tables
  - schemaname.loadunitname - always empty
  - schemaname.<loadunitname>_view - the UNION ALL view
  - schemaname._<loadunitname>_<timelabel> - partition table
Operations on different table types
- Timeseries tables
  - ADD PARTITION - add partitions to a timeseries table (this is the same as append)
  - DROP PARTITION - drop partitions in a timeseries table
- Regular tables support multiple operations
  - REPLACE - this is the most straightforward. You replace all the contents of the table with new content.
(This is the same as snapshot)
  - INSERT - insert new rows into the same table (this is the same as incremental append)
  - UPSERT - insert new rows or update existing rows (this is the same as incremental upsert)
Glue Data Catalog table types and timelabels
All Glue Data Catalog (GDC) tables are partitioned, i.e., they behave just like redshift timeseries tables. Each table has the following time-based partition columns:
- y - year
- m - month
- d - date
- h - hour
- n - minute
Timelabels map to partition names in a natural way. For example, a timelabel 20180101000000 corresponds to the partition (y = '2018', m = '01', d = '01', h = '00', n = '00') which in turn maps to an S3 path like below:
s3://<customer-s3-bucket>/<schemaname>/<tablename>/y=YYYY/m=MM/d=DD/h=HH/n=NN/
where
- <customer-s3-bucket> is the data bucket where all customer data gets written
- <schemaname> is either the redshift schema or the GDC database name
- <tablename> is either the redshift table name or the GDC table name
Collect and Harness Slices and timelabels
As indicated in the examples above, collect slices mostly try to match the data in a timelabel batch to have timestamps that correspond to the time window of the timelabel. There are several caveats that cause this to be a best effort rather than a hard constraint. Making it a hard constraint is possible via materialized views (see Close of Books) but will result in a lot more expensive queries on the database and reduced data freshness (i.e., data batches will be made available with a significant lag after waiting for all data in that batch to show up) and potentially lost data.
- Events slices
  - Events may come in out of order
  - Batches of events are tagged with timelabels based on arrival time of events rather than on the timestamps in the events. More specifically:
    - GDC table partition value is derived from the timelabel, which in turn is derived from the S3 paths like yyyy/mm/dd/hh that AWS Firehose uses to stage data in S3.
    - Redshift tables that Firehose writes to are rotated periodically to include timelabel in the name. So, Datacoral renames the events table (schemaname.tablename) to a partition table (schemaname._tablename_<timelabel>) and creates a new empty events table, all in a single transaction. Firehose continues to write to the events table.
  - GDC partitions don't correspond to timestamps in the events
  - Redshift partition tables are not exactly the same as the GDC table partitions
- API slices
  - Most APIs allow for data to be extracted based on timestamps. In those cases, GDC partitions as well as Redshift partition tables (for timeseries tables) are equivalent.
  - In cases where redshift tables are regular tables, timelabels are only tags on the batch processing jobs rather than tags on specific batches of data in redshift.
  - GDC partitions are created as append-only.
- Database slices
  - Database slices can be configured to do one of REPLACE, INSERT, and UPSERT operations on the final tables in Redshift.
  - If the final tables in Redshift are regular tables, timelabels are just tags for processing steps
  - If final tables are made Timeseries Tables, then timelabels correspond to the partition tables within the final tables
- Change Data Capture slices
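To make the timeseries layout above concrete, here is an illustrative sketch (names and timelabels are examples, not generated DDL) of how the partition tables and the UNION ALL view fit together:

-- One partition table per timelabel, plus the live table Firehose writes to.
CREATE VIEW schemaname.tablename_view AS
    SELECT * FROM schemaname._tablename_20180305070000
    UNION ALL
    SELECT * FROM schemaname._tablename_20180305080000
    UNION ALL
    SELECT * FROM schemaname.tablename;  -- latest data, not yet rotated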
https://docs.datacoral.com/tech_docs/timelabel/
2019-12-05T21:49:54
CC-MAIN-2019-51
1575540482284.9
[]
docs.datacoral.com
Developer Installation¶ Last Updated: October 2019 These instructions are intended for those that want to contribute to the Tethys Platform source code. Use these instructions to install the Tethys source code in a development environment (for Unix based systems only). Tip To install and use Tethys Platform, you will need to be familiar with using the command line/terminal. For a quick introduction to the command line, see the Terminal Quick Guide article. 1. Download and Run the Installation Script¶ Run the following commands from a terminal to download and run the Tethys Platform install script. For systems with wget (most Linux distributions): wget bash install_tethys.sh -b master For Systems with curl (e.g. Mac OSX and CentOS): curl -o ./install_tethys.sh bash install_tethys.sh -b master Install Script Options¶ You can customize your tethys installation by passing command line options to the installation script. The available options can be listed by running: bash install_tethys.sh --help - Each option is also descriped here: - -n, --conda-env-name <NAME>: Name for tethys conda environment. Default is 'tethys-dev'. - -t, --tethys-home <PATH>: Path for tethys home directory. Default is ~/.tethys/${CONDA_ENV_NAME}/. Note If ${CONDA_ENV_NAME}is "tethys" then the default for TETHYS_HOMEis just ~/.tethys/ - -s, --tethys-src <PATH>: Path to the tethys source directory. Default is ${TETHYS_HOME}/tethys/. - -a, --allowed-hosts <HOST>: Hostname or IP address on which to serve Tethys. Default is 127.0.0.1. - -p, --port <PORT>: Port on which to serve Tethys. Default is 8000. - -b, --branch <BRANCH_NAME>: Branch to checkout from version control. Default is 'master'. - -c, --conda-home <PATH>: Path to conda home directory where Miniconda will be installed, or to an existing installation of Miniconda. Default is ~/miniconda/. Tip The conda home path cannot contain spaces. If the your home path contains spaces then the --conda-home option must be specified and point to a path without spaces. - --db-username <USERNAME>: Username that the tethys database server will use. Default is 'tethys_super'. Note The default DB_USERNAMEis the same as the default DB_SUPER_USERNAMEso that tests can be run. - --db-password <PASSWORD>: Password that the tethys database server will use. Default is 'pass'. - --db-super-username <USERNAME>: Username for super user on the tethys database server. Default is 'tethys_super'. - --db-super-password <PASSWORD>: Password for super user on the tethys database server. Default is 'pass'. - --db-port <PORT>: Port that the tethys database server will use. Default is 5436. - --db-dir <PATH>: Path where the local PostgreSQL database will be created. Default is ${TETHYS_HOME}/psql/. - -S, --superuser <USERNAME>: Tethys super user name. Default is 'admin'. - -E, --superuser-email <EMAIL>: Tethys super user email. Default is ''. - -P, --superuser-pass <PASSWORD>: Tethys super user password. Default is 'pass'. - --skip-tethys-install: Flag to skip the Tethys installation so that the Docker installation or production installation can be added to an existing Tethys installation. Tip If conda home is not in the default location then the --conda-home options must also be specified with this option. - --partial-tethys-install <FLAGS>: List of flags to indicate which steps of the installation to do. - Flags: m - Install Miniconda r - Clone Tethys repository (the --tethys-src option is required if you omit this flag). 
c - Checkout the branch specified by the option --branch (specifying the flag r will also trigger this flag) e - Create Conda environment s - Create portal_config.ymlfile and configure settings d - Create a local database server i - Initialize database server with the Tethys database (specifying the flag d will also trigger this flag) u - Add a Tethys Portal Super User to the user database (specifying the flag d will also trigger this flag) a - Create activation/deactivation scripts for the Tethys Conda environment t - Create the t alias to activate the Tethys Conda environment - For example, if you already have Miniconda installed and you have the repository cloned and have generated a portal_config.ymlfile, but you want to use the install script to: create a conda environment, setup a local database server, create the conda activation/deactivation scripts, and create the t shortcut then you can run the following command: bash install_tethys.sh --partial-tethys-install edat Warning If --skip-tethys-install is used then this option will be ignored. - --install-docker: Flag to include Docker installation as part of the install script (Linux only). See 2. Install Docker (OPTIONAL) for more details. - --docker-options <OPTIONS>: Command line options to pass to the tethys docker init call if --install-docker is used. Default is "'-d'". Tip The value for the --docker-options option must have nested quotes. For example "'-d -c geoserver'" or '"-d -c geoserver"'. - --production Flag to install Tethys in a production configuration. - --configure-selinux Flag to perform configuration of SELinux for production installation. (Linux only). - -x: Flag to turn on shell command echoing. - -h, --help: Print this help information. Here is an example of calling the installation script with customized options: bash install_tethys.sh -t ~/Workspace/tethys -a localhost -p 8005 -c ~/miniconda3 --db-username tethys_db_user --db-password db_user_pass --db-port 5437 -S tethys -E [email protected] -P tpass The installation script may take several minutes to run. Once it is completed you will need to activate the new conda environment so you can start the Tethys development server. This is most easily done using an alias created by the install script. To enable the alias you need to open a new terminal or re-run the .bashrc (Linux) or .bash_profile (Mac) file. For Linux: . ~/.bashrc For Mac: . ~/.bash_profile You can then activate the Tethys conda environment and start the Tethys development server by running:: t tethys manage start or simply just: t tms Tip The installation script adds several environmental variables and aliases to help make using Tethys easier. Most of them are active only while the tethys conda environment is activated, however one alias to activate the tethys conda environment was added to your .bashrc or bash_profile file in your home directory and should be available from any terminal session: t: Alias to activate the tethys conda environment. It is a shortcut for the command source <CONDA_HOME>/bin/activate tethys where <CONDA_HOME> is the value of the --conda-home option that was passed to the install script. The following environmental variables are available once the tethys conda environment is activated: - TETHYS_HOME: The directory where the Tethys source code and other Tethys resources are. It is set from the value of the --tethys-home option that was passed to the install script. - TETHYS_PORT: The port that the Tethys development server will be served on. Set from the --port option. 
- TETHYS_DB_PORT: The port that the Tethys local database server is running on. Set from the --db-port option.

Also, the following aliases are available:

- tms: An alias to start the Tethys development server. It calls the command tethys manage start -p <HOST>:${TETHYS_PORT} where <HOST> is the value of the --allowed-host option that was passed to the install script and ${TETHYS_PORT} is the value of the environmental variable which is set from the --port option of the install script.
- tstart: Combines the tethys_start_db and the tms commands.

When installing Tethys in production mode the following additional environmental variables and aliases are added:

- NGINX_USER: The name of the Nginx user.
- NGINX_HOME: The home directory of the Nginx user.
- tethys_user_own: Changes ownership of relevant files to the current user by running the command sudo chown -R ${USER} ${TETHYS_HOME}/src ${NGINX_HOME}/tethys.
- tuo: Another alias for tethys_user_own.
- tethys_server_own: Reverses the effects of tethys_user_own by changing ownership back to the Nginx user.
- tso: Another alias for tethys_server_own.

When you start up a new terminal there are three steps to get the Tethys development server running again:

1. Activate the Tethys conda environment
2. Start the Tethys database server
3. Start the Tethys development server

Using the supplied aliases, starting the Tethys development server from a fresh terminal can be done with the following two commands:

t
tstart

Congratulations! You now have Tethys Platform running in a development server on your machine. Tethys Platform provides a web interface that is called the Tethys Portal. You can access your Tethys Portal by opening 127.0.0.1:8000 (or, if you provided custom host and port options to the install script, <HOST>:<PORT>) in a new tab in your web browser.

To log in, use the credentials that you specified with the -S or --superuser and the -P or --superuser-pass options. If you did not specify these options then the default credentials are:

username: admin
password: pass

2. Install Docker (OPTIONAL)

3. Customize Settings (OPTIONAL)

The Tethys installation script created a portal configuration file called portal_config.yml in the directory $TETHYS_HOME/. The installation script has defined the most essential settings that will allow the Tethys development server to function based on the options that were passed to the script or on the default values of those options. If you would like to further customize the settings, open the portal_config.yml file and make any desired changes. Refer to the Tethys Portal Configuration documentation for a description of each of the settings.
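Tying the development workflow above together, the following shell sketch is one way to start the server and confirm the portal is responding. It uses only the aliases and environment variables created by the install script and assumes the default host and port; adjust the URL if you passed custom options.

# In a fresh terminal
t         # activate the tethys conda environment
tstart    # start the local database server and the development server

# From another terminal, confirm the portal is answering
curl -I "http://127.0.0.1:${TETHYS_PORT:-8000}/"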
http://docs.tethysplatform.org/en/latest/installation/developer_installation.html
2019-12-05T22:17:06
CC-MAIN-2019-51
1575540482284.9
[]
docs.tethysplatform.org
Events from Ruby

See the Overview for details about setting up the Datacoral Collect Events Slice and setting up keys.

Step 1: Install the ruby instrumentation module

In your Gemfile, add the source for the snowplow-tracker gem:

gem "snowplow-tracker", :git => "git+ssh://[email protected]/diffusion/RUBYEVT/Ruby-Events.git"

In your .gemspec file, add 'snowplow-tracker' to your dependencies by adding:

spec.add_runtime_dependency "snowplow-tracker", ">= 0.7.0"

Step 2: Sample Ruby Tracking

Substitute the appropriate environment parameters based on the following definitions.

# Require the snowplow-tracker gem:
require 'snowplow-tracker'

# Initialize an emitter instance. This object will be responsible
# for how and when events are sent to datacoral.
# Substitute appropriate values for Collector endpoint, api key and datacoral environment.
e = DatacoralEmitter.new('URL_ENDPOINT',   # Collector endpoint
                         'API_KEY',        # api key
                         'ENVIRONMENT')    # datacoral environment

# Initialize a tracker instance like this:
s = SnowplowTracker::Subject.new
s.set_platform('mob')

# Substitute appropriate value for app_id.
t = SnowplowTracker::Tracker.new(e, s, 'NAMESPACE', 'APP_ID', true)

# You can set the user id to any string:
t.set_user_id('my_user_id')
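Once the tracker is initialized, events can be sent with the tracker's standard methods. The snippet below is an illustrative sketch only: the event names and values are made up, and the method signatures assume the stock Snowplow Ruby tracker API rather than anything Datacoral-specific, so verify them against the gem version you installed.

# Track a page view: URL, page title, referrer (all strings)
t.track_page_view('http://www.example.com/checkout', 'Checkout', 'http://www.example.com/cart')

# Track a custom structured event: category, action, label, property, value
t.track_struct_event('ecommerce', 'add-to-basket', 'sku-12345', 'quantity', 2)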
https://docs.datacoral.com/collect/events/ruby/
2019-12-05T21:49:32
CC-MAIN-2019-51
1575540482284.9
[]
docs.datacoral.com
TreeView controls provide a way to represent hierarchical relationships within a list. The TreeView provides a standard interface for expanding and collapsing branches of a hierarchy.

When to use a TreeView

You use TreeViews in windows and custom visual user objects. Choose a TreeView instead of a ListBox or ListView when your information is more complex than a list of similar items and when levels of information have a one-to-many relationship. Choose a TreeView instead of a DataWindow control when your user will want to expand and collapse the list using the standard TreeView interface.

Hierarchy of items

Although items in a TreeView can be a single, flat list like the report view of a ListView, you tap the power of a TreeView when items have a one-to-many relationship two or more levels deep. For example, your list might have one or several parent categories with child items within each category. Or the list might have several levels of subcategories before getting to the end of a branch in the hierarchy:

Root
    Category 1
        Subcategory 1a
            Detail
            Detail
        Subcategory 1b
            Detail
            Detail
    Category 2
        Subcategory 2a
            Detail

Number of levels in each branch

You do not have to have the same number of levels in every branch of the hierarchy if your data requires more levels of categorization in some branches. However, programming for the TreeView is simpler if the items at a particular level are the same type of item, rather than subcategories in some branches and detail items in others. For example, in scripts you might test the level of an item to determine appropriate actions. You can call the SetLevelPictures function to set pictures for all the items at a particular level.

Content sources for a TreeView

For most of the list types in PowerBuilder, you can add items in the painter or in a script, but for a TreeView, you have to write a script. Generally, you will populate the first level (the root level) of the TreeView when its window opens. When the user wants to view a branch, a script for the TreeView's ItemPopulate event can add items at the next levels (a minimal sketch of such a script appears at the end of this section). The data for items can be hard-coded in the script, but it is more likely that you will use the user's own input or a database for the TreeView's content. Because of the one-to-many relationship of an item to its child items, you might use several tables in a database to populate the TreeView. For an example using DataStores, see Using DataWindow information to populate a TreeView.

Pictures for items

Pictures are associated with individual items in a TreeView. You identify pictures you want to use in the control's picture lists and then associate the index of the picture with an item. Generally, pictures are not unique for each item. Pictures provide a way to categorize or mark items within a level. To help the user understand the data, you might:

- Use a different picture for each level
- Use several pictures within a level to identify different types of items
- Use pictures on some levels only
- Change the picture after the user clicks on an item

Pictures are not required. You do not have to use pictures if they do not convey useful information to the user. Item labels and the levels of the hierarchy may provide all the information the user needs.

Appearance of the TreeView

You can control the appearance of the TreeView by setting property values. Properties that affect the overall appearance are shown in the following table. For more information about these properties, see the section called "TreeView control" in Objects and Controls.

User interaction

Basic TreeView functionality allows users to edit labels, delete items, expand and collapse branches, and sort alphabetically, without any scripting on your part. For example, the user can click a second time on a selected item to edit it, or press the Delete key to delete an item. If you do not want to allow these actions, properties let you disable them. You can customize any of these basic actions by writing scripts. Events associated with the basic actions let you provide validation or prevent an action from completing. You can also implement other features such as adding items, dragging items, and performing customized sorting.

Using custom events

In PowerBuilder 7 and later releases, PowerBuilder uses Microsoft controls for ListView and TreeView controls. The events that fire when the right mouse button is clicked are different from earlier releases. When you release the right mouse button, the pbm_rbuttonup event does not fire. The stock RightClicked! event for a TreeView control, pbm_tvnrclicked, fires when the button is released. When you click the right mouse button on an unselected TreeView item, focus returns to the previously selected TreeView item when you release the button. To select the new item, insert this code in the pbm_tvnrclicked event script before any code that acts on the selected item:

this.SelectItem(handle)

When you right double-click, only the pbm_rbuttondblclk event fires. In previous releases, both the pbm_rbuttondblclk and pbm_tvnrdoubleclick events fired.
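As a minimal sketch of the on-demand population described earlier (not from the original documentation): the TreeView name tv_1, the labels, and the picture index are hypothetical, while GetItem, InsertItemLast, and the ItemPopulate event's handle argument are the standard PowerBuilder pieces. Verify against your PowerBuilder version before reusing.

// ItemPopulate event script: add child items the first time a branch is expanded
long ll_child
treeviewitem tvi_parent

// Inspect the item being expanded to decide what children to load
tv_1.GetItem(handle, tvi_parent)

IF tvi_parent.Level = 1 THEN
    // Hypothetical child labels; picture index 2 marks second-level items
    ll_child = tv_1.InsertItemLast(handle, "Subcategory A", 2)
    ll_child = tv_1.InsertItemLast(handle, "Subcategory B", 2)
END IF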
https://docs.appeon.com/appeon_online_help/pb2017r2/application_techniques/ch08s01.html
2019-12-05T23:10:00
CC-MAIN-2019-51
1575540482284.9
[array(['images/uitv01.gif', None], dtype=object)]
docs.appeon.com
Cutting Edge DHTML-Enabled ASP.NET Controls Dino Esposito Code download available at:CuttingEdge0507.exe(131 KB) Contents Anatomy of a Postback The Client-Side Counterpart DHTML Behaviors A DropDownList Example The Extended Object Model Summing It Up In the past, I've covered some core aspects of the interaction between DHTML behaviors, the browser, and ASP.NET runtime (see Cutting Edge: Extend the ASP.NET DataGrid with Client-side Behaviors and Cutting Edge: Moving DataGrid Rows Up and Down). But I haven't covered the intricacies of DHTML behaviors and advanced client-side scripting so I'll do that here. I'll show how to make ASP.NET code and the Internet Explorer DHTML Document Object Model (DOM) work together and discuss how you set up the communication between the ASP.NET runtime and a server-side instance of an ASP.NET control. Anatomy of a Postback To design an effective mechanism for cooperation between DHTML and server-side controls, you need a solid understanding of the ASP.NET postback mechanism. Imagine you have a page with a couple of textboxes and a Submit button. When a user clicks the button, the page posts back. The post can be initiated in one of two ways—through a Submit button or through script. A Submit button is represented by the HTML: <INPUT type="submit">. Most browsers also support posting via the submit method in a <form> element. In ASP.NET, the second approach is used for LinkButtons and auto-postbacks. When a submit operation is initiated, the browser prepares and sends an HTTP request according to the form's contents. In ASP.NET, the "action" attribute of the sending form is set to the URL of the current page; the "method" attribute, on the other hand, can be changed at will and even programmatically. Possible methods include GET and POST. The postback for an ASP.NET page that contains a couple of textboxes, a dropdown list, and a Submit button looks like this: _VIEWSTATE=%D...%2D &TextBox1=One &TextBox2=Two &DropDownList1=one &Button1=Submit The contents of all input fields, including hidden fields, are sent as part of the payload. In addition, the value of the currently selected item in all list controls is added, as is the name of the Submit button that triggered the post. If there are one or more LinkButtons on the page, two extra hidden fields called __EVENTTARGET and __EVENTARGUMENT are added to the payload: __EVENTTARGET= &__EVENTARGUMENT= &_VIEWSTATE=%D...%2D &TextBox1=One &TextBox2=Two &DropDownList1=one &Button1=Submit Both of these hidden fields are empty if the page posts back through a Submit button. If you post back through LinkButton in the page, the payload changes as follows: __EVENTTARGET=LinkButton1 &__EVENTARGUMENT= &__VIEWSTATE=%D ... %2D &TextBox1=One &TextBox2=Two &DropDownList1=one In this case, the __EVENTTARGET field contains the name of the LinkButton that initiated the post. ASP.NET uses this information when constructing the server-side representation of the requested page and in determining what caused the postback. On the server, IIS picks up the request and forwards it on to the ASP.NET runtime. A pipeline of internal modules processes the request and instantiates a Page-derived class. The page class is an HTTP handler and, as such, implements the IHttpHandler interface. The runtime calls the Page's ProcessRequest method through the IHttpHandler interface and the server-side processing starts. Figure 1 provides an overall view of the request process. 
Figure 1** ASP.NET Postback Process ** Once the server-side processing of the page has begun, the Page object goes through a sequence of steps, as outlined in Figure 2. As its first step, the Page object creates an instance of all server controls that have a runat="server" attribute set in the requested ASPX source file. At this time, each control is created from scratch and has exactly the same attributes and values outlined in the ASPX source. The Page_Init event is fired when all controls have been initialized. Next, the page gives all of its controls a chance to restore the state they had last time the posting instance of that page was created. During this step, each control accesses the posted view state and restores its state as appropriate. Figure 2 Page Lifecycle Events At this point, each control's state must be updated with any data posted by the browser. For this to happen, a special conversation is set up between individual controls and the ASP.NET runtime. This is an important point to consider in light of client-side interaction. The ASP.NET Page class looks up the Form or QueryString collection, depending on the HTTP verb that was used to submit the request. The collection is scanned to find a match between a posted name and the ID property of a server-side control created to serve the request. For example, if the HTTP payload contains TextBox1=One, the Page class expects to find a server-side control named TextBox1. Each ASP.NET control lives on the server but retains a counterpart on the client. The link between them is the string containing the control's ID. While the ASP.NET Page class can successfully locate a server control with a given name, it has no idea of the type of that control. In other words, from the page perspective, TextBox1 can be either a TextBox, a DropDownList, a DataGrid, or a custom control. For this reason, the Page class processes the control only if it adheres to an expected contract—the IPostBackDataHandler interface. If the control implements that interface, the page invokes its LoadPostData method. The method receives the name of the control (TextBox1, in the example) plus the collection of posted values—that is, Form or QueryString. As an example, a TextBox control will extract the corresponding value ("One", in the example) and compare it to its internal state. This behavior is common to all input controls and to all controls that expect to receive input data from the browser. For example, a DataGrid control that allows users to change the order of columns using drag and drop will, at this point, receive the modified order of columns. The LoadPostData implementation depends on the characteristics and expected behavior of the particular control. The TextBox control compares the posted string to the value of its Text property. The DropDownList control compares the incoming data to the value of the currently selected item. If the compared values coincide, the method returns false. If the values differ, then the relevant control properties are updated and the method returns true. Figure 3 shows an implementation for a TextBox control. 
Figure 3 IPostBackDataHandler Implementation bool IPostBackDataHandler.LoadPostData( string name, NameValueCollection postedValues) { string oldValue = this.Text; string newValue = postedValues[name]; if (!oldValue.Equals(newValue)) { this.Text = newValue; return true; } return false; } void IPostBackDataHandler.RaisePostDataChangedEvent() { this.OnTextChanged(EventArgs.Empty); } LoadPostData for a TextBox control compares the value posted for a given control with the current value of the Text property. Note that at the time this comparison is made, the Text property contains the value just restored from the view state. From now on, the state of the control is up to date and reflects the old state and the input coming from the client. The Boolean value that LoadPostData returns indicates whether or not the second method on the interface—RaisePostDataChangedEvent—must be invoked later. A return value of true means that the value of Text (or the property or properties a control updates with posted values) has been refreshed and subsequently the TextBox raises a server-side data-changed event. For a TextBox control, this event is TextChanged. Once this step has been accomplished, the Page_Load event is fired and a second check is made on the control that appears to be responsible for the postback (based on the information sent from the browser). If this control implements IPostBackEventHandler, the RaisePostBackEvent method is invoked to give the control a chance to perform the postback action. The following pseudocode illustrates the implementation of this method for the Button class: void IPostBackEventHandler.RaisePostBackEvent(string eventArgument) { if (CausesValidation) Page.Validate(); OnClick(new EventArgs()); OnCommand(new CommandEventArgs( CommandName, CommandArgument)); } As you can see, when a button is clicked and the host page has completed its restoration process, the OnClick event is invoked, followed by OnCommand. A similar piece of code serves the LinkButton class. Code like this is used for any custom controls that require the post action to be started on the client. The Client-Side Counterpart Each server control outputs some markup that is sent down to the client. The browser then uses that information to build a DOM rooted in the outermost tag of the control's markup. Simple server controls such as TextBox map directly to HTML elements; more complex controls like the DataGrid map to a subtree of HTML elements, in many cases rooted in an HTML table tag. The root tag, or the most significant tag in the HTML, is given a name (the name HTML attribute) that matches the ID of the server control. This guarantees that the ASP.NET runtime can correctly match up client HTML elements with instances of server-side controls. When you use or build an ASP.NET control with rich client-side functionalities you end up with at least two related problems. First, you have to figure out how to transfer to the server any input generated on the client. Second, you must make sure that the server control retrieves and properly handles that chunk of information. A third issue revolves around the format you use to send data across the wire. There might be many ways to solve these issues and, frankly, any approach that works is valid. But when writing code for an ASP.NET control, why not do as the ASP.NET team did. That's where that anatomy of a postback fits in. 
In the two articles that I mentioned at the beginning of this piece, I create a custom DataGrid control and collect some user input through drag and drop and other client-side operations. The input is then serialized to a string and packed into a hidden field. The hidden field is like any other <INPUT> tag except that it doesn't show up in the user interface. The hidden field is part of the form and its contents are picked up and used to prepare the HTTP payload when a postback is made. The hidden field is created by the server-side control and given the same ID of the control. For example, a custom DataGrid named DragDropGrid1 will create its own personal hidden field with the same name. Any client-side action that is relevant to the behavior of the grid is persisted to the hidden field. When the page posts back, that information is carried to the server and consumed by ASP.NET in the manner described earlier. The matching ID determines the link between the contents of the input field and a server-side control. The control-specific implementation of the IPostBackDataHandler interface does the rest, giving the control a chance to modify its server state in light of client-side user actions. If you get to consider simple and basic controls such as TextBox and DropDownList, then the <INPUT> element and the displayed user interface are the same thing. Any user interface-related operation automatically modifies the contents associated with the input element. This is much less automatic with more complex and advanced controls. Again, think of a DataGrid control that allows row movements or columns by drag and drop. The user interface of a DataGrid is a mere HTML table padded with plain text. The hidden field to carry data is silently created as part of the markup and injected in the page. Some additional code is needed to capture UI events and persist results to the hidden field. What do you think this additional code should look like? Can it really be different from mere script code? It has to be pure JavaScript code at its core, but if the browser supports it, you can wrap it up in a more elegant and neater object model—that's mostly what a DHTML behavior is all about. DHTML Behaviors DHTML behaviors are a feature of Internet Explorer 5.0 and later. They're not supported by any other browser. A DHTML behavior component can be written in any Internet Explorer-compatible scripting language (usually JavaScript) and supplies dynamic functionality that can be applied to any element in an HTML document through CSS style sheets. DHTML behaviors use CSS to separate script and content in a document using an .htc that incorporates all the DHTML-based functionality needed to describe and implement a given behavior. This behavior, in turn, can be attached to a variety of HTML elements via a new CSS style. Put another way, DHTML behaviors bring the benefits of reuse to the world of scripting. What's in a DHTML behavior? First, a behavior component can define an object model—a collection of methods, properties, and events that describe the provided behavior and supply tools to control it programmatically. In addition, a DHTML behavior needs to capture some page- and element-level events and handle them. You code this through classic HTML event handlers. You have access to the whole page DOM and can read and write attributes throughout the page. Figure 4 shows an HTC component that allows expanding and collapsing the children of the element to which it is applied. 
Figure 4 DHTML Behavior Component <PROPERTY NAME="Expanded" /> <ATTACH EVENT="onreadystatechange" HANDLER="Init" /> <ATTACH EVENT="onclick" HANDLER="HandleClick" /> <script language="javascript"> // Handles the initialization phase function Init() { if (Expanded == null) Expanded = true; // Toggle visibility for all children of THIS element for (i=0; i<children.length; i++) { if (Expanded == true) children[i].style.display = ""; else children[i].style.display = "none"; } } // Handles the OnClick event on the current element function HandleClick() { var i; var style; // Make sure the sender of the event is THIS element if (event.srcElement != element) return; // Toggle visibility for all children of THIS element for (i=0; i<children.length; i++) { style = children[i].style; if (style.display == "none") { style.display = ""; } else { style.display = "none"; } } } </script> In DHTML, the expand/collapse functionality is achieved by toggling the value of the display attribute in the style object. A value of "none" keeps the element hidden; a value of "" (empty string) makes the element visible. The core functionality is found in a <script> tag that collects public event handlers as well as internal functions and classes. Outside of the <script> tag, you define the object model of the behavior and the internal events it wants to handle: <PROPERTY NAME="Expanded" /> <ATTACH EVENT="onreadystatechange" HANDLER="Init" /> <ATTACH EVENT="onclick" HANDLER="HandleClick" /> The preceding code snippet declares a variable named Expanded and a couple of handlers for the onclick and onreadystatechange DOM events. Properties can be assigned a value in the HTML source through the mechanism of attributes. Event handlers must be defined in the <script> tag. The onreadystatechange event is a common presence in many DHTML behaviors because it represents the initialization phase of the component. In Figure 4, you check the value of Expanded in the initializer and, based on that, you toggle the visibility value of child elements. To attach a behavior, you use CSS notation (behaviors are ignored in browsers that do not support CSS): <style> .LIST {behavior:url(expand.htc);} </style> The CSS attribute is named "behavior". It is assigned a URL that ultimately points to the HTC file. Once you have defined a LIST class, you assign it to any HTML element that requires it: <ul class="LIST" style="cursor: hand;" expanded="false"> As you see, any public properties defined on the behavior can be initialized as an attribute in any tags that contain style attributes. Other than the public object model, DHTML behavior offers nothing that you can't get through plain scripting. But with a single attribute you can attach a certain behavior to a given HTML element or to the root of a HTML element subtree, as is the case with ASP.NET controls. A DHTML behavior can encapsulate a lot of details regarding the internal implementation of the behavior and it has full access to the page's DOM. A DropDownList Example In both aforementioned articles, I glossed over the code that specifically handles browser/server communication. Now it's time to focus on what the control needs to do in order to receive and properly process on the server any client-side input. The sample ExtendedDropDownList control is a custom control that is derived from the basic DropDownList control: public class ExtendedDropDownList : System.Web.UI.WebControls.DropDownList { ... 
} The most important difference between the basic and extended dropdown control is that the extended one exposes a client-side object model to let script code add elements dynamically. Newly added elements are added to the items collection and sent to the server out-of-band, that is outside the classic format that the HTTP payload takes when a dropdown control is involved. In ASP.NET, the DropDownList control is designed to be read-only across postbacks. In other words, any list items dynamically added through DHTML code are lost once the page posts back to the server. The canonical HTTP payload doesn't include the items in the dropdown list. It only mentions the ID of the currently selected item. Control-specific information generated on the client can get to the server only in a hidden field. The hidden field can be given an arbitrary name, but in general you give the worker hidden field the same ID as the server control. As explained in the earlier "Anatomy of a Postback" section, this guarantees that the ASP.NET runtime invokes the methods of the IPostbackDataHandler interface on the control to post incoming data. For a custom DropDownList control, though, things are a little bit different. In fact, the base DropDownList control already requires an input element with the same name as the control's ID. This is a <SELECT> element: <SELECT name="DropDownList1"> <OPTION value="one">One <OPTION value="two">Two <OPTION value="three">Three </SELECT> A custom and enhanced dropdown list control requires an extra hidden field to carry the text and IDs of the additional items appended at the client. To avoid naming conflicts, this hidden field must have a different name. In my code, I use "_My" to postfix the ID. This is arbitrary, but once you choose a naming convention you must stick with it. In the source code of the customized ExtendedDropDownList control (available in the code download), the control defines a Boolean property to enable client-side insertion and overrides the OnPreRender method. The OnPreRender method simply registers the additional hidden field with the predetermined name. The hidden field is empty when the page is rendered to the client and will be filled as the user works with the control adding items dynamically to the list. What about the contents of the hidden field? Should you use any specific format or convention? The data being passed to the server should be laid out according to a format that is known to both the server control and the client-side DHTML behavior. The format you choose is arbitrary as long as it achieves the expected goals. In this example, I'll use a pipe-separated pair of strings for each dynamically added item (note that without proper escaping, this prohibits the use of the pipe character in the actual values). The left part of the pair represents the text; the right part is for the ID. Here's an example: One|id1,Two|id2 The custom ExtendedDropDownList control must implement the IPostBackDataHandler interface from scratch. The implementation on the base class is marked as private and can't be invoked from within a derived class. The code in the LoadPostData method serves two main purposes. First, it manages the index of the selected item. The ID of this item comes through the HTTP payload and is matched against the current contents of the dropdown list. The index found (if any) is then used to overwrite the value of the SelectedIndex property. If this results in a change to the existing value, the host page gets a SelectedIndexChanged event. 
It is worth noting here that the event is not raised by the ASP.NET runtime. To be more precise, when LoadPostData returns true, the ASP.NET runtime invokes the RaisePostDataChangedEvent method on the same IPostBackDataHandler interface. By implementing this method, a control can fire a proper event. The second goal of LoadPostData for the ExtendedDropDownList control is populating the Items collection with the new elements just added on the client. As mentioned, text and ID of these new elements are stored in the control's hidden field. You can decide to raise an ad hoc event to signal that new elements have been added on the client. To do so, you define a private Boolean variable (_addedNewElements in the example) that is set during the execution of LoadPostData only if new elements have been added, as shown here: void IPostBackDataHandler.RaisePostDataChangedEvent() { OnSelectedIndexChanged(EventArgs.Empty); if (_addedNewElements) OnNewItemsAdded(EventArgs.Empty); } NewItemsAdded is a custom server-side event that is fired right after SelectedIndexChanged and before the postback event caused by the submit control: public event EventHandler NewItemsAdded; With the control's implementation discussed so far, when the Page_Load event is fired to the page the dropdown list control has been fully rebuilt to reflect the changes on the client—the new items are now definitely part of the control's state. The Extended Object Model When the dropdown list is displayed on the client, it takes the form of a <SELECT> tag. The DHTML object model designs a tree of objects around these elements and gives you the tools to add or remove items programmatically via JavaScript. The following code shows what you really need to execute: var oOption = document.createElement("OPTION"); element.options.add(oOption); oOption.innerText = text; oOption.value = id; By employing a DHTML behavior, you can wrap the previous code in a new method that's easier to use. Figure 5 details my dropdownlistex.htc behavior. It contains a Boolean property to enable support for client-side insertions as well as a client-side method named AddItem. The code for this method actually extends the DHTML tree for a regular dropdown element and simplifies the insertion of a new item. When attached to a client-side button, the following code adds a new item to the specified dropdown list entirely on the client. 
As you can see, it leverages the AddItem method defined on the behavior: <SCRIPT lang="javascript"> function InsertTheNewItem() { var obj = document.getElementById("NewElement"); var text = obj.value; var id = obj.value; document.getElementById("DropDownList2").AddItem(text, id); } </SCRIPT> Figure 5 The DropDownListEx.htc Behavior <PROPERTY NAME="Modifiable" /> <METHOD NAME="AddItem" /> <ATTACH EVENT="onreadystatechange" HANDLER="Init" /> <script language="javascript"> // Handles the initialization phase function Init() { if (Modifiable == null) Modifiable = false; } // Add a new item programmatically function AddItem(text, id) { if (!eval(Modifiable)) return false; var oOption = document.createElement("OPTION"); element.options.add(oOption); oOption.innerText = text; oOption.value = id; var hiddenField = GetHiddenField(element.id + "_My"); // Add a separator var tmp = hiddenField.value; if (tmp != "") hiddenField.value += ","; hiddenField.value += text + "|" + id; } function GetHiddenField(fieldName) { // Go up in the hierarchy until the FORM is found var obj = element.parentElement; while (obj.tagName.toLowerCase() != "form") { obj = obj.parentElement; if (obj == null) return null; } if (fieldName != null) return obj[fieldName]; } </script> Summing It Up DHTML behaviors are Internet Explorer client-side components that encapsulate a given behavior and attach it to an HTML element. From the ASP.NET perspective, you can utilize these components to enrich server controls with advanced, browser-specific capabilities. It is important to realize that DHTML behaviors are not strictly necessary to endow server controls with powerful client capabilities; their use, however, makes it possible to better encapsulate all of the required features in a reusable and easily accessible object model. In addition to learning the internal mechanics of DHTML behaviors, you should become familiar with the postback interfaces of ASP.NET controls—in particular, IPostBackDataHandler. This interface lets developers handle any posted data whose format and layout is entirely up to you. A deep understanding of this interface is key to implementing effective interaction between browser and server environments within the boundaries of ASP.NET controls. Send your questions and comments for Dino to [email protected]. Dino Esposito is a Wintellect instructor and consultant based in Italy. Author of Programming ASP.NET and the new book Introducing ASP.NET 2.0 (both from Microsoft Press), he spends most of his time teaching classes in ASP.NET and ADO.NET and speaking at conferences. Get in touch with Dino at [email protected] or join the blog at weblogs.asp.net/despos.
https://docs.microsoft.com/en-us/archive/msdn-magazine/2005/july/cutting-edge-dhtml-enabled-asp-net-controls
2019-12-05T23:15:00
CC-MAIN-2019-51
1575540482284.9
[array(['images/cc163765.fig01.gif', 'Figure 1 ASP.NET Postback Process'], dtype=object) ]
docs.microsoft.com
By setting the 'compiler' argument to true on a field, a specified hook will fire whenever that field's value changes. Creating this magic is really quite easy. Let's begin with this basic field:

array(
    'id'       => 'text',
    'type'     => 'text',
    'title'    => __('Test Compiler', 'redux-framework-demo'),
    'subtitle' => __('This is to test the compiler hook.', 'redux-framework-demo'),
    'desc'     => __('Each time this field is set, a flag is set. On save, that flag initiates a compiler hook!', 'redux-framework-demo'),
    'compiler' => true,
    'default'  => 'Test Compiler'
),

Note the 'compiler' => true argument. This sets the compiler flag. Now we need to hook into the hook that gets fired, as shown in the next section.

Setting up the Compiler Function

Next, the compiler function itself needs to be set up. It requires two parts: the add_filter statement and the actual function. Ideally, this code would be placed within your config PHP file; however, it can be used anywhere in your code provided the opt_name portion of the add_filter line is replaced with the value specified in your opt_name argument. For this example, we'll be using the example found in the sample-config.php. In the initSettings section of the sample-config.php, make sure the following line is included and/or uncommented:

add_filter('redux/options/' . $this->args['opt_name'] . '/compiler', array( $this, 'compiler_action' ), 10, 3);

Now, add (or uncomment) the following function to the Redux_Framework_sample_config class. This is our test function that will allow you to see when the compiler hook occurs. It will only fire if a field set with 'compiler' => true is changed. Please note that for this example, $css will return empty as this is only a basic compiler hook.

function compiler_action($options, $css, $changed_values) {
    echo '<h1>The compiler hook has run!</h1>';
    print_r($options);
    print_r($css);
    print_r($changed_values);
}

If all has been set up correctly, you will see the compiler hook message and the passed values on your options panel after the field with the active compiler hook's value has changed and the settings are saved. Please note that if the output_tag argument is set to false, Redux will not auto-echo a tag into the page header.
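A more practical variation is to write the generated CSS to a file whenever a compiler-flagged field changes. The sketch below is illustrative only: the function name and output filename are made up, $css is only populated when Redux is configured to emit compiler CSS, and in a real theme you would typically use the WP_Filesystem API rather than file_put_contents(). Point the add_filter call shown above at this function name instead of 'compiler_action'.

function compiler_action_to_file( $options, $css, $changed_values ) {
    // wp_upload_dir() and trailingslashit() are standard WordPress helpers
    $upload_dir = wp_upload_dir();
    $css_file   = trailingslashit( $upload_dir['basedir'] ) . 'redux-compiled.css';

    if ( ! empty( $css ) ) {
        // Write the compiled CSS so it can be enqueued as a static stylesheet
        file_put_contents( $css_file, $css );
    }
}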
https://docs.reduxframework.com/core/advanced/integrating-a-compiler/
2017-09-19T20:37:38
CC-MAIN-2017-39
1505818686034.31
[]
docs.reduxframework.com
Generator Plugin

The Generator allows testing of synthetic workloads by generating HTTP responses of various sizes. The size and cacheability of the response is specified by the first two components of the requested URL path. This plugin only supports the GET and HEAD HTTP methods. Path components after the first 2 are ignored. This means that the trailing path components can be manipulated to create unique URLs following any convenient convention.

The Generator plugin inspects the following HTTP client request headers:

The Generator plugin publishes the following metrics:

- generator.response_bytes: The total number of bytes emitted
- generator.response_count: The number of HTTP responses generated by the plugin

Examples

The most common way to use the Generator plugin is to configure it as a remap plugin in remap.config:

map \
    @plugin=generator.so

Notice that although the remap target is never contacted because the Generator plugin intercepts the request and acts as the origin server, it must be syntactically valid and resolvable in DNS.

A 10 byte, cacheable object can then be generated:

$ curl -o /dev/null -x 127.0.0.1:8080

The Generator plugin can return responses as large as you like:

$ curl -o /dev/null -x 127.0.0.1:8080 ((10 * 1024 * 1024))/$RANDOM
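The extracted examples above lost their request URLs. The sketch below fills them back in under two assumptions that are not stated in the text: the remap target hostname (workload.example.com is purely illustrative) and the path convention of cacheability first ('cache' or 'nocache') followed by the size in bytes. Verify both against the Traffic Server documentation for your version before relying on them.

# remap.config - hostnames are illustrative
map http://workload.example.com http://127.0.0.1 @plugin=generator.so

# A 10 byte, cacheable object; trailing components are ignored, so $RANDOM
# simply makes each URL unique
$ curl -o /dev/null -x 127.0.0.1:8080 "http://workload.example.com/cache/10/$RANDOM"

# A 10 MiB cacheable object
$ curl -o /dev/null -x 127.0.0.1:8080 "http://workload.example.com/cache/$((10 * 1024 * 1024))/$RANDOM"

# An uncacheable variant of the same size
$ curl -o /dev/null -x 127.0.0.1:8080 "http://workload.example.com/nocache/10/$RANDOM"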
https://docs.trafficserver.apache.org/en/latest/admin-guide/plugins/generator.en.html
2017-09-19T20:38:11
CC-MAIN-2017-39
1505818686034.31
[]
docs.trafficserver.apache.org
Dependency (see Section 7.13.1, “Bean definition profiles” and Section 7.13.2).

Since Spring Framework 4.0, the set of mocks in the org.springframework.mock.web package is based on the Servlet 3.0 API. For thorough integration testing of your Spring MVC and REST Controllers in conjunction with your WebApplicationContext configuration for Spring MVC, see the Spring MVC Test Framework.

private or protected field access as opposed to public setter methods for properties in a domain entity.

@Autowired, @Inject, and @Resource, which provide dependency injection for private or protected fields, setter methods, and configuration methods.
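As a small illustration of the Servlet API mocks mentioned above, a unit test fixture can construct request and response objects directly, without any container. The request path, header, and class name below are made-up values used only for the sketch.

import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpServletResponse;

public class MockWebExample {
    public static void main(String[] args) {
        // Build a fake GET request the way a test for a controller or filter might
        MockHttpServletRequest request = new MockHttpServletRequest("GET", "/orders/42");
        request.addHeader("Accept", "application/json");
        MockHttpServletResponse response = new MockHttpServletResponse();

        // Hand request/response to the component under test here...
        System.out.println(request.getMethod() + " " + request.getRequestURI());
        System.out.println("Response status defaults to " + response.getStatus());
    }
}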
https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/html/unit-testing.html
2017-09-19T20:41:04
CC-MAIN-2017-39
1505818686034.31
[]
docs.spring.io
Clamp To Constraint

The Clamp To constraint clamps an object to a curve. The Clamp To constraint is very similar to the Follow Path constraint, but instead of using the evaluation time of the target curve, Clamp To will get the actual location properties of its owner (those shown in the Transform panel), and judge where to put it by "mapping" this location along the target curve.

One benefit is that when you are working with Clamp To, it is easier to see what your owner will be doing; since you are working in the 3D View, it will just be a lot more precise than sliding keys around on an F-Curve and playing the animation over and over. A downside is that unlike in the Follow Path constraint, Clamp To does not have any option to track your owner's rotation (pitch, roll, yaw) to the banking of the targeted curve, but you do not always need rotation on, so in cases like this it's usually a lot handier to fire up a Clamp To, and get the bits of rotation you do need some other way.

The mapping from the object's original position to its position on the curve is not perfect, but uses the following simplified algorithm:

1. A "main axis" is chosen, either by the user, or as the longest axis of the curve's bounding box (the default).
2. The position of the object is compared to the bounding box of the curve in the direction of the main axis. So for example if X is the main axis, and the object is aligned with the curve bounding box's left side, the result is 0; if it is aligned with the right side, the result is 1.
3. If the cyclic option is unchecked, this value is clamped in the range 0-1.
4. This number is used as the curve time, to find the final position along the curve that the object is clamped to.

This algorithm does not produce exactly the desired result because curve time does not map exactly to the main axis position. For example an object directly in the center of a curve will be clamped to a curve time of 0.5 regardless of the shape of the curve, because it is halfway along the curve's bounding box. However, the 0.5 curve time position can actually be anywhere within the bounding box!

Options

Target
    The Target field indicates which curve object the Clamp To constraint will track along. The Target field must be a curve object type. If this Data ID field is not filled in, it will be highlighted in red, indicating that this constraint does not have all the information it needs to carry out its task and will therefore be ignored on the constraint stack.

Main Axis
    This button group controls which global axis (X, Y or Z) is the main direction of the path. When clamping the object to the target curve, it will not be moved significantly on this axis. It may move a small amount on that axis because of the inexact way this constraint functions. For example if you are animating a rocket launch, it will be the Z axis because the main direction of the launch path is up. The default Auto option chooses the axis which the curve is longest in (or X if they are equal). This is usually the best option.

Cyclic
    By default, once the object has reached one end of its target curve, it will be constrained there. When the Cyclic option is enabled, as soon as it reaches one end of the curve, it is instantaneously moved to its other end. This is of course primarily designed for closed curves (circles & co), as this allows your owner to go around it over and over.
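For scripted setups, the constraint can also be added through Blender's Python API. The sketch below is illustrative only: the curve object name is a placeholder, and the property and enum identifiers reflect the usual bpy names for this constraint, so double-check them in your Blender version's API reference.

import bpy

# Add a Clamp To constraint to the active object, targeting a curve object
obj = bpy.context.object
con = obj.constraints.new(type='CLAMP_TO')
con.target = bpy.data.objects["BezierCircle"]   # must be a curve object (name is hypothetical)
con.main_axis = 'CLAMPTO_X'                     # or 'CLAMPTO_Y' / 'CLAMPTO_Z' / 'CLAMPTO_AUTO'
con.use_cyclic = True                           # wrap around at the ends of the curve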
https://docs.blender.org/manual/de/dev/rigging/constraints/tracking/clamp_to.html
2019-05-19T08:25:11
CC-MAIN-2019-22
1558232254731.5
[]
docs.blender.org
Insights Outlook add-in Applies to: Office 365 Enterprise E5, Office 365 A5, Office 365 E3, Office 365 E1, Office 365 Nonprofit E5, MyAnalytics add-on, Microsoft 365 E3, Microsoft 365 Business, Business Premium, and Business Essentials.. This add-in is an extension of your Outlook experience and works within Outlook to help you gain focus time, maintain your work relationships, and improve your overall work-life balance. Note Only you can see your data, see Privacy FAQ for details. What you might see In Outlook, open the add-in by selecting the Insights icon in the Outlook Home ribbon. If you are using Outlook on the web, open an email message, select the ellipsis (...) in the top-right corner of your email message, and then select Insights. You'll see Insights in the right panel in Outlook: Applies to: Office 365 Enterprise E5, Office 365 A5, Office 365 Nonprofit E5, and the MyAnalytics add-on Insights can tell you how many people have opened your emails and how long they spent reading them. In general, it informs you about emails that you sent to five or more Office 365 users who are internal to your company. (For more information about which email messages are reported about, see Reporting details.) After you send an email message, it takes up to fifteen minutes before Insights can inform you about it. Insights groups similar information into a single summary card that you can select and expand to see a more detailed view. Reporting details Insights, has a cloud mailbox, and has not opted out of Insights. Other exceptions Insights. Open rate The Insights add-in reports the open rate of qualifying emails that you have sent. The following table describes how Insights reports the open rate of a particular email: User privacy is the reason that the imprecise values ("Low" and "High") are reported for read activity. For more information, see the Email read rates section in the MyAnalytics privacy guide. To see read information about sent emails On the Home ribbon, select the Insights icon. If the Insights panel isn't already open, it opens now. Note If you see a "Welcome!" message, select Get started. On the Insights panel, locate the Track email open rates card: This card lets you see more information about recent emails that you've sent. To see this information, select the Track email open rates card. The panel displays insight cards for each of these recently sent messages These cards state the subject line, a brief summary of the open rate, the open rate (sometimes expressed as a percentage), and the number of forwards. Privacy by design The Outlook add-in preserves all the data-subject rights afforded by GDPR. The insights you see in the add-in are only available to you. No admin or manager can see these insights. They are computed as needed, from the your email and meeting information, and are never stored outside your mailbox. Additionally, the add-in begins processing data for insights only after the first time you open it. Learn more about how Microsoft protects your privacy. To turn off the add-in You can turn off the add-in by opting out of MyAnalytics. See Can I opt out of MyAnalytics? for how-to steps. Feedback Send feedback about:
https://docs.microsoft.com/en-us/workplace-analytics/myanalytics/use/add-in
2019-05-19T08:26:30
CC-MAIN-2019-22
1558232254731.5
[array(['../../images/mya/overview/insights-cards-9.png', 'Insights panel'], dtype=object) ]
docs.microsoft.com
About MapBuilder

MapBuilder is a powerful, standards compliant geographic mapping client which runs in a web browser.

Key Features

- Browser based mapping client
- Excels at transforming and rendering XML documents (like GML, Context etc.) using XSLT in the browser
- Customisable and easy to extend
- Open source under the LGPL licence
- No plugins required
- ...

Feature Matrix

If you wish to learn more, you can try the on-line tutorials or the Examples and read the on-line user-guide.
http://docs.codehaus.org/pages/viewpage.action?pageId=92373262
2014-04-16T11:07:21
CC-MAIN-2014-15
1397609523265.25
[]
docs.codehaus.org
User Guide

Use a shortcut for switching typing input languages when you are typing

Before you begin: You can use the following shortcut only on the physical keyboard of your BlackBerry® smartphone.

1. On the Home screen or in a folder, click the Options icon.
2. Click Typing and Input > Language.
3. Press the key > Save.

After you finish: To turn off the shortcut for switching typing input languages, change the Shortcut Keys field to None.
http://docs.blackberry.com/en/smartphone_users/deliverables/18577/Use_shortcut_for_switching_langs_60_1123521_11.jsp
2014-04-16T11:05:32
CC-MAIN-2014-15
1397609523265.25
[]
docs.blackberry.com
phplist features

phplist is a one-way email announcement delivery system. It is great for newsletters, publicity lists, notifications, and many other uses. phplist is designed to manage mailing lists with hundreds of thousands of subscribers. phplist is excellent with smaller lists too!

The Web Interface lets you write and send messages, and manage phplist over the internet. phplist keeps sending messages from your web server, even after you shut down your computer.

- Email to Fax (soon)
http://docs.phplist.com/phplistFeatures.html
2014-04-16T12:12:22
CC-MAIN-2014-15
1397609523265.25
[]
docs.phplist.com
Zero to JupyterHub with Kubernetes

See also the Community Resources section. This documentation is for Helm chart version 1.1.1, which deploys JupyterHub version 1.4.2 and other components versioned in hub/images/requirements.txt. The Helm chart requires Kubernetes version >=1.17.0 and Helm >=3.5.

What To Expect

Note: For a more elaborate introduction to the tools and services that JupyterHub depends upon, see our page about that.

Setup Kubernetes

This section describes how to set up a Kubernetes cluster on a selection of cloud providers and environments, as well as initialize Helm, a Kubernetes package manager, to work with it.

Setup JupyterHub

This tutorial starts from Step Zero: Your Kubernetes cluster and describes the steps needed for you to create a complete initial JupyterHub deployment. Please ensure you have a working installation of Kubernetes and Helm before proceeding with this section.

Administrator Guide

This section provides information on managing and maintaining a staging or production deployment of JupyterHub. It has considerations for managing cloud-based deployments and tips for maintaining your deployment.

Resources

This section holds all the references and resources that helped make this project what it is today.

Community Resources

This section gives the community a space to provide information on setting up, managing, and maintaining JupyterHub.

Note: We recognize that Kubernetes has many deployment options. As a project team with limited resources to provide end user support, we rely on community members to share their collective Kubernetes knowledge and JupyterHub experiences.

Contributing

If you would like to help improve this guide or Helm chart, see the project's contributing guidelines.

Institutional support

This guide and the associated helm chart would not be possible without the amazing institutional support from the following organizations (and the organizations that support them!)
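For orientation, a minimal Helm 3 installation of the chart looks roughly like the sketch below. The release name, namespace, and config.yaml are placeholders; the chart repository URL shown is the commonly documented one and the version matches the chart version stated above, but verify both against the guide before running this.

# Register the JupyterHub chart repository and refresh the index
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update

# Install (or upgrade) the chart into its own namespace using your values file
helm upgrade --install jhub jupyterhub/jupyterhub \
  --namespace jhub --create-namespace \
  --version 1.1.1 \
  --values config.yaml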
https://zero-to-jupyterhub.readthedocs.io/en/stable/
2021-07-23T23:36:21
CC-MAIN-2021-31
1627046150067.51
[]
zero-to-jupyterhub.readthedocs.io
Authenticating Hue users with LDAP Configuring Hue for Lightweight Directory Access Protocol (LDAP) enables you to import users and groups from a directory service, synchronize group membership manually or automatically at login, and authenticate with an LDAP server. Hue supports Microsoft Active Directory (AD) and open standard LDAP such as OpenLDAP and Forgerock OpenDJ Directory Services. Integrating Hue with LDAP When Hue is integrated with LDAP, users can use their existing credentials to authenticate and inherit their existing groups transparently. There is no need to save or duplicate any employee password in Hue. When authenticating using LDAP, Hue validates login credentials against an LDAP directory service if Hue is configured with the LDAP authentication backend (desktop.auth.backend.LdapBackend) in Cloudera Manager. To disable the automatic creation of users at login, set the create_users_on_login property in the field to false: [desktop] [[ldap]] create_users_on_login=false The purpose of disabling the automatic import is to allow only a predefined list of manually imported users to log in. Binding Hue with LDAP There are two ways to bind Hue with an LDAP directory service: - Search Bind - The search bind mechanism for authenticating will perform an ldapsearch against the directory service and bind using the found distinguished name (DN) and password provided. This is the default method of authentication used by Hue with LDAP. - You can restrict the search process by configuring the following two properties under the Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini [desktop] > [[ldap]] > [[[users]]] section. - With the above configuration, the LDAP search filter takes the following form: (&(objectClass=*)(sAMAccountName=[***USERNAME-ENTERED-BY-USER***])) - Direct Bind - The direct bind mechanism for authenticating binds to the LDAP server using the username and password provided at login. - Hue authenticates (without searching) in one of two ways: - NT Domain ( nt_domain): (Only for use with Microsoft Active Directory) Hue binds to the AD with username@domain, using the User Principal Name (UPN) to bind to the LDAP service. This AD-specific property allows Hue to authenticate with AD without having to follow LDAP references to other partitions. This typically maps to the email address of the user or the user's ID in conjunction with the domain. Default: mycompany.com. - Username Pattern ( ldap_username_pattern): Bind to open standard LDAP with the full path of the directory information tree (DIT). It provides a template for the DN that is ultimately sent to the directory service when authenticating. The [***USERNAME***] parameter is replaced with the username provided at login. Default: "uid=[***USERNAME***],ou=People,dc=mycompany,dc=com" Encryption To prevent credentials from transmitting in the clear, encrypt with LDAP over SSL, using the LDAPS protocol on the LDAPS port, which uses port 636 by default. An alternative is to encrypt with the StartTLS operation using the standard LDAP protocol, which uses port 389 by default. Cloudera recommends LDAPS. You must have a CA Certificate in either case. Prerequisites - LDAP server - Bind account (or support for anonymous binds) - Cloudera Manager access with Full Administrator permissions - [optional] LDAP server with LDAPS or StartTLS encryption.
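Not part of the Cloudera page above: a minimal Python sketch, assuming the ldap3 library and placeholder hostname, bind DN and base DN, of what the search bind flow described here does under the hood - search for the user's DN with a bind account, then re-bind as that DN with the password the user typed.
# Illustrative only, not Hue code. All names below are made up.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://ldap.example.com", use_ssl=True, get_info=ALL)

# Step 1: bind with the service (bind) account and locate the user's DN.
svc = Connection(server, user="cn=hue-bind,ou=Service,dc=example,dc=com",
                 password="bind-password", auto_bind=True)
svc.search(search_base="dc=example,dc=com",
           search_filter="(&(objectClass=*)(sAMAccountName=jdoe))",
           attributes=["distinguishedName"])
user_dn = svc.entries[0].entry_dn

# Step 2: bind again as the found DN with the password entered at login.
user_conn = Connection(server, user=user_dn, password="user-password")
print("authenticated" if user_conn.bind() else "rejected")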
https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/securing-hue/topics/hue-authenticate-users-with-ldap.html
2021-07-23T21:48:37
CC-MAIN-2021-31
1627046150067.51
[]
docs.cloudera.com
Libelium Waspmote - Plug & Sense! loggers can be configured to send data to eagle.io in a few easy steps. Waspmotes can communicate directly or use a Meshlium to connect large sensor networks. Note The eagle.io application in the Meshlium device needs to be started and initial connection established with eagle.io before it can be used as a transport. Note You can disable/enable parameters from the Data Source properties dialog after initial creation. Use the settings below for Direct Connection or Meshlium Connection with your Waspmote. Refer to our example Waspmote operating programs for use with 3G (download) and 4G modems (download) to establish a direct connection to eagle.io. Apply the following settings to the variables at the top of the operating program: Connect to the web interface on the Meshlium device, open the eagle.io Cloud Connector configuration and Save the following settings prior to clicking Start to run the application:
https://docs.eagle.io/en/latest/topics/device_configuration/libelium_waspmote/index.html
2021-07-23T22:59:58
CC-MAIN-2021-31
1627046150067.51
[]
docs.eagle.io
Method GtkHeaderBar set_title_widget Declaration void gtk_header_bar_set_title_widget ( GtkHeaderBar* bar, GtkWidget* title_widget ) Description Sets the title for the GtkHeaderBar. When set to NULL, the headerbar will display the title of the window it is contained in. The title should help a user identify the current view. To achieve the same style as the built-in title, use the “title” style class. Set the title widget to NULL for the window title label to be visible again.
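To make the behaviour concrete, here is a small PyGObject (GTK 4) sketch; it is not taken from the GTK reference, and the window and label names are illustrative. It swaps in a custom title widget and notes how passing None restores the plain window title:
# Illustrative PyGObject (GTK 4) sketch; widget names are made up.
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

def install_custom_title(window: Gtk.Window) -> Gtk.HeaderBar:
    header = Gtk.HeaderBar()
    title = Gtk.Label(label="Document 1 - Draft")
    title.add_css_class("title")      # match the built-in title style
    header.set_title_widget(title)    # replace the default title label
    window.set_titlebar(header)
    return header

# Later, to fall back to the window's own title:
# header.set_title_widget(None)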
https://docs.gtk.org/gtk4/method.HeaderBar.set_title_widget.html
2021-07-23T21:38:58
CC-MAIN-2021-31
1627046150067.51
[]
docs.gtk.org
Export Reporting Data Need raw data for more analysis? You can export your reporting data on demand. Export options are available for the All Channels, Email, Company/User, and Happiness reports. In this article Generate an Export - 1 - Head over to the Reports menu, then select the report you want to export. In our example, we're exporting data from the All Channels report. - 2 Set your date range and a view if you want, then click the Export/Print icon next to the date filter in the top right corner. Click Export. - 3 Verify that the date range that shows in the modal window is the one you want to export. Select your preferred format (CSV or XLSX) and click the Export button. - 4 You'll get an email at your Help Scout User email address with a link to download the report when it's ready. Click the blue Huzzah! button to dismiss the modal and just keep an eye out for that email! Download the Export We automatically send an email to the email address that you use to log in to Help Scout. It'll come through with a subject of Data export from Help Scout. Click on the Download button in the email to download the CSV or XLSX file. You must be logged in to Help Scout as the same user to complete the download. Note: The email is sent when your export request has been completed and the export is available to download. Multiple report export requests are queued and processed one at a time. Large reports with a lot of data will take a longer time to process. We will notify you if we are unable to process the export. Our support team will also receive a notification and will follow up with you from there! Data Included Your export will contain all data about each conversation included in your parameters, except the conversation thread contents and satisfaction rating comments. Collected data such as averages, totals, and percentages are not included in the export, but you can use your favorite tools to roll up the exported data in the ways that suit your team best! Note: If you're a Plus plan customer, all Custom Field values for all mailboxes appear in exported reports. The column will be entirely blank if the criteria specified for the export does not include the mailbox where those fields exist. - Rating Comments (y/n) Note: Ratings comment text is not included in report exports. If you need the actual comment text in an export, just ask your Account Owner or Administrator to reach out to our support team — we're able to pull a list of all comments your team has received. Conversations included The conversations included in your export vary depending on the report. Times The times shown in XLSX exports are in UTC. Times in a CSV export will use the timezone that is set by Administrators at Manage > Company in Help Scout.
https://docs.helpscout.com/article/849-export-reporting-data
2021-07-23T22:57:08
CC-MAIN-2021-31
1627046150067.51
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/5f1727a104286306f8072d7f/file-4ms4u0gg0a.png', None], dtype=object) ]
docs.helpscout.com
Supported platforms - Estimated hardware requirements - Supported operating systems - User permissions - Supported web servers - Required server software - Supported web browsers - See also Estimated hardware requirements These hardware requirements are estimated, and are expected to vary significantly with queries per second and collection size. Please contact [email protected] if you need a more accurate assessment of your individual hardware requirements. Funnelback recommends that production search environments consist of a minimum 2-server architecture: Dedicated gathering/filtering/indexing/administration server Dedicated query processing server Additional query processors can be added as query load increases, or if additional redundancy is required. Small website / intranet 10K documents Predominantly HTML content, some binary files (averaging 1MB each) 1000+ search queries/day Large website 100K documents Predominantly HTML content, some binary files (averaging 1MB each) 10,000+ queries/day Medium enterprise 2 million documents Predominantly binary files (averaging 1MB each), some HTML / textual content 1,000+ search queries/day 2% content changes daily Large enterprise 10 million+ documents Predominantly binary files (averaging 1MB each), some HTML / textual content 10,000+ search queries/day 2% content changes daily Knowledge Graph The Knowledge Graph service uses additional memory above Funnelback’s normal memory requirements, either on the Funnelback server itself or on a separate graph database server if preferred. The amount of memory required will scale with the amount of data stored in the knowledge graph (e.g. number of nodes * average amount of metadata per node). At this time we do not have enough large scale datapoints to provide useful estimates. Supported operating systems Funnelback is tested and fully supported on the following operating systems: Red Hat Enterprise Linux 7 (64 bit) or CentOS 7 (64 bit) Microsoft Windows Server 2012 (64 bit) Microsoft Windows Server 2016 (64 bit) User permissions Supported web servers Funnelback ships with an embedded webserver (Jetty) which is used to serve all web pages. The use of another web server for administration and Funnelback’s modern search UI is not supported. Required server software Common Java (installer ships with an embedded Java Virtual Machine (JVM) to fulfil this requirement) - - - Linux libstdc++ - Normally installed by default, or with yum install libstdc++ crontab - Normally installed by default, or with yum install cronie GLIBC 2.4 or greater Some fonts - e.g. yum install dejavu-fonts-common dejavu-sans-fonts dejavu-sans-mono-fonts dejavu-serif-fonts dejavu-lgc-sans-mono-fonts urw-fonts.noarch Supported web browsers Administration dashboard The Funnelback administration dashboard supports the following desktop web browsers: Google Chrome - latest release Mozilla Firefox - latest release Mozilla Firefox - extended support release Microsoft Edge - latest release Safari - the current and immediately preceding version Search results templates The Funnelback search results templates support the following desktop browsers: Google Chrome - latest release Mozilla Firefox - latest release Mozilla Firefox - extended support release Microsoft Edge - latest release Internet Explorer 11 Safari - the current and immediately preceding version The Funnelback search results templates also support the following mobile browsers: Google Chrome - Latest Stable Release on Android Safari on iOS14 Safari on iOS13
https://docs.squiz.net/funnelback/docs/latest/administer/installing-patching-upgrading/supported-platforms.html
2021-07-23T22:26:54
CC-MAIN-2021-31
1627046150067.51
[]
docs.squiz.net
API Key is a part of an API Key and Integrations add-on, click here to learn how to get it on Marketplace » How to set it up Step #1 Log in to Woodpecker and generate an API key. 1. Go to your Settings → MARKETPLACE → 'INTEGRATIONS'. 2. Go to API keys and click 'CREATE A KEY'. 3. Copy the API Key. Step #2 Log in to LeadFuze 1. Open your LeadFuze account and navigate to Settings → Integrations. 2. Choose Woodpecker in the list and turn the integration on. 6. Paste the API key generated previously in your Woodpecker account and click ' Authenticate' afterward. Now, you can start to move your prospects from LeadFuze to Woodpecker. It will save a lot of your time - you won't need to import CSV, XLS or XLSX files to Woodpecker anymore. You can choose one of the two options: Automatic sync of contacts or Manual export. Q: How to set up the Automatic sync of contacts? To set that up, follow these steps: Open your LeadFuze account, click Leads (1.) → choose a list (2.), 2. Edit the list. 3. Go to List Options → Send to → Woodpecker → Settings; 4. Choose the Woodpecker campaign which will receive the data from this list; 5. Click ‘Save’. 6. Your leads from LeadFuze will be automatically added as prospects in the chosen campaign in Woodpecker. Q: How to export leads to Woodpecker manually? As in the case of automatic sync, open your LeadFuze account, click Leads → choose a list. Edit the list. Select contacts from the list. Go to List Options → Send to → Woodpecker → Settings. Choose the Woodpecker campaign from the list. If you haven't created any campaign in Woodpecker yet, you should do this first.
https://docs.woodpecker.co/en/articles/5223335-how-to-integrate-leadfuze-with-woodpecker
2021-07-23T22:52:40
CC-MAIN-2021-31
1627046150067.51
[array(['https://downloads.intercomcdn.com/i/o/361035443/c32a3982b8f5cf89904caf53/obraz.png', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335447448/8e869d1bd7a09976409c2ea0/file-PRArKN9Req.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335447449/5767586749d8378a3ad03760/file-S5cfvgsb8y.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335447454/5847c4cf6beee9fba4465db0/file-hSqpAM7QGw.png', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/335447458/218e897e487b44a1cb345c20/API-token-LeadFuze.png', None], dtype=object) ]
docs.woodpecker.co
Authentication Authentication is made with an API key. You can generate it in your Woodpecker panel. Not sure how? Check this guide. API Key is a part of an API Key and Integrations add-on, click here to learn how to get it on Marketplace » Remember, the API key is a required parameter. We'll return an error if it's missing or invalid. Another noteworthy thing is that if you use the Agency panel, each company you've added can have its own API keys. Therefore, to access any data from a specific company added to your Agency, you need to generate an API key for that company. Your API key identifies your account, so keep it secret! To make a request, you can place your API key in the headers, as follows: headers : { "Authentication" : "Basic <API KEY>" } If you prefer to use cURL, you can use the following syntax: curl --location --request GET '' \ --header 'Authorization: Basic <API_KEY>' Remember to encode your API key to Base64 format before using it in a cURL request.
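For completeness, here is a small Python sketch (not from the Woodpecker docs; the endpoint URL is a placeholder, since the snippet above leaves it blank) showing the same Basic header built with the requests library:
# Illustrative only: the endpoint below is a placeholder, not a documented URL.
import base64
import requests

API_KEY = "your-api-key"                       # generated in the Woodpecker panel
ENDPOINT = "https://api.example.com/endpoint"  # placeholder - use the real API URL

token = base64.b64encode(API_KEY.encode("utf-8")).decode("ascii")
response = requests.get(ENDPOINT, headers={"Authorization": f"Basic {token}"})
response.raise_for_status()
print(response.json())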
https://docs.woodpecker.co/en/articles/5223425-authentication
2021-07-23T22:49:10
CC-MAIN-2021-31
1627046150067.51
[]
docs.woodpecker.co
WSO2 Storage Server comes with a default Cassandra configuration, which you can override by changing the following configuration files: Pointing to a remote Cassandra cluster Change the <SS_HOME>/repository/conf/etc/hector-config.xml file. The default configuration is as follows: <HectorConfiguration> <Cluster> <Name>ClusterOne</Name> <Nodes>localhost:9160</Nodes> <AutoDiscovery disable="false" delay="1000"/> </Cluster> </HectorConfiguration> The following are the XML elements which you can use to change the default configuration: Changing default IPs and ports Storage Server comes with configurations suited for a standalone Cassandra deployment, but if you set up a cluster, you must change the listening addresses and ports accordingly in the <SS_HOME>/repository/conf/etc/cassandra.yaml file. - The Cassandra listening IP is used for inter-node communications in a clustered environment: listen_address: <Server listening IP or domain name> The storage port is used to exchange data and commands between the cluster nodes: storage_port: 7000 This port changes according to the <Offset> value in the <Ports> section in carbon.xml. Changing the storage_port value in cassandra.yaml will not affect the server. If encrypted communication is enabled, the cluster uses the port defined in ssl_storage_port for cluster-related commands and data communication: ssl_storage_port: 7001 The RPC listen address is used for the thrift-based communication between the server and the client: rpc_address: <IP_ADDRESS> # port for Thrift to listen for clients on rpc_port: 9160 The RPC port changes according to the <Offset> value in the <Ports> section in carbon.xml. Changing the rpc_port value in cassandra.yaml will not affect the server. The native transport port is the port on which Cassandra listens for CQL clients. Please note that the address on which the native transport is bound is the same as the rpc_address (to start the native transport server, start_native_transport should be equal to true, which is its default value). This needs to be set as follows: start_native_transport: true native_transport_port: 9042 For a full list of explanations of each configuration directive, refer to the file's code comments. Cassandra Cluster Configuration for Statistics and Node Operations To view Cassandra cluster statistics and do cluster operations, <SS_HOME>/repository/conf/etc/cluster-config.xml needs to be configured. Here, all the SS nodes and their service URLs need to be configured. <cluster> <configuration> <cluster_authentication> <username>admin</username> <password>admin</password> </cluster_authentication> <nodes> <node> <host>127.0.0.1</host> <backend_url>local://services/</backend_url> </node> </nodes> </configuration> </cluster> The following are the XML elements which you can use to change the default configuration: Exposing services to the public In an IaaS infrastructure, the services, public IPs, and domain names of the backend Cassandra cluster must be exposed via public addresses. This is done in the <SS_HOME>/repository/conf/etc/cassandra-endpoint.xml file. Given below is the default configuration, where the <EndPoint> and <HostName> elements represent each Cassandra node by its host name.
<Cassandra> <EndPoints> <EndPoint> <HostName>css0.stratoslive.wso2.com</HostName> </EndPoint> <EndPoint> <HostName>css1.stratoslive.wso2.com</HostName> </EndPoint> <EndPoint> <HostName>css2.stratoslive.wso2.com</HostName> </EndPoint> <EndPoint> <HostName>css3.stratoslive.wso2.com</HostName> </EndPoint> <EndPoint> <HostName>css4.stratoslive.wso2.com</HostName> </EndPoint> </EndPoints> </Cassandra>
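As an aside, not taken from the WSO2 page: once the native transport is enabled as described above, any CQL client can connect on port 9042. A minimal Python sketch using the DataStax cassandra-driver, with a placeholder contact point, might look like this:
# Illustrative only: contact point is a placeholder.
from cassandra.cluster import Cluster

cluster = Cluster(contact_points=["cassandra.example.com"], port=9042)
session = cluster.connect()
for row in session.execute("SELECT release_version FROM system.local"):
    print(row.release_version)
cluster.shutdown()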
https://docs.wso2.com/display/SS110/Changing+the+Default+Cassandra+Configuration
2021-07-23T22:33:45
CC-MAIN-2021-31
1627046150067.51
[]
docs.wso2.com
- Page Transform shows how to scale and shift the position of the page, allowing space to be cleared around the original page so stamps don’t overwrite any original page content. - The default location of StampPDF Batch on Windows is C:\Appligent\StampBatch\. On other platforms, it will be wherever you installed it. What is a Stamp File? A stamp file is a text file that specifies how StampPDF Batch stamps your documents. StampPDF Batch supports these types of stamps: - Text stamps — Specify the text to stamp onto the document - Unicode text stamps — Unicode text to stamp onto the document; for example, Chinese, Japanese or Korean text - Image stamps — JPEG or TIFF image to stamp onto the document - PDF stamps — PDF files stamped into the document - Barcode stamps — Currently supports a variety of 1D and 2D barcodes - PageTransform — Scale and Transform the page contents. Stamp file entries take the form Parameter (value); there must be at least one space between the Parameter and the (value).
https://docs.appligent.com/stamppdf-batch/stamp-files/
2021-07-23T22:40:18
CC-MAIN-2021-31
1627046150067.51
[]
docs.appligent.com
Personalise Your Workspace You can personalise your workspace to suit your work and preferences by changing pages so that they display only the information you need, where you need it. The personalisation changes that you make will only affect what you see, not what other users see. You can personalise all types of pages, including the Role Centre page. For more information about Role Centres, see Role Centre. Depending on the type of page and what it includes, you can make various changes, such as move or hide fields, columns, actions, and entire parts, and add new fields. Most personalisation must be done by first activating the Personalising banner, but very simple adjustments, such as column width, can be performed immediately on any list. Note Administrators can perform the same layout changes as users can by customising the workspace for a profile that multiple users are assigned. For more information, see Customise Pages for Roles. Administrators can also override or disable users' personalisation, and they can define which features are even available for users to see in all or specific companies. For more information, see Customising Business Central. Video Overview The following video shows some of the ways in which you can personalise your Role Centre. To change the width of a column You can easily resize columns on any list by dragging the boundary between two columns to the left or the right. - In the header of a list, select and drag the boundary between two columns. - Alternatively, double-click the boundary between two columns to auto-fit the width of the column. This sets the width to the optimal size for readability. As for other personalisation, the changes you make to column width are stored on your account and follow you no matter which device you sign into. To start personalising a page through the Personalising banner Open any page that you want to personalise. In the upper-right corner, select the icon, and then choose the Personalise action. The Personalising banner appears at the top to indicate that you can start making changes. Note To navigate during personalisation, use Ctrl + Click on an action if it is highlighted by the arrowhead. If an icon on the banner indicates that the page is locked, you cannot personalise the page. For more details, see Why a Page is Locked from Personalisation. To add a field, choose the + Field action. From the Add Field to Page pane, drag and drop a field into the desired position on the page. To change a UI element, point to the element, such as an action, a field, or a part. The element is immediately highlighted with an arrowhead or border. Choose the element, and then choose either Move, Remove, Hide, Show, Show under "Show more", Show when collapsed, Show always, Set/Clear Freeze Pane, or Include/Exclude from Quick Entry, depending on the type and location of the UI element. When you have finished changing the layout of one or more pages, choose the Done button on the Personalising banner. What You Can Personalise Personalising Actions Personalisation lets you decide which actions to show on the navigation and action bars and on Role Centres, and where to show them. You can show, hide, or move individual actions or action groups. Personalising the navigation and action bars is done basically the same way as with other UI elements. However, what you can do with an action or group depends on where the action or group is located.
The best way to find out is to enter personalising mode and then let the arrowheads guide you. There are a couple of terms that you should be familiar with to better understand action personalisation: action group and promoted category. An action group is an element that expands to display other actions or groups. For example, on the Sales Orders page, the Functions action that appears when you choose the Actions action is an action group. A promoted category is an action group that appears before the vertical line | on the action bar. The categories typically include the most commonly used actions, so that you can quickly find them. For example, on the Sales Orders page, the Order, Release, and Posting actions are promoted categories. Note You cannot personalise the action bar that appears in parts on the page (for example, the sales lines part on the Sales Order page). To remove, hide, and show actions and action groups When you want to show or hide an action, the options under the arrowhead define what you can do, depending on the action's state. - Choose the arrowhead for an action or action group. - Choose from one of the following options: To move actions and action groups Where you can drop actions or action groups is indicated by a horizontal line between two actions or a border around an action group. The following limitations exist: - You can move individual actions into the promoted categories, but you cannot rearrange the order of the actions in the category. - You cannot move an action group into a promoted category. - To move an action or action group, drag and drop it to the desired position, like you do with fields and columns. - To move an action or action group into another action group that is empty, drag the action or action group to the new group and drop it in the Drop an action here box. Personalising Parts Parts are areas on a page that are typically composed of multiple fields, charts or other content, and can be identified by a coloured border when setting focus to the part. For example, a Role Centre home screen has multiple parts. Because of their well-defined boundary, you can personalise the entire part as well as its contents. - To move a part, drag and drop it to the desired position. A coloured line indicates valid positions on the screen. For example, FactBoxes can only be moved next to other FactBoxes in the FactBox pane. - You can hide a part by choosing the Hide option under the arrowhead. - When you start personalising or navigate to a new page, any parts that are currently hidden will appear on the page with distinctive visuals to indicate they are hidden. You can unhide that part by choosing the Show option under the arrowhead. You can clear all personalisation changes that you have made within a single part by choosing the Clear personalisation option under the part's arrowhead. Clearing personalisation of a part only affects changes to the contents of the part, not the placement or visibility of the part on the page. To clear personalisation At some point, you might want to undo some or all of the personalisation changes that you have made to a page over time. - On the Personalising banner, choose the Clear personalisation action. - Choose one of the following options. Be aware that clearing personalisation cannot be undone. Additional Points of Interest To help you better understand personalisation, here are some pointers.
- When you make changes to a card page that you open from a list, the changes will take effect on all records that you open from that list. For example, let's say you open a specific customer from the Customers list page, and then personalise the page by adding a field. When you open other customers from the list, the field that you added will also be shown. - Changes that you make will take effect on all your Role Centres. For example, if you make a change to the Customer list when the Role Centre is set to Business Manager, you will also see the change on the Customers page when the Role Centre is set to Sales Order Processor. - Changes to a page in a pane will take effect on the page wherever it is shown. - You can only add fields and columns from a predefined list, which is based on the page. You cannot create new ones. See Related Training at Microsoft Learn See Also Customise Pages for Profiles Working with Business Central Change Basic Settings Change Which Features are Displayed
https://docs.microsoft.com/en-gb/dynamics365/business-central/ui-personalization-user
2021-07-23T23:50:05
CC-MAIN-2021-31
1627046150067.51
[]
docs.microsoft.com
Metadata class types - Introduction - Metadata class types: text - Metadata class types: date - Metadata class types: number - Metadata class types: geospatial x/y coordinate - Metadata class types: document permissions - See also Introduction The metadata class type controls how Funnelback interprets the raw value - for example, whether a value is treated as a date (e.g. 2017-09-24) or as a number (e.g. 20170924). Metadata class types: text A text type metadata class has the values interpreted as a text string. The text can include code such as HTML tags and these will be returned as is by Funnelback. It is the responsibility of the user interface layer to interpret or escape the field content. Searching textual metadata Funnelback includes a number of query language and CGI parameters that can be used to search a text type metadata field. Text metadata can also be sorted alphabetically using the sort=metaCLASSNAME or sort=dmetaCLASSNAME parameters. See: sort options for more information on sorting search results. Metadata class types: date Funnelback supports a single date-type metadata class using the reserved d metadata class. The value of this field is interpreted as a date and is assigned as the document’s date for the purposes of recency in the ranking algorithm, and also for sort and presentation. Only a single date value will be assigned to the document. If multiple date metadata fields exist in the document the assigned date is chosen based on the date precedence rules below. Supported date formats Notes: All date formats are case insensitive. There is no locale support for dates. Month names and abbreviations must be in English. Date precedence order When multiple dates are encountered for a document the following precedence order applies: External metadata (highest priority) The first occurrence in the document of dc.date or any metadata source mapped to the d metadata class. dc.date.modified dc.date.created dc.date.issued HTTP last modified date (lowest priority) Searching date metadata A number of special date parameters are supported via CGI parameters and the query language. Dates must be specified in DMMMYYYY format, e.g. 1Jan2015, 5Sep2001. Parameters can be combined to create date range queries, e.g. the query below would match results with dates after 28th July, 1914 and before 11th November, 1918: meta_d1=28Jul1914&meta_d2=11Nov1918 Additional day, month and year variants are available for each of the above CGI parameters to facilitate easy form integration. The parameters can be modified further by appending day, month or year. The example below would match results with dates matching 25th April 1915: meta_dday=25 meta_dmonth=Apr meta_dyear=1915 The example below would match results with dates from 1st September, 1939 to 2nd September, 1945: meta_d3day=01 meta_d3month=Sep meta_d3year=1939 meta_d4day=02 meta_d4month=Sep meta_d4year=1945 Note: d3 and d4 require all three components (day, month and year) to be provided; d, d1 and d2 do not require all three components, e.g. just the year could be specified. Date metadata can also be sorted by date by using the sort=date or sort=adate parameters. See: sort options for more information on sorting search results. Metadata class types: number Defining a metadata class as a number tells Funnelback to interpret the contents of the field as a number. This allows numeric comparisons (==, !=, >=, >, <, <=) to be run against the field, and for numeric ranges to be defined as faceted navigation using the class. Numeric metadata is only required if you wish to make use of these range comparisons or for numeric range facets.
Numbers for the purpose of display in the search results should be defined as text metadata. The value of a numeric field will contain an integer or float, and this number is interpreted by Funnelback as an 8-byte double. This affects the precision of large and small numerical values when applying range searches against a specific number. The lt_x and gt_x operators compare against the exact value specified. Other operators allow a small tolerance, enforced by the accuracy of 8-byte doubles. Searching numeric metadata Numeric fields can be queried using CGI parameters. There are no equivalent query language operators for numeric metadata search. The CGI parameters are: Numeric metadata can also be sorted using the sort=metaCLASSNAME or sort=dmetaCLASSNAME parameters. See: sort options for more information on sorting search results. Metadata class types: geospatial x/y coordinate Defining a field as geospatial type metadata tells Funnelback to interpret the contents of the field as a decimal lat/long coordinate (e.g. -31.95516;115.85766). This is used by Funnelback to assign a geospatial coordinate to an indexed document. A geospatial type metadata class is not required if you just want to plot the item onto a map in the search results (a text type value will be fine as it’s just a text value you are passing to the mapping API service that will generate the map). Searching geospatial metadata A number of geospatial CGI parameters are available when searching geospatial metadata. These parameters can be used to scope the search to items with a geospatial coordinate within a specific distance of an origin point. This allows for a "show results near me" search when used in conjunction with a user’s GPS or browser-derived location coordinates. Geospatial metadata can also be sorted by proximity to the origin point by using the sort=prox or sort=dprox parameters. See: sort options for more information on sorting search results. Metadata class types: document permissions Funnelback interprets the value contained in a document permissions type metadata class as a document lock string describing the access controls that apply to the document. This is used for enterprise search collections that enforce document level security. The format of the lockstring is determined by the connector that is used for the repository that is being indexed. Defining a document permissions type metadata field will prevent all results from the index from being returned unless an appropriate security plugin has been defined. This is to enforce a minimum level of security over the collection when document level security is enabled. For this reason metadata fields of this type should only be defined when indexing a supported repository type that requires a document permissions metadata field to be defined. See: document level security for further information.
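To make the CGI parameter usage concrete, here is a small Python sketch (not from the Funnelback documentation; the search endpoint, collection name and the numeric class name "price" are placeholders) that builds a date-range query string and a numeric-range query string of the kind described above:
# Illustrative only: endpoint, collection and class names are made up.
from urllib.parse import urlencode

BASE = "https://search.example.com/s/search.html"  # placeholder search endpoint

# Date range: results dated between 28 Jul 1914 and 11 Nov 1918.
date_params = {
    "collection": "example-collection",
    "query": "armistice",
    "meta_d1": "28Jul1914",
    "meta_d2": "11Nov1918",
}
print(BASE + "?" + urlencode(date_params))

# Hypothetical numeric range on a metadata class named "price",
# using the gt_/lt_ style operators mentioned above.
numeric_params = {
    "collection": "example-collection",
    "query": "widgets",
    "gt_price": "10",
    "lt_price": "100",
}
print(BASE + "?" + urlencode(numeric_params))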
https://docs.squiz.net/funnelback/docs/latest/build/data-sources/indexer-configuration/metadata/metadata-class-types.html
2021-07-23T22:12:09
CC-MAIN-2021-31
1627046150067.51
[]
docs.squiz.net
WP Travel Engine Documentations Itinerary Downloader is an extension for the WP Travel Engine plugin that generates an itinerary PDF bundling various trip content into an offline, quickly accessible trip-detail brochure, helping you stay informed and plan every moment of your trip. The addon grants the ability to download the trip details including various sections such as trip overview, trip availability, trip cost, trip itinerary, trip facts, frequently asked questions about the trip, quick contact details, and so on. This one-stop travel companion addon helps you access the most important details about the trip. A few of the basic settings and required fields are enabled/added automatically when the addon is enabled. So, you can simply use the shortcode and paste it into the editor inside any of the trip posts to create the quick link. The shortcode usable is [wte_itinerary_downloader]. There's only a single shortcode, and all other configurations are controlled through the settings on the plugin dashboard, which are described in detail below: After you enable this addon, the global settings tab will be displayed in Admin Dashboard > Trips > Settings > Itinerary Downloader. You will be able to configure the various settings as required to modify the behavior of the addon. The global settings are divided into 6 sections. From this settings section, you can control the major aspects of the addon, such as enabling/disabling the addon, enabling a popup form instead of single-click download, and the MailChimp form. You can find the following configurable settings under this subhead. User consent is very useful in response to the various laws and policies that have recently emerged, such as the GDPR. You can configure the following settings. The email form, when enabled, allows sending an email to the user with the itinerary PDF attached. You can configure the following info that will be sent to the user as below: In this setting, you can configure various sections and fields that will be added when generating the PDF. This section allows you to configure text and various other section-wise configuration: PDF description pages, footers, the last page (that contains the info about the company and the person to contact), and so on. You can configure various of these as available below: This setting will allow you to configure whether to include or exclude certain sections while generating the PDF. By default, the following fields are shown in the PDF section: All other fields are displayed by default. Currently, configurable settings are as below:
https://docs.wptravelengine.com/docs/itinerary-downloader/
2021-07-23T21:29:57
CC-MAIN-2021-31
1627046150067.51
[]
docs.wptravelengine.com
User management functionality is provided by default in all WSO2 Carbon-based products and is configured in the user-mgt.xml file found in the <WSO2_OB_APIM_HOME>/repository/conf/ directories. The following documentation explains the configurations that should be done in WSO2 products in order to set up the User Management module. The following sections include instructions on the above required configurations and repositories:
https://docs.wso2.com/exportword?pageId=148931996
2021-07-23T22:57:25
CC-MAIN-2021-31
1627046150067.51
[]
docs.wso2.com
DNS and SSL Overview In order to use Spinnaker in your organization, you're going to want to configure your infrastructure so that users can access Spinnaker. This has several steps: - Expose the Spinnaker endpoints (Deck and Gate) - Configure TLS encryption for the exposed endpoints - Create DNS entries for your endpoints - Update Spinnaker so that it's aware of its new endpoints. Expose the Spinnaker endpoints Spinnaker users need access to two endpoints within Spinnaker - Deck (the Spinnaker UI microservice), which listens on port 9000 - Gate (the Spinnaker API microservice), which listens on port 8084 There are a number of ways to expose these endpoints, and your configuration of these will be heavily dependent on the Kubernetes environment where Spinnaker is installed. Several common options are as follows: - Set up an ALB ingress controller within your Kubernetes environment, and add an ingress for the spin-deck and spin-gate services. - Set up an nginx ingress controller within your Kubernetes environment, and add an ingress for the spin-deck and spin-gate services. - Create Kubernetes loadbalancer services for both the spin-deck and spin-gate Kubernetes deployments Configure TLS encryption for the exposed endpoints It's recommended to encrypt the exposed Spinnaker endpoints. There are three high-level ways of achieving this: - Most common: Terminate TLS on the load balancer(s) in front of the endpoints, and allow HTTP traffic between the load balancer and the endpoint backends. - Terminate TLS on the load balancer(s) in front of the endpoints, and configure the load balancer and endpoint backends with TLS between them, as well. - Least common: Configure your load balancer(s) to support SNI so that the load balancer passes the initial TLS connection to the backends. There are a number of ways to achieve all of these - you can work with your Kubernetes, security, and networking teams to determine which methods best meet your organization's needs. If you need to terminate TLS on the backend containers (the second or third options), review the Open Source Spinnaker documentation regarding configuring TLS certificates on the backend microservices (Setup/Security/SSL). Create a DNS Entry for your load balancer Add a DNS Entry to your DNS management system. You should only need to add a DNS entry for the user-facing ALB, ELB, or other load balancer which is what you use to currently access Spinnaker. It typically has a name such as the one below: armoryspinnaker-prod-external-123456789.us-west-1.elb.amazonaws.com Add a CNAME entry for the given ELB to create a simple name you will use to access your instance of Spinnaker, e.g. spinnaker.armory.io. Update Spinnaker configuration Update the endpoints for Spinnaker Deck (the Spinnaker UI microservice) and Spinnaker Gate (the Spinnaker API microservice) Operator apiVersion: spinnaker.armory.io/v1alpha2 kind: SpinnakerService metadata: name: spinnaker spec: spinnakerConfig: config: security: apiSecurity: overrideBaseUrl: uiSecurity: overrideBaseUrl: Don't forget to apply your changes: kubectl -n <spinnaker namespace> apply -f <SpinnakerService manifest> Halyard hal config security ui edit --override-base-url= hal config security api edit --override-base-url= Don't forget to apply your changes: hal deploy apply
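As a quick sanity check after updating DNS and the override base URLs (this is not part of the Armory page; the hostnames below are illustrative), you can confirm that Deck and Gate answer over HTTPS with a short Python script:
# Illustrative only: replace the hostnames with your own Deck and Gate URLs.
import requests

for name, url in [("Deck", "https://spinnaker.example.com"),
                  ("Gate", "https://gate.spinnaker.example.com")]:
    resp = requests.get(url, timeout=10)
    print(f"{name}: {url} -> HTTP {resp.status_code}")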
https://v2-23.docs.armory.io/docs/armory-admin/dns-and-ssl/
2021-07-23T21:16:36
CC-MAIN-2021-31
1627046150067.51
[]
v2-23.docs.armory.io
TestEventPattern Tests whether an event pattern matches the provided event. Request Syntax { "Event": " string", "EventPattern": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - Event The event in the JSON format to test against the event pattern. Type: String Required: Yes - EventPattern The event pattern you want to test. Type: String Length Constraints: Maximum length of 2048. Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Errors For information about the errors that are common to all actions, see Common Errors. Example Tests that a given event matches a given event pattern. The following is an example of a TestEventPattern request. { "EventPattern": "{\"source\": [\"com.mycompany.myapp\"]}", "Event": "{\"id\": \"e00c66cb-fe7a-4fcc-81ad-58eb60f5d96b\", \"detail-type\": \"myDetailType\", \"source\": \"com.mycompany.myapp\", \"account\": \"123456789012\", \"time\": \"2016-01-10T01:29:23Z\", \"region\": \"us-east-1\", \"resources\": [\"resource1\", \"resource2\"], \"detail\": {\"key1\": \"value1\", \"key2\": \"value2\"}}" } Sample Response HTTP/1.1 200 OK x-amzn-RequestId: <RequestId> Content-Type: application/x-amz-json-1.1 Content-Length: <PayloadSizeBytes> Date: <Date> { "Result": true }
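As a hedged illustration (not part of the AWS reference page), the same check can be made from Python with boto3's CloudWatch Events client; the payload mirrors the sample request above:
# Illustrative boto3 sketch; region and credentials come from your environment.
import json
import boto3

client = boto3.client("events", region_name="us-east-1")

event = {
    "id": "e00c66cb-fe7a-4fcc-81ad-58eb60f5d96b",
    "detail-type": "myDetailType",
    "source": "com.mycompany.myapp",
    "account": "123456789012",
    "time": "2016-01-10T01:29:23Z",
    "region": "us-east-1",
    "resources": ["resource1", "resource2"],
    "detail": {"key1": "value1", "key2": "value2"},
}
pattern = {"source": ["com.mycompany.myapp"]}

resp = client.test_event_pattern(EventPattern=json.dumps(pattern),
                                 Event=json.dumps(event))
print(resp["Result"])  # True when the pattern matches the event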
http://docs.aws.amazon.com/AmazonCloudWatchEvents/latest/APIReference/API_TestEventPattern.html
2016-10-21T11:32:24
CC-MAIN-2016-44
1476988717963.49
[]
docs.aws.amazon.com
Griffon 0.9.5 - "Aquila clanga" - is a maintenance release of Griffon 0.9. Griffon 0.9.5 upgrades the following dependencies Perhaps the biggest change brought by this release is the full rework of the plugin system. Under the new rules it should be easier to create/build/install/upgrade/manage plugins. Archetypes too can be versioned and released like plugins are. This should make it easier to locate and install archetypes. Hosting your own plugins and archetypes just got easier. No more fumbling around with SVN and HTTP, the only thing you need now is a writable directory in the filesystem for now. Griffon 0.9.5 delivers 3 types of artifact repositories: local, remote and legacy. A default local repository is always available to you; you may configure additional ones. Remote repositories are supported in this version but the code to publish them is not yet released (keep an eye on /griffon-artifact-portal though). Finally the legacy repository should ease up the transition to the new workflow. Configuring a local repository is dead simple, as the following snippet shows
griffon.artifact.repositories = [ 'my-local-repo': [ type: 'local', path: '/usr/local/share/griffon/repository' ] ]
The repo definition may be placed under griffon-app/conf/BuildConfig.groovy or $USER_HOME/.griffon/settings.groovy The classpath used for build, compile, runtime and test should now be resolved at the last possible moment, instead of the earliest possible as it was before. Resolving the classpath eagerly caused a lot of trouble with addons. Sometimes a command may require the user to specify a missing value. When the build is run in interactive mode (the default mode) then it's just a matter of typing the value in the console. However, if the build is run in non-interactive mode then it's very likely it will fail. For this reason, the Griffon build accepts the definition of a default answer if the griffon.noninteractive.default.answer key is specified, like this
griffon -Dgriffon.noninteractive.default.answer=y release-plugin
Be warned that this setting applies to every single input asked by a command. 4 packaging targets get executed when the package command is called with no arguments. There's now the option to specify which ones, by defining a list of Strings for griffon.packaging, for example
griffon.packaging = ['zip', 'jar']
Now only those 2 targets will be executed whenever the package command is called without arguments. You can specify additional targets if the Installer plugin is available. It's now possible to specify SNAPSHOT dependencies on plugins and JARs. Classifiers on dependencies will also be honored, both in condensed and extended format, that is
compile 'net.sf.json-lib:json-lib:2.4:jdk15'
compile group: 'net.sf.json-lib', name: 'json-lib', version: '2.4', classifier: 'jdk15'
There's a new interactive shell based on Apache Karaf's console. This new tool can be invoked by calling griffonsh from the command line. This console should enable faster responses as the JVM is started only once; also dependencies are cached and environment settings are retained. This shell grants access to all standard Griffon commands plus a few ones specific to this new environment. There are some rough edges still so treat it carefully. Dependency resolution can now work in offline mode. When engaged, no external repository will be queried for dependencies; all dependencies should be resolved against the current cache. Also, all artifact repositories are off limits, except those of type local. This mode can be enabled by specifying griffon.offline.mode in either griffon-app/conf/BuildConfig.groovy or $USER_HOME/.griffon/settings.groovy. This flag can also be set as a system property. Swing support has been moved out of core and into its own plugin. This should enable faster updates for Swing related bugs and features. Speaking of Swing, the WindowManager is now capable of dealing with JInternalFrames as if they were windows. You can now show/hide/manage JInternalFrames in the same way as Windows. Addons will now be automatically discovered and registered by the runtime. There's no longer a need to configure addons in plugin scripts (like _Install.groovy) unless the addon requires non-standard configuration (which should be the least of cases). It's now possible to supply a group with more configuration while each member is being initialized. Simply define a config member in the group's definition, for example
mvcGroups {
    // MVC Group for "sample"
    'sample' {
        model = 'sample.SampleModel'
        view = 'sample.SampleView'
        controller = 'sample.SampleController'
        config {
            someKey = 'someValue'
        }
    }
}
You can access these values directly from the arguments passed to the mvcGroupInit method, like this
package sample
class SampleController {
    void mvcGroupInit(Map args) {
        assert args.configuration.config.someKey == 'someValue'
    }
}
Sometimes you don't want controllers to be registered as application event listeners because their code never handles an event. This results in performance upgrades as controllers need not be notified. Both the application's event bus and custom event buses (classes annotated with @griffon.transform.EventPublisher) have a new method that controls whether the event bus is enabled or not. Events posted while the event bus is disabled will be automatically discarded. Sometimes you don't want controllers to be registered as application event listeners because their code never handles an event. This results in performance upgrades as controllers need not be notified. This can be configured per MVC group, for example
config {
    events {
        listener = false
    }
}
There are times when creating multiple MVC groups where there's no need to trigger MVC events; for example, a custom MVCGroupManager can potentially disable the event router for a time, then enable it after the group has been constructed. For example
config {
    events {
        lifecycle = false
    }
}
Applications will fire an event named NewInstance whenever an artifact gets instantiated; this results in 3 events per MVC group in the typical case. This is great for letting other parts of the application know that there's a new artifact instance that can be processed by dependency injection for example. But the runtime also pays the penalty for notifying listeners that may not handle the event. Skipping these events may lead to better performance. This feature relies on the new config section available to MVC groups. Here's how this feature can be specified
mvcGroups {
    // MVC Group for "foo"
    'foo' {
        model = 'foo.FooModel'
        view = 'foo.FooView'
        controller = 'foo.FooController'
        config {
            events {
                instantiation = false
            }
        }
    }
}
There's a new abstraction that deals with resource location: griffon.core.ResourceHandler. It defines the following contract
URL getResourceAsURL(String resourceName);
InputStream getResourceAsStream(String resourceName);
List<URL> getResources(String resourceName);
Applications, addons and artifacts have been retrofitted with this interface; it's recommended that you use these facilities instead of querying a classloader. Also, there's a new AST transformation that grafts these methods to any class: griffon.transform.ResourcesAware. The plugin system and classpath resolution have been completely overhauled. We don't expect any major breakages, however be sure to upgrade to the latest versions of available plugins. If you're running a plugin that has not been upgraded to 0.9.5 and it's causing you trouble then please let us know asap and we'll fix it. Now that Swing support is provided outside of core, every application must make sure to include it as a dependency. The upgrade command does this for you. The names of the threading methods (execSync, execAsync, execOutside) can be confusing. They have been renamed to the following ones: in griffon.core.ThreadingHandler execOutside -> execOutsideUI execSync -> execInsideUISync execAsync -> execInsideUIAsync in griffon.core.GriffonApplication eventOutside -> eventOutsideUI in griffon.util.EventPublisher publishEventOutside -> publishEventOutsideUI in org.codehaus.griffon.runtime.core.EventRouter publishOutside -> publishOutsideUI The old method names are still available and have been marked as deprecated. They will be removed when Griffon 1.0 is released. Griffon 0.9.4 ships with 5.
http://docs.codehaus.org/exportword?pageId=228186405
2013-12-05T07:10:30
CC-MAIN-2013-48
1386163041301
[]
docs.codehaus.org
Apple News notifications are now supported via the Engage platform, as announced in our press release on November 2nd. Select Apple News publishing partners may now have access to our Apple News Composer. This new composer enables support for yet another engagement channel through our popular Engage platform. Contact your Urban Airship Account Manager if you are interested in enabling this feature. Connect seamlessly with your Apple News publishing workflow. Simply add notification text, select your story and country audience, preview and send. Enjoy!
https://docs.urbanairship.com/whats-new/2016-11-30-apple-news-composer/
2017-08-16T17:14:10
CC-MAIN-2017-34
1502886102309.55
[array(['https://docs.urbanairship.com/images/apple-news-new.png', None], dtype=object) ]
docs.urbanairship.com
Various versions of the Data Import/Export Framework are available. The version that you use depends on the version of Microsoft Dynamics AX that you run in your environment: - For Microsoft Dynamics AX 2012 R3, use the version of the Data Import/Export Framework that is included in that release. - For Microsoft Dynamics AX 2012 R2, use the version of the Data Import/Export Framework that is available in cumulative update 7 for Microsoft Dynamics AX 2012 R2. - For AX 2012 or Microsoft Dynamics AX 2012 Feature Pack, use the version of the Data Import/Export Framework that is available from the Lifecycle Services Downloadable Tools (formerly on InformationSource). Architecture The following diagram shows the architecture of the Data Import/Export Framework. The Data Import/Export Framework creates a staging table for each entity in the Microsoft Dynamics AX database where the target table resides. Data that is being migrated is first moved to the staging table. There, you can verify the data, and perform any cleanup or conversion that is required. You can then move the data to the target table or export it. The Import/Export Process The following diagram shows the steps that are required to import or export data in Microsoft Dynamics AX. Determine the source of the data to export or import, and create a source data format for the data. For export, the source is AX. For import, you can use any of the following sources: - AX – Import data from another Microsoft Dynamics AX instance. - ODBC – Import data from another database, such as Microsoft SQL Server or Microsoft Access. - File – Import data from a fixed-width or delimited text file, XML file, or Microsoft Excel file. For more information about how to create a source data format, see Migrating data using the Data import/export framework (DIXF, DMF). - Determine which entity to associate with the data. This entity is either the source of the export data or the target for the import data. You can use an existing entity or create a custom entity. For a list of available entities, see Data import/export framework entities (DIXF, DMF). For information about how to create a custom entity, see Create a custom target entity for the Data import/export framework (DIXF, DMF). - Determine which entities should be imported or exported together, and put all these entities in a processing group. A processing group is a set of entities that must be processed in a sequence, or that can logically be grouped together. The entities in a processing group are exported together, or they are imported together from source to staging and then from staging to target. In a processing group, you also associate each entity with a source data format. For more information about how to create a processing group, see Migrating data using the Data import/export framework (DIXF, DMF). Use the processing group options to either import or export data. For import, you first import the data to a staging table, where you can clean or transform the data as you require. You should validate that the data appears accurate, and that the reference data is mapped correctly. You then migrate the data from the staging table to the target table. You should validate that the entity appears accurate in the target table. For export, you also move the data from the source to a staging table, where you can clean or transform the data as you require. You then export the data either to Microsoft Dynamics AX or to a file. 
The first option creates a .dat file and a .def file for the data, so that it can be imported into another Microsoft Dynamics AX instance. The second option creates a flat file for the data.
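A quick way to sanity-check staged data before the staging-to-target step is to query the staging table directly. The sketch below is illustrative only: the DSN, staging table, and column names are invented placeholders rather than actual DIXF object names, so substitute the entity you are actually migrating.

# Minimal sketch: spot-check rows in a hypothetical DIXF staging table before
# running the staging-to-target step. Connection and table names are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=DynamicsAxDb;Trusted_Connection=yes")
cursor = conn.cursor()

# Count staged rows that are missing a value the target table requires.
cursor.execute(
    "SELECT COUNT(*) FROM MyCustomerStaging "
    "WHERE CustomerAccount IS NULL OR CustomerAccount = ''"
)
missing = cursor.fetchone()[0]

if missing:
    print(f"{missing} staged rows have no customer account; fix them before copying to target")
else:
    print("Staging data looks complete; run the copy-to-target step in the processing group")

conn.close()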
https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/lifecycle-services/ax-2012/user-guide-dixf
2017-08-16T17:39:25
CC-MAIN-2017-34
1502886102309.55
[array(['media/dmfarchitecture.png', 'Data Migration Framework architecture diagram'], dtype=object) array(['media/dmfconfiguration.png', 'Data Migration Framework configuration diagram'], dtype=object)]
docs.microsoft.com
Using Amazon CloudWatch Logs with AWS OpsWorks Stacks To simplify the process of monitoring logs on multiple instances, AWS OpsWorks Stacks supports Amazon CloudWatch Logs. You enable CloudWatch Logs at the layer level in AWS OpsWorks Stacks. CloudWatch Logs integration works with Chef 11.10 and Chef 12 Linux-based stacks. You incur additional charges when you enable CloudWatch Logs, so review Amazon CloudWatch Pricing before you get started. CloudWatch Logs monitors selected logs for the occurrence of a user-specified pattern. For example, you can monitor logs for the occurrence of a literal term such as NullReferenceException, or count the number of such occurrences. After you enable CloudWatch Logs in AWS OpsWorks Stacks, the AWS OpsWorks Stacks agent sends the logs to CloudWatch Logs. For more information about CloudWatch Logs, see Getting Started with CloudWatch Logs. Prerequisites Before you can enable CloudWatch Logs, your instances must be running version 3444 or later of the AWS OpsWorks Stacks agent in Chef 11.10 stacks, and 4023 or later in Chef 12 stacks. You must also use a compatible instance profile for any instances that you are monitoring by using CloudWatch Logs. AWS OpsWorks Stacks prompts you to let it upgrade the agent version and instance profile when you open the CloudWatch Logs tab on the Layer page. If you are using a custom instance profile (one that AWS OpsWorks Stacks did not provide when you created the stack), AWS OpsWorks Stacks cannot automatically upgrade the instance profile. You must manually attach the AWSOpsWorksCloudWatchLogs policy to your profile by using IAM. For information, see Attaching Managed Policies in the IAM User Guide. The following screenshot shows the upgrade prompt. Updating the agent on all instances in a layer can take some time. If you try to enable CloudWatch Logs on a layer before the agent upgrade is complete, you see a message similar to the following. Enabling CloudWatch Logs After any required agent and instance profile upgrades are complete, you can enable CloudWatch Logs by setting the slider control on the CloudWatch Logs tab to On. To stream command logs, set the Stream command logs slider to On. This sends logs of Chef activities and user-initiated commands on your layer's instances to CloudWatch Logs. The data included in these logs closely matches what you see in the results of a DescribeCommands operation, when you open the target of the log URL. It includes data about setup, configure, deploy, undeploy, start, stop, and recipe run commands. To stream logs of activities that are stored in a custom location on your layer's instances, such as /var/log/apache/myapp/mylog*, type the custom location in the Stream custom logs string box, and then choose Add (+). Choose Save. Within a few minutes, AWS OpsWorks Stacks log streams should be visible in the CloudWatch Logs console. Turning Off CloudWatch Logs To turn off CloudWatch Logs, edit your layer settings. On your layer's properties page, choose Edit. On the editing page, choose the CloudWatch Logs tab. In the CloudWatch Logs area, turn off Stream command logs. Choose X on custom logs to delete them from log streams, if applicable. Choose Save. Deleting Streamed Logs from CloudWatch Logs After you turn off CloudWatch Logs streaming from AWS OpsWorks Stacks, existing logs are still available in the CloudWatch Logs management console. You still incur charges for stored logs, unless you export the logs to Amazon S3 or delete them. 
For more information about exporting logs to S3, see Exporting Log Data to Amazon S3. You can delete log streams and log groups in the CloudWatch Logs management console, or by running the delete-log-stream and delete-log-group AWS CLI commands. For more information about changing log retention periods, see Change Log Data Retention in CloudWatch Logs. Managing Your Logs in CloudWatch Logs The logs that you are streaming are managed in the CloudWatch Logs console. AWS OpsWorks creates default log groups and log streams automatically. Log groups for AWS OpsWorks Stacks data have names that match the following pattern: stack_name / layer_name / chef_log_name Custom logs have names that match the following pattern: /stack_name/layer_short_name/file_path_name. The path name is made more human-readable by the removal of special characters, such as asterisks (*). When you've located your logs in CloudWatch Logs, you can organize the logs into groups, search and filter logs by creating metric filters, and create custom alarms. Configuring Chef 12.2 Windows Layers to Use CloudWatch Logs CloudWatch Logs automatic integration is not supported for Windows-based instances. The CloudWatch Logs tab is not available on layers in Chef 12.2 stacks. To manually enable streaming to CloudWatch Logs for Windows-based instances, do the following. Update the instance profile for Windows-based instances so that the CloudWatch Logs agent has appropriate permissions. The AWSOpsWorksCloudWatchLogs policy statement shows which permissions are required. Typically, you do this task only once. You can then use the updated instance profile for all Windows instances in a layer. Edit the following JSON configuration file on each instance. This file includes log stream preferences, such as which logs to monitor. %PROGRAMFILES%\Amazon\Ec2ConfigService\Settings\AWS.EC2.Windows.CloudWatch.json You can automate the preceding two tasks by creating custom recipes to handle the required tasks and assigning them to the Chef 12.2 layer's Setup events. Each time you start a new instance on those layers, AWS OpsWorks Stacks automatically runs your recipes after the instance finishes booting, enabling CloudWatch Logs. For more information about manually configuring CloudWatch Logs streams for Windows-based instances, see the following. To turn off CloudWatch Logs on Windows-based instances, reverse the process. Clear the Enable CloudWatch Logs integration check box in the EC2 Service Properties dialog box, delete log stream preferences from the AWS.EC2.Windows.CloudWatch.json file, and stop running any Chef recipes that are automatically assigning CloudWatch Logs permissions to new instances in Chef 12.2 layers.
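If you want to script the cleanup of streamed logs instead of using the console, a short boto3 sketch like the following can list (and optionally delete) the log groups that match the stack_name/layer_name naming pattern described above. The prefix shown is a hypothetical stack and layer name.

# Sketch: list (and optionally delete) CloudWatch Logs log groups left behind
# after turning off streaming from AWS OpsWorks Stacks.
import boto3

logs = boto3.client("logs", region_name="us-east-1")
prefix = "MyStack/rails-app/"  # hypothetical stack and layer names

paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate(logGroupNamePrefix=prefix):
    for group in page["logGroups"]:
        name = group["logGroupName"]
        print(f"Found log group: {name}")
        # Uncomment to stop incurring storage charges for this group:
        # logs.delete_log_group(logGroupName=name)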
http://docs.aws.amazon.com/opsworks/latest/userguide/monitoring-cloudwatch-logs.html
2017-08-16T17:35:03
CC-MAIN-2017-34
1502886102309.55
[array(['images/cw_logs_upgrade.png', 'CloudWatch Logs tab on the Layer page'], dtype=object) array(['images/cloudwatch_logs_upgrade_time.png', 'CloudWatch Logs tab on the Layer page'], dtype=object) array(['images/cw_logs_dash.png', 'CloudWatch Logs console'], dtype=object)]
docs.aws.amazon.com
This topic explains how you can personalize Microsoft Dynamics 365 for Finance and Operations. There are many types of personalizations in Microsoft Dynamics 365 for Finance and Operations. Some personalizations are selections that you make in a list of options on a setup page. Some personalizations are implicit, for example, Finance and Operations keeps track of the widths of grid columns if you adjust them, and the expanded/collapsed state of FastTabs. Other personalizations are explicit. For explicit personalizations, you enter an interactive personalization mode and modify the appearance of a page by directly managing the way that elements appear or act on the page. All personalizations, of any type, that a user makes in Finance and Operations are for that user only, regardless of the company that the user interacts with. Changes that a user makes to a page don't affect other users in the system. Systemwide options for the current user In the Navigation bar you'll find a gear image that is called the Settings menu button. Opening the Settings menu will show a number of choices. Selecting Options will open the user Options page. There you'll find four option tabs: - Visual - Use to choose a color theme and the default size of the elements on your pages. - Preferences - Here you can choose defaults for each time you open Finance and Operations, including the company, initial page, and default view/edit mode (which determines if a page is locked for viewing or opened for editing each time you open it). You'll also find language, time zone, and date, time, and number format options. Lastly, this page contains a number of miscellaneous preferences that will vary from release to release. - Account - Use to provide your user ID and other account-related options. - Workflow - This is where you can choose workflow-related options. Implicit personalizations Implicit personalizations are those personalizations that you perform simply by interacting with certain controls that remember their current visible state. Grid columns - You can adjust the width of a column in a list by selecting the sizing bar to the left or right of the column header and sliding it left or right to the desired width. Finance and Operations will store the width that you'd like and show that column with that width every time you open the page with that list. FastTabs - Some pages have expandable sections called FastTabs. Finance and Operations will store which FastTabs you have expanded, and which FastTabs you have collapsed. Each time you return to the page, those same FastTabs will be expanded or collapsed based on the last time you used them. In this article, we'll explain how to change the order of your FastTab sections. In some cases, collapsing a FastTab may improve performance because Finance and Operations will not need to retrieve the information for that FastTab until the FastTab is expanded. Fact Boxes - Some pages have a section called a Fact Box pane. This pane contains read-only information related to the current subject of the page. Each section in the Fact Box Pane is called a Fact Box. You can expand or collapse a Fact Box and Finance and Operations will store your preference. In some cases, collapsing a Fact Box may improve performance because Finance and Operations will not need to retrieve the information for that Fact Box until the Fact Box is expanded.
Explicit personalizations using the Personalization toolbar Every person and company has a different perspective on which data is most important to them, or which data isn’t needed for the way they run their business. The ability to tailor the way your information is ordered, interacted with, or even hidden is key to making Finance and Operations a personal and productive experience. Explicit personalizations are those personalizations that you perform explicitly with the intent to change the appearance or behavior of an element or page, by choosing a personalization menu. The most basic type of explicit personalization is where you right-click an element and select Personalize. (Note that not all elements on your page can be personalized.) When you select this method of personalization, you'll see the element's property window. You’ll personalize an element on your page in this manner if you simply want to change the element's label, hide the element so that it isn’t shown on the page (this doesn’t change any data, it simply doesn’t show you the information), include the information in the FastTab summary section (if the element is in a FastTab), skip the field when tabbing, or make it so that data cannot be changed by marking it as “Don’t Edit.” When you want to move or hide elements or make several changes, you can use the Personalization toolbar, available from the element's Property window by choosing Personalize this form. The Personalization toolbar is also available on the form's Action pane, under the Personalize group of the Options tab. Select Personalize this form and you'll see the Personalization toolbar. The Personalization toolbar has a number of personalization actions. Choose the Select tool when you want to select and change the properties of many elements, one at a time. First, click the Select tool, and then click the element whose properties you want to modify. When you select an element, the element's property window will open and you can modify any of the properties for that element. You can repeat the process for other elements on your form that are personalizable. In some cases, you'll select an element and see that some of the properties are not modifiable. This means that based on the way the current element is used, Finance and Operations cannot let you change that property. For example, you cannot hide a field that is required. Choose the Move tool when you want to select and move an element to a different location within the current group of elements. (You cannot move an element outside of its parent group). First, click the Move tool and then click the element that you want to move. When you click the element that you want to move, Finance and Operations will scan the form to understand where this element can be moved and create a series of "drop zones" that show as a colored, bold line next to the area where the element can be dropped as you drag the element around within the current group. Choose the Hide tool to select and hide an element. To hide an element, simply choose the Hide tool and click the element that you'd like to hide. When you choose the Hide tool, all currently hidden elements will be made visible and shown in a shaded container so that you can choose the element to unhide it. Choose the Select tool to see how the page will look with the selected elements hidden. Choose the Summary tool when you want a numeric or string field to show in the FastTab summary area.
The Summary tool will only apply to fields that are contained within a FastTab section. When you choose the Summary tool, Finance and Operations will show all fields that have been selected as summary fields by enclosing them in a shaded container. You can interactively add and remove fields from a FastTab summary by clicking the field. Choose the Skip tool to remove an element from the page's keyboard tab sequence. When you choose the Skip tool, all currently skipped elements will be shown in a shaded container so that you can select a skipped element to make it part of the tab sequence again. Choose the Edit tool when you want to mark an element as Editable or Not Editable. When you choose the Edit tool, all currently non-editable elements will be shown in a shaded container so that you can choose them to make them editable. Note that some fields are required and cannot be made non-editable. Those fields will appear with a padlock icon next to them. Choose the Add tool to add a field to your page. With the Add tool, you cannot create a new field, but you can add fields that are part of the current page definition, but not shown on the page. When you choose the Add tool, you'll first need to select the group or area where you'd like to add a field. A dialog box will display the list of fields related to the section that you've selected. From that dialog box, you can select one or more fields to add and click Insert. If you later want to remove a field that you've previously added, repeat the process, but simply clear the field that you previously added. Choose the Manage button to see a list of management options related to all personalizations for the current page. Choose Clear to reset the page to its default, installed state. All personalizations on the current page will be cleared. There is no undo action, so only use this option when you are certain that you want to reset your page. Choose Import to use a personalization from a personalization file that you or someone else previously created for this page. Importing a personalization will clear any personalizations that you've performed on the entire page and instead use all of the personalizations from the selected file. If you want to save or share a personalization, then you'll select the Export option to save the personalizations to a file. Choose the Close button to close the toolbar and return the page to its previous interactive state. With the Personalization toolbar, saving is implicit. Your personalizations take effect immediately as you make them and there is no need to click a Save button. In some cases, you'll see a padlock icon next to an element when you select a tool. This means that in order for the page to work correctly, you cannot modify the properties related to the selected tool. When the Personalization toolbar is opened, the page becomes non-interactive. You cannot enter data or expand or collapse sections. Explicit personalization: Adding a tile or list to a workspace Some pages with lists will have an additional personalization feature available within their Action Pane, under the Personalize group of the Options tab. Select Add to Workspace to open the drop-down list that gives you the ability to show the information in the current list (filtered and sorted or default) on a Workspace as a list or a summary tile (that can be used to show the number of items in the list).
To add a list to a workspace, first sort or filter the list with the information as you'd like to see it on your workspace, then open the Add to Workspace dialog. Next, select the desired workspace and select List from the Presentation drop-down list. When you select List, a dialog will open allowing you to pick the columns you'd like to see in the list, and the label for the list as it will appear on the workspace. To add a tile to a workspace, first filter the list to represent the data you want summarized (or want quick access to). Then, open the Add to Workspace drop-down dialog. Next, select the desired workspace and select Tile from the Presentation drop-down. When you select Tile, a dialog will open allowing you to provide a tile label and decide if the tile will show a count. When placed on a workspace, the tile will allow you to open the current page from the workspace, and show the list of information related to the tile. When your list or tile is added to a workspace, you can then open that workspace and re-order the list or tile within the group where it was placed. Explicit personalization: Adding a summary from a workspace to a dashboard Some workspaces contain count tiles (tiles with numbers on them) that you'd also like to see on your dashboard. In a workspace, right-click a count tile and select Personalize. Select Pin to Dashboard. The next time you navigate to (and refresh) the selected dashboard, you'll see that count below that workspace's navigation tile on the dashboard. Explicit personalization: Personalizing your dashboard The dashboard is often the first page you'll see when you open Finance and Operations. You can personalize the dashboard to rename your workspace navigation tiles, to show only the tiles that you'd like to see, rename the tiles, or to arrange the tiles in the order you'd prefer to see them. To personalize the dashboard, select any tile and right-click to open a context menu. On the context menu, select Personalize. If the selected tile is one that you'd like to hide or rename or skip, you can make that change directly on the Property window that has appeared. If you'd like to arrange tiles, then select Personalize this form in the Property window to open the Personalization toolbar. You can then use the Move Tool to arrange the tiles. Administration of personalization After you personalize a page, you can share your personalizations with other users by exporting the personalized page. You can then ask the other users to navigate to the personalized page and import the personalization file that you created. Users who have administrator privileges can also manage personalizations for other users on the Personalization page. This page has four tabs: - System – You can temporarily disable or turn off all personalizations in the system. In this case, you don't delete personalizations. Instead, you just reset all pages to their default state. If you re-enable personalization later, all personalizations are reapplied to each user's pages. You can also delete all personalizations for all users. Note that when you delete personalizations, there is no way to automatically re-enable personalizations from the system. Therefore, before you perform this step, make sure that you have exported all personalizations that you might want to import later. - Users – You can specify whether each user can do either implicit personalization or explicit personalization. You can also specify whether each user can do implicit or explicit personalization on a specific page.
Finally, you can import, export, or delete a personalization for each user. - Import – You can import a personalization for one or more users. You use this tab after you've created a personalization on a page or workspace, and then exported that personalization as a personalization file. To import your personalization file and apply it to one or more users, select individual users in the list of all users, or filter by a specific role and then select users in that role. After you've selected the users who will use your personalization, click Import, and select your personalization file. The personalization will be validated and applied to all the selected users the next time that they open the selected page. - Clear – You can clear page or workspace personalizations for one or more users. First, select the page or workspace to clear personalizations for. Next, select individual users in the list of all users, or filter by a specific role and then select users in that role. After you've selected both a page or workspace and users, click Clear. All personalizations that the selected users have applied to the selected page or workspace are cleared. This action can't be undone. However, if the page or workspace has a saved personalization, that personalization can be re-imported. Personalization of inventory dimensions When you personalize the setup of inventory dimensions on a page, consider the settings that have been created by using the Display dimension option. For example, if you use personalization to hide a column for the Batch number inventory dimension and the column appears the next time the page is opened, it could be because the Dimension display settings control what inventory dimension columns are displayed. The Dimension display settings apply across all pages and these settings will override any personalized setup of inventory dimension fields on individual pages. For the example with the Batch number inventory dimension, this dimension would have to be cleared as part of the Display dimensions option so that the table does not display this column. This change would then apply not only to that specific page but across all pages.
https://docs.microsoft.com/en-us/dynamics365/unified-operations/fin-and-ops/get-started/personalize-user-experience
2017-08-16T17:39:53
CC-MAIN-2017-34
1502886102309.55
[]
docs.microsoft.com
An Apple News message is a push notification that links to an Apple News Story. Learn more in the Composer Overview. What You’ll Do In this tutorial, you will: - Enter your message’s content. - Choose which Apple News Story the message will link to. - Optionally select recipient countries. - Preview and send the message. Features and options are explained along the way. Steps - Choose your project from the Urban Airship dashboard, then click the Create button and select the Apple News composer. - Enter the Push Notification Text you want to accompany the Apple News story, limited to 130 characters. The previewer on the right side of the screen updates as you compose your message. The Channel Name entered when setting up the service appears in bold. - Click the Select a story button, then click to select from the list that appears. - Optionally select recipient Countries. This defaults to All. Available countries are dependent on the selections made when setting up the service with Urban Airship, not necessarily the countries you have configured with Apple. Select the radio button for Choose, then click to make your selection from the list that appears. After your first selection, use the Select a country dropdown menu to add another country. - Confirm the message appearance and content in the previewer. - Send your message! Click the Send Now button at the bottom of your window. If the selected story is still processing, the button will instead be labeled Send When Live. We validate the country selection when the Send Now button is clicked. If a country selected here is not configured in your Apple News publisher channel settings, you will see the error Country not available when attempting to send to that country. Edit country selection in Settings » Platforms. Messages cannot be retracted or deleted. History Go to Messages » Apple News to see a year’s history of your Apple News messages.
https://docs.urbanairship.com/engage/apple-news/
2017-08-16T17:27:32
CC-MAIN-2017-34
1502886102309.55
[]
docs.urbanairship.com
Dayparting Now you can specify dayparts for targeting users in your peak audience windows. Daypart is a term borrowed from the broadcasting industry, used to denote times of day within which you would like to reach your target audience. Some recognizable examples from television and radio include Morning Rush, Afternoon Drive, or Must-see TV. Since advertisers know that the eyeballs (or ears) they are targeting are in abundance during those days and times, they will pay a premium to advertise during those dayparts. Dayparts are available in our new Timing Object, which now serves as the sole place for timing-based Automation feature development. Note that we also moved the Automation delay feature to the timing object! Preferred Time You can also specify a preferred time within a window, further optimizing the effect of the message. For example, if a user adds a tag indicating they might eat lunch tomorrow, you can send out your lunch-related message tomorrow, “after 11 a.m., but no later than 1 p.m., preferably at 11:30 a.m., when they are making lunch plans.” { "days_of_week" : ["monday", "tuesday", "wednesday", "thursday", "friday"], "allowed_times" : [ { "start" : "11:00:00", "end" : "13:00:00", "preferred" : "11:30:00" } ] } Automation Timing and Dayparting are available in the API as of this release. Watch this space for an upcoming announcement about dashboard support. To learn more, check out Automation Timing in our API reference.
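For readers who want to reason about how a daypart window behaves, here is a small illustrative Python sketch (not Urban Airship code) that takes a timing object shaped like the JSON above and checks whether a given send time falls inside an allowed window:

# Illustrative sketch: evaluate a daypart timing object locally.
from datetime import datetime, time

timing = {
    "days_of_week": ["monday", "tuesday", "wednesday", "thursday", "friday"],
    "allowed_times": [
        {"start": "11:00:00", "end": "13:00:00", "preferred": "11:30:00"},
    ],
}

def in_daypart(now: datetime, timing: dict) -> bool:
    # The day must be listed, and the clock time must fall inside some window.
    day = now.strftime("%A").lower()
    if day not in timing["days_of_week"]:
        return False
    t = now.time()
    return any(
        time.fromisoformat(w["start"]) <= t <= time.fromisoformat(w["end"])
        for w in timing["allowed_times"]
    )

now = datetime(2016, 6, 30, 11, 45)  # a Thursday inside the lunch window
print(in_daypart(now, timing))       # True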
https://docs.urbanairship.com/whats-new/2016-06-30-automation-timing/
2017-08-16T17:13:21
CC-MAIN-2017-34
1502886102309.55
[]
docs.urbanairship.com
Customers with CDN access now have the option to upload the media included in notifications, rather than entering a URL. This is the easiest way to include rich media in your notifications, especially if you don’t have another way to host media yourself. A CDN supports optimized delivery of rich media to mobile devices across a global audience. It dynamically caches the media in critical locations across the globe to minimize download times, even across lower-bandwidth cellular connections. We currently recommend using file sizes around 1MB to ensure successful delivery of hosted media. Our media hosting solution does not store files over 2MB at this time. Although iOS 10 supports theoretical maximum file sizes that are much higher, in practice we have not yet seen the platform successfully download and display files anywhere near those limits. Supported Media Images: JPEG, GIF, PNG Audio: AIFF, WAV, MP3 Video: AVI Contact Support if you are interested in enabling CDN media hosting. See the additional Media option in our composers’ Optional Message Features
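A simple local pre-flight check can catch oversized or unsupported files before you attempt an upload. The sketch below reflects the limits described above (about 1 MB recommended, 2 MB maximum) and is not part of any Urban Airship SDK; the file name is a hypothetical example.

# Local sanity check for media size and type before uploading to CDN hosting.
import os

ALLOWED_EXTENSIONS = {".jpeg", ".jpg", ".gif", ".png", ".aiff", ".wav", ".mp3", ".avi"}
RECOMMENDED_BYTES = 1 * 1024 * 1024
MAX_BYTES = 2 * 1024 * 1024

def check_media(path: str) -> None:
    ext = os.path.splitext(path)[1].lower()
    size = os.path.getsize(path)
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"{ext} is not a supported media type")
    if size > MAX_BYTES:
        raise ValueError(f"{size} bytes exceeds the 2 MB hosting limit")
    if size > RECOMMENDED_BYTES:
        print(f"warning: {size} bytes is above the ~1 MB recommendation; delivery may be unreliable")

check_media("promo.png")  # hypothetical local file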
https://docs.urbanairship.com/whats-new/2016-10-31-rich-notification-content/
2017-08-16T17:23:29
CC-MAIN-2017-34
1502886102309.55
[array(['https://docs.urbanairship.com/images/cdn-upload.png', None], dtype=object) ]
docs.urbanairship.com
Software Defined Networking (SDN) in Windows Server 2016 is made up of a combination of a Network Controller, Hyper-V Hosts, Software Load Balancer Gateways and HNV Gateways. For tuning of each of these components refer to the following sections: Network Controller The network controller is a Windows Server role which must be enabled on Virtual Machines running on hosts that are configured to use SDN and are controlled by the network controller. Three Network Controller enabled VMs are sufficient for high availability and maximum performance. Each VM must be sized according to the guidelines provided in the SDN infrastructure virtual machine role requirements section of the Plan Software Defined Networking topic. SDN Quality of Service (QoS) To ensure virtual machine traffic is prioritized effectively and fairly, it is recommended that you configure SDN QoS on the workload virtual machines. For more information on configuring SDN QoS, refer to the Configure QoS for a Tenant VM Network Adapter topic. Hyper-V Host Networking The guidance provided in the Hyper-V network I/O performance section of the Performance Tuning for Hyper-V Servers guide is applicable when SDN is used, however this section covers additional guidelines that must be followed to ensure the best performance when using SDN. Physical Network Adapter (NIC) Teaming For best performance and fail-over capabilities, it is recommended that you configure the physical network adapters to be teamed. When using SDN you must create the team with Switch Embedded Teaming (SET). The optimal number of team members is two as virtualized traffic will be spread across both of the team members for both inbound and outbound directions. You can have more than two team members; however inbound traffic will be spread over at most two of the adapters. Outbound traffic will always be spread across all adapters if the default of dynamic load balancing remains configured on the virtual switch. Encapsulation Offloads SDN relies on encapsulation of packets to virtualize the network. For optimal performance, it is important that the network adapter supports hardware offload for the encapsulation format that is used. There is no significant performance benefit of one encapsulation format over another. The default encapsulation format when the network controller is used is VXLAN. You can determine which encapsulation format is being used through the network controller with the following PowerShell cmdlet: (Get-NetworkControllerVirtualNetworkConfiguration -connectionuri $uri).properties.networkvirtualizationprotocol For best performance, if VXLAN is returned then you must make sure your physical network adapters support VXLAN task offload. If NVGRE is returned, then your physical network adapters must support NVGRE task offload. MTU Encapsulation results in extra bytes being added to each packet. In order to avoid fragmentation of these packets, the physical network must be configured to use jumbo frames. An MTU value of 9234 is the recommended size for either VXLAN or NVGRE and must be configured on the physical switch for the physical interfaces of the host ports (L2) and the router interfaces (L3) of the VLANs over which encapsulated packets will be sent. This includes the Transit, HNV Provider and Management networks. MTU on the Hyper-V host is configured through the network adapter, and the Network Controller Host Agent running on the Hyper-V host will adjust for the encapsulation overhead automatically if supported by the network adapter driver.
Once traffic egresses from the virtual network via a Gateway, the encapsulation is removed and the original MTU as sent from the VM is used. Single Root IO Virtualization (SR-IOV) SDN is implemented on the Hyper-V host using a forwarding switch extension in the virtual switch. For this switch extension to process packets, SR-IOV must not be used on virtual network interfaces that are configured for use with the network controller as it causes VM traffic to bypass the virtual switch. SR-IOV can still be enabled on the virtual switch if desired and can be used by VM network adapters that are not controlled by the network controller. These SR-IOV VMs can coexist on the same virtual switch as network controller controlled VMs which do not use SR-IOV. If you are using 40Gbit network adapters it is recommended that you enable SR-IOV on the virtual switch for the Software Load Balancing (SLB) Gateways to achieve maximum throughput. This is covered in more detail in the Software Load Balancer Gateways section. HNV Gateways You can find information on tuning HNV Gateways for use with SDN in the HNV Gateways section. Software Load Balancer (SLB) SLB Gateways can only be used with the Network Controller and SDN. You can find more information on tuning SDN for use with SLB Gateways in the Software Load Balancer Gateways section.
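As a rough illustration of the MTU guidance above, the following back-of-the-envelope sketch checks whether a physical MTU leaves enough headroom for encapsulation. The per-protocol overhead values are approximate assumptions, not authoritative figures.

# Encapsulation adds extra bytes to every packet, so the physical MTU must
# exceed the VM MTU by at least that overhead to avoid fragmentation.
VM_MTU = 1500
PHYSICAL_MTU = 9234                              # jumbo-frame value recommended above
ASSUMED_OVERHEAD = {"VXLAN": 50, "NVGRE": 42}    # rough outer-header sizes (assumption)

for proto, overhead in ASSUMED_OVERHEAD.items():
    required = VM_MTU + overhead
    status = "OK" if PHYSICAL_MTU >= required else "will fragment"
    print(f"{proto}: need at least {required}, have {PHYSICAL_MTU} -> {status}")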
https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/subsystem/software-defined-networking/index
2017-08-16T17:40:47
CC-MAIN-2017-34
1502886102309.55
[]
docs.microsoft.com
A half-logistic continuous random variable. Continuous random variables are defined from a standard form and may require some shape parameters to complete their specification. Any optional keyword parameters can be passed to the methods of the RV object as given below:

Notes

The probability density function for halflogistic is:

halflogistic.pdf(x) = 2 * exp(-x) / (1+exp(-x))**2 = 1/2 * sech(x/2)**2

for x >= 0.

Examples

>>> from scipy.stats import halflogistic
>>> numargs = halflogistic.numargs
>>> [ ] = [0.9,] * numargs
>>> rv = halflogistic()

Display frozen pdf

>>> x = np.linspace(0, np.minimum(rv.dist.b, 3))
>>> h = plt.plot(x, rv.pdf(x))

Check accuracy of cdf and ppf

>>> prb = halflogistic.cdf(x, )
>>> h = plt.semilogy(np.abs(x - halflogistic.ppf(prb, )) + 1e-20)

Random number generation

>>> R = halflogistic.rvs(size=100)

Methods
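For a self-contained, runnable variant of the example above (using current SciPy and Matplotlib releases), the following sketch draws samples from the half-logistic distribution and compares the empirical histogram with the pdf:

# Sample from the half-logistic distribution and overlay the analytic pdf.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import halflogistic

samples = halflogistic.rvs(size=10_000, random_state=0)
x = np.linspace(0, 6, 200)

plt.hist(samples, bins=60, density=True, alpha=0.4, label="samples")
plt.plot(x, halflogistic.pdf(x), label="pdf = 2*exp(-x)/(1+exp(-x))**2")
plt.legend()
plt.show()

# ppf is the inverse of cdf up to numerical precision:
p = halflogistic.cdf(1.5)
assert abs(halflogistic.ppf(p) - 1.5) < 1e-9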
http://docs.scipy.org/doc/scipy-0.12.0/reference/generated/scipy.stats.halflogistic.html
2015-02-01T07:12:24
CC-MAIN-2015-06
1422115855897.0
[]
docs.scipy.org
This is a step by step guide on developing Netbeans modules (plugins) using Maven and Mevenide. It doesn't matter if the module is for Netbeans IDE or your custom application based on Netbeans Platform. The tutorial assumes basic knowledge of module development as described on the netbeans.org site (Api List page and openide site in general). In the text below we discuss the specific issues of building Netbeans modules using Maven, not how to use the APIs to create a module. Installing Maven is rather straightforward, just follow instructions. It's important that you setup the MAVEN_HOME environment property (Windows installer does it for you). If you are behind a firewall, you should setup Maven to use your proxy settings. It's best done by creating a file named build.properties in your user home directory and populating it with the required proxy related properties. First download the latest Mevenide release. This tutorial assumes you have installed Mevenide for Netbeans, however the same can be achieved through the Mevenide for Eclipse with minor variations, in case you want to develop Netbeans modules with that IDE. On the command-line, run this command which downloads the appropriate maven plugin for creating Netbeans modules: Assuming you already have the Netbeans IDE or Netbeans Platform that you want to develop against, please recall its installation directory. If you don't have it, please install it first. Run goal . It will prompt you for the installation directory. Then it will find all Netbeans modules that are in that directory structure and copy them to your local Maven repository. (That's where Maven is looking for artifacts/dependencies, unfortunately there's currently no remote repository that would host Netbeans artifacts). You can later check the maven-nbm-plugin homepage for updates and details on other available goals and customization properties. Start up Netbeans IDE (assuming Mevenide 0.6 installed) and create a new project; under the Maven category there is a "Sample Netbeans module" project template. A skeleton project is created for you. Now we need to specify the correct Netbeans API module jar dependencies. If you have done everything correctly, you should have the Netbeans APIs now available in the editor when you start coding your module. Building is simple, just run "Build" or "Rebuild" from the project's popup menu. The resulting module jar and nbm files will appear under the target/nbm directory in your project. That's it.
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=22106
2015-02-01T07:20:06
CC-MAIN-2015-06
1422115855897.0
[]
docs.codehaus.org
I don't know how to use my BlackBerry ID Your BlackBerry ID provides you with seamless access to multiple BlackBerry products and services, such as the BlackBerry World storefront, BBM, and BlackBerry Protect. Note: It's important to choose a BlackBerry ID password that you can remember. If you forget your BlackBerry ID password, password recovery details can be sent to the email address that you use as your BlackBerry ID username. For your username, be sure to use an email address that you use frequently and that you can access through a browser on your computer. The email address that you use as your username doesn't have to be associated with your BlackBerry device. Should I create a new BlackBerry ID or reuse an old BlackBerry ID? If you previously created a BlackBerry ID, you must use it when you set up your new device. You should only create a new BlackBerry ID if you have not created one before. You can sign in to both a tablet and smartphone using the same BlackBerry ID. - On the home screen, swipe down from the top of the screen. - Tap . - Tap BlackBerry ID. - Tap Create New. - Follow the instructions on the screen. Remember to try to choose a username and password that is easy for you to remember!
http://docs.blackberry.com/en/smartphone_users/deliverables/48934/rok1377705631104.jsp
2015-02-01T07:13:21
CC-MAIN-2015-06
1422115855897.0
[]
docs.blackberry.com
Known Issues Updating from Anaconda 1.2.1 (and up) works as follows: conda update conda, then conda update anaconda. Uninstalling Anaconda does not clear some setting files in C:\Documents and Settings\Your_User_Name; specifically, the directories .spyder2, .ipython, .matplotlib, .astropy will exist after uninstall and the user can choose if these should be removed. Spyder may not launch properly on Windows machines. In that case, try the following steps: - Try launching it again. - Use the Reset Spyder Settings option from the menu under Start, then try to launch Spyder again. - Launch the command prompt by running cmd from the Start menu and then typing Spyder in the command prompt. - Delete the directory .spyder2 from the C:\Documents and Settings\Your_User_Name directory and then repeat the previous steps.
http://docs.continuum.io/anaconda/known-issues.html
2015-02-01T07:03:12
CC-MAIN-2015-06
1422115855897.0
[]
docs.continuum.io
Learnosity Author Our hosted item bank platform. Powerful authoring and item bank repository platform which uses a flexible tag-based system for organizing items. It is the easiest way to create and host rich Learnosity content. Learnosity Author uses the Question Editor API for the core authoring experience. Author API View and search Learnosity's ItemBank from within your CMS. Our Author API allows searching of the content inside our ItemBank that can be shown in third-party CMSs or assessment platforms. We provide quick and easy hooks for developers, to simplify integration into your CMS. Data API Get some info on what has been authored. Designed to be called hourly or daily, to give developers a view of what content has been authored and stored in Learnosity Author. It returns a JSON structure of items, item IDs and tagging information. Data API Documentation » QTI Import & Export Available as a service Easy come, easy go. Liberate your items. We have a set of internal APIs which provide programmatic access to import and export QTI 2.1. These have not yet been exposed publicly, but Import & Export of items is available as a service.
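As a rough illustration of consuming a Data API-style response, the sketch below groups item references by tag. The field names ("items", "reference", "tags") are assumptions for illustration only; adapt them to the actual payload returned by the API.

# Group item references by tag from an assumed JSON response shape.
import json
from collections import defaultdict

payload = json.loads("""
{"items": [
  {"reference": "item-001", "tags": {"Subject": ["Math"], "Grade": ["5"]}},
  {"reference": "item-002", "tags": {"Subject": ["Science"], "Grade": ["5"]}}
]}
""")

by_tag = defaultdict(list)
for item in payload["items"]:
    for tag_type, values in item["tags"].items():
        for value in values:
            by_tag[f"{tag_type}:{value}"].append(item["reference"])

for tag, refs in sorted(by_tag.items()):
    print(tag, "->", ", ".join(refs))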
http://docs.learnosity.com/authoring/
2015-02-01T07:07:22
CC-MAIN-2015-06
1422115855897.0
[]
docs.learnosity.com
in tabs, and options to navigate through your content either by keywords (i.e. any valid Funnelback search query), by filtering the report based on URL prefixes, and by filtering based on any metadata value shown within the Content Auditor interface. The first tab within Content Auditor, as shown above, provides a range of 'recommendation' reports which are associated with common content best practices. These reports are as follows: Reading grade The reading grade report measures how easy documents are to read, based on the Flesch Kincaid grade level measure, which relates roughly to the number of years of formal education required to understand the document. While this measure is a heuristic rather than an exact measurement, it may be useful in ensuring that website content is written at an appropriate level. The range of 'green' grade levels can be configured with the ui.modern.content-auditor.reading-grade.lower-ok-limit and ui.modern.content-auditor.reading-grade.upper-ok-limit parameters. Missing metadata The missing metadata report identifies documents for which no metadata of a given type occurs. This may be helpful in enforcing content policies requiring certain types of metadata to be available in all documents within certain areas. Duplicate titles The duplicate titles report identifies documents for which the given title is also used by other documents. Duplicated titles can make websites and search result pages less useful, since they lack sufficient context for a user to understand what page is being shown. Note - For this report to be used with documents which are not originally HTML or filtered to HTML (such as XML records), a copy of the title metadata to be considered must be mapped to the FunDuplicateTitle metadata class. Date modified The date modified report presents a chart of when documents were last modified, based on metadata within the documents, and hence may be helpful in identifying documents which should be updated or reviewed. The allowable document age before it is marked in red can be configured with the ui.modern.content-auditor.date-modified.ok-age-years setting. Response time The response time report provides a chart of the time taken by Funnelback's web crawler to load each document, which may help to identify documents, sections or entire sites where response time is in need of improvement. Undesirable text In its default configuration, the undesirable text report provides information on documents which contain common misspellings, which allows such typos to be rapidly found and corrected. This report may be configured through the filter.jsoup.undesirable_text-source.* collection configuration setting, which allows for organization-specific lists of undesirable terms, such as outdated product names, to be included within the set to be identified. Duplicate content The duplicate content report shows documents for which the content (or if configured, some metadata) is duplicated by other documents. Duplicated content makes sites more difficult to navigate, and may also be penalized as a ranking factor by some search engines. The ui.modern.content-auditor.collapsing-signature configuration parameter can be used to configure exactly what parts of documents are considered for duplication. Other Content Auditor reports The overview tab of Content Auditor, shown below, provides a snapshot of the top metadata within each configured facet of the collection, showing the most common four entries for each.
Each category provides a link which can be used to 'drill down', allowing content audit reports to be created for chosen subsets of content. The example below shows a number of facets for a simple example collection. From the overview page, you can navigate to the attributes tab which provides a complete list of metadata values found in each facet, and estimates of the count of matching documents. Again, clicking on one of the values in the list will restrict subsequent reports to documents containing that metadata value. The third Content Auditor tab provides a list of currently matching search results, with easy links through to various Funnelback tools, as well as to CSV exports of the result list. The final tab, shown in the example above with the number 15 beside it, shows any sets of duplicate content which were encountered within the collection, and allows this duplicate content to be shown as a result list. Note also that the search box at the top of the Content Auditor interface allows auditing reports to be restricted based on any Funnelback query, in addition to the drill-down options. Once a facet category has been selected, the constraints applied are displayed by Content Auditor as in the image below. The small 'x' links to the right of each constraint allow that constraint to be cleared if needed. Configuring Content Auditor Content Auditor can be configured in a number of ways to provide relevant reports for different data sets. Most configuration occurs via the collection.cfg settings.
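As background on the reading grade report described above, the following sketch shows the Flesch-Kincaid grade level formula itself. The syllable counter is a crude heuristic and Funnelback's own implementation may differ, so treat the output as approximate; very simple text can produce grades near or below zero.

# Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
import re

def syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels, with a minimum of one per word.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syll = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syll / len(words)) - 15.59

sample = "The cat sat on the mat. It was warm and it slept."
print(round(flesch_kincaid_grade(sample), 1))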
https://docs.funnelback.com/analyse/content/index.html
2018-08-14T16:19:18
CC-MAIN-2018-34
1534221209165.16
[array(['../../images/Content-auditor-recommendations.png', 'Content-auditor-recommendations.png'], dtype=object) array(['../../images/Content-auditor-overview.png', 'Content-auditor-overview.png'], dtype=object) array(['../../images/Content-auditor-custom-report.png', 'Content-auditor-custom-report.png'], dtype=object) array(['../../images/Content-auditor-clear-filters.png', 'Content-auditor-clear-filters.png'], dtype=object)]
docs.funnelback.com
Reload comes with the Visual Composer (value $34)! Dividers – Gaps With this element, you can create a divider (with line or only gap) to better separate your elements and sections. Additionally, you can split your pages by using full-width dividers. 3. Quotes Reload comes with 2 styles for your quotes. Just add your text and choose your style. 4. Dropcaps Two more styles for dropcaps. Add your text and choose your style. Just like the Quote element. 5. Lists You can simply create a list with the Font Awesome web icon font you prefer and choose the icon color as well. 6. Buttons The button element is an easy way to add a styled button to your page. Just choose the appropriate type (simple, line, advanced), size and color, fill out the other fields (text, link, hover text for advanced) and off you go! 7. Icon Box Reload comes with 2 basic styles for icon boxes, small and classic. You can use any Font Awesome web icon font, upload a png icon or even write a character (letter or number)! Give title, text, link, align, add the white background (if you wish). 8. Media Box Media for Reload means image, video or map. Combine one of these with title, text, link and you’re ready! 9. Image Text With this shortcode you can simply upload an image (with a video popup if you like), title, text and button (any type). 10. Slogan The slogan element creates a slogan with two buttons. Simply add title, subtitle, text, line-style and buttons (1 or 2). 11. Call-Out This shortcode comes with 2 different styles. Simply choose your preferred style, give a title, your text, button and off you go. 12. Single Image Upload a single image and give the align, animation, the link you wish. Additionally, you can use a popup video. 13. Slider This element is just a simple slider. Upload your images and that’s all! Keep in mind that you can expand the element to full width. 14. Gallery The Gallery element has 3 different styles for showing your image galleries. Fitrows, Masonry and Stamp gallery. You can also create full-width galleries. 15. Message Box With this element you create a message text with an icon and background color. 16. Google Map Give the address you like and your map is ready. Upload the marker you like, set the height and type. Don’t forget to expand your map to full width if you wish. 17. Video You can just add a video (YouTube, Vimeo), even full-width video. 18. Accordion – Toggle This element creates an accordion-toggle panel that expands when the user clicks on the title to reveal more information. Two different styles are provided. 19. Tabs Simply add tabs as needed until you are ready. Choose among horizontal and vertical tabs. 20. Testimonial This element creates a nice slider out of your testimonial items. Go to Testimonial > Testimonial Items and create your testimonials. Additionally you can choose among horizontal or vertical navigation and the categories you want to show. 21. Pricing tables Pricing tables are used to display any subscription options in an appropriate column. Create the tables you wish (don’t overdo this; 4 tables per row are enough) and add your data. 22. Progress Bars This element creates horizontal bars that animate to the percent given. The best way to show off your skills visually. 23. Carousel Upload photos (mainly logos) for the content of an attractive carousel. Define the number of images per screenshot and expand it to full width if you wish. 24. Social Share With this element you can simply add social media icons anywhere in your pages. 25.
Team Member This element takes in a quick profile for a team member/employee and formats it attractively. Add the information you wish and select among two styles. 26. Promo Advanced This is a unique element. You can easily create a nicely colored tabbed section with image (logo), text and button. Just create multiple Promo Advanced elements (in the same row) and define the color you prefer for this row. Automated opacity of the main background color (row background color) will be set in each element. This creates very appealing and interesting areas to show off your partners or anything else you wish. Promo Advanced should only be used on pages without sidebars. 27. Blog You can easily create a blog page or just insert a blog section in a page. Select the categories and the style you wish. 28. Portfolio This element is just like the Blog element. Make a nice presentation of your portfolio. Simply select the style and categories you like. 29. Contact Form With this element, you can simply use any Contact Form 7 form you’ve created. * In case you'd like to style your Contact Form 7 just like in the Reload preview, create a form by using the lines below: <p class="grve-form">[text* grve-form-name placeholder "Name (required)"][email* grve-form-mail placeholder "E-mail (required)"][text grve-form-subject placeholder "Subject"][textarea grve-form-textarea class:grve-form-7][submit class:grve-btn "Send Message"]</p> Additionally, don't forget to update the mail fields just like the screenshot below in order to make the form work.
https://docs.greatives.eu/tutorials/elements-of-reload/
2018-08-14T15:27:07
CC-MAIN-2018-34
1534221209165.16
[]
docs.greatives.eu
Upgrade Readiness requirements This article introduces concepts and steps needed to get up and running with Upgrade Readiness. We recommend that you review this list of requirements before getting started as you may need to collect information, such as account credentials, and get approval from internal IT groups, such as your network security group, before you can start using Upgrade Readiness. Supported upgrade paths Windows 7 and Windows 8.1 To perform an in-place upgrade, user computers must be running the latest version of either Windows 7 SP1 or Windows 8.1. After you enable Windows diagnostic data, Upgrade Readiness performs a full inventory of computers so that you can see which version of Windows is installed on each computer. The compatibility update that sends diagnostic data from user computers to Microsoft data centers works with Windows 7 SP1 and Windows 8.1 only. Upgrade Readiness cannot evaluate Windows XP or Windows Vista for upgrade eligibility. If you need to update user computers to Windows 7 SP1 or Windows 8.1, use Windows Update or download and deploy the applicable package from the Microsoft Download Center. Note: Upgrade Readiness is designed to best support in-place upgrades. In-place upgrades do not support migrations from BIOS to UEFI or from 32-bit to 64-bit architecture. If you need to migrate computers in these scenarios, use the wipe-and-reload method. Upgrade Readiness insights are still valuable in this scenario, however, you can ignore in-place upgrade specific guidance. See Windows 10 Specifications for additional information about computer system requirements. Windows 10 Keeping Windows 10 up to date involves deploying a feature update, and Upgrade Readiness tools help you prepare and plan for these Windows updates. The latest cumulative updates must be installed on Windows 10 computers to make sure that the required compatibility updates are installed. You can find the latest cumulative update on the Microsoft Update Catalog. While Upgrade Readiness can be used to assist with updating devices from Windows 10 Long-Term Servicing Channel (LTSC) to Windows 10 Semi-Annual Channel, Upgrade Readiness does not support updates to Windows 10 LTSC. The Long-Term Servicing Channel of Windows 10 is not intended for general deployment, and does not receive feature updates, therefore it is not a supported target with Upgrade Readiness. See Windows as a service overview to understand more about LTSC. Operations Management Suite or Azure Log Analytics Upgrade Readiness is offered as a solution in Microsoft Operations Management Suite (OMS) and Azure Log Analytics, a collection of cloud based services for managing on premises and cloud computing environments. For more information about OMS, see Operations Management Suite overview or the Azure Log Analytics overview. If you’re already using OMS or Azure Log Analytics, you’ll find Upgrade Readiness in the Solutions Gallery. Click the Upgrade Readiness tile in the gallery and then click Add on the solution’s details page. Upgrade Readiness is now visible in your workspace. If you are not using OMS or Azure Log Analytics, go to Log Analytics on Microsoft.com and select Start free to start the setup process. During the process, you’ll create a workspace and add the Upgrade Readiness solution to it. Important You can use either a Microsoft Account or a Work or School account to create a workspace. If your company is already using Azure Active Directory, use a Work or School account when you sign in to OMS. 
Using a Work or School account allows you to use identities from your Azure AD to manage permissions in OMS. You also need an Azure subscription to link to your OMS workspace. The account you used to create the workspace must have administrator permissions on the Azure subscription in order to link the workspace to the Azure account. Once the link has been established, you can revoke the administrator permissions. System Center Configuration Manager integration Upgrade Readiness can be integrated with your installation of Configuration Manager. For more information, see Integrate Upgrade Readiness with System Center Configuration Manager. Important information about this release Before you get started configuring Upgrade Readiness, review the following tips and limitations about this release. Upgrade Readiness does not support on-premises Windows deployments. Upgrade Readiness is built as a cloud service, which allows Upgrade Readiness to provide you with insights based on the data from user computers and other Microsoft compatibility services. Cloud services are easy to get up and running and are cost-effective because there is no requirement to physically implement and maintain services on-premises. In-region data storage requirements. Windows diagnostic data from user computers is encrypted, sent to, and processed at Microsoft-managed secure data centers located in the US. Our analysis of the upgrade readiness-related data is then provided to you through the Upgrade Readiness solution in the Microsoft Operations Management Suite (OMS) portal. Upgrade Readiness is supported in all OMS regions; however, selecting an international OMS region does not prevent diagnostic data from being sent to and processed in Microsoft's secure data centers in the US. Tips When viewing inventory items in table view, the maximum number of rows that can be viewed and exported is limited to 5,000. If you need to view or export more than 5,000 items, reduce the scope of the query so you can export a list with fewer items. Sorting data by clicking a column heading may not sort your complete list of items. For information about how to sort data in OMS, see Sorting DocumentDB data using Order By. See Get started with Upgrade Readiness for detailed, step-by-step instructions for configuring Upgrade Readiness and getting started on your Windows upgrade project.
https://docs.microsoft.com/en-us/windows/deployment/upgrade/upgrade-readiness-requirements
2018-08-14T15:40:20
CC-MAIN-2018-34
1534221209165.16
[]
docs.microsoft.com
What is a Trigger?

A trigger is comprised of a condition and an action. If the condition is met, the action is triggered. The action can be either an email or an SMS text (if a mobile number is configured for the recipient). Actions can be sent to employees, owners or guests.

Several triggers are pre-configured in all new Lodgix accounts. It is recommended that you thoroughly read the documentation and reference the existing triggers. Please make sure to have a thorough understanding of Custom Variables prior to enabling your triggers.

Triggers Menu Item

Create New Trigger

Define the Trigger

The steps below correlate to the numbers in the image above:

#1: Title - Give the trigger a name that will be easy to remember and know what it is with a quick glance.

#2: Conditions - Invoice Status Conditions. One use for an invoice status trigger is automated confirmations. For example you could set up the condition "Invoice Status, changed to, Confirmed". However you might also want to click on the "+" sign and set up another condition "Invoice Status, changed from, Unconfirmed". This tightens up the trigger. Note: The "+" sign adds a new condition and the "-" sign removes a condition.

#3: Perform Action - Once you've set up the conditions, it's time to set up the action the condition will trigger - email or SMS text. Once you choose the action, then you must select the recipient. For those using the Employees module, there will be an additional recipient option for "employee" and then another drop down will appear where you can choose which employee should receive the trigger email.

#4: Attachments - The next four drop down menus are courtesy drop downs allowing access to responses, templates and uploaded files. Thus if you want to attach a system generated confirmation, you can do that. If you have a rental agreement you've uploaded you can attach it. If you are communicating to an employee and you want to attach a pdf of the availability calendar you can do that. Or you can choose to attach / append nothing. The interface is very flexible.

#5: Email Subject - This will be the subject line of the email. You can include a "placeholder" in the subject. Just click on the link (see #7) for "view available placeholders" and then copy and paste the variable into the appropriate place within the subject line.

#6: Email Body - If you append a response (see #4) that response will show up in the email body. In that event there is probably nothing else you need to do here. Otherwise simply type out what you want to say to the email recipient and customize it using any of the available placeholders (see #7).

#7: View Available Placeholders - When this link is clicked a list of all the available placeholders is displayed. By copying and pasting just the bracketed word <CPOSTALCODE> and NOT the placeholder description, the application will pull the corresponding data from the database and automatically insert it into the subject or body of your email. It's a slick way of customizing your emails!

#8: Save - You MUST save your trigger once complete.

#9: Send Test Emails - If you want to send yourself a test email you MUST have at least one reservation present in the system, otherwise the conditions won't have any invoices / reservations to reference. ALL TRIGGERS ARE BATCHED AND EXECUTED EVERY 15 MINUTES. There might be a small delay when sending test emails for the trigger depending on when during the batch process the test email is sent.

By default, test triggers will be sent to the master subscriber email address, however a custom "To" email can be entered prior to sending.

Additional Options

Apply trigger to last minute reservations

This would be an option to check, for example, if you send out a pre-arrival checklist 7 days before arrival. If this option is not checked and you receive a reservation 5 days before arrival, that guest would not receive the pre-arrival checklist. Just a note: the email won't be sent immediately, it will enter into a queue and be sent together with the other time related triggers (within an hour, because the script runs hourly).

Fire trigger only once per reservation

This would be an option to check, for example, if you are sending out automated confirmations. Setting up the trigger condition would involve setting up an "invoice status" trigger to "meet any" of the following conditions: invoice status, changed to, confirmed or invoice status, changed to, paid-in-full.

If you don't check the "fire trigger once per reservation" box for this trigger, it could fire twice: once when the reservation deposit is paid and the invoice status changes to "confirmed" and another when the remaining balance is paid and the invoice status changes to "paid-in-full". With the option checked, the confirmation will only be sent once.
http://docs.lodgix.com/m/5502/l/23283-what-is-a-trigger
2018-08-14T15:12:27
CC-MAIN-2018-34
1534221209165.16
[array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/376/560/original/31641504-0e6e-4301-bdf1-9739b1576cf7.png?1488210087', 'Triggers Menu Item'], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/376/534/original/media_1292352410959.png?1488209600', 'Create New Trigger'], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/376/537/original/trigger.png?1488209603', 'Define the Trigger'], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/376/536/original/media_1339735890645.png?1488209601', 'Additional Options'], dtype=object) ]
docs.lodgix.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Deletes the specified image builder and releases the capacity. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DeleteImageBuilderAsync. Namespace: Amazon.AppStream Assembly: AWSSDK.AppStream.dll Version: 3.x.y.z Container for the necessary parameters to execute the DeleteImageBuilder service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
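As a usage illustration only (not taken from the reference above), the following minimal sketch calls the operation synchronously from .NET Framework code; the page notes that .NET Core and PCL expose only DeleteImageBuilderAsync. The default client configuration and the image builder name "example-image-builder" are assumptions for the example.

using Amazon.AppStream;
using Amazon.AppStream.Model;

class Example
{
    static void Main()
    {
        // Uses credentials and region from the environment or SDK configuration.
        var client = new AmazonAppStreamClient();

        // Delete an image builder by name; the name is hypothetical.
        var request = new DeleteImageBuilderRequest
        {
            Name = "example-image-builder"
        };

        var response = client.DeleteImageBuilder(request);

        // The response carries the usual service metadata, e.g. the HTTP status.
        System.Console.WriteLine(response.HttpStatusCode);
    }
}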
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/AppStream/MAppStreamDeleteImageBuilderDeleteImageBuilderRequest.html
2018-08-14T16:48:53
CC-MAIN-2018-34
1534221209165.16
[]
docs.aws.amazon.com
Abstract

Despite a common belief, service learning is not the same as community service. Service learning differs from community service because not only is work being accomplished in a community, but students are also learning how to apply service to their everyday lives as citizens of the community. Active citizenship is an essential skill to add to higher education curriculums because the students graduating are more likely going to become our future leaders.

Recommended Citation

Spindler, Jena (2008) "Service learning: a concept we are not as familiar with as we might think," Reason and Respect: Vol. 3: Iss. 1, Article 4. Available at:
https://docs.rwu.edu/rr/vol3/iss1/4/
2018-08-14T15:46:16
CC-MAIN-2018-34
1534221209165.16
[]
docs.rwu.edu
Install SRAs at both sites as described in Install Storage Replication Adapters.

Procedure

1. Select Array Managers in the Site Recovery Manager interface, and select the site on which you want to configure array managers.
2. Click the Summary tab and click Add Array Manager.
3. Type a name for the array in the Display Name text box. Use a descriptive name that makes it easy for you to identify the storage associated with this array manager.
4. Select the array manager type that you want Site Recovery Manager to use from the SRA Type drop-down menu. If no manager type appears, rescan for SRAs or check that you have installed an SRA on the Site Recovery Manager Server host.
5. Provide the required information for the type of SRA you selected. The SRA creates these text boxes. For more information about how to fill in these text boxes, see the documentation that your SRA vendor provides. Text boxes vary between SRAs, but common text boxes include IP address, protocol information, mapping between array names and IP addresses, and user name and password.
6. Click Finish.
7. Repeat steps 1 through 6 to configure an array manager for the recovery site.
8. Select an array in the Array Managers panel and click the Array Pairs tab.
9. (Optional) Click Refresh to scan for new array pairs.
10. Select an array pair in the Discovered Array Pairs panel, and click Enable. If you have added array managers, but no array pairs are visible, click Refresh to collect the latest information about array pairs.
https://docs.vmware.com/en/Site-Recovery-Manager/5.5/com.vmware.srm.install_config.doc/GUID-FAA4F4A5-D89B-425C-A0FD-C29057118C35.html
2018-08-14T15:50:05
CC-MAIN-2018-34
1534221209165.16
[]
docs.vmware.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the LookupPolicy operation. Namespace: Amazon.CloudDirectory.Model Assembly: AWSSDK.CloudDirectory.dll Version: 3.x.y.z The LookupPolicy
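For orientation only (not part of the reference above), here is a minimal sketch of building and sending this request with the AWS SDK for .NET; the directory ARN, the object selector and the result limit are placeholder assumptions, and the synchronous call shown applies to .NET Framework.

using Amazon.CloudDirectory;
using Amazon.CloudDirectory.Model;

class Example
{
    static void Main()
    {
        var client = new AmazonCloudDirectoryClient();

        // Hypothetical directory ARN and object path, for illustration only.
        var request = new LookupPolicyRequest
        {
            DirectoryArn = "arn:aws:clouddirectory:us-east-1:123456789012:directory/example",
            ObjectReference = new ObjectReference { Selector = "/managers/alice" },
            MaxResults = 10
        };

        var response = client.LookupPolicy(request);

        // Each PolicyToPath entry lists the policies attached along one path
        // from the directory root to the object.
        foreach (var policyToPath in response.PolicyToPathList)
        {
            System.Console.WriteLine(policyToPath.Path);
        }
    }
}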
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CloudDirectory/TLookupPolicyRequest.html
2018-08-14T16:41:03
CC-MAIN-2018-34
1534221209165.16
[]
docs.aws.amazon.com
gazeboEstimated reading time: 10 minutes Gazebo is an open source project for simulating robots, offering robust physics and rendering. GitHub repo: Library reference This content is imported from the official Docker Library docs, and is provided by the original uploader. You can view the Docker Store page for this image at Supported tags and respective Dockerfile links gzserver4, gzserver4-trusty(gazebo/4/ubuntu/trusty/gzserver4/Dockerfile) libgazebo4, libgazebo4-trusty(gazebo/4/ubuntu/trusty/libgazebo4/Dockerfile) gzserver5, gzserver5-trusty(gazebo/5/ubuntu/trusty/gzserver5/Dockerfile) libgazebo5, libgazebo5-trusty(gazebo/5/ubuntu/trusty/libgazebo5/Dockerfile) gzserver6, gzserver6-trusty(gazebo/6/ubuntu/trusty/gzserver6/Dockerfile) libgazebo6, libgazebo6-trusty(gazebo/6/ubuntu/trusty/libgazebo6/Dockerfile) gzserver7, gzserver7-xenial(gazebo/7/ubuntu/xenial/gzserver7/Dockerfile) libgazebo7, libgazebo7-xenial(gazebo/7/ubuntu/xenial/libgazebo7/Dockerfile) gzserver8, gzserver8-xenial(gazebo/8/ubuntu/xenial/gzserver8/Dockerfile) libgazebo8, libgazebo8-xenial(gazebo/8/ubuntu/xenial/libgazebo8/Dockerfile) gzserver9, gzserver9-xenial(gazebo/9/ubuntu/xenial/gzserver9/Dockerfile) libgazebo9, libgazebo9-xenial, latest(gazebo/9/ubuntu/xenial/libgazebo9/Dockerfile) Quick reference Where to get help: the Docker Community Forums, the Docker Community Slack, or Stack Overflow Where to file issues: Maintained by: the Open Source Robotics Foundation Supported architectures: (more info) amd64 Published image artifact details: repo-info repo’s repos/gazebo/directory (history) (image metadata, transfer size, etc) Image updates: official-images PRs with label library/gazebo official-images repo’s library/gazebofile (history) Source of this description: docs repo’s gazebo/directory (history) Supported Docker versions: the latest release (down to 1.6 on a best-effort basis) What is Gazebo? Robot interfaces. Best of all, Gazebo is free with a vibrant community. wikipedia.org/wiki/Gazebo_simulator How to use this image Create a Dockerfile in your Gazebo project FROM gazebo:gzserver8 # place here your application's setup specifics CMD [ "gzserver", "my-gazebo-app-args" ] You can then build and run the Docker image: $ docker build -t my-gazebo-app . $ docker run -it -v="/tmp/.gazebo/:/root/.gazebo/" --name my-running-app my-gazebo-app Deployment use cases This dockerized image of Gazebo is intended to provide a simplified and consistent platform to build and deploy cloud based robotic simulations. Built from the official Ubuntu image and Gazebo’s official Debian packages, it includes recent supported releases for quick access and download. This provides roboticists in research and industry with an easy way to develop continuous integration and testing on training for autonomous actions and task planning, control dynamics and regions of stability, kinematic modeling and prototype characterization, localization and mapping algorithms, swarm behavior and networking, as well as general system integration and validation. Conducting such complex simulations with high validity remains computationally demanding, and oftentimes outside the capacity of a modest local workstation. With the added complexity of the algorithms being benchmarked, we can soon exceed the capacity of even the most formidable servers. This is why a more distributed approach remains attractive for those who begin to encounter limitations of a centralized computing host. 
However, the added complication of building and maintaining a distributed testbed over a set of clusters has for a while required more time and effort than many smaller labs and businesses would have deemed appropriate to implement. With the advancements and standardization of software containers, roboticists are primed to acquire a host of improved developer tooling for building and shipping software. To help alleviate the growing pains and technical challenges of adopting new practices, we have focused on providing an official resource for using Gazebo with these new technologies. Deployment suggestions The gzserver tags are designed to have a small footprint and simple configuration, thus only include required Gazebo dependencies. The standard messaging port 11345 is exposed to allow for client connections and messages API. Volumes Gazebo uses the ~/.gazebo/ directory for storing logs, models and scene info. If you wish to persist these files beyond the lifecycle of the containers which produced them, the ~/.gazebo/ folder can be mounted to an external volume on the host, or a derived image can specify volumes to be managed by the Docker engine. By default, the container runs as the root user, so /root/.gazebo/ would be the full path to these files. For example, if one wishes to use their own .gazebo folder that already resides in their local home directory, with a username of ubuntu, we can simple launch the container with an additional volume argument: $ docker run -v "/home/ubuntu/.gazebo/:/root/.gazebo/" gazebo One thing to be careful about is that gzserver logs to files named /root/.gazebo/server-<port>/*.log, where <port> is the port number that server is listening on (11345 by default). If you run and mount multiple containers using the same default port and same host side directory, then they will collide and attempt writing to the same file. If you want to run multiple gzservers on the same docker host, then a bit more clever volume mounting of ~/.gazebo/ subfolders would be required. Devices As of Gazebo version 5.0, physics simulation under a headless instances of gzserver works fine. However some application may require image rendering camera views and ray traces for other sensor modalities. For Gazebo, this requires a running X server for rendering and capturing scenes. In addition, graphical hardware acceleration is also needed for reasonable realtime framerates. To this extent, mounting additional graphic devices into the container and linking to a running X server is required. In the interest of maintaining a general purpose and minimalistic image which is not tightly coupled to host system software and hardware, we do not include tags here with these additional requirements and instructions. You can however use this repo to build and customize your own images to fit your software/hardware configuration. The OSRF’s Docker Hub organization profile contains a Gazebo repo at osrf/gazebo which is based on this repo but includes additional tags for these advanced use cases. Development If you not only wish to run Gazebo, but develop for it too, i.e. compile custom plug-ins or build upon messaging interfaces for ROS, this will require the development package included in the libgazebo tag. If you simply need to run Gazebo as a headless server, then the gzserver tag consist of a smaller image size. 
Deployment example In this short example, we’ll spin up a new container running gazebo server, connect to it using a local gazebo client, then spawn a double inverted pendulum and record the simulation for later playback. First launch a gazebo server with a mounted volume for logging and name the container gazebo: $ docker run -d -v="/tmp/.gazebo/:/root/.gazebo/" --name=gazebo gazebo Now open a new bash session in the container using the same entrypoint to configure the environment. Then download the double_pendulum model and load it into the simulation. $ docker exec -it gazebo bash $ apt-get update && apt-get install -y curl $ curl -o double_pendulum.sdf $ gz model --model-name double_pendulum --spawn-file double_pendulum.sdf To start recording the running simulation, simply use gz logto do so. $ gz log --record 1 After a few seconds, go ahead and stop recording by disabling the same flag. $ gz log --record 0 To introspect our logged recording, we can navigate to log directory and use gz logto open and examine the motion and joint state of the pendulum. This will allow you to step through the poses of the pendulum links. $ cd ~/.gazebo/log/*/gzserver/ $ gz log --step --hz 10 --filter *.pose/*.pose --file state.log If you have an equivalent release of Gazebo installed locally, you can connect to the gzserver inside the container using gzclient GUI by setting the address of the master URI to the containers public address. $ export GAZEBO_MASTER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' gazebo) $ export GAZEBO_MASTER_URI=$GAZEBO_MASTER_IP:11345 $ gzclient --verbose In the rendered OpenGL view with gzclient you should see the moving double pendulum created prior still oscillating. From here you can control or monitor state of the simulation using the graphical interface, add more pendulums, reset the world, make more logs, etc. To quit the simulation, close the gzclient window and stop the container. $ docker stop gazebo $ docker rm gazebo Even though our old gazebo container has been removed, we can still see that our record log has been preserved in the host volume directory. $ cd /tmp/.gazebo/log/ $ ls Again, if you have an equivalent release of Gazebo installed on your host system, you can play back the simulation with gazebo by using the recorded log file. $ export GAZEBO_MASTER_IP=127.0.0.1 $ export GAZEBO_MASTER_URI=$GAZEBO_MASTER_IP:11345 $ cd /tmp/.gazebo/log/*/gzserver/ $ gazebo --verbose --play state.log More Resources Gazebosim.org: Main Gazebo website Answers: Find answers and ask questions Wiki: General information and tutorials Mailing List: Join for news and announcements Simulation Models: Robots, objects, and other simulation models OSRF: Open Source Robotics Foundation License Gazebo is open-source licensed under Apache 2.0. gazebo/ directory. As for any pre-built image usage, it is the image user’s responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.library, sample, gazebo
https://docs.docker.com/samples/library/gazebo/
2018-08-14T15:48:06
CC-MAIN-2018-34
1534221209165.16
[]
docs.docker.com
The Query Window consists of three panes or areas. These are: Each area can be resized to the size of your choice. You can display the object names as well as the descriptive names as described in 3.3.1 Display the Object Names. If you prefer not to use the mouse pointer, you can switch from one area to another using the Tab key.
https://docs.lansa.com/14/en/lansa037/content/lansa/jmp_query_window.htm
2018-08-14T15:50:40
CC-MAIN-2018-34
1534221209165.16
[]
docs.lansa.com
The Java Service Manager Administration on Windows is a service application.

How To Start

(If you have performed an upgrade, then the JSM will have stopped.)

Clear Trace can be run from the Start | Programs menu or from the command line. You can use any of the following three options, all of which are optional.

/batch - Run Clear Trace as a batch job
/temp - Remove files and subdirectories in the temp directory only.
/trace - Remove files and subdirectories in the trace directory only.

Interactive mode

Example: From the Clear Instance dialog, select the actions you wish Clear Trace to take. This will result in:

clrjsm - to remove trace and temp files (default behavior)
clrjsm /trace /temp - to remove trace and temp files
clrjsm /temp - to remove temp files and not trace files
clrjsm /trace - to remove trace files and not temp files

Batch mode

Examples:

clrjsm /batch - removes trace and temp files
clrjsm /batch /trace /temp - removes trace and temp files
clrjsm /batch /temp - removes temp files and not trace files
clrjsm /batch /trace - removes trace files and not temp files
https://docs.lansa.com/14/en/lansa093/content/lansa/intb4_0001.htm
2018-08-14T15:50:37
CC-MAIN-2018-34
1534221209165.16
[]
docs.lansa.com
The optional keyword VALIDATING is used to configure the service to use a validating or non-validating XML parser. The default is to use a validating XML parser. This option can also be controlled by the service property 'validation.parser'.

A non-validating parser ensures that the XML data is well formed, but does not verify that it is valid. A validating parser uses the XML document defined DTD or XMLSchema grammars to validate that the XML data elements and attributes conform to the structural constraints of these schemas.

Why run in non-validating mode when a parser is capable of validation? Because validation can significantly impact performance, especially when long and complex DTDs or XMLSchemas are involved. Some developers find that while enabling validation during development and test phases is crucial, it's sometimes beneficial to suppress validation in production systems where document throughput is most valued and the reliability of the data is already known.

Example

SERVICE_LOAD SERVICE(HTTPInboundXMLService) VALIDATING(*NO)

# validation.parser=*no #
https://docs.lansa.com/14/en/lansa093/content/lansa/intengb7_3090.htm
2018-08-14T15:50:35
CC-MAIN-2018-34
1534221209165.16
[]
docs.lansa.com
numpy.arccosh

numpy.arccosh(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Inverse hyperbolic cosine, element-wise.

Notes

For real-valued input data types, arccosh always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag. For complex-valued input, arccosh is a complex analytical function that has a branch cut [-inf, 1] and is continuous from above on it.

References

Examples

>>> np.arccosh([np.e, 10.0])
array([ 1.65745445,  2.99322285])
>>> np.arccosh(1)
0.0
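As a further illustration (not from the original page), the short snippet below shows the round trip through cosh and the complex-input behaviour described in the Notes:

import numpy as np

x = np.array([np.e, 10.0])
y = np.arccosh(x)

# cosh is the inverse operation on [1, inf), so we recover the inputs.
print(np.cosh(y))            # [ 2.71828183  10. ]

# Real input below 1 is outside the real domain and yields nan...
print(np.arccosh(0.5))       # nan (with an invalid-value warning)

# ...but the same value as a complex number returns a complex result,
# arccosh(0.5 + 0j) = i*pi/3, consistent with the branch cut on [-inf, 1].
print(np.arccosh(0.5 + 0j))  # approximately 1.0472j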
https://docs.scipy.org/doc/numpy/reference/generated/numpy.arccosh.html
2018-08-14T15:52:53
CC-MAIN-2018-34
1534221209165.16
[]
docs.scipy.org
Other deployment considerations

In many applications, the Splunk Add-on for Unix and Linux installs on a *nix server and collects data from that server. You then use Splunk Web and the Splunk App for Unix and Linux (or another Splunk app) to gain insight into that data.

Additional uses for the add-on

There are additional uses for the app and add-on:

- You can use the add-on to collect *nix data from a number of *nix hosts by installing a universal forwarder on each host and deploying the add-on to those forwarders. After each forwarder receives the add-on, you can then forward the data to a receiving indexer that runs the full app. See Deploy the Splunk Add-on for Unix and Linux in a distributed Splunk environment for additional information and instructions.
- You can also install the add-on on an indexer to provide data inputs for another app on that indexer, such as Splunk Enterprise Security.
- If you install the Splunk App for Unix and Linux in a distributed environment and have configured the search heads in that environment to send data to the indexers, you might need to deploy the indexes.conf file that comes with the Splunk Supporting Add-on for Unix and Linux component (SA-nix/default/indexes.conf) onto your indexers to ensure that the unix_summary summary index is available. Failure to do so might cause issues with alerts for the app, as alerts use this special index (see the illustrative stanza below).

This documentation applies to the following versions of Splunk® Add-on for Unix and Linux: 5.1.0, 5.1.1, 5.1.2, 5.2.0, 5.2.1, 5.2.2, 5.2.3, 5.2.4
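For orientation only, the stanza below shows the general shape of an indexes.conf entry that defines a summary index such as unix_summary. It is a hedged sketch using Splunk's default path settings, not the actual contents of SA-nix/default/indexes.conf; always deploy the file shipped with the add-on.

# Illustrative indexes.conf stanza defining the unix_summary index.
[unix_summary]
homePath   = $SPLUNK_DB/unix_summary/db
coldPath   = $SPLUNK_DB/unix_summary/colddb
thawedPath = $SPLUNK_DB/unix_summary/thaweddb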
http://docs.splunk.com/Documentation/UnixAddOn/5.2.4/User/Otherdeploymentconsiderations
2018-08-14T15:55:55
CC-MAIN-2018-34
1534221209165.16
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
HH\KeyedIterator

For those entities that are KeyedTraversable, the KeyedIterator interface provides the methods of iteration, including being able to get the key. If a class implements KeyedIterator, then it provides the infrastructure to be iterated over using a foreach loop.

Interface Synopsis

namespace HH {
  interface KeyedIterator implements HH\Iterator, HH\KeyedTraversable {...}
}

->key(): Tk

Return the current key at the current iterator position.
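As a rough sketch (not taken from the reference above), a user class implementing KeyedIterator supplies key() plus the methods inherited from HH\Iterator (current, next, rewind, valid). The class and the vec it wraps are hypothetical, chosen only to illustrate the shape of an implementation.

<?hh

// Hypothetical KeyedIterator over a vec of strings; keys are the vec indices.
final class VecKeyedIterator implements KeyedIterator<int, string> {
  private int $position = 0;

  public function __construct(private vec<string> $items) {}

  public function current(): string {
    return $this->items[$this->position];
  }

  public function key(): int {
    return $this->position;
  }

  public function next(): void {
    $this->position++;
  }

  public function rewind(): void {
    $this->position = 0;
  }

  public function valid(): bool {
    return $this->position < \count($this->items);
  }
}

// foreach then exposes both the key and the value:
//   foreach (new VecKeyedIterator(vec['a', 'b']) as $k => $v) { ... }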
https://docs.hhvm.com/hack/reference/interface/HH.KeyedIterator/
2018-08-14T15:10:51
CC-MAIN-2018-34
1534221209165.16
[]
docs.hhvm.com
The Jewish community and the US

Summary of the document

In 2000-01, a National Jewish Population Survey was conducted. This survey is carried out once every ten years. The survey declared that the population of Jews in the U.S. amounted to nearly 5.2 million (it was 5.5 million in 1990). The Herculean task of defining "Who is a Jew" was the main difficulty researchers encountered when conducting this survey. Finally, it was decided to categorize all the people who either identify themselves as Jews, or have had a Jewish parent or parents, or were raised as Jews and did not convert to any other religion. If they had included every single person who had a Jewish background, the total population counted would have risen to 6.9 million, which is a mammoth figure! But even the choice they made was not completely accurate, as a recent survey by the Jewish Studies Centre of CCNY concluded that half of the population they counted as Jews list their religion as either "other" or "none."

Outline of the document

- Characteristics of the Jewish population in the US
- Who is Jewish?
- Where do they come from, and where do they live?
- How do they live?
- Intermarriage
- Education and social life
- Jews' assimilation and importance of Jewish culture in the American society
- Jews' assimilation
- Jewish culture in the American society
- Jews in the American society
- Problems of Integration
- Their influence
- Consequences of this power on the American political life

Extracts from the document

[...] And Jewish people definitely take part in this America. JEWS IN THE AMERICAN SOCIETY. A. Problems of Integration. 1. History of their integration. [...] Thanks to this citation of George Washington, one of the founders of the United States, we can understand that there was a real problem for Jews to integrate themselves in America. [...]

[...] The use of a question to answer a question to which the answer is so self-evident that the use of the first question (by you) constitutes an affront (to me), best erased either by repeating the original question or retorting with a question of comparably asinine self-answeringness. Examples of Jewish Ethnic Humor and Cultural References: In the ice sculpture reflected bar-mitzvah guests nosh on chopped liver. The same kimono the top geishas are wearing it at Loehmann's. Scrabble anarchy after putzhead is placed on a triple-word score. Seven-foot Jews in the NBA slam-dunking alarm clock rings. Harry Houdini amazing escape from his real name, Erich Weiss. [...]

[...] Among all married Jews today, [...] are intermarried. ("Intermarriage" means marriage between a Jewish and a non-Jewish person; marriage between two Jewish persons is called "inmarriage".) As most Jews live in the Northeast, intermarriage is more frequent in the West. Another interesting fact is that children with two Jewish parents are almost all raised Jewish, whereas only one third of children with one Jewish parent are. B. Education and social life. On the education level, Jews generally have higher achievement than Americans. [...]

[...] However in the sixties a re-mapping of Jewish spiritual and communal life started back. For instance, The Jewish Catalog was published in 750,000 copies by the end of the 1970's. Nowadays Judaism and Jews are less than ever related: [...] of all Jews are uncertain or reject theism, with only 14% of Americans saying they have no religion. According to the American Jewish Identity Survey-2001, "More Jews than most other Americans respond [...] when asked 'What is your religion, if [...]'". The same survey reported that only 51% still believed in some form of Judaism, a 12% decline since 1990. [...]

[...] They feel American. For instance, they contributed to about 60% of Mr. Clinton's non-institutional campaign funds. However the political representation of the Jewry does not reflect the distribution of the Jewish population, since the majority of the funding comes from the Zionists[1] when they now represent only 22% of the Jews. So not only religious skeptic Jews try to get assimilated, since Zionists have an enormous influence on domestic politics. Very expensive intensive Jewish experience - limit to a Jewish culture? [...]

[...] Jewish culture in the American society. 1. In music, movies, literature, Jews have contributed to American culture. "In drama and in musical comedy, in popular song and in symphonic music, in movies and in literature, Jews have contributed to American culture in the 20th century to a degree out of all proportion to their numbers," wrote Stephen J. Whitfield in his book "In Search of American Jewish Culture". To illustrate what this Jewish author said, we can quote just as examples: George and Ira Gershwin, Bob Dylan, Leonard Bernstein, Arthur Miller, Lillian Hellman, Paul Auster, J. [...]

[...] This common fate brought the views of the two communities closer and they started to work together in order to gain better conditions. First, a small group of jurists and intellectuals tried to cooperate. The New Deal of Franklin Roosevelt was a great achievement for the two communities since it enabled them to press the political power to implement some strong measures in their favour (mostly for the Jewish community): the right to go to school and university, less discrimination in order to get a job or a flat, and a greater participation of minorities in the political life of the country. Yet, Black people did not enjoy the same liberties and this fact can explain the separation between these two communities. [...]

[...] Jews' assimilation. 1. Religious skepticism. During the 20th Century, the assimilation of the Jewish community has mostly been considered as related to a certain kind of religious skepticism, a distance to the Jewish religion, despite a remapping of Jewish spiritual and communal life in the 1960's. The Pittsburgh platform of 1885: Reform. The Pittsburgh platform of 1885 is the first act that shows how the Jews made some efforts or sacrifices with the religious aspect of their life to get assimilated to the American society. [...]

[...] "Jewish mother" stereotype. iii. (Other) Examples of Yiddish Terminology in SAE: bagels & lox, bialy, blintz, borsht, chutzpa, cockamamie, gesundheit, glitch, kibitzer, klutz, kvetch, mazel tov, nebbish, nudnik [boring person] (Phudnik: "nudnik with a PhD"), schlemiehl, schlep, schlock, shlong, shtik, yenta, zaftig. iv. Examples of Yiddish phraseology in SAE: Who needs it? Get lost! I should have such luck. It shouldn't happen to a dog. This I need yet? I need it like a hole in the head. It's O.K. [...]

[...] The energy on her! The thoroughness. She is never ashamed of her house: a stranger could walk in and open any closet, any drawer, and she would have nothing to be ashamed of. You could even eat off her bathroom floor, if that should ever become necessary. When she loses at mah-jongg she takes it like a sport, not like the others whose names she could mention, but she won't, not even Tilly Hochman, it's too petty to even talk about, let's just forget she even brought it up. [...]

About the author

Emmanuelle R., student (Philosophy)
- Level: Expert
- Course of study: business
- School/university: EM Lyon

Document details

- Publication date: 2004-03-07
- Last updated: 2004-03-07
- Language: English
- Format: Word
- Type: dissertation
- Number of pages: 12 pages
- Level: expert
- Downloaded: 2 times
- Approved by: the reading committee
https://docs.school/philosophie-et-litterature/culture-generale-et-philosophie/dissertation/jewish-community-us-12486.html
2018-08-14T16:16:55
CC-MAIN-2018-34
1534221209165.16
[array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-PL.png', None], dtype=object) array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-PL.png', None], dtype=object) array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-PL.png', None], dtype=object) ]
docs.school
Muslims and the State: A comparative study of Muslim religious needs in the educational domain in France and Great Britain

Summary of the document

Outline of the document

- State education and religion in France and Great Britain.
- The British Education Act: From a Christian education to a multicultural education.
- The wearing of Islamic veils and state run schools.
- Public funding of Islamic schools.
- Explaining the disparate political responses to the religious needs of Muslims?
- Muslims' mobilization as organisations.
- The weight of History: two different Nation's political ideologies.
- Accommodation of Muslim religious practices and the two unique Church-State histories.
- Conclusion.
- Bibliography.

Extracts from the document

[...] Accommodation of Muslim religious practices and the two unique Church-State histories. The French strict separation between the Church and the State has restricted Muslims' ability to fight for a religious public recognition. Great Britain: the presence of an established Church has enabled Muslims to claim religious rights. [...]

[...] Finally, the theory of church-state institutions asserts that the development of public policies concerning Muslims' religious rights is significantly linked to the different institutional church-state features. In France, the state-church relations are crucial for the understanding of the little political accommodation to Muslims' rights and needs. Indeed, the idea of laïcité is central in all the public debates linked to the recognition of groups' religious rights in the public sphere. It shapes the whole debate about Islamic headscarves at school for example. [...]

[...] It specifies that a majority of the religious acts in state-run schools have to be "wholly or mainly of a broadly Christian character". Moreover, the Christian faith is not imposed on non-Christian students, and parents can withdraw their children from the religious education. Although this Act seems to reaffirm Christian tradition in religious education, it is very flexibly applied in practice. Indeed, the religious education is a local responsibility and the wishes of the parents as well as the local school population are taken into account for the religious education requirements of the national curriculum. [...]

[...] In Great Britain, the question of the wearing of Islamic headscarves is a "non-issue". Wearing the hijab is totally accepted in British state-run schools; the only requirement for Muslim girls who choose to wear it is that the headscarves should conform to the colour of the school uniform. British educational policy towards Muslims has shown a tendency toward understanding Muslims' needs. Requirements are bent so as to avoid political controversies. On the contrary, state funding of Islamic schools has been a political issue for a long time in Great Britain. [...]

[...] These issues have been partly crystallized on the states' educational systems. Different political responses have been given by France and Great Britain to Muslims' religious and cultural claims related to schools. How did they accommodate the religious needs of Muslims? How to explain the different political responses in France and Great Britain? Why is there such a gap between the two policies? First, I will analyse the different responses given by both states to school controversies related to Muslims' needs and rights. [...]

[...] In France, once more, the situation is sharply different. The centralised state makes political opportunity structures less effective than in Britain; there is a real lack of Muslim representatives. [...]

[...] Muslims and the State: A Comparative Study of Muslim religious needs in the educational domain in France and Great Britain. The political answers to Muslims' religious and cultural needs and rights: the case of the school system. A. State education and religion in France and Great Britain. The British Education Act: from 1944 to 1988, from a Christian education to a multicultural education. The French laïcité: no room for the religious matters at school. B. The wearing of Islamic veils and state run schools. French State schools: the "veil affair", a long politico-cultural war. English pragmatism: no controversy on Islamic headscarves. [...]

[...] Although there were tougher laws limiting immigration, a second wave of immigration of family members occurred, and it was precisely at that time that the status of Muslim people changed. The immigration population has been transformed from single migrants to families who wanted permanent settlement. Immigrants became not only concerned with their social and economic rights but also with their cultural and religious needs as any other citizen. Over the last decades, it appeared that state accommodation of Muslim religious practices has become one of the main political issues in Western European countries. [...]

[...] The theory of mobilization is thus valid in Britain as well as in France. In these countries, organizations are differently built and mobilizations depend on the political organization of the State. What is more, the ideological theory argues that there is a pre-existing spirit of the state built through history which influences the current political decisions about the major state issues. This theory is often taken into account to explain public policy and tradition on citizenship. On that ground, Britain's liberal political tradition would make policy makers open to recognize Muslim immigrants' rights. [...]

[...] In Great Britain, the education policy inherited by Muslims requires religious education and worship in state-run schools. According to the Education Act of 1944, "all state-run schools provide religious education and each school day begins with collective worship". Obviously, because of the religious homogeneity at the time this Act was enacted, this statement implies that the religious education is Christian. So as to make this law more precise and more in conformity with the various beliefs of the current British society, the second Education Act of 1988 was enacted. [...]

About the author

Malek A., student (Philosophy)
- Level: General public
- Course of study: logistics
- School/university: Icosup

Document details

- Publication date: 2007-01-23
- Last updated: 2007-01-23
- Language: English
- Format: Word
- Type: dissertation
- Number of pages: 6 pages
- Level: general public
- Downloaded: 6 times
- Approved by: the reading committee
https://docs.school/philosophie-et-litterature/culture-generale-et-philosophie/dissertation/musulmans-etat-etude-comparative-besoins-religieux-musulmans-domaine-education-france-22488.html
2018-08-14T16:16:57
CC-MAIN-2018-34
1534221209165.16
[array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-PL.png', None], dtype=object) array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-PL.png', None], dtype=object) array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-PL.png', None], dtype=object) ]
docs.school
Business rules installed with Service Desk Call

The Service Desk Call plugin adds the following business rules.

Business rule: CallTypeChanged | Table: Call [new_call] | Description: Creates an incident, problem, or change record, based on the call type selection.
Business rule: CallTypeChanged to Request | Table: Call [new_call] | Description: Redirects to a new service catalog request page based on the call type and request item selection.
Business rule: Calculate time spent | Table: Call [new_call] | Description: Calculates the time spent between opening the form and saving it.
Business rule: Domain - Set Domain - SD Call | Table: Call [new_call] | Description: Supports domain separation.

Related Topics: Business rules
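To make the mechanism concrete, here is a hedged sketch of what a server-side business rule of the CallTypeChanged kind could look like. It is illustrative only, not the actual script shipped by the plugin, and the field names call_type, caller and transferred_to are assumptions; for the record update at the end to be saved automatically, the rule would run as a "before" rule.

// Illustrative business rule on the Call [new_call] table.
(function executeRule(current, previous /*null when async*/) {
    // Only act when the (assumed) call_type field has just changed to "incident".
    if (current.call_type != 'incident' || !current.call_type.changes()) {
        return;
    }

    // Create the linked incident record.
    var inc = new GlideRecord('incident');
    inc.initialize();
    inc.short_description = current.short_description;
    inc.caller_id = current.caller;          // assumed field name
    var incSysId = inc.insert();

    // Record the link back on the call (assumed field name).
    current.transferred_to = incSysId;
})(current, previous);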
https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/service-desk/reference/r_BusinessRulesServiceDesk.html
2018-08-14T15:10:53
CC-MAIN-2018-34
1534221209165.16
[]
docs.servicenow.com
Welcome to the MenpoWidgets documentation!

MenpoWidgets is the Menpo Project's Python package for fancy visualization within the Jupyter notebook using interactive widgets. We highly recommend that you render all matplotlib figures inline in the Jupyter notebook for the best menpowidgets experience. This can be done by running

%matplotlib inline

API Documentation

In MenpoWidgets, we use legible docstrings, and therefore, all documentation should be easily accessible in any sensible IDE (or IPython) via tab completion. However, this section should make most of the core classes available for viewing online.

- Main Widgets: Functions for visualizing the various Menpo and MenpoFit objects using interactive widgets.
- Options Widgets: Independent widget objects that can be used as the main components for designing high-level widget functions.
- Tools Widgets: Low-level widget objects that can be used as the main ingredients for creating more complex widgets.

Usage Example

A short example is often more illustrative than a verbose explanation. Let's assume that you want to quickly explore a folder of numerous annotated images, without the overhead of waiting to load them and writing code to view them. The images can be easily loaded using the Menpo package and then visualized using an interactive widget as:

import menpo.io as mio
from menpowidgets import visualize_images

images = mio.import_images('/path/to/images/')
visualize_images(images)

Similarly, the fitting result of a deformable model from the MenpoFit package can be demonstrated as:

result = fitter.fit_from_bb(image, initial_bounding_box)
result.view_widget()
https://menpowidgets.readthedocs.io/en/stable/
2018-08-14T15:47:37
CC-MAIN-2018-34
1534221209165.16
[]
menpowidgets.readthedocs.io
How to Confirm a Reservation

When a reservation is made, an invoice is created. The invoice status by default is unconfirmed. The only way to confirm a reservation is to process a payment for the reservation deposit. The reservation deposit is universal and is set up here:

Many property managers look for a "confirm" button within each new reservation invoice. With Lodgix, a reservation is confirmed by the act of processing the payment for the reservation deposit. After the reservation deposit is processed, the next step is to send an email confirmation to the guest. This can be done manually or set up to be sent automatically through the use of a trigger.
http://docs.lodgix.com/m/5502/l/50047-how-to-confirm-a-reservation
2018-08-14T15:11:32
CC-MAIN-2018-34
1534221209165.16
[]
docs.lodgix.com
Learn the application architecture

Estimated reading time: 3 minutes

On this page, you learn about the Swarm at scale example. Make sure you have read through the introduction to get an idea of the skills and time required first.

Learn the example back story

Your company is a pet food company that has bought a commercial during the Superbowl. The commercial drives viewers to a web survey that asks users to vote – cats or dogs. You are developing the web survey.

Your survey must ensure that millions of people can vote concurrently without your website becoming unavailable. You don't need real-time results; a company press release announces the results. However, you do need confidence that every vote is counted.

Understand the application architecture

The voting application is composed of several microservices. It uses a parallel web frontend that sends jobs to asynchronous background workers. The application's design can accommodate arbitrarily large scale. The diagram below shows the application's high level architecture:

All the servers are running Docker Engine. The entire application is fully "Dockerized" in that all services are running inside of containers.

The frontend consists of a load balancer with N frontend instances. Each frontend consists of a web server and a Redis queue. The load balancer can handle an arbitrary number of web containers behind it (frontend01 - frontendN). The web containers run a simple Python application that takes a vote between two options. It queues the votes to a Redis container running on the datastore.

Behind the frontend is a worker tier which runs on separate nodes. This tier:

- scans the Redis containers
- dequeues votes
- deduplicates votes to prevent double voting
- commits the results to a Postgres database

Just like the frontend, the worker tier can also scale arbitrarily. The worker count and frontend count are independent from each other.

The application's Dockerized microservices are deployed to a container network. Container networks are a feature of Docker Engine that allows communication between multiple containers across multiple Docker hosts.

Swarm cluster architecture

To support the application, the design calls for a Swarm cluster with a single Swarm manager and four nodes as shown below.

All four nodes in the cluster are running the Docker daemon, as is the Swarm manager and the load balancer. The Swarm manager is part of the cluster and is considered out of band for the application. A single host running the Consul server acts as a keystore for both Swarm discovery and for the container network. The load balancer could be placed inside of the cluster, but for this demonstration it is not.

After completing the example and deploying your application, this is what your environment should look like. As the previous diagram shows, each node in the cluster runs the following containers:

frontend01:
- Container: voting-app
- Container: Swarm agent

frontend02:
- Container: voting-app
- Container: Swarm agent

worker01:
- Container: voting-app-worker
- Container: Swarm agent

dbstore:
- Container: voting-app-result-app
- Container: db (Postgres 9.4)
- Container: redis
- Container: Swarm agent

After deploying the application, configure your local system so that you can test the application from your local browser. In production, of course, this step wouldn't be needed.

Next step

Now that you understand the application architecture, you need to deploy a network configuration that can support it.
In the next step, you deploy network infrastructure for use in this sample.
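As a rough preview of that step (these commands are an assumption on our part, not the tutorial's own instructions), creating a multi-host container network generally looks like the following; the network name and subnet are arbitrary examples:

# Run against the Swarm manager so the overlay network spans all cluster nodes.
# "voteapp" and the subnet are illustrative choices.
docker network create --driver overlay --subnet=10.0.9.0/24 voteapp

# Containers started with --net=voteapp can then reach each other by name
# across hosts; verify the network exists with:
docker network ls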
https://docs.docker.com/swarm/swarm_at_scale/about/
2018-08-14T15:50:29
CC-MAIN-2018-34
1534221209165.16
[array(['/swarm/images/app-architecture.png', "Voting application's high level architecture"], dtype=object) array(['/swarm/images/swarm-cluster-arch.png', 'Swarm cluster architecture'], dtype=object) array(['/swarm/images/final-result.png', 'Overview of deployment'], dtype=object) ]
docs.docker.com
Creating Strokes to Paint Your Drawings on a Separated Layer

You can use the outline you traced on one of the four embedded layers and create invisible strokes to paint your drawings on separate layers; this provides more inking and painting flexibility. To do so, you must use the Create Colour Art from Line Art option. You can also configure the option to create the invisible strokes on any of the four embedded layers.

To create Colour Art zones out of the Line Art content:

To configure the Line Art to Colour Art command settings:

The Configure Line Art to Colour Art dialog box opens.

Related Topics
https://docs.toonboom.com/help/animate-pro/Content/HAR/Stage/006_Colour/053_H2_Creating_Strokes_to_Paint_Your_Drawings_on_a_Separated_Layer_.html
2018-10-15T20:02:30
CC-MAIN-2018-43
1539583509690.35
[array(['../../../Resources/Images/HAR/Stage/Colours/Steps/014_linearttocloourart_001.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Colours/Steps/014_linearttocloourart_002.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Colours/HAR_configureLineArtColourArt_001.png', None], dtype=object) ]
docs.toonboom.com
Recipes

Create a Custom Template Block

Start off by creating a new app in your project, e.g. a blocks app. Content blocks in fancypages are basically Django models that require a few additional attributes and definitions. Let's assume we want to create a simple widget that displays a custom template without providing any additional data that can be edited. All we need to do is define the following model:

from fancypages.models.blocks import ContentBlock
from fancypages.library import register_content_block

@register_content_block
class MyTemplateBlock(ContentBlock):
    name = _("My template")
    code = u'my-template'
    group = u'My Blocks'
    template_name = u'blocks/my_template_block.html'

    def __unicode__(self):
        return self.name

The first three attributes name, code and group are important and have to be specified on every new content block.

Changing Rich Text Editor

Fancypages uses Trumbowyg as the rich text editor by default. It is an open-source tool licensed under the MIT license and provides the basics required for rich text editing in the fancypages editor panel. Alternatively, other rich text editors can be used instead.

Fancypages comes with an alternative setup for Froala. Although Froala is a more comprehensive editor, it is not the default because of its license. It is only free to use for personal and non-profit projects; commercial projects require a license.

Switching to Froala

The Froala editor can be enabled in three simple steps but before we get started, you have to download Froala from their website and unpack it.

Step 1: Copy the files froala_editor.min.js and froala_editor.min.css into your project's static file directory. This would usually be something like static/libs/froala/.

Step 2: Override the fancypages partials that define the JavaScript and CSS files required by the editor panel. Copy the following three files from fancypages into your template directory:

templates/fancypages/editor/head.html
templates/fancypages/editor/partials/cdn_scripts.html
templates/fancypages/editor/partials/extrascripts.html

Remove the trumbowyg.css and trumbowyg.min.js files from the head.html and extrascripts.html respectively and replace them with the corresponding CSS and JavaScript files for Froala. You'll also need to add Font Awesome to the cdn_scripts.html, e.g.:

<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet">

Step 3: Set the rich text editor to Froala when initialising the Fancypages app in the editor panel by overwriting templates/fancypages/editor/body.html and starting the application using:

$(document).ready(function(){
  FancypageApp.start({'editor': 'froala'});
});

The rich text editors in the editor panel should now use Froala instead of the default Trumbowyg editor.

Using a custom editor

You can also use your favourite editor by adding all the JavaScript and CSS requirements similar to the Froala example and providing a Backbone/Marionette view class that provides the necessary initialisations. For an example, take a look at the FroalaEditor and TrumbowygEditor views in the Marionette views for Fancypages. To enable your editor set the editor option for the Fancypages app to custom and pass your view class as the editorView.
An example might look like this:

$(document).ready(function(){
  FancypageApp.start({
    editor: 'custom',
    editorView: myownjavascript.Views.FavouriteEditor
  });
});

Customising Rich Text Editor

In addition to choosing the editor you want to use for rich text editing, you can also configure the way the editor behaves by passing editor-specific options to the fancypages app when it is initialised in the fancypages/editor/body.html template. Simply overwrite the template and update the script section at the bottom with something like this:

$(document).ready(function(){
  FancypageApp.start({
    editor: 'trumbowyg',
    editorOptions: {
      fullscreenable: true,
      btns: ['viewHTML', '|', 'formatting', '|', 'link', '|', 'insertImage', '|', 'insertHorizontalRule']
    }
  });
});
http://django-fancypages.readthedocs.io/recipes.html
2017-06-22T16:18:51
CC-MAIN-2017-26
1498128319636.73
[]
django-fancypages.readthedocs.io
Overview There is a specific set of buttons introduced with the RadRibbonView control. They all inherit and extend the functionality of the standard button controls i.e. RadRibbonToggleButton derives from RadToggleButton, RadRibbonSplitButton derives from RadSplitButton, etc. The additional functionality which they provide allows you to easily implement MS-Office-Ribbon-like behavior in your application. This topic covers the common functionality for all ribbon buttons. The RadRibbonButtons can be used outside the RadRibbonView control as well. The following Ribbon buttons are available: Button States There are three button states: Large - displays the large image and the text label defined for the button. Medium - displays the small image and the text label defined for the button. Small - displays the small image defined for the button. The state of the button depends on the state of the RadRibbonGroup and can be controlled via the CollapseToSmall, CollapseToMedium and the IsAutoSize properties of the ribbon buttons. To learn more about that take a look at the Common Functionality section of this topic. To learn more about the states of the RadRibbonGroup take a look at this topic. Common Functionality As it was mentioned above all RadRibbonButtons derive from the base button controls. Each of them inherits the specifics of the respective button and implements additional functionality. Although they are different controls, there is a common set of properties explained below. Text - gets or sets the text label that is shown in Medium and Large button state. SmallImage - gets or sets the image that is shown in Medium and Small button state. LargeImage - gets or sets the image that is shown in Large button state. Size - gets or sets the button initial size. This will be the maximum size of the button as well. SplitText - enables or disables the text wrapping for the large-sized button. This property is available only for the RadRibbonSplitButton, RadRibbonDropDownButton, RadRibbonButton. CollapseToSmall - specifies when the button will be collapsed to its Small state, depending on the state of the RadRibbonGroup it belongs to. CollapseToMedium - specifies when the button will be collapsed to its Medium state, depending on the state of the RadRibbonGroup to which it belongs. The CollapseToSmall and CollapseToMedium properties use the CollapseThreshold enumeration. It has the following values: - Never - indicates that the button will never collapse to Small/Medium state. This is the default value of the properties. - WhenGroupIsMedium - indicates that the button will go to the Small/Medium state when its RadRibbonGroup is in Medium state. - WhenGroupIsSmall - indicates that the button will go to the Small/Medium state when its RadRibbonGroup is in Small state. IsAutoSize - specifies whether the button Image will be sized accordingly to the RibbonView guidance specification. If set to False, the button will display its images (both Small and Large) in its original size. Otherwise the SmallImage will be displayed with size of 16x16px and the LargeImage will be displayed with size of 32x32px. TextRow1 - gets the text that is shown in Medium and Large button state. TextRow2 - gets the text that is shown in the Large button state. Example Here is an example of a RadRibbonButton with the following properties set. XAML <telerik:RadRibbonButton. and Handling the Button Clicks There are two ways to implement a custom logic upon a button click - via event handler and via commands. 
Handling the Button Clicks

There are two ways to implement custom logic upon a button click - via an event handler and via commands.

The first one is the standard way. You have to attach an event handler to the Click event of the button.

XAML
<!-- Only the Click wiring is shown; other properties of the button are omitted. -->
<telerik:RadRibbonButton Text="Equation"
                         Click="RadRibbonButton_Click" />

C#
private void RadRibbonButton_Click(object sender, RoutedEventArgs e)
{
    // Place your custom logic here.
}

VB.NET
Private Sub RadRibbonButton_Click(sender As Object, e As RoutedEventArgs)
    ' Place your custom logic here.
End Sub

The other way is to set the Command property to a certain command. Here is an example of the command defined in the code-behind file of your UserControl. In order to create a command, you have to create a static read-only instance of Telerik.Windows.Controls.RoutedUICommand and then register Executed and CanExecute handlers with the Telerik.Windows.Controls.CommandManager class.

C#
public partial class RibbonButtonsSample : UserControl
{
    public static readonly RoutedUICommand EquationCommand = new RoutedUICommand(
        "Equation", "EquationCommand", typeof(RibbonButtonsSample));

    public RibbonButtonsSample()
    {
        InitializeComponent();
        CommandManager.AddExecutedHandler(this, this.OnExecuted);
        CommandManager.AddCanExecuteHandler(this, this.OnCanExecute);
    }

    private void OnExecuted(object sender, ExecutedRoutedEventArgs e)
    {
        this.LayoutRoot.Background = new SolidColorBrush(Colors.Blue);
    }

    private void OnCanExecute(object sender, CanExecuteRoutedEventArgs e)
    {
        e.CanExecute = true;
    }
}

VB.NET
Public Partial Class RibbonButtonsSample
    Inherits UserControl

    Public Shared ReadOnly EquationCommand As New RoutedUICommand("Equation", "EquationCommand", GetType(RibbonButtonsSample))

    Public Sub New()
        InitializeComponent()
        CommandManager.AddExecutedHandler(Me, AddressOf Me.OnExecuted)
        CommandManager.AddCanExecuteHandler(Me, AddressOf Me.OnCanExecute)
    End Sub

    Private Sub OnExecuted(sender As Object, e As ExecutedRoutedEventArgs)
        Me.LayoutRoot.Background = New SolidColorBrush(Colors.Blue)
    End Sub

    Private Sub OnCanExecute(sender As Object, e As CanExecuteRoutedEventArgs)
        e.CanExecute = True
    End Sub
End Class

After that, set the Command property of the RadRibbonButton to the fully qualified path of the command.

XAML
<!-- The 'local' prefix is assumed to be an xmlns mapping to the namespace that contains RibbonButtonsSample. -->
<telerik:RadRibbonButton Text="Equation"
                         Command="local:RibbonButtonsSample.EquationCommand" />

Now, if you run your application and hit the 'Equation' button, the background of the user control will be changed to blue, as shown in the snapshot below.

ButtonGroup

RadRibbonView allows you to additionally organize buttons with common functionality (e.g. the Increase Font and Decrease Font buttons) in one panel. For this purpose you should use the RadButtonGroup class. It will automatically apply the Small size to all buttons wrapped in it. Furthermore, the RadButtonGroup is designed to create a separator between every two buttons in it. The next example shows you how to use RadButtonGroup.

XAML
<!-- The x:Name, Header, Text and SmallImage values below are illustrative. -->
<telerik:RadRibbonView x:Name="radRibbonView">
    <telerik:RadRibbonTab Header="Home">
        <telerik:RadRibbonGroup Header="Font">
            <telerik:RadOrderedWrapPanel>
                <telerik:RadButtonGroup>
                    <telerik:RadRibbonButton Text="Increase Font" SmallImage="Images/increaseFont16.png" />
                    <telerik:RadRibbonButton Text="Decrease Font" SmallImage="Images/decreaseFont16.png" />
                </telerik:RadButtonGroup>
                <telerik:RadButtonGroup>
                    <telerik:RadRibbonButton Text="Clear Formatting" SmallImage="Images/clearFormatting16.png" />
                </telerik:RadButtonGroup>
            </telerik:RadOrderedWrapPanel>
        </telerik:RadRibbonGroup>
    </telerik:RadRibbonTab>
</telerik:RadRibbonView>
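To react to clicks on the grouped buttons, the same Click pattern shown earlier can be reused. The sketch below is an assumption for illustration: it presumes the two font buttons above were given Click handlers named IncreaseFont_Click and DecreaseFont_Click and that the UserControl contains a TextBox named Editor; none of these names are prescribed by RadButtonGroup itself.

C#
// Hypothetical handlers for the Increase Font / Decrease Font buttons,
// placed in the same code-behind class as the ribbon markup.
private void IncreaseFont_Click(object sender, RoutedEventArgs e)
{
    // Grow the editor font in 2pt steps.
    this.Editor.FontSize += 2;
}

private void DecreaseFont_Click(object sender, RoutedEventArgs e)
{
    // Shrink the font, but keep it readable.
    this.Editor.FontSize = Math.Max(8.0, this.Editor.FontSize - 2);
}

Wrapping the buttons in a RadButtonGroup only affects their size and the separators between them; the click handling itself is identical to the stand-alone RadRibbonButton case.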
http://docs.telerik.com/devtools/silverlight/controls/radribbonview/features/ribbon-controls/ribbon-buttons/buttons-overview
2017-06-22T16:41:17
CC-MAIN-2017-26
1498128319636.73
[array(['images/RibbonView_Buttons_Overview_LargeGroup.png', None], dtype=object) array(['images/RibbonView_Buttons_Overview_CollapseGroup.png', None], dtype=object) array(['images/RibbonView_Buttons_Overview_Command.png', None], dtype=object) array(['images/RibbonView_Buttons_Overview_ButtonGroup.png', None], dtype=object) ]
docs.telerik.com