Dataset columns: content (string, 0 to 557k chars); url (string, 16 to 1.78k chars); timestamp (timestamp[ms]); dump (string, 9 to 15 chars); segment (string, 13 to 17 chars); image_urls (string, 2 to 55.5k chars); netloc (string, 7 to 77 chars)
Heuristics An approximate solution to a problem. When an algorithm is described as a heuristic, it means that: - The approach seems to be sensible (i.e., somebody created it thinking it was a good approach). - The approach has not been proven to be correct. - The precise situations in which the approach will be unreliable are not known (by contrast, when a solution is not a heuristic but can instead be derived from formal assumptions, it is possible to understand when it will fail by reference to how closely a situation matches those assumptions).
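As an illustration (not taken from the Displayr page), the nearest-neighbour routing rule is a classic heuristic: it usually produces a reasonable route, but it carries no optimality guarantee and no characterisation of when it fails. A minimal Python sketch:

import math

def nearest_neighbour_tour(points):
    # Greedy heuristic: always visit the closest unvisited point next.
    # Usually sensible, never guaranteed optimal -- which is exactly what
    # makes it a heuristic rather than an exact algorithm.
    unvisited = list(points[1:])
    tour = [points[0]]
    while unvisited:
        last = tour[-1]
        nearest = min(unvisited, key=lambda p: math.dist(last, p))
        unvisited.remove(nearest)
        tour.append(nearest)
    return tour

print(nearest_neighbour_tour([(0, 0), (5, 1), (1, 1), (4, 4)]))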
https://docs.displayr.com/wiki/Heuristics
2020-09-18T13:34:01
CC-MAIN-2020-40
1600400187899.11
[]
docs.displayr.com
Crate datadog_logs A simple crate to send logs directly to DataDog via HTTP. It offloads the job of sending logs to DataDog to a separate thread, so it is easy to integrate with crates that provide a synchronous logging API, such as log. Feature flags: log-integration - enables optional integration with the log crate; self-log - enables console logging of events inside DataDogLogger itself
https://docs.rs/datadog-logs/0.1.0/datadog_logs/
2020-09-18T13:09:36
CC-MAIN-2020-40
1600400187899.11
[]
docs.rs
Conveyor lets you quickly compute and reuse information stored in Jupyter notebooks in just a couple lines of code. Using this tool to reference prior work in outside notebooks can help keep notebook workflows organized, separating large ideas that require multiple steps to execute into a smaller, more focused file structure. Quickstart The fastest way to get started using Conveyor is with the nbglobals module: import conveyor conveyor.run_notebook("Sample Calculations I.ipynb", import_globals=True) from conveyor.nbglobals import x, z Any time run_notebook is called with import_globals=True, the variables in conveyor.nbglobals are updated with those in the new notebook.
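A minimal follow-on sketch using only the calls shown above (the second notebook name and the imported variable names are placeholders, not part of the Conveyor docs):

import conveyor

# Each call with import_globals=True refreshes conveyor.nbglobals with the
# globals defined in the notebook that was just executed.
conveyor.run_notebook("Sample Calculations I.ipynb", import_globals=True)
from conveyor.nbglobals import x, z

conveyor.run_notebook("Sample Calculations II.ipynb", import_globals=True)  # placeholder notebook
from conveyor.nbglobals import total  # placeholder variable defined in that notebook

print(x, z, total)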
https://conveyor.readthedocs.io/en/latest/
2020-09-18T13:43:00
CC-MAIN-2020-40
1600400187899.11
[]
conveyor.readthedocs.io
x.509 MongoDB supports x.509 certificate authentication for client authentication and internal authentication of the members of replica sets and sharded clusters. x.509 certificate authentication requires a secure TLS/SSL connection. Note Starting in version 4.0, MongoDB disables support for TLS 1.0 encryption on systems where TLS 1.1+ is available. For more details, see Disable TLS 1.0. A client x.509 certificate's subject must differ from that of the member certificates. If the MongoDB deployment has tlsX509ClusterAuthDNOverride set (available starting in MongoDB 4.2), the client x.509 certificate's subject must also differ from that value. Warning If a client x.509 certificate's subject has the same O, OU, and DC combination as the Member x.509 Certificate (or tlsX509ClusterAuthDNOverride, if set), the client will be identified as a cluster member and granted full permission on the system. To connect and authenticate using an x.509 client certificate: - For MongoDB 4.2 or greater, include the following options for the client: --tls (or the deprecated --ssl option); --tlsCertificateKeyFile (or the deprecated --sslPEMKeyFile option); --tlsCertificateKeyFilePassword (or the deprecated --sslPEMKeyPassword option) if the certificate key file is encrypted; --authenticationDatabase '$external'; --authenticationMechanism MONGODB-X509 - For MongoDB 4.0 and earlier, include the following options for the client: --ssl; --sslPEMKeyFile; --sslPEMKeyPassword if the --sslPEMKeyFile is encrypted; --authenticationDatabase '$external'; --authenticationMechanism MONGODB-X509 You can also make the TLS/SSL connection first, and then use db.auth() in the $external database to authenticate. For examples of both cases, see the Authenticate with a x.509 Certificate (Using tls Options) section in Use x.509 Certificates to Authenticate Clients. Member x.509 Certificates For internal authentication, members of sharded clusters and replica sets can use x.509 certificates instead of keyfiles, which use the SCRAM authentication mechanism. Member Certificate Requirements The member certificate (net.tls.clusterFile, if specified, and net.tls.certificateKeyFile) is used to verify membership; its subject must present the same O, OU, and DC combination as the other members (or match the tlsX509ClusterAuthDNOverride value, if set). See Configuration for Membership Authentication.
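The same options map onto driver settings; as an illustrative sketch (not from the MongoDB manual, with placeholder host and certificate paths), x.509 client authentication with the Python driver looks roughly like this:

from pymongo import MongoClient

# Placeholder host and PEM paths; a TLS connection is required for MONGODB-X509.
client = MongoClient(
    "mongodb://db.example.com:27017",
    tls=True,
    tlsCertificateKeyFile="/etc/ssl/client.pem",  # client certificate + private key
    tlsCAFile="/etc/ssl/ca.pem",                  # CA that signed the server certificate
    authMechanism="MONGODB-X509",
    authSource="$external",
)

# Quick check that the connection and authentication succeeded.
print(client.admin.command("connectionStatus"))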
https://docs.mongodb.com/manual/core/security-x.509/
2020-09-18T14:30:53
CC-MAIN-2020-40
1600400187899.11
[]
docs.mongodb.com
In recent years, Docker has become a more and more popular tool for deploying web applications. According to Datadog, in 2018 the adoption of Docker in large organizations reached about 47 percent, and almost 20 percent in small organizations. This report is two years old - no doubt Docker is even more common now. In my opinion, knowing Docker basics is essential for every software engineer, especially in the web development ecosystem. In this article, I'll demonstrate the easiest way to Dockerize and deploy a simple application. Before we dive deep into practical steps, let's first answer two essential questions - "What is Docker" and "Why should I use it" in the first place. Docker in a nutshell Docker is a tool that makes it easy to build and deploy your applications, typically to a cloud environment. It allows you to package your application in a container that contains your app with all of the things it needs, such as libraries and other dependencies. Then, this package can be run on any machine with a Docker engine installed, no matter the underlying configuration or system distribution. Why should I use Docker? The "It works on my machine" sentence has become a meme in the software world. You can even get a sticker on your laptop: Making applications run consistently in various environments is one of the issues addressed very well by Docker. Docker makes sure that your containerized applications run in the same way on your machine, on your friend's machine, and on the AWS server (and anywhere else where the Docker engine is installed). It is truly a superpower. As a developer, you no longer need to worry about the underlying system. After you Dockerize your app, you can be sure that it behaves in the same manner in your development, testing, and production environments, as well as on your local machine. It makes building and testing applications way more comfortable than it was before. Another reason why you should be interested in Docker is the popularization of cloud, microservices, and Kubernetes. Docker is a first-class citizen in the cloud-native world, so if you want to take full advantage of scalable, cloud-native application architectures, Docker is the way to go. How to deploy Docker containers Let's move on to the practical application and usage of Docker. We'll now build a very simple web application that responds to HTTP requests, dockerize it, and deploy it to Qovery - a scalable Container as a Service platform. Create a simple application For the sake of simplicity, we'll create a simple Node.js application that returns a "Hello, World" text in response to HTTP requests. I chose Node.js here because it's a simple and popular technology, but you can use Docker with basically any language and framework. Let's create an empty folder for our new application and initialize an empty Git repository: mkdir deploying-docker cd deploying-docker git init Now, create an app.js file with the source code of our server: const http = require('http'); const hostname = '0.0.0.0'; const port = 3000; const server = http.createServer((req, res) => { res.statusCode = 200; res.setHeader('Content-Type', 'text/plain'); res.end('Hello World'); }); server.listen(port, hostname, () => { console.log(`Server running at http://${hostname}:${port}/`); }); It is a very simple server that returns "Hello World" text on its root endpoint. After it's done, we want to make this app run in a Docker container. 
To do so, we need to create a Dockerfile. What is a Dockerfile? Besides containers, Docker uses the concept of images. An image is a template used to create and run containers. A Dockerfile describes the steps required to build the image. Later on, this image is used as a template to run containers with your application. You can think about images and containers as a good analogy to classes and objects (instances of a given class) in the Object-Oriented Programming world. Create a Dockerfile that will allow us to run our Node.js app in a container. Create a file named Dockerfile with the following content: FROM node:13-alpine RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY . . EXPOSE 3000 CMD node app.js Let's discuss all lines of the Dockerfile: FROM node:13-alpine specifies the base of our Docker image. It's a base used to get started with building an image. RUN mkdir -p /usr/src/app creates a new empty folder in /usr/src/app. WORKDIR /usr/src/app defines the working directory of our container. COPY . . adds the contents of our application to the container. EXPOSE 3000 informs Docker that the container listens on the specified network port at runtime - and, finally: CMD node app.js is the command that starts our application. Now we have all the basic things we need to run our application in a Docker container! Let's try it out: - Build the Docker image of the app using docker build -t testing/docker . - Run a container with our application by executing docker run -p 3000:3000 testing/docker - the -p 3000:3000 flag makes container port 3000 accessible on your localhost:3000. Great! The container is up. Run docker ps to see the list of running containers and confirm that it is indeed running. Now open a browser at http://localhost:3000 to see that the application in a container responded with the Hello, World message. Did it work? Great. Our app works well in the Docker container. It's adorable, but we want to share our app with the world - running applications only on our own machine won't make us millionaires! Container as a Service To deploy our Dockerized application, we'll use Qovery. It's a Container as a Service platform that allows us to deploy Dockerized apps without any effort. Qovery is free up to three applications (and databases!) in the community version. Install Qovery CLI - Linux - MacOS - Windows # Download and install Qovery CLI on every Linux distribution $ curl -s | sudo bash Your browser window with sign-in options will open. Note: Qovery needs access to your account to be able to clone your repository for future application builds. Click here to authorize Qovery to clone and build your applications. Congratulations, you are logged in. After you have access to Qovery, it's time to deploy the application. Deploy the Docker container - Run qovery init - Choose an application name, e.g., node-app - Choose a project name, e.g., testing-docker - Commit and push your changes to GitHub: git add . ; git commit -m "Initial commit" ; git push -u origin master (create an empty repository for your application on GitHub beforehand if it's not done yet) Voila! That's all. Your Dockerized application is being deployed as a Docker container. To deploy a Docker container on Qovery, all you need is a Dockerfile that describes containers with your application, plus running the qovery init command to initialize Qovery. 
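Before (or after) deploying, a quick way to sanity-check the locally running container from Python rather than the browser (not part of the original tutorial; assumes the requests package is installed):

import requests

# docker run -p 3000:3000 testing/docker maps the app to localhost:3000
response = requests.get("http://localhost:3000")
print(response.status_code)  # expect 200
print(response.text)         # expect "Hello World"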
From now on, Qovery will build your Dockerized application and deploy it as a Docker container to scalable Kubernetes clusters whenever you make changes in your repository. To check that your application is in fact deploying, run qovery status:
BRANCH NAME | STATUS | ENDPOINTS | APPLICATIONS | DATABASES
master | running | | node-app |
APPLICATION NAME | STATUS | DATABASES
node-app | running |
DATABASE NAME | STATUS | TYPE | VERSION | ENDPOINT | PORT | USERNAME | PASSWORD | APPLICATIONS
Summary In this guide, you learned the essential basics of Docker. You also learned why you should be interested in using it, and how to deploy your application to the cloud as a Docker container. This is all you need to know to improve your development experience and deploy your application to the cloud with ease! If you have any questions or feedback, or want to learn more, please join us on our Discord server and feel free to speak your mind.
https://docs.qovery.com/guides/tutorial/how-to-deploy-a-docker-container/
2020-09-18T13:16:19
CC-MAIN-2020-40
1600400187899.11
[array(['/img/it-works-on-my-machine.jpg', 'It works - on my machine!'], dtype=object) ]
docs.qovery.com
Calendar Shows the calendar. Calendar query Indicate what activity or issue will show in the calendar. The options are: - Topic Activity - Allows viewing topic activity of selected topics from creation to modification. - Open topics - Shows open topics from creation to final statuses. - Calendar planner (e.g., Milestones or Environment planner) - Shows schedulers with job slots or specific milestones. - Own fields - Personalize the fields to show, for example, if the user wants to see only a specific range of dates. Two fields need to be specified: - Initial date. - Final date. Default View Establish the default view of the calendar: - Month - Shows a month view by default. - Basic week - Shows the complete week (from Sunday to Monday). - Schedule week - Shows the week divided by hours. - Basic day - Only shows the present day. - Schedule day - Shows the present day divided by hours. First weekday Select the first day of the week to show in the calendar. Select topics in categories The option gives the user the chance to select the topics that will appear in the calendar view. Exclude selected categories? Show only the categories not selected in the combo above. Show jobs? If the user has permissions, it shows the jobs scheduled in the calendar.
https://docs.clarive.com/palette/dashlets/calendar/
2020-09-18T13:20:25
CC-MAIN-2020-40
1600400187899.11
[]
docs.clarive.com
The newest version of Windows Intune: more features and even better PC management. Next steps: learn more about what's new!
- Read:
  - What's New in Windows Intune
  - Windows Intune at a Glance
  - Windows Intune Product Guide
  - Deploying Software and Third-Party Updates with Windows Intune
  - Frequently Asked Questions
  - Windows Intune Springboard Series
- Watch:
  - Windows Intune Overview Video
  - Software Deployment Video
  - Software License Management Video
  - Windows Intune Policy Enhancements Video
- Interact:
  - Sign up for the Windows Intune Newsletter
- If you're not using Windows Intune yet, this is the perfect time to sign up for a free 30-day trial! View the complete Windows Intune Getting Started Guide to learn how to get the most from your trial.*
- If you're already a Windows Intune pro, don't worry - the architecture hasn't changed at all. It just offers many new features and is a lot easier to use. Learn more about how you'll get this next release.
https://docs.microsoft.com/en-us/archive/blogs/nickmac/the-newest-version-of-windows-intune-more-features-and-even-better-pc-management
2020-09-18T12:53:30
CC-MAIN-2020-40
1600400187899.11
[]
docs.microsoft.com
Elemental metals for environmental remediation: lessons from hydrometallurgy Crane, R. A. Noubactep, C. Crane, R. A.; Noubactep, C., 2012: Elemental metals for environmental remediation: lessons from hydrometallurgy. In: Crane, R.A.; Noubactep, C. (2012): Elemental metals for environmental remediation: lessons from hydrometallurgy, 1192 - 1196, DOI 10.23689/fidgeo-2523. In the mining industry, the separation of economically valuable metals from gangue materials is a well established process. As part of this field, hydrometallurgy uses chemical fluids (leachates) of acidic or basic pH to dissolve the target metal(s) for subsequent concentration, purification and recovery. The type and concentration of the leach solution are typically controlled to allow selective dissolution of the target metal(s), along with other parameters such as oxidation potential, temperature and the presence of complexing/chelating agents. In the remediation industry the use of elemental metals (M0) for the removal of aqueous contaminant species is also a well established process. Removal is achieved by the oxidative corrosion of the M0 and the associated pH and/or redox potential change. Whilst the two processes are directly opposed and mutually exclusive, they both stem from the same theoretical background: metal dissolution/precipitation reactions. In the mining industry, with each prospective ore deposit physically and chemically unique, a robust series of tests is performed at each mine site to determine the optimal hydrometallurgical fluid composition and treatment conditions (e.g. fluid temperature, flow rate) for target metal dissolution/yield. In comparison, within the remediation industry not all such variables are typically considered. In the present communication a comparison of the processes adopted in both industries is presented. The consequent need for a more robust empirical framework within the remediation industry is outlined.
https://e-docs.geo-leo.de/handle/11858/6836
2020-09-18T14:21:17
CC-MAIN-2020-40
1600400187899.11
[]
e-docs.geo-leo.de
(Anarchist workers in the Spanish revolution) Years ago I saw the film The Prime of Miss Jean Brodie, about a teacher in a Scottish girls' school who strayed from the school curriculum by praising Adolf Hitler and Benito Mussolini while romanticizing the Spanish Civil War. The arguments she used in her classroom reappear in Adam Hochschild's new book SPAIN IN OUR HEARTS: AMERICANS IN THE SPANISH CIVIL WAR, 1936-1939, as the author presents the positions of the multiple sides engaged in the fight for Republican Spain. The title leads one to believe that the book's main focus is on the American experience, but in reality Hochschild paints a much wider canvas that includes Spanish, French, Italian, German, and Russian figures in addition to American actors. Hochschild is a prolific author whose work includes KING LEOPOLD'S GHOST, BURY THE CHAINS, and the award-winning TO END ALL WARS. He begins his latest effort in striking style as two naked American volunteers fighting for the Spanish Republic against the fascists emerge from the Ebro River as they flee Francisco Franco's forces. Fortunately for them, they run into Herbert Matthews, a New York Times reporter, and Ernest Hemingway, who at the time is a free-lance writer for a newspaper syndicate covering the civil war. The reader is immediately hooked as Hochschild begins to narrate a conflict that many historians describe as the precursor of World War II, as Nazi Germany and Fascist Italy allied with Franco's forces, using the war as a testing ground for new weapons and allowing their soldiers to gain significant combat experience. It became very difficult for the Republican government to gain support outside of Spain. England and France were in the midst of appeasement after allowing Hitler's troops to seize the Rhineland. In the United States, Franklin D. Roosevelt, facing reelection, refused to provide aid so as not to anger isolationist forces who preached neutrality. This left only Stalin's Soviet Union as a source of weapons and soldiers, which for the Republican government became a "devil's bargain" with the Russian dictator. (Fascist dictator Francisco Franco) Hochschild does a superb job describing all the major aspects of the war. He details the ideological conflicts that existed in Republican ranks: those who supported the Comintern, better described as the Communist Party of the Soviet Union; anarchists who were to the left of the communists; and the Partido Obrero de Unificacion Marxista, or Spanish communists. The conflicts between these groups greatly hindered creating a united front against Franco's forces. Aside from the ideological battle on the left, another existed among the journalists who covered the war. Among New York Times reporters was William P. Carney, who admired Franco; his reports from the front mirrored fascist propaganda. Herbert Matthews, a Times colleague, sparred with Carney repeatedly as he refused to give up on the Republican cause. Another important journalist was Louis Fischer, who, married to a Russian woman, remained in the Stalinist camp even after witnessing the purges in the Soviet Union. Literary figures abound in the narrative as we encounter George Orwell, who would be wounded fighting with the British Battalion, in addition to Virginia Cowles, Ernest Hemingway and others. The actual fighting is covered in detail as Hochschild describes the enormity of the conflict. 
The amount of aid and troops poured in by Hitler and Mussolini is staggering, and as a portent of the future the author describes the new weaponry tested there that would become staples for Nazi Germany and Fascist Italy during World War II. Franco never could have been victorious without the aid of Germany and Italy. (Male and female militia fighters who fought against Franco) The title of the book intimates the role of Americans in the war, and here Hochschild does not disappoint. We meet a number of Americans, married couples and single individuals, who played a prominent role in the war and provided new sources of material for the author. The story that Hochschild narrates from the battle front, of operations in the rear, and of the efforts to end American neutrality comes in part from Charles and Lois Orr, economics instructors from California who as socialists believed that democracy could be attained peacefully, not as in the Soviet Union. They arrived in Barcelona in September 1936 and help describe the disaster that would eventually unfold in that Catalonian city. Bob and Marion Merriman had lived in the Soviet Union and witnessed the disaster of collectivization, and they would have a major impact on the International Brigade, particularly the Lincoln-Washington Brigade of American soldiers. The intensity of the fighting is often told through the eyes of Bob Merriman, who became one of the commanders of the International Brigade. One of the most important documents that turned up at least fifty years after the fighting was a diary kept by James Neugass, an American ambulance driver for Dr. Edward Barsky, an American surgeon who seemed to operate twenty-four hours a day. Neugass' diary depicts the paucity of medical supplies and physicians that attended to American volunteers. The diary also describes the International Brigades' retreat as Franco's forces split the Republican zone in two upon reaching the Mediterranean Sea. Another important aspect of the war that Hochschild presents is his description of the fighting in and around Madrid, which would end up as a siege of the Spanish capital. Hochschild places the reader inside the city as a witness to the horrors that ensued. (International volunteers for the Republic) Perhaps the most disturbing part of the book, aside from the horrors of war, was the role played by Texaco and the blinders that the Roosevelt administration employed in order not to make political waves that could endanger elections. Texaco was headed by the Norwegian-born Torkild Rieber, who rose from very little to become the top executive of the oil company. Rieber was an admirer of Hitler, and early on in the fighting he switched supplying oil from the Republican government to Franco's armies. Further, Rieber allowed Franco to purchase the oil on credit. This violated American law, and if Roosevelt had wanted to, he could have almost stopped the fighting by enforcing US statutes. Roosevelt, fearing a Catholic backlash in the 1936 election, refused to do so. Not only did Texaco supply the oil for Franco's victory, they also supplied over 12,000 trucks and Firestone tires that were extremely scarce, as well as providing important shipping intelligence to Franco pertaining to oil deliveries to Republican forces. All told, Texaco provided over $200 million worth of oil in over 300 deliveries (343). The role of the papacy in the war also gains Hochschild's attention, as Spanish priests, with the approval of the Pope, supported Franco's war to the hilt. 
Many Spanish priests supported the execution of their brethren who did not support Franco, in addition to the execution of Republican soldiers. Further, they were apoplectic when the Republican government implemented land reform and church properties were given to peasants, a major reason for their support of the Spanish dictator. The civil war itself exhibited the Spanish class struggle, and Hochschild delves into the economic and moral implications of Spanish land policies. One of the most important points the author puts forth is that "while much of that [the civil war] feels distant now, other aspects of [...] for ours as well." (xix-xx) Hochschild has written an important book that revisits the Spanish Civil War, integrating a number of new sources that previous authors had not uncovered. For those interested in the topic, you will not find a better read. (Workers who supported the Republic)
https://docs-books.com/2016/04/19/spain-in-our-hearts-americans-in-the-spanish-civil-war-1936-1939-by-adam-hochschild/
2020-09-18T14:37:16
CC-MAIN-2020-40
1600400187899.11
[]
docs-books.com
This page details common use cases for adding and customizing observability with Datadog APM. Add custom span tags to your spans to customize your observability within Datadog. The span tags are applied to your incoming traces, allowing you to correlate observed behavior with code-level information such as merchant tier, checkout amount, or user ID. Add custom tags to your spans corresponding to any dynamic value within your application code, such as customer.id. Access the current active span from any method within your code. Note: If the method is called and there is no span currently active, active_span is nil. Add tags to all spans by configuring the tracer with the tags option:

Datadog.configure do |c|
  c.tags = { 'team' => 'qa' }
end

You can also use the DD_TAGS environment variable to set tags on all spans for an application. For more information on Ruby environment variables, refer to the setup documentation. There are two ways to set an error on a span. The first is to call span.set_error and pass in the exception object; this automatically extracts the error type, message, and backtrace.

require 'ddtrace'
require 'timeout'

def example_method
  span = Datadog.tracer.trace('example.trace')
  puts 'some work'
  sleep(1)
  raise StandardError.new "This is an exception"
rescue StandardError => error
  span = Datadog.tracer.active_span
  span.set_error(error) unless span.nil?
  raise error
ensure
  span.finish
end

example_method()

The second is to use tracer.trace, which by default sets the error type, message, and backtrace. To configure this behavior you can use the on_error option, which is the handler invoked when a block is provided to trace and the block raises an error. The Proc is provided span and error as arguments. By default, on_error sets the error on the span.

Default behavior: on_error

require 'ddtrace'
require 'timeout'

def example_method
  puts 'some work'
  sleep(1)
  raise StandardError.new "This is an exception"
end

Datadog.tracer.trace('example.trace') do |span|
  example_method()
end

Custom behavior: on_error

require 'ddtrace'
require 'timeout'

def example_method
  puts 'some work'
  sleep(1)
  raise StandardError.new "This is a special exception"
end

custom_error_handler = proc do |span, error|
  span.set_tag('custom_tag', 'custom_value')
  span.set_error(error) unless error.message.include?("a special exception")
end

Datadog.tracer.trace('example.trace', on_error: custom_error_handler) do |span|
  example_method()
end

If you aren't using supported library instrumentation (see library compatibility), you may want to manually instrument your code. Adding tracing to your code is easy using the Datadog.tracer.trace method, which you can wrap around any Ruby code. Programmatically create spans around any block of code. Spans created in this manner integrate with other tracing mechanisms automatically. In other words, if a trace has already started, the manual span will have its caller as its parent span. Similarly, any traced methods called from the wrapped block of code will have the manual span as its parent.

# An example of a Sinatra endpoint,
# with Datadog tracing around the request,
# database query, and rendering steps. 
get '/posts' do
  Datadog.tracer.trace('web.request', service: '<SERVICE_NAME>', resource: 'GET /posts') do |span|
    # database query and rendering steps traced as child spans
  end
end

There are additional configurations possible for both the tracing client and the Datadog Agent for context propagation with B3 headers, as well as to exclude specific resources from sending traces to Datadog in the event these traces are not wanted to count in calculated metrics, such as health checks. The Datadog APM tracer supports B3 header extraction and injection for distributed tracing. Distributed header injection and extraction is controlled by configuring injection/extraction styles. Currently two styles are supported: Datadog and B3. Injection styles can be configured using: DD_PROPAGATION_STYLE_INJECT=Datadog,B3 The value of the environment variable is a comma-separated list of header styles that are enabled for injection. By default only the Datadog injection style is enabled. Extraction styles can be configured using: DD_PROPAGATION_STYLE_EXTRACT=Datadog,B3 The value of the environment variable is a comma-separated list of header styles that are enabled for extraction. By default only the Datadog extraction style is enabled. If multiple extraction styles are enabled, extraction is attempted in the order those styles are configured, and the first successfully extracted value is used. Traces can be excluded based on their resource name, to remove synthetic traffic such as health checks from reporting traces to Datadog. This and other security and fine-tuning configurations can be found on the Security page. To set up Datadog with OpenTracing, see the Ruby Quickstart for OpenTracing for details, as described in the Ruby tracer settings section. By default, configuring OpenTracing with Datadog does enable this; see Ruby integration instrumentation for more details. OpenTelemetry support is available by using the opentelemetry-exporters-datadog gem to export traces from OpenTelemetry to Datadog. If you use bundler, include the following in your Gemfile: gem 'opentelemetry-exporters-datadog' gem 'opentelemetry-api', '~> 0.5' gem 'opentelemetry-sdk', '~> 0.5' Or install the gems directly using: gem install opentelemetry-api gem install opentelemetry-sdk gem install opentelemetry-exporters-datadog Install the Datadog processor and exporter in your application and configure the options. Then use the OpenTelemetry interfaces to produce traces and other information:

require 'opentelemetry/sdk'
require 'opentelemetry-exporters-datadog'

# For propagation of Datadog-specific distributed tracing headers,
# set HTTP propagation to the Composite Propagator
OpenTelemetry::Exporters::Datadog::Propagator.auto_configure

# To start a trace, get a tracer and open spans
tracer = OpenTelemetry.tracer_provider.tracer('my_app_or_gem', '0.1.0')
tracer.in_span('foo') do |span|
  span.add_event(name: 'event in bar')
  # create bar as child of foo
  tracer.in_span('bar') do |child_span|
    # inspect the span
    pp child_span
  end
end

The Datadog Agent URL and span tag values can be configured if necessary or desired based upon your environment and Agent location. By default, the OpenTelemetry Datadog Exporter sends traces to the local Datadog Agent. Send traces to a different URL by configuring the following environment variable: DD_TRACE_AGENT_URL: the <host>:<port> where the Datadog Agent is listening for traces. You can override these values at the trace exporter level. Configure the application to automatically tag your Datadog exported traces by setting the following environment variables: DD_ENV: Your application environment, for example, production or staging. DD_SERVICE: Your application's default service name, for example, billing-api. 
DD_VERSION: Your application version, for example, 2.5, 202003181415, or 1.3-alpha. DD_TAGS: Custom tags in key:value pairs, separated by commas, for example, layer:api,team:intake. If DD_ENV, DD_SERVICE, or DD_VERSION is set, it will override any corresponding env, service, or version tag defined in DD_TAGS. If DD_ENV, DD_SERVICE, and DD_VERSION are not set, you can configure environment, service, and version by using the corresponding tags in DD_TAGS. Tag values can also be overridden at the trace exporter level. This lets you set values on a per-application basis, so you can have multiple applications reporting for different environments on the same host: (exporter options: env: 'prod', version: '1.5-alpha', tags: 'team:ops,region:west') Tags that are set directly on individual spans supersede conflicting tags defined at the application level. Additional useful documentation, links, and articles:
https://docs.datadoghq.com/fr/tracing/custom_instrumentation/ruby/?lang_pref=fr
2020-09-18T14:57:53
CC-MAIN-2020-40
1600400187899.11
[]
docs.datadoghq.com
Geolocated organization name with MaxMind GeoIP2 (mm2org) Description Uses MaxMind GeoIP2 services to return the name of the organization associated with an IPv4 or IPv6 address. If you are looking for the organization associated with the autonomous system number of the IP address, use the Geolocated AS organization name with MaxMind GeoIP2 (mm2asorg) operation instead. How does it work in the search window? This operation only requires one argument: the IP address to geolocate (a field of ip or ip6 data type). The data type of the new column is string. Example We want to get the organization names corresponding to the IP addresses in our clientIpAddress column, so we click Create column and select the Geolocated organization name with MaxMind GeoIP2 operation. Select clientIpAddress as the argument and assign a name to the new column - let's call it orgNameIp. The result is a column of string data type containing the organization name. How does it work in LINQ? Use the operator as... and add the operation syntax to create the new column: mm2org(ip) mm2org(ip6) Example Copy the following LINQ script and try the above example on the demo.ecommerce.data table.
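Outside Devo, the same lookup can be reproduced with MaxMind's own Python geoip2 package; this is an illustrative sketch only (not from the Devo docs) and it assumes you have a locally downloaded GeoIP2 ISP/organization database file:

import geoip2.database

# Placeholder path to a MaxMind GeoIP2 ISP database.
reader = geoip2.database.Reader("/var/lib/maxmind/GeoIP2-ISP.mmdb")
response = reader.isp("128.101.101.101")
# The organization field corresponds to what mm2org returns in Devo.
print(response.organization)
reader.close()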
https://docs.devo.com/confluence/ndt/searching-data/building-a-query/operations-reference/geolocation-group/geolocated-organization-name-with-maxmind-geoip2-mm2org
2020-09-18T14:18:28
CC-MAIN-2020-40
1600400187899.11
[]
docs.devo.com
Limiting Active User Sessions Based On Criteria This tutorial demonstrates how you can set up active user session limiting based on particular criteria with WSO2 Identity Server. To understand how to set up the concurrent session limiting feature, let's consider a scenario where a user who has an administrator role cannot have more than one active concurrent session at a time. Here, you will use a sample application named Pickup Dispatch to deploy and set up the sample authenticators required to try out the scenario. - On the Main menu of the WSO2 Identity Server Management Console, click Concurrent-Session-Management. Click Ok. Note The authentication script defines a conditional step that executes the session handling prompt only if the user belongs to an 'admin' or 'manager' role. Here you can specify the value of the MaxSessionCount variable to indicate the maximum number of allowed sessions. The default value is 1. For the purpose of this demo, change the value to 3. You can configure the MaxSessionCount variable via the deployment.toml file in the <IS_HOME>/repository/conf/ directory as well. Priority will be given to the configuration in the adaptive authentication script. To configure the MaxSessionCount variable through the deployment.toml file, append the following configuration with the intended value for MaxSessionCount. Please note that there is no specific maximum limit for the value of the MaxSessionCount variable. [authentication.authenticator.session_handler.parameters] max_session_count = "3" 6. Click Add Authentication Step. 7. Select active-sessions-limit-handler from the dropdown under Local Authenticators and click Add Authenticator. 8. Click Update. Testing the sample scenario - Access the sample Pickup application using the following URL: - Click Login and enter admin/admin credentials. - Repeat the previous two steps in three different web browsers, e.g. Firefox, Safari, and Opera. Now you can either terminate one or more active sessions or deny the login. Tip - If you select and terminate the active sessions exceeding the maximum limit, you will be navigated to the application home page. Otherwise you will be re-prompted until the minimum required number of sessions are terminated. - You can use the Refresh Sessions button to re-check active user sessions. 5. If you deny the login, the Authentication Error screen appears.
https://is.docs.wso2.com/en/5.11.0/learn/limiting-active-user-sessions-based-on-criteria/
2020-09-18T14:05:15
CC-MAIN-2020-40
1600400187899.11
[]
is.docs.wso2.com
Services python-for-android supports the use of Android services, background tasks running in separate processes. These are the closest Android equivalent to multiprocessing on e.g. desktop platforms, and it is not possible to use normal multiprocessing on Android. Services are also the only way to run code when your app is not currently opened by the user. Services must be declared when building your APK. Each one will have its own main.py file with the Python script to be run. It is recommended to use the second method (below) where possible. Create a folder named service in your app directory, and add a file service/main.py. This file should contain the Python code that you want the service to run. To start the service, use the start_service function from the android module (you may need to add android to your app requirements): import android android.start_service(title='service name', description='service description', arg='argument to service') Arbitrary service scripts This method is recommended for non-trivial use of services as it is more flexible, supporting multiple services and a wider range of options. To create the service, create a python script with your service code and add a --service=myservice:/path/to/myservice.py argument when calling python-for-android. The myservice name before the colon is the name of the service class, via which you will interact with it later. You can add multiple --service arguments to include multiple services, which you will later be able to stop and start from your app. To run the services (i.e. starting them from within your main app code), you must use PyJNIus to interact with the Java class python-for-android creates for each one, as follows: from jnius import autoclass service = autoclass('your.package.name.ServiceMyservice') service.start(mActivity, 'argument to service') Services support a range of options and interactions not yet documented here but all accessible via calling other methods of the service reference. Note The app root directory for Python imports will be in the app root folder even if the service file is in a subfolder. To import from your service folder you must use e.g. import service.module instead of import module, if the service file is in the service/ folder.
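For the simple service/main.py approach described above, the file is just ordinary Python that runs in the background process; a minimal hand-written sketch (not from the python-for-android docs) could look like this:

# service/main.py - a trivial background worker
import time

def run():
    counter = 0
    while True:
        counter += 1
        # Replace this with real background work (polling, uploads, etc.).
        print('service heartbeat', counter)
        time.sleep(5)

if __name__ == '__main__':
    run()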
https://python-for-android.readthedocs.io/en/latest/services/
2020-09-18T13:28:52
CC-MAIN-2020-40
1600400187899.11
[]
python-for-android.readthedocs.io
This section provides an overview of the odl-integration-compatible-with-all and odl-integration-all features. The Integration/Distribution project produces a Karaf 4 distribution which gives users access to many Karaf features provided by upstream OpenDaylight projects. Users are free to install an arbitrary subset of those features, but not every feature combination is expected to work properly. Some features are pro-active, which means OpenDaylight, in contact with other network elements, starts driving changes in the network even without prompting by users, in order to satisfy the initial conditions their use case expects. Such activity from one feature may in turn affect the behavior of another feature. In some cases, there exist features which offer different implementations of the same service; they may fail to initialize properly (e.g. failing to bind a port already bound by the other feature). The Integration/Test project maintains system test (CSIT) jobs. Aside from testing scenarios with only a minimal set of features installed (-only- jobs), the scenarios are also tested with a large set of features installed (-all- jobs). In order to define a proper set of features to test with, the Integration/Distribution project defines two "aggregate" features. Note that these features are not intended for production use, so the feature repository which defines them is not enabled by default. The content of these features is determined by upstream OpenDaylight contributions, with Integration/Test providing insight on observed compatibility relations. The Integration/Distribution team is focused only on making sure the build process is reliable. This feature repository is enabled by default. It does not refer to any new features directly; instead it refers to upstream feature repositories, enabling any feature contained therein to be available for installation. The two aggregate features define sets of user-facing features determined by compatibility requirements. Note that if the compatibility relation differs between single node and cluster deployments, the single node point of view takes precedence. This feature contains the largest set of user-facing features which may affect each other's operation, but the set does not affect usability of the Karaf infrastructure. Note that port binding conflicts and "server is unhealthy" status of the config subsystem are considered to affect usability, as is a failure of Restconf to respond to GET on /restconf/modules with HTTP status 200. This feature is used in the verification process for Integration/Distribution contributions. This feature contains the largest set of user-facing features which are not pro-active and do not affect each other's operation. Installing this set together with just one feature from odl-integration-all should still result in a fully operational installation, as one pro-active feature should not lead to any conflicts. This should also hold if the single added feature is outside odl-integration-all, if it is one of the conflicting implementations (and no such implementation is in odl-integration-all). This feature is used in the aforementioned -all- CSIT jobs.
https://docs.opendaylight.org/en/stable-oxygen/developer-guide/distribution-test-features.html
2020-09-18T13:48:43
CC-MAIN-2020-40
1600400187899.11
[]
docs.opendaylight.org
End-of-Life (EoL) Set Up an IKE Gateway To set up a VPN tunnel, the VPN peers or gateways must authenticate each other using preshared keys or digital certificates and establish a secure channel in which to negotiate the IPSec security association (SA) that will be used to secure traffic between the hosts on each side. - Define the IKE Gateway. - Select Network > Network Profiles > IKE Gateways, click Add, and on the General tab, enter the Name of the gateway. - For Version, select IKEv1 only mode, IKEv2 only mode, or IKEv2 preferred mode. The IKE gateway begins its negotiation with its peer in the mode specified here. If you select IKEv2 preferred mode, the two peers will use IKEv2 if the remote peer supports it; otherwise they will use IKEv1. (The Version selection also determines which options are available on the Advanced Options tab.) - Establish the local endpoint of the tunnel (gateway). - For Address Type, click IPv4 or IPv6. - Select the physical, outgoing Interface on the firewall where the local gateway resides. - From the Local IP Address drop-down, select the IP address that will be used as the endpoint for the VPN connection. This is the external-facing interface with a publicly routable IP address on the firewall. - Establish the peer at the far end of the tunnel (gateway). - Select the Peer IP Type to be a Static or Dynamic address assignment. - If the Peer IP Address is static, enter the IP address of the peer. - Configure a pre-shared key. - Enter a Pre-shared Key, which is the security key to use for authentication. - For Local Identification, choose from the following types and enter the value: FQDN (hostname), IP address, KEYID (binary format ID string in HEX), User FQDN (email address). Local identification defines the format and identification of the local gateway. If no value is specified, the local IP address will be used as the local identification value. - For Peer Identification, choose from the following types and enter the value: FQDN (hostname), IP address, KEYID (binary format ID string in HEX), User FQDN (email address). Peer identification defines the format and identification of the peer gateway. If no value is specified, the peer IP address will be used as the peer identification value. - Configure certificate-based authentication. Perform the remaining steps in this procedure if you selected Certificate as the method of authenticating the peer gateway at the opposite end of the tunnel. - Select a Local Certificate that is already on the firewall from the drop-down, or Import a certificate, or Generate to create a new certificate. - If you want to Import a certificate, Import a Certificate for IKEv2 Gateway Authentication and then return to this task. - Click the HTTP Certificate Exchange check box if you want to configure Hash and URL (IKEv2 only). For an HTTP certificate exchange, enter the Certificate URL. For more information, see Hash and URL Certificate Exchange. - Select the Local Identification type from the following: Distinguished Name (Subject), FQDN (hostname), IP address, User FQDN (email address), and enter the value. Local identification defines the format and identification of the local gateway. - Select the Peer Identification type from the following: Distinguished Name (Subject), FQDN (hostname), IP address, User FQDN (email address), and enter the value. Peer identification defines the format and identification of the peer gateway. - Select one type of Peer ID Check: - Exact - Check this to ensure that the local setting and peer IKE ID payload match exactly. - Wildcard - Check this to allow the peer identification to match as long as every character before the wildcard (*) matches. The characters after the wildcard need not match. 
- Click Permit peer identification and certificate payload identification mismatch if you want to allow a successful IKE SA even when the peer identification does not match the peer identification in the certificate. - Choose a Certificate Profile from the drop-down. A certificate profile contains information about how to authenticate the peer gateway. - Click Enable strict validation of peer's extended key use if you want to strictly control how the key can be used. - Configure advanced options for the gateway. - Select the Advanced Options tab. - In the Common Options section, Enable Passive Mode if you want the firewall to only respond to IKE connection requests and never initiate them. - Enable NAT Traversal if you have a device performing NAT between the gateways, to have UDP encapsulation used on IKE and UDP protocols, enabling them to pass through intermediate NAT devices. - If you chose IKEv1 only mode earlier, on the IKEv1 tab: If the exchange mode is not set to auto, you must configure both peers with the same exchange mode to allow each peer to accept negotiation requests. - Choose auto, aggressive, or main for the Exchange Mode. When a device is set to use auto exchange mode, it can accept both main mode and aggressive mode negotiation requests; however, whenever possible, it initiates negotiation and allows exchanges in main mode. - Select an existing profile or keep the default profile from the IKE Crypto Profile drop-down. For details on defining an IKE Crypto profile, see Define IKE Crypto Profiles. - (Only if using certificate-based authentication and the exchange mode is not set to aggressive mode) Click Enable Fragmentation to enable the firewall to operate with IKE Fragmentation. - Click Dead Peer Detection and enter an Interval (range is 2-100 seconds). For Retry, define the time to delay (range is 2-100 seconds) before attempting to re-check availability. Dead peer detection identifies inactive or unavailable IKE peers by sending an IKE phase 1 notification payload to the peer and waiting for an acknowledgment. - Select an IKE Crypto Profile from the drop-down, which configures IKE Phase 1 options such as the DH group, hash algorithm, and ESP authentication. For information about IKE crypto profiles, see IKE Phase 1. - Enable Strict Cookie Validation if you want to always enforce cookie validation on IKEv2 SAs for this gateway. See Cookie Activation Threshold and Strict Cookie Validation. - Enable Liveness Check and enter an Interval (sec) (default is 5) if you want to have the gateway send a message request to its gateway peer, requesting a response. If necessary, the Initiator attempts the liveness check up to 10 times. If it doesn't get a response, the Initiator closes and deletes the IKE_SA and CHILD_SA. The Initiator will start over by sending out another IKE_SA_INIT. - Save the changes. Click OK and Commit.
https://docs.paloaltonetworks.com/pan-os/7-1/pan-os-admin/vpns/site-to-site-vpn-concepts/set-up-site-to-site-vpn/set-up-an-ike-gateway.html
2020-09-18T14:55:39
CC-MAIN-2020-40
1600400187899.11
[]
docs.paloaltonetworks.com
Building, Uploading, Running Once you are done coding your project it's time to build, upload and run it. Make sure that the Debug mode is selected in the drop-down on the debug toolbar (View > Debug Toolbar). You will need this later when I introduce you to Debugging. At this point... ... If you are using a hardware Tibbo device, make sure it is on and connected to your PC's LAN segment, as explained in Preparing. ... If you are using the TiOS Simulator, start it by selecting Debug > Start TiOS Simulator from TIDE's main menu... - ... and make sure you have the Use WinPCap option of the Simulator enabled (Simulator > Options to access), and that the correct Ethernet interface is selected in the drop-down.
https://docs.tibbo.com/taiko/fb_building
2020-09-18T14:10:42
CC-MAIN-2020-40
1600400187899.11
[array(['tide_button_buildmode.png', 'tide_button_buildmode'], dtype=object) array(['tide_button_run.png', 'tide_button_run'], dtype=object) array(['tide_status_progress.png', 'tide_status_progress'], dtype=object) array(['tide_state_comm_alive.png', 'tide_state_comm_alive'], dtype=object)]
docs.tibbo.com
Custom Alert Message Variables The following variables can be used to create a custom alert message. You can add or remove variables to suit your needs when creating or modifying an Alert Profile. These alert variables are also available as input parameter values when configuring an Action Profile to initiate a VMware vCenter Orchestrator workflow. Script Alert Variables The following variables can be used in a script alert, which can be included as part of an Alert Profile. The platform on which the script runs determines how to denote the variable: $UPTIME_VARNAME on Linux, %UPTIME_VARNAME% on Windows. Recovery Script Variables The following variables can be used in a recovery script as part of an Action Profile.
http://docs.uptimesoftware.com/pages/viewpage.action?pageId=7801702&navigatingVersions=true
2020-09-18T13:14:06
CC-MAIN-2020-40
1600400187899.11
[]
docs.uptimesoftware.com
Key Features Keeps you up-to-date on the status of Zenoss events and all relevant event attributes. Correlates Zenoss events into actionable incidents in BigPanda, so you can easily manage the events and the associated devices. How It Works The agent queries for all new Zenoss events and for changes in event states in real time via the Zenoss API. To ensure a full sync of all current alerts, the agent also queries for all Zenoss events at regular intervals. The query intervals are configurable. See Customizing Zenoss. How and When Alerts are Closed When Zenoss sends a close event for an alert, the alert is automatically closed in BigPanda. However, closing an alert manually in BigPanda does not close the alert in Zenoss. Installing The Integration Administrators can install the integration by following the on-screen instructions in BigPanda. For more information, see Installing an Integration. Zenoss Data Model BigPanda normalizes alert data from Zenoss into tags. You can use tag values to filter the incident feed and to define filter conditions for Environments. The primary and secondary properties are also used during the correlation process. Standard Tags Customizing Zenoss To ensure a full sync of all current alerts, the BigPanda agent queries for all Zenoss events at regular intervals. You can configure the query interval or disable the full sync by editing the config file for the agent. Prerequisites Obtain access to the server where the BigPanda agent is installed. Configuring the Query Interval You may want to configure the full sync to reduce the load on your Zenoss server, depending on the average volume of events Zenoss generates. - Open the config file, which is located at /etc/bigpanda/bigpanda.conf. - Locate the settings for the Zenoss full sync, named zenoss/fullsync. - Change the value of full_sync_interval to your preferred interval between full syncs, in seconds. The default value is 900 (15 minutes). Disabling the Full Sync The full sync acts as a backup to the incremental sync to ensure accuracy. Disable this feature only if you need to reduce the load on your Zenoss server. - Open the config file, which is located at /etc/bigpanda/bigpanda.conf. - Locate the settings for the Zenoss full sync, named zenoss/fullsync. - Add the following key and value pair: "enabled": false. Make sure you add the pair outside of the config property. "name": "zenoss/fullsync", "enabled": false, "config": { "user": "BigPanda", ... }, Uninstalling Zenoss To stop sending Zenoss events to BigPanda, you can uninstall the agent from the server or remove the Zenoss integration from the agent, depending on whether the agent supports only the Zenoss integration or supports multiple BigPanda integrations. Determine the OS on the server. Uninstalling the Agent If the agent supports only the Zenoss integration, you can uninstall the agent from the server. Do not uninstall the BigPanda agent if it is supporting other BigPanda integrations. In this case, remove the Zenoss integration from the agent. Removing the Zenoss Integration
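If you prefer to script the change rather than edit the file by hand, here is a rough Python sketch; it is not from the BigPanda docs, and it assumes the agent config file parses as JSON and contains a list of plugin entries (the "plugins" key is an assumption) with "name" fields like "zenoss/fullsync":

import json

CONF_PATH = "/etc/bigpanda/bigpanda.conf"  # path given in the steps above

with open(CONF_PATH) as f:
    conf = json.load(f)

# Find the zenoss/fullsync plugin entry and disable it, or adjust its interval.
for plugin in conf.get("plugins", []):  # assumed structure
    if plugin.get("name") == "zenoss/fullsync":
        plugin["enabled"] = False  # same effect as adding "enabled": false by hand
        # plugin["config"]["full_sync_interval"] = 1800  # or change the interval instead

with open(CONF_PATH, "w") as f:
    json.dump(conf, f, indent=2)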
https://docs.bigpanda.io/docs/zenoss?utm_source=site-partners-page
2020-09-18T13:46:40
CC-MAIN-2020-40
1600400187899.11
[]
docs.bigpanda.io
The goal of tl is to provide an R equivalent to the tldr command line tool, a community-contributed set of quick reference guides for R functions. "Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away" - Antoine de Saint-Exupéry Briefly forgotten how a function works? Don't want to scroll through a long help file to find the example usage? You need tl;dr! You can install the current version of tl from GitHub with: devtools::install_github("ropenscilabs/tl") Here are some basic examples of how to use tl: Use the dr function to find a help page. For example, for lm: fits a linear model - fits a linear model of y dependent on x1 and x2: m1 <- lm(y ~ x1 + x2) - fits on data in data frame df: m1 <- lm(y ~ x1 + x2, data = df) - fits model of y on categorical variable xc: m1 <- lm(y ~ as.factor(xc), data = df) - print model summary: summary(m1) tidyr::gather converts data from wide to long - collapse columns x1 to x5 into 5 rows: gather(data, key = "key", value = "value", x1:x5) - collapse all columns: gather(data, key = "key", value = "value") Page you are looking for does not exist? Use the create_page function to make one! Please follow the instructions to keep it brief. tl::create_page(base::system.file) Want to submit your new page to the tl package? Use the submit_page function to get instructions on how to add your help page to tl. tl::submit_page(base::system.file)
https://docs.ropensci.org/tl/index.html
2020-09-18T13:44:08
CC-MAIN-2020-40
1600400187899.11
[]
docs.ropensci.org
v1.13 Series v1.13.0 Release: 2019/09/24 Features - Volumetric Video support Tutorials It's a whole new world of possibilities; if you need some guidance, we'll provide it. Tap Trigger Actions can now be triggered by a tap on a scene element. Actions - Play Media - Pause Media Videos, sounds and animations can now be played or paused with Triggers. glTF default animation glTF default animations can now be played in your experience. Improvements Global improvements - Revamped Sign In experience - Revamped Menu, Notifications & Dialogs - Improved Asset Picker - Improved multitouch gestures Faster glTF assets loading We care about your time as we know it's precious! Improved spatial mapping shaders - Faster assets loading - Improved image tracking stability in Edit mode - Improved UI feedback for image tracking state Lumin Improvements - Image Anchor available on Magic Leap One - New Control, Laser and Cursor Android Improvements - Image Anchor available on compatible Android devices Bugfixes Fixed sign in experience when first and last name are not available We can now welcome you in Minsar even if all your information isn't filled in. Fixed opening failure for projects created with Minsar 1.09 and older You can open older projects again. Changed automated naming strategy on project creation Fixed 'Reset Transform' on Elements not working as expected 'Reset Transform' resets elements' scaling and rotation to default values. Fixed premature Menu display on Project Loading There is no more confusion about when you are able to use the menu. Fixed blurry provider icons in the Asset Picker Fixed Experience opening in XR View 1.13.1 Release: 2019/09/26 Improvements - Smoother image anchoring Bugfixes - Trigger is selected after its creation - Video assets import - Triggers button on all asset menus - Fixed experience opening in Viewer mode 1.13.2 Release: 2019/10/01 Lumin Features - Re-enabled Gesture detection Double-tap on the Bumper to switch to Gesture mode! Improvements - Polished Menu UI on mobile - Updated Color and Fonts button icons - Improved Selection of Triggers and Actions Bugfixes - Fixed Image Anchoring over multiple tracking sessions 1.13.3 Release: 2019/10/02 Improvements - Polished Menu buttons interactions on immersive devices (including animation of contextual buttons on cursor interactions) Bugfixes - Fixed Project Picker crash on Android - Fixed broken rotation on the menu when repositioning in Immersive Devices - Shadows re-enabled on Android and iOS 1.13.4 Release: 2019/10/14 Features - QR Code recognition in Viewer
http://docs.minsar.app/home/changelogs/1.13/
2020-09-18T13:54:00
CC-MAIN-2020-40
1600400187899.11
[]
docs.minsar.app
- Realm > - Android SDK > - The Realm Data Model Realm Objects¶ Overview¶ MongoDB Realm applications model data as objects composed of field-value pairs that each contain one or more primitive data types or other Realm objects. Realm objects are essentially the same as regular objects, but they also include additional features like real-time updating data views and reactive change event handlers. Every Realm object has an object type that refers to the object’s class. Objects of the same type share an object schema that defines the properties and relationships of those objects. In most languages you define object schemas using the native class implementation. Example The following code block shows an object schema that describes a Dog. Every Dog object must include a name and age and may optionally include the dog’s breed and a reference to a Person object that represents the dog’s owner. - Kotlin - Java Key Concepts¶ Live Object¶ Objects in Realm clients are live objects that update automatically to reflect data changes, including synced remote changes, and emit notification events that you can subscribe to whenever their underlying data changes. You can use live objects to work with object-oriented data natively without an ORM tool. Live objects are direct proxies to the underlying stored data, which means that a live object doesn’t directly contain data. Instead, a live object always references the most up-to-date data on disk and lazy loads property values when you access them from a collection. This means that a realm can contain many objects but only pay the performance cost for data that the application is actually using. Valid write operations on a live object automatically persist to the realm and propagate to any other synced clients. You do not need to call an update method, modify the realm, or otherwise “push” updates. Object Schema¶ An object schema is a configuration object that defines the fields, relationships of a Realm object type. Realm client applications define object schemas with the native class implementation in their respective language using the Realm Object Model. Object schemas specify constraints on object properties such as the data type of each property, whether or not a property is required, and the default value for optional properties. Schemas can also define relationships between object types in a realm. Every Realm app has a Realm Schema composed of a list of object schemas for each type of object that the realms in that application may contain. MongoDB Realm guarantees that all objects in a realm conform to the schema for their object type and validates objects whenever they’re created, modified, or deleted. Primary Key¶ A primary key is a String or Integer property that uniquely identifies an object. You may optionally define a primary key for an object type as part of the object schema. Realm Database automatically indexes primary key properties, which allows you to efficiently read and modify objects based on their primary key. If an object type has a primary key, then all objects of that type must include the primary key property with a value that is unique among objects of the same type in a realm. You cannot change the primary key property for an object type after any object of that type is added to a realm. Property Schema¶ A property schema is a field-level configuration that defines and constrains a specific property in an object schema. Every object schema must include a property schema for each property in the object. 
At minimum, a property schema defines a property’s data type and indicates whether or not the property is required. You can configure constraints for a given property, such as its data type, whether it is required or optional, whether it is indexed, and its default value. Summary¶ - Realm objects are of a type defined as you would any other class. - Realm objects are live: they always reflect the latest version on disk and update automatically when the underlying object changes. - A Realm object type may have a primary key property to uniquely identify each instance of the object type. - The Realm object’s schema defines the properties of the object and which properties are optional, which properties have a default value, and which properties are indexed.
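The Kotlin and Java tabs mentioned in the example above lost their code blocks in extraction. As a sketch of what the described Dog schema could look like with the Realm Java SDK in Kotlin (field names are taken from the description; the original snippet may differ in detail):
import io.realm.RealmObject

// Realm model classes in Kotlin must be open so Realm can generate proxies
open class Person : RealmObject() {
    var name: String = ""
}

open class Dog : RealmObject() {
    // required properties
    var name: String = ""
    var age: Int = 0
    // optional properties
    var breed: String? = null
    var owner: Person? = null
}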
https://docs.mongodb.com/realm/android/objects/
2020-09-18T14:51:36
CC-MAIN-2020-40
1600400187899.11
[]
docs.mongodb.com
Generates random numbers that are distributed according to the associated probability function. The entropy is acquired by calling g.operator(). The first version uses the associated parameter set, the second version uses params. The associated parameter set is not modified. The generated random number. Amortized constant number of invocations of g.operator(). © cppreference.com Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0.
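For context, here is a minimal sketch of calling both overloads with a standard generator; the mean values are arbitrary:
#include <iostream>
#include <random>

int main() {
    std::mt19937 gen(std::random_device{}());   // entropy source g
    std::poisson_distribution<int> d(4.1);      // associated parameter set: mean 4.1

    // first overload: uses the associated parameter set
    int a = d(gen);
    // second overload: uses an explicit parameter set
    int b = d(gen, std::poisson_distribution<int>::param_type(7.5));

    std::cout << a << ' ' << b << '\n';
}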
https://docs.w3cub.com/cpp/numeric/random/poisson_distribution/operator()/
2020-09-18T14:33:23
CC-MAIN-2020-40
1600400187899.11
[]
docs.w3cub.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Performs all the write operations in a batch. Either all the operations succeed or none. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to BatchWriteAsync. Namespace: Amazon.CloudDirectory Assembly: AWSSDK.CloudDirectory.dll Version: 3.x.y.z Container for the necessary parameters to execute the BatchWrite service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
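As a rough sketch of a synchronous call on .NET Framework (not taken from the AWS documentation; the directoryArn and operations variables are assumed to be built elsewhere in your code):
using Amazon.CloudDirectory;
using Amazon.CloudDirectory.Model;

// the client picks up credentials and region from the usual SDK configuration
var client = new AmazonCloudDirectoryClient();

var request = new BatchWriteRequest
{
    DirectoryArn = directoryArn,   // ARN of the target directory (assumed defined elsewhere)
    Operations = operations        // List<BatchWriteOperation> built elsewhere
};

// either all operations succeed or none do
BatchWriteResponse response = client.BatchWrite(request);
For .NET Core and PCL, the asynchronous BatchWriteAsync variant mentioned above would be used instead.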
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CloudDirectory/MICloudDirectoryBatchWriteBatchWriteRequest.html
2018-08-14T14:46:31
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
Resource Manager Logical Resource Group Configuration options
These options (in CTI_LRG CTI and in GW LRGs) configure fallback mechanisms that handle scenarios where CTIC/ICM is unavailable. Logical Resources are configured one of two ways:
- As sections within the Resource Manager (RM). In this case, configuration options are available within these sections.
- Or as a named applications folder under Configuration Unit in a Tenant. In this case, configuration options are available within the gvp.lrg section of the named applications folder.
Gateway Resource Group Option
remove-ruri-capability-on-fallback
Section: Gateway Resource group section
Valid Values: true or false (default)
This parameter is used only for the Gateway resource group. It disables or enables the capability for selecting a VXML resource when falling back to VXML after CTIC returns a 404 error. This capability is specified in an INVITE Request URI (gvp.rm.resource-req).
- Set to true to disable Resource Manager's use of the gvp.rm.resource-req option for VXML fallback after a CTIC 404 error.
- Set to false to enable Resource Manager's use of the gvp.rm.resource-req option for VXML fallback after a CTIC 404 error.
CTI Connector Failover Options
https://docs.genesys.com/Documentation/GVP/latest/GDS/851-CTIC_fallback_opts
2018-08-14T13:35:05
CC-MAIN-2018-34
1534221209040.29
[]
docs.genesys.com
vSAN can perform block-level deduplication and compression to save storage space. When you enable deduplication and compression on a vSAN all-flash cluster, redundant data within each disk group is reduced. Deduplication removes redundant data blocks, whereas compression removes additional redundant data within each data block. These techniques work together to reduce the amount of space required to store the data. When you enable deduplication and compression on a vSAN cluster, redundant data within a particular disk group is reduced to a single copy.
You can enable deduplication and compression when you create a new vSAN all-flash cluster or when you edit an existing vSAN all-flash cluster. For more information about creating and editing vSAN clusters, see Enabling vSAN.
Deduplication and compression might not be effective for encrypted VMs, because VM Encryption encrypts data on the host before it is written out to storage. Consider storage tradeoffs when using VM Encryption.
How to Manage Disks in a Cluster with Deduplication and Compression
Consider the following guidelines when managing disks in a cluster with deduplication and compression enabled.
Avoid adding disks to a disk group incrementally. For more efficient deduplication and compression, consider adding a disk group to increase cluster storage capacity. When you add a disk group manually, add all of its capacity disks at the same time.
You can view the Deduplication and Compression Overview in the vSAN Capacity monitor. After you enable deduplication and compression on a vSAN cluster, it might take several minutes for capacity updates to be reflected in the Capacity monitor as disk space is reclaimed and reallocated.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-3D2D80CC-444E-454E-9B8B-25C3F620EFED.html
2018-08-14T13:33:32
CC-MAIN-2018-34
1534221209040.29
[array(['images/GUID-9AE5D830-646E-4992-B7C1-0885E91C6BBF-low.png', None], dtype=object) ]
docs.vmware.com
Discover how well any company is doing on the PR and SEO front:
- For a well-known Fortune 500 company, extract company mentions using the entity filter (e.g. for Apple Computer filter by organization:"apple" ). If the company you’re tracking is not regularly covered, just enter the company’s name as a keyword.
- Extract backlinks to the company URL from blogs & discussions using the external_links filter (external_links:https\:\/\/*)
- Refine results by traffic rank, spam score, thread participant count, or social filters
https://docs.webhose.io/v1.0/docs/pr-seo
2018-08-14T14:23:24
CC-MAIN-2018-34
1534221209040.29
[]
docs.webhose.io
Grid Overview
Configuring Grid Behavior
Kendo Grid supports paging, sorting, grouping, and scrolling. Configuring any of these Grid behaviors is done using simple boolean configuration options. For example, the following snippet shows how to enable all of these behaviors.
Enabling Grid paging, sorting, grouping, and scrolling
$(document).ready(function(){
    $("#grid").kendoGrid({
        groupable: true,
        scrollable: true,
        sortable: true,
        pageable: true
    });
});
By default, paging, grouping, and sorting are disabled. Scrolling is enabled by default.
Performance with Virtual Scrolling
When binding to large data sets or when using large page sizes, reducing active in-memory DOM objects is important for performance. Kendo Grid provides built-in UI virtualization for highly optimized binding to large data sets. Enabling UI virtualization is done via simple configuration.
Enabling Grid UI virtualization
$(document).ready(function(){
    $("#grid").kendoGrid({
        scrollable: {
            virtual: true
        }
    });
});
Accessing an Existing Grid
You can reference an existing Grid instance via jQuery.data(). Once a reference has been established, you can use the Grid API to control its behavior.
Accessing an existing Grid instance
var grid = $("#grid").data("kendoGrid");
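Once you hold that reference, the widget's client-side API is available on it. A small sketch — these are standard Kendo UI Grid members, but check the API reference for your version:
var grid = $("#grid").data("kendoGrid");
// re-read data from the grid's data source
grid.dataSource.read();
// re-render the widget after external changes
grid.refresh();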
http://docs.telerik.com/kendo-ui/web/grid/overview
2015-02-01T07:04:00
CC-MAIN-2015-06
1422115855897.0
[]
docs.telerik.com
Metaprogramming¶
Expr objects may also be nested.
Quoting¶
Note that equivalent expressions may be constructed using parse() or the direct Expr form:
julia> :(a + b*c + 1) == parse("a + b*c + 1") == Expr(:call, :+, :a, Expr(:call, :*, :b, :c), 1)
true
Interpolation with $ splices values into quoted expressions. For example:
julia> a = 1;
julia> ex = :($a + b)
:(1 + b)
- The value of the variable a at expression construction time is used as an immediate value in the expression. Thus, the value of a when the expression is evaluated no longer matters: the value in the expression is already 1, independent of whatever the value of a might be.
- On the other hand, the symbol :b is used in the expression construction, so the value of the variable b at that time is irrelevant — :b is just a symbol and the variable b need not even be defined. At expression evaluation time, however, the value of the symbol :b is resolved by looking up the value of the variable b.
Functions on Expressions¶
Hold up: why macros?¶
julia> ex = macroexpand( :(@sayhello("human")) )
:(println("Hello, ","human","!"))
                     ^^^^^^^
                     interpolated: now a literal string
julia> typeof(ex)
Expr
A simple assertion macro can be written as:
macro assert(ex)
    return :($ex ? nothing : error("Assertion failed: ", $(string(ex))))
end
This macro can be used like this:
julia> @assert 1==1.0
julia> @assert 1==0
ERROR: assertion failed: 1 == 0
 in error at error.jl:21
In place of the written syntax, the macro call is expanded at parse time to its returned result. The macro can also accept an optional message argument:
macro assert(ex, msgs...)
    msg_body = isempty(msgs) ? ex : msgs[1]
    msg = string("assertion failed: ", msg_body)
    return :($ex ? nothing : error($msg))
end
Comparing the two forms with the macroexpand() function:
julia> macroexpand(:(@assert a==b))
:(if a == b
        nothing
    else
        Base.error("assertion failed: a == b")
    end)
julia> macroexpand(:(@assert a==b "a should equal b!"))
:(if a == b
        nothing
    else
        Base.error("assertion failed: a should equal b!")
    end)
Hygiene¶
macro zerox()
    return esc(:(x = 0))
end
function foo()
    x = 1
    @zerox
    x  # is zero
end
This kind of manipulation of variables should be used judiciously, but is occasionally quite handy.
Code Generation¶
Non-Standard AbstractString Literals¶
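The Code Generation heading above lost its examples in extraction. As a generic sketch of the idea (my own illustration, not the manual's original wording), a family of similar functions can be generated in a loop with @eval and interpolation:
for (name, val) in [(:half, 0.5), (:third, 1/3), (:quarter, 0.25)]
    # $name becomes the function name, $val is spliced in as a constant
    @eval $name(x) = $val * x
end
half(10)     # 5.0
quarter(10)  # 2.5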
http://docs.julialang.org/en/latest/manual/metaprogramming/
2015-02-01T07:07:11
CC-MAIN-2015-06
1422115855897.0
[]
docs.julialang.org
Description Gets the theme that is currently applied to the application UI. Syntax GetTheme ( {boolean fullpath} ) Return value A string whose value is the theme name (or theme path and name) that is currently applied to the application. If any argument's value is null, the method returns null. An empty string will be returned if one of the following happens: if no theme is applied ("Do Not Use Themes" is selected in the Themes tab of the Application Properties dialog), or if a theme is applied and the Windows classic style option is selected in the project painter when building the application, or if a theme is applied and the application's executable file cannot find the "theme" folder at runtime. Examples This example gets the theme name that is currently applied to the application: String ls_themename ls_themename = GetTheme() See also Specifying a theme for the application UI in Users Guide
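The syntax above also lists an optional fullpath argument; presumably passing true returns the theme path together with the name, as in this sketch (not an example from the official documentation):
String ls_themepath
ls_themepath = GetTheme(true)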
https://docs.appeon.com/pb2021/powerscript_reference/ch02s04s350.html
2021-11-27T07:46:13
CC-MAIN-2021-49
1637964358153.33
[]
docs.appeon.com
Whois History API
API Structure
The Whois History lookup API returns historical records for a given domain. The Domain Whois Lookup API does not show historical records about a given domain name and only refers to the current domain ownership details. Hence, you can use the domain Whois history lookup service to obtain information about all previous registration modifications for a particular domain name since it was first registered. To try the Whois History API with real values and run code, just click here. You can always contact us for support and ask for more assistance. We'll be glad to assist you with building your product.
https://docs.deepinfo.com/docs/whois-history-api-1
2021-11-27T09:15:58
CC-MAIN-2021-49
1637964358153.33
[]
docs.deepinfo.com
Install Wizard
After activating Billiard, a quick setup wizard will be opened. It will help you to install the theme's required plugins, demo content, and settings. In just a few clicks, your website will be ready for use.
Read the welcome message and click “Let`s go!”. The wizard will ask you to install the theme's required plugins. Click on “Begin activating plugins”:
Then choose the “Install” and “Apply” options in Bulk Action in order to install and activate the plugins:
Default Content. By clicking “Continue” the setup wizard will import all demo data – pages, posts, portfolios and media files. Please note that the import process will take the time needed to download all attachments:
Theme customization. Here you can read information about widgets and the child theme. Click “Continue”:
The last step is setting up the menu. Go to Appearance ⇒ Menus. All you need is to mark “Primary menu” as Top navigation and Save changes:
https://docs.foxthemes.me/knowledge-base/355-installation/
2021-11-27T09:02:42
CC-MAIN-2021-49
1637964358153.33
[array(['https://docs.foxthemes.me/wp-content/uploads/2021/07/loq0nJYbSAq8S7_ASRQX2A.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/07/vD2kEJLVTAeBVxq8AtOVOg.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/07/r-0z0UQ9RjiRfxCEk8RKVQ.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/07/r4g9rmRXT2SJ1oO3l0l0_g.png', None], dtype=object) ]
docs.foxthemes.me
In Eventmie Pro, you can create a single & multiple days event. Each event belongs to an organizer. Admin & Organizers can create events from the front-end.
{success} We'll discuss repetitive & online events and classes in the next sections. We'll focus on each option in their own section to avoid confusion.
{primary} From here, we'll be guiding you through all the features of the front-end. Simple event here refers to a single or multiple days event.
The first step is mandatory to proceed to the next step. Add these details to proceed to the next steps -
Set timing for a single or multiple days event. Form Fields
{warning} Do not check the Add Repetitive Schedules option; we'll discuss it in its own section.
Create tickets for the event. Free or Paid. Form Fields
Event location details. You need to enter the Google map Lat-long manually to show the venue pinned on the Google map on the event page. Form Fields
{warning} Do not check the Online Event option; we'll discuss it in its own section.
Upload event poster, thumbnail, and images. Please upload the mentioned size of images. You can also crop and adjust them. Form Fields
Write meta titles, keywords, and descriptions. These tags will be specific for each event, for event-specific SEO. Form Fields
Final step - Un-publish/Publish anytime.
{primary} You can also create new Tags directly from the Powered By tab. We'll guide you about this in the Tags section.
{success} See the Published event on the listing page Browse Events
{primary} An event can be published only after completing - Details, Timings, Tickets, Location & Media.
https://eventmie-pro-docs.classiebit.com/docs/1.5/events/simple-events
2021-11-27T07:42:03
CC-MAIN-2021-49
1637964358153.33
[]
eventmie-pro-docs.classiebit.com
Free Support
We provide Free Customer Support for all Curly Themes products through our quick and easy ticket application interface. Please limit your inquiries to problems related to the Image Comparison Plugin, its setup and features. For any problems concerning third-party plugins you should address the application’s developer.
Note: We can not offer support for custom CSS code customization. You should ask a professional developer for aid if you require help with advanced customization and programming.
Before writing a ticket, please get your Purchase Code and your License Certificate. Before opening a new ticket, please make sure you have read through all our documentation. Also, our FAQ section contains helpful answers to questions we often receive from our customers. Following these steps is in the users’ interest, as they will save valuable time. Still, we are always happy to respond to any inquiries you may have related to the Image Comparison Plugin.
Installing the Plugin
There are two ways to install the plugin.
FTP Upload
- Step 1 – Unzip the .zip package file you downloaded from Code Canyon and locate the folder named image-comparison
- Step 2 – Upload this folder on your server in your WordPress directory to /wp-content/plugins/
WordPress Upload
The second way to install the plugin is by logging in to your website's WordPress Dashboard and:
- Step 1 – Go to Appearance > Plugins > Add New > Upload
- Step 2 – From there, you should select the file images-comparison.zip from your computer
- Step 3 – After clicking the Install Now button, the installation process is finished
- Step 4 – After installing the plugin, it is recommended to activate it by clicking Activate Plugin
Activating the Images Comparison Plugin
After you have completed the install process, in either of the two ways, you need to activate it in case you didn’t already in the WordPress installation method. Log in to the WordPress Dashboard, go to Plugins > Installed Plugins and select Images Comparison Plugin. Click the Activate button and you can start using the plugin.
Shortcode Usage
If you need a simple images comparison display, with just the two photos, side by side, this simple shortcode does the trick:
[images-comparison image_1="" image_2=""]
If you wish to choose the way your images are compared, horizontal or vertical, you need to use the orientation parameter. Horizontal means your comparison point will move left and right between images. Vertical means your comparison point will move up and down between images. Default is horizontal.
[images-comparison orientation="vertical" image_1="" image_2=""]
If you wish to change the position of your comparison point between the two images, you can use the default_offset_pct parameter. Default is 0.5.
[images-comparison orientation="vertical" default_offset_pct="0.4" image_1="" image_2=""]
If you wish to add labels for your photos, you can use the before_label and after_label parameters. The before label is for image 1 and the after label is for image 2.
[images-comparison before_label="Before" after_label="After" image_1="" image_2=""]
If you wish to add CSS classes for the element, you can use the el_css parameter. You can add classes separated by commas.
[images-comparison el_css="class_css" image_1="" image_2=""] VC Element Usage To add the Image Comparison Visual Composer element in your page, you need to click on the + Add Element button of the VC builder and, from the elements list, select Image Comparison. You can then customize it with the following options: Image Comparison Settings: - Before Image – Use this option to upload the before image. - After Image – Use this option to upload the after image. - Before Label – Type in the label for the before image. - After Label – Type in the label for the after image. - Default Offset Point – Type in the starting comparison point between images.. - Orientation – You can use this select box to choose how you wish to compare the images. Horizontal means your comparison point will move left and right between images. Vertical means your comparison point will move up and down between images. - CSS Classes – Use this field to add custom CSS classes. 1. I want just a basic image comparison display, just two images side by side. How do I do that? You can use the shortcode: [images-comparison image_1="" image_2=""] or you can use the Visual Composer element. 2. Can I compare images top to bottom? Yes, you need to use the following shortcode: [images-comparison orientation="vertical" image_1="" image_2=""] or you can use the Orientation option from the VC element. 3. How can I move the default comparison point? You can change the Default Offset Point in the VC element or add the parameter in the shortcode: [images-comparison orientation="vertical" default_offset_pct="0.4" image_1="" image_2=""] 4. Which values can I add for the offset point? The value of the offset point is defined with a number between 0 and 1. In percentages, a 0.5 value means the point will be at a 50% distance from both margins. A 0.3 value means the point will be at a 30% distance from the left/top margin and 70% distance from the right/bottom margin. 5. Can I add my own CSS? Yes, you have a dedicated field in the VC element and a parameter for the shortcode: [images-comparison el_css="class_css" image_1="" image_2=""] Release Notes: - Image Comparison Plugin 1.0 (August 15, 2017) Initial Release
https://docs.curlythemes.com/image-comparison/
2021-11-27T08:57:26
CC-MAIN-2021-49
1637964358153.33
[array(['https://docs.curlythemes.com/image-comparison/wp-content/uploads/sites/21/2017/08/simple-weather-login.jpg', None], dtype=object) array(['https://docs.curlythemes.com/image-comparison/wp-content/uploads/sites/21/2017/08/image-comparison-upload.jpg', None], dtype=object) array(['https://docs.curlythemes.com/image-comparison/wp-content/uploads/sites/21/2017/08/images-comparison-activation.jpg', None], dtype=object) array(['https://docs.curlythemes.com/image-comparison/wp-content/uploads/sites/21/2017/08/Screen-Shot-2017-08-15-at-19.30.17.png', None], dtype=object) array(['https://docs.curlythemes.com/image-comparison/wp-content/uploads/sites/21/2017/08/Screen-Shot-2017-08-15-at-19.54.29.png', None], dtype=object) ]
docs.curlythemes.com
Error Codes
This page gives you information about what errors can occur and how to handle them. Deepinfo utilizes HTTP response error codes to show the success or failure of your API requests. If your request fails, Deepinfo returns an error with the appropriate status code. Generally, status code ranges are described as follows.
- 2XX codes (success): A successful status code verifies that your request worked as expected.
- 4XX codes (client error): These codes indicate that the request failed because of something on the client's side.
- 5XX codes (server error): These are not common, but indicate possible errors with Deepinfo servers.
Deepinfo API Exception Codes
https://docs.deepinfo.com/docs/error-codes
2021-11-27T07:59:59
CC-MAIN-2021-49
1637964358153.33
[]
docs.deepinfo.com
The flow control is responsible for optimizing the flow through a network. Every object that is connected to the flow control is part of the network that this flow control will optimize. When the flow rates on an object have changed, that object may schedule a flow recalculation, for example at the time it predicts a tank full or empty, product reaching the end of a flow conveyor, or the mixer reaching the required amount for its current step. Some user actions trigger a recalculation instantly, for example when a flow port is opened or closed or a maximum flow rate is changed, FloWorks will signal the relevant flow control that it should re-optimize the flow through the network the changed object is part of. Most models have one flow control that is automatically created when you add the first FloWorks object, such as a source or a tank, to the model. A newly created flow control has the 3D shape of an air traffic control tower. Since you will hardly interact with the object, it is possible to reduce it to a small flat object at ground level by selecting "Minimize control" from the flow control's properties. In general you will want every FloWorks object in the model, whether it is a fixed object such as a source or a tank or a flow item such as a vessel or truck, to be connected to exactly one flow control. It is possible to remove an object from its flow control's members list, but flow rates will no longer be calculated for that object. This option is only useful, for example, if you are running FloWorks in Express Mode, which has a limitation on the number of objects in a network, and you have many truck flow items waiting in a queue. In this case, the trucks do not need to have flow rates assigned until they actually connect to a tank, for example in a loading point, so you can disconnect all trucks except the active ones from the flow control, thereby reducing the number of members of the flow control. It is possible to have more than one flow control in your model. By default, when you drag a FloWorks object into your model from the library, it will automatically be connected to the first flow control in the model. If no flow control is present, one will be created at the location (0, 0, 0). Usually, having a single flow control that all the FloWorks objects in your model are connected to is sufficient. One reason to want to have more than one flow control, would be that you have multiple independent flow networks. In this case creating a separate flow control for each disjoint component of the network will speed up the calculations, as FloWorks will be able to optimize multiple networks with fewer objects and constraints, rather than one relatively large network. However, this is a refactoring step that you would usually only implement when you have already built your model and find that the optimization speed has an impact on the performance of your model. The flow control has a minimal set of events: On Reset, On Message and On Draw. These are described on the Fixed Resource - Events page. The Flow Control has the following additional event: This event occurs each time after the FloWorks engine has recalculated and assigned all the flow rates for objects attached to this flow control. This event has no parameters. The flow control does not set any states, it will always be in the idle state. The flow control does not keep track of any statistics. The Flow Control object uses the following properties panels:
https://docs.flexsim.com/en/22.0/modules/FloWorks/manual/Reference/Objects/FlowControl.html
2021-11-27T09:06:49
CC-MAIN-2021-49
1637964358153.33
[]
docs.flexsim.com
The theme comes with a number of shortcodes allowing you to add the info where you want the relevant content to show up. In addition, you can use WPBakery shortcodes to add new elements to the page in a simple way:
1. Create a new page;
2. Press “Backend Editor”;
3. Select what you want to add to your page (“Add Element”, “Add Text Block”, “Add Template”);
- Add Element. Here you can look at all our theme shortcodes, which are available through the WPBakery Page Builder interface. The full list of elements (shortcodes):
- Add Text Block. This option allows you to insert paragraph-type text and format it using the WYSIWYG editor. Moreover, the text block allows adding media (images and videos):
- Add Template. Here you can select the template which you want to add to your page (the preview images show how these templates will look on the page):
If you select a shortcode which you want to use on your page, press it and it will be added to your page. There you can set up your shortcode. Also on the page, you can set up rows and columns.
“Row” is the main content element of WPBakery Page Builder (formerly Visual Composer). Rows are used to divide your page into logical blocks with columns; the columns later hold your content blocks. Rows can be divided into layouts (e.g. 1/2 + 1/2, 1/3 + 1/3 + 1/3, and so on). Your page can have an unlimited number of rows. To change a row's position, click the row's drag handler (top left corner of the row) and drag the row around (vertical axis).
Add column. This option adds one more column to your page:
Toggle row. This option lets you “exclude” the row:
Edit this row. This option lets you set up the row the way you want.
General.
- Row stretch. Select stretching options for row and content (Note: stretching may not work properly if the parent container has the “overflow: hidden” CSS property).
- Column gap. Set the gap between columns; all columns within the row will be affected by this value.
- Full height row. Set the row to be full height. Note: if the content exceeds the screen size, then the row will be bigger than the screen height as well.
- Equal height. Set all columns to be equal in height. Note: all columns will have the same height as the longest column.
- Content position. Set the content position within columns – Default, Top, Middle, Bottom. Note: the Default value will be Top, or another value if defined within the theme.
- Use video background. Set a YouTube background for the row.
- Parallax. Add a parallax-type background for the row (Note: if no image is specified, parallax will use the background image from Design Options).
- CSS Animation. You can select the type of animation for the element to be animated when it “enters” the viewport of the browser (Note: works only in modern browsers).
- Row ID. You can add an ID to your row. The ID attribute specifies a unique id for an HTML element.
- Disable row. Here you can select the option so the row won't be visible on the public side of your website. You can switch it back any time.
- Extra class name. This option allows you to style a particular content element differently – add a class name and refer to it in custom CSS.
- Enable Overlay. This option enables an overlay on your page.
Design Options.
Design Options allows you to add additional or modify existing styling of the content elements by applying the most common style properties and effects.
- CSS box. This option allows you to control paddings, margins, border, and radius.
- Border color. This option allows selecting the color of the border.
- Border style. This option allows selecting the style of the border.
- Border radius. This option allows selecting the radius of the border.
- Background. This option allows setting different types of background images.
- Box controls. This option allows selecting the controls.
Responsive Margins and Paddings. This option allows changing padding/margin for particular rows, columns and other elements on different desktops and devices.
Delete row. This option allows deleting the row.
https://docs.foxthemes.me/knowledge-base/1912-800-intro/
2021-11-27T08:35:13
CC-MAIN-2021-49
1637964358153.33
[array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-e5nPJiQT3T.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-5dtjRLYJtf.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-ctSpWsWdt9.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-hoFZV1uQnz.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-d1jmt66dhX.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-cy7l9waLBe.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-4YV6LfLPSb.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-OQrSWVoIYS.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-UQHrUUcS9z.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-0cihqd3HSs.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-0mQ4rIpL4I.png', None], dtype=object) array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-blyGt2DJJg.png', None], dtype=object) ]
docs.foxthemes.me
SharePoint Online: Using SharePoint Search Query tool to look at managed properties I have found this SharePoint Search query tool pretty useful when looking at Managed Properties so going to share real quick here how to use this query tool in SharePoint Online. Download link: and once you run the install file open up the .exe file and go to the Connection section. 1. SharePoint Site URL: Enter your tenant URL 2. Authentication: Authenticate using specific user account 3. AuthenticationMethod: SharePoint Online ( You can leave the rest as defaults) 4. Click Sign in to SP Online, this will bring up a O365 credential prompt window and enter your tenant domain credentials 5. Once successfully signed in now you can run queries. 6. So in the Query Text box, enter the query you want to search for. 7. Next under select properties you can type managed properties separated by commas: For eg: Author, path, refinablestring01 etc Too small to read the below image? please click on it :) 8. Look at the Primary results tab and you will see whether your managed property has content in it. If the crawl has not yet picked up the mapping then the value here will be empty. 9. As you can see in my case I am searching on ".docx" file and I find SSN.docx in the results with Author managed property value populated with the user name. You could also run a query like author:test1 in the query window and that should give you all documents whose author is test1. Thanks!
https://docs.microsoft.com/en-us/archive/blogs/sharepointsearch/sharepoint-online-using-sharepoint-search-query-tool-to-look-at-managed-properties
2021-11-27T07:45:02
CC-MAIN-2021-49
1637964358153.33
[]
docs.microsoft.com
Penetration testing One of the benefits of using Azure for application testing and deployment is that you can quickly get environments created. You don’t have to worry about requisitioning, acquiring, and “racking and stacking” your own on-premises hardware. Quickly creating environments is great – but you still need to make sure you perform your normal security due diligence. One of the things you likely want to do is penetration test the applications you deploy in Azure.. As of June 15, 2017, Microsoft no longer requires pre-approval to conduct a penetration test against Azure resources. This process is only related to Microsoft Azure, and not applicable to any other Microsoft Cloud Service. Important While notifying Microsoft of pen testing activities is no longer required customers must still comply with the Microsoft Cloud Unified Penetration Testing Rules of Engagement. Standard tests you can perform include: - Tests on your endpoints to uncover the Open Web Application Security Project (OWASP) top 10 vulnerabilities - Fuzz testing of your endpoints - Port scanning of your endpoints One type of pen test that you can’t perform is any kind of Denial of Service (DoS) attack. This test includes initiating a DoS attack itself, or performing related tests that might determine, demonstrate, or simulate any type of DoS attack. Note Microsoft has partnered with BreakingPoint Cloud to build an interface where you can generate traffic against DDoS Protection-enabled public IP addresses for simulations. To learn more about the BreakingPoint Cloud simulation, see testing through simulations. Next steps - Learn more about the Penetration Testing Rules of Engagement.
https://docs.microsoft.com/en-us/azure/security/fundamentals/pen-testing
2021-11-27T10:17:32
CC-MAIN-2021-49
1637964358153.33
[]
docs.microsoft.com
- Blog style – Select the type of blog: modern, grid with image background, grid with letter background
- Blog title – Enter a title for the blog
- Show sidebar on pages – Display the sidebar on selected pages
- Show posts from not all categories? – This option lets the blog show posts from only selected categories
- Show posts from categories – Select the categories whose posts are shown in the blog
- Transparent header for single post – Turn on the transparent header for single posts
- Tags in posts – Display tags in posts
- Share Buttons in post – Display social sharing buttons
- Categories in posts – Display categories in posts
- Author in post details page – Display the author on the post details page
- Biographical Info from Users profile – Display biographical info from the user's profile on the post details page
- Show Recent Posts – Here you can select to show or hide recent posts
https://docs.foxthemes.me/knowledge-base/2076-blog/
2021-11-27T07:59:05
CC-MAIN-2021-49
1637964358153.33
[array(['https://docs.foxthemes.me/wp-content/uploads/2021/03/file-2XhSkcdVpq.jpg', None], dtype=object) ]
docs.foxthemes.me
Deploy Microsoft Teams Rooms with Exchange on premises
Read this topic for information on how to deploy Microsoft Teams Rooms in a hybrid environment with Exchange on-premises and Microsoft Teams.
If your organization has a mix of services, with some hosted on-premises and some hosted online, then your configuration will depend on where each service is hosted. This topic covers hybrid deployments for Microsoft Teams Rooms with Exchange hosted on-premises. Because there are so many different variations in this type of deployment, it's not possible to provide detailed instructions for all of them. The following process will work for many configurations. If the process isn't right for your setup, we recommend that you use Windows PowerShell to achieve the same end result as documented here, and for other deployment options. Before you deploy Microsoft Teams Rooms with Exchange on premises, be sure you have met the requirements. For more information, see Microsoft Teams Rooms requirements.
If you are deploying Microsoft Teams Rooms with Exchange on-premises, you will be using Active Directory administrative tools to add an email address for your on-premises domain account. This account will be synced to Microsoft 365 or Office 365. You will need to:
Create an account and synchronize the account with Azure Active Directory.
Enable the remote mailbox and set properties.
Assign a Microsoft 365 or Office 365 license.
In the Active Directory Users and Computers tool, right-click on the folder or Organizational Unit that your Microsoft Teams Rooms accounts will be created in, click New, and then click User.
Type the display name into the Full name box, and the alias into the User logon name box. Click Next.
Type the password for this account. You'll need to retype it for verification. Make sure the Password never expires checkbox is the only option selected.
Note Selecting Password never expires is a requirement for Microsoft Teams Rooms. Your domain rules may prohibit passwords that don't expire. If so, you'll need to create an exception for each Microsoft Teams Rooms device account.
After you've created the account, run a directory synchronization. When it's complete, go to the users page in your Microsoft 365 admin center and verify that the account created in the previous steps has merged to online.
Enable the remote mailbox and set properties
Open the Exchange Management Shell or connect to your Exchange server using remote PowerShell.
In Exchange PowerShell, create a mailbox for the account (mailbox-enable the account) by running the following command:
Enable-Mailbox [email protected] -Room
For detailed syntax and parameter information, see Enable-Mailbox.
In Exchange PowerShell, configure the following settings on the room mailbox to improve the meeting experience: AutomateProcessing: AutoAccept, AddOrganizerToSubject: false, DeleteComments: false, DeleteSubject: false, RemovePrivateProperty: false, AddAdditionalResponse: true, AdditionalResponse: "This is a Microsoft Teams Meeting room!" (the additional text to add to the meeting request). This example configures these settings on the room mailbox named Project-Rigel-01.
Set-CalendarProcessing -Identity "Project-Rigel-01" -AutomateProcessing AutoAccept -AddOrganizerToSubject $false -DeleteComments $false -DeleteSubject $false -RemovePrivateProperty $false -AddAdditionalResponse $true -AdditionalResponse "This is a Microsoft Teams Meeting room!"
For detailed syntax and parameter information, see Set-CalendarProcessing.
Assign a Microsoft 365 or Office 365 license
Connect to Azure Active Directory. For details about Azure Active Directory, see Azure ActiveDirectory (MSOnline) 1.0.
Note Azure Active Directory PowerShell 2.0 is not supported.
The device account needs to have a valid Microsoft 365 or Office 365 license, or Exchange and Microsoft Teams will not work.
If you have the license, you need to assign a usage location to your device account—this determines what license SKUs are available for your account. You can use Get-MsolAccountSku to retrieve a list of available SKUs.
- Next, you can add a license using the Set-MsolUserLicense cmdlet. In this case, $strLicense is the SKU code that you see (for example, contoso:STANDARDPACK).
Set-MsolUser -UserPrincipalName '[email protected]' -UsageLocation 'US'
Get-MsolAccountSku
Set-MsolUserLicense -UserPrincipalName '[email protected]' -AddLicenses $strLicense
For detailed instructions, see Assign licenses to user accounts with Office 365 PowerShell.
For validation, you should be able to use any client to log in to this account.
Related topics
Configure accounts for Microsoft Teams Rooms
Plan for Microsoft Teams Rooms
Deploy Microsoft Teams Rooms
Configure a Microsoft Teams Rooms console
Manage Microsoft Teams Rooms
https://docs.microsoft.com/en-us/microsoftteams/rooms/with-exchange-on-premises
2021-11-27T09:53:48
CC-MAIN-2021-49
1637964358153.33
[]
docs.microsoft.com
The Playables API provides a way to create tools, effects, or other gameplay mechanisms by organizing and evaluating data sources in a tree-like structure known as the PlayableGraph. Although the Playables API is currently limited to animation, audio, and scripts, it is a generic API that will eventually be used by video and other systems.
The animation system already has a graph editing tool; it's a state machine system that is restricted to playing animation. The Playables API is designed to be more flexible and to support other systems. The Playables API also allows for the creation of graphs not possible with the state machine. These graphs represent a flow of data, indicating what each node produces and consumes. In addition, a single graph is not limited to a single system. A single graph may contain nodes for animation, audio, and scripts.
The Playables API allows for dynamic animation blending. This means that objects in the scenes could provide their own animations. For example, animations for weapons, chests, and traps could be dynamically added to the PlayableGraph and used for a certain duration.
The Playables API allows you to easily play a single animation without the overhead involved in creating and managing an AnimatorController asset.
The Playables API allows users to dynamically create blending graphs and control the blending weights directly frame by frame.
A PlayableGraph can be created at runtime, adding playable nodes as needed, based on conditions. Instead of having a huge “one-size-fits-all” graph where nodes are enabled and disabled, the PlayableGraph can be tailored to fit the requirements of the current situation.
New in Unity 2017.1
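To make the "play a single animation" point above concrete, here is a minimal sketch using the Playables API; the MonoBehaviour and its clip field are illustrative assumptions, not part of the manual:
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

public class PlaySingleClip : MonoBehaviour
{
    public AnimationClip clip;   // assign in the Inspector
    private PlayableGraph graph;

    void Start()
    {
        graph = PlayableGraph.Create("PlaySingleClip");
        var output = AnimationPlayableOutput.Create(graph, "Animation", GetComponent<Animator>());
        var clipPlayable = AnimationClipPlayable.Create(graph, clip);
        output.SetSourcePlayable(clipPlayable);
        graph.Play();            // no AnimatorController asset involved
    }

    void OnDestroy()
    {
        if (graph.IsValid())
            graph.Destroy();     // release the graph's native resources
    }
}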
https://docs.unity3d.com/ru/2019.4/Manual/Playables.html
2021-11-27T08:38:01
CC-MAIN-2021-49
1637964358153.33
[]
docs.unity3d.com
iOS integration:
- Add the Alipay .xcframework to the "Frameworks" folder of your project. Make sure "Copy items if needed" is checked.
- Check the "Frameworks, Libraries, and Embedded Content" section under the general settings tab of your application's target. Ensure the Embed dropdown has Embed and Sign selected for the framework.
- Configure Alipay in OPPCheckoutSettings along with other customizations. Make sure Alipay is included in the payment brand list.
- Create OPPPaymentParams and submit the transaction. When the transaction completes, read the signed order info from transaction.brandSpecificInfo[OPPTransactionAlipaySignedOrderInfoKey] and pass it to the Alipay SDK:
NSString *alipaySignedOrderInfo = transaction.brandSpecificInfo[OPPTransactionAlipaySignedOrderInfoKey];
[[AlipaySDK defaultService] payOrder:alipaySignedOrderInfo fromScheme:@"com.companyname.appname.payments" callback:^(NSDictionary *resultDic) {
    // Send request to your server to obtain transaction status
}];
let alipaySignedOrderInfo = transaction.brandSpecificInfo?[OPPTransactionAlipaySignedOrderInfoKey]
AlipaySDK.defaultService().payOrder(alipaySignedOrderInfo, fromScheme: "com.companyname.appname.payments", callback: { resultDic in
    // Send request to your server to obtain transaction status
})
Android integration:
- Add the Alipay SDK module dependency to the build.gradle:
// this name must match the library name defined with an include: in your settings.gradle file
implementation project(":alipaySdk")
- Make sure Alipay is included in the payment brand list:
Set<String> paymentBrands = new LinkedHashSet<String>();
paymentBrands.add("ALIPAY");
CheckoutSettings checkoutSettings = new CheckoutSettings(checkoutId, paymentBrands);
val paymentBrands = hashSetOf("ALIPAY")
val checkoutSettings = CheckoutSettings(checkoutId, paymentBrands)
And you are done!
Create the payment params with the ALIPAY brand and submit the transaction:
val paymentParams = PaymentParams(checkoutId, "ALIPAY")
val transaction = Transaction(paymentParams)
/* use IProviderBinder to interact with service and submit transaction */
providerBinder.submitTransaction(transaction)
Call the native method from the Alipay SDK with the value alipaySignedOrderInfo from the transaction which you received in the callback:
PayTask payTask = new PayTask(YourActivity.this, true);
String alipaySignedInfo = transaction.getBrandSpecificInfo().get(Transaction.ALIPAY_SIGNED_ORDER_INFO_KEY);
String result = payTask.pay(alipaySignedInfo);
/* process the result */
val payTask = PayTask(this@YourActivity, true)
val alipaySignedInfo = transaction.brandSpecificInfo[Transaction.ALIPAY_SIGNED_ORDER_INFO_KEY]
val result = payTask.pay(alipaySignedInfo)
/* process the result */
NOTE: This method must be called from a different thread.
https://quaife.docs.oppwa.com/tutorials/mobile-sdk/alipay
2021-11-27T08:35:39
CC-MAIN-2021-49
1637964358153.33
[array(['https://i.ibb.co/KrnxBSQ/Quaife-Online-Logo.png', 'Home'], dtype=object) ]
quaife.docs.oppwa.com
This walkthrough uses KPI 1 to explain how each component is used. You can drag and drop KPI 2 and 3 to see the same design, but with different colors and slightly different expression patterns. When you drag and drop this pattern onto your interface, 233 lines of expression will be added to the section where you dragged it. At the top of the pattern, local variables set up the data that will be used in the KPIs. The first visible component is a card layout in a column layout with the label, "TOTAL ACTIVE OPPS", in a rich text display field. For this particular KPI, we use a count() statement to display a count of the total active opportunities. We then use another rich text item to display the comparison between active opportunities and the opportunity target. We use an if() statement that changes the color of the rich text depending on whether the opportunity target was hit or not. KPIs 2 and 3 follow the same structure, with their own expressions to determine the numerical amounts and colors in lines 76 through 233.
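As a rough, hypothetical illustration of the count()/if() logic described above (not the pattern's actual expression, which spans 233 lines, and with made-up local variable names), the colored comparison text might be assembled like this:
a!richTextItem(
  text: count(local!activeOpportunities) & " of " & local!opportunityTarget & " target",
  color: if(
    count(local!activeOpportunities) >= local!opportunityTarget,
    "POSITIVE",
    "NEGATIVE"
  )
)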
https://docs.appian.com/suite/help/20.3/kpis-pattern.html
2021-11-27T09:23:12
CC-MAIN-2021-49
1637964358153.33
[]
docs.appian.com
Date: Fri, 18 Jul 1997 18:35:22 -0700 (PDT) From: Doug White <[email protected]> To: Swee-Chuan Khoo <[email protected]> Cc: [email protected] Subject: Re: missing data in /etc/rc.conf Message-ID: <Pine.BSF.3.96.970718183447.1390H-100000@localhost> In-Reply-To: <Pine.SOL.3.96.970718171552.6104A-100000@topgun> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On Fri, 18 Jul 1997, Swee-Chuan Khoo wrote: > On Fri, 18 Jul 1997, Doug White wrote: > > > i find out that when we do installation over network, > > > the /etc/rc.conf file is not configured and i will have to > > > configure it after rebooting. It is different from 2.1, why > > > remove this feature? > > > > When I tried 2.2.2 last week, it appeared to configure up OK, so I don't > > quite know what you're getting at. > > well, i did a ftp install from my own mirror site on another un*x > machine 3 times and each time when the installation is done, i have > to edit /etc/rc.conf file to enter the hostname, ip address, default > router and such. Hm, it worked OK earlier this week. Are you sure you're exiting the post-install menu and 'exit install'ing before rebooting? Doug White | University of Oregon Internet: [email protected] | Residence Networking Assistant | Computer Science Major Spam routed to /dev/null by Procmail | Death to Cyberpromo Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=762912+0+archive/1997/freebsd-questions/19970713.freebsd-questions
2021-11-27T09:21:23
CC-MAIN-2021-49
1637964358153.33
[]
docs.freebsd.org
The meridian-minion package ships the following directory layout:
meridian-minion
├── .m2
├── etc
│   └── featuresBoot.d
│       └── custom.boot
└── repositories
Repositories may take a few moments to update after updating the configuration. To add features at startup, install the Karaf feature repository URIs and then install the Karaf features. Features listed in the features.boot files of the Maven Repositories will take precedence over those listed in featuresBoot.d. Any existing repository registered in org.ops4j.pax.url.mvn.repositories will be overwritten.
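For illustration only — assuming custom.boot follows the usual Karaf convention of one feature name per line — a drop-in file under featuresBoot.d might look like this (the feature names are placeholders, not features shipped with Meridian):
etc/featuresBoot.d/custom.boot:
my-custom-feature
another-optional-feature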
https://docs.opennms.com/meridian/2021.1.5/development/minion/container.html
2021-11-27T08:14:39
CC-MAIN-2021-49
1637964358153.33
[]
docs.opennms.com
TSRM Ticketing Plugin
The TSRM ticketing plugin creates TSRM incidents in response to Meridian alarms.
Setup
First, configure the following properties in tsrm.properties:
- tsrm.url – TSRM Endpoint URL
- tsrm.ssl.strict – Strict SSL Check (true/false)
- tsrm.status.open – TSRM status for open ticket
- tsrm.status.close – TSRM status for closed ticket
Next, add tsrm-troubleticketer to the featuresBoot property in the ${OPENNMS_HOME}/etc/org.apache.karaf.features.cfg
Restart OpenNMS. When OpenNMS has started again, login to the Karaf Shell and install the feature:
feature:install tsrm-troubleticketer
The plugin should be ready to use. When troubleshooting, consult the following log files:
${OPENNMS_HOME}/data/log/karaf.log
${OPENNMS_HOME}/logs/trouble-ticketer.log
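Putting the options above together, a minimal tsrm.properties might look like the following sketch; the endpoint URL and status values are placeholders that must be replaced with the ones from your TSRM installation:
tsrm.url=https://tsrm.example.org/api
tsrm.ssl.strict=false
tsrm.status.open=NEW
tsrm.status.close=RESOLVED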
https://docs.opennms.com/meridian/2021.1.5/reference/configuration/ticketing/ticketer/tsrm.html
2021-11-27T08:12:45
CC-MAIN-2021-49
1637964358153.33
[]
docs.opennms.com
Admin, Organizer, and Customer all receive notifications separately.
{primary} First, add Mail Credentials from Admin Panel -> Settings -> Mail tag, to start receiving emails.
{success} Notifications are received via Email and on the Website (the bell 🔔 icon)
{primary} In v1.5, we've added email language variables in lang files, so that you can easily modify the email template language.
The customer, the organizer, and the admin each receive notifications for the booking events that concern them. Email notifications are sent in multiple languages. The emails will be translated into the user's current language. It's compatible with RTL modes as well.
https://eventmie-pro-docs.classiebit.com/docs/1.5/bookings/email-notifications
2021-11-27T07:56:05
CC-MAIN-2021-49
1637964358153.33
[]
eventmie-pro-docs.classiebit.com
Managing content in OSTree repositories Once you have a build system going, if you actually want client systems to retrieve the content, you will quickly feel a need for "repository management". The command line tool ostree does cover some core functionality, but doesn't include very high level workflows. One reason is that how content is delivered and managed has concerns very specific to the organization. For example, some operating system content vendors may want integration with a specific errata notification system when generating commits. In this section, we will describe some high level ideas and methods for managing content in OSTree repositories, mostly independent of any particular model or tool. That said, there is an associated upstream project ostree-releng-scripts which has some scripts that are intended to implement portions of this document. Another example of software which can assist in managing OSTree repositories today is the Pulp Project, which has a Pulp OSTree plugin. Mirroring repositories It's very common to want to perform a full or partial mirror, in particular across organizational boundaries (e.g. an upstream OS provider, and a user that wants offline and faster access to the content). OSTree supports both full and partial mirroring of the base archive content, although not yet of static deltas. To create a mirror, first create an archive repository (you don't need to run this as root), then add the upstream as a remote, then use pull --mirror. ostree --repo=repo init --mode=archive ostree --repo=repo remote add exampleos ostree --repo=repo pull --mirror exampleos:exampleos/x86_64/standard You can use the --depth=-1 option to retrieve all history, or a positive integer like 3 to retrieve just the last 3 commits. See also the rsync-repos script in ostree-releng-scripts. Separate development vs release repositories By default, OSTree accumulates server side history. This is actually optional in that your build system can (using the API) write a commit with no parent. But first, we'll investigate the ramifications of server side history. Many content vendors will want to separate their internal development with what is made public to the world. Therefore, you will want (at least) two OSTree repositories, we'll call them "dev" and "prod". To phrase this another way, let's say you have a continuous delivery system which is building from git and committing into your "dev" OSTree repository. This might happen tens to hundreds of times per day. That's a substantial amount of history over time, and it's unlikely most of your content consumers (i.e. not developers/testers) will be interested in all of it. The original vision of OSTree was to fulfill this "dev" role, and in particular the "archive" format was designed for it. Then, what you'll want to do is promote content from "dev" to "prod". We'll discuss this later, but first, let's talk about promotion inside our "dev" repository. Promoting content along OSTree branches - "buildmaster", "smoketested" Besides multiple repositories, OSTree also supports multiple branches inside one repository, equivalent to git's branches. We saw in an earlier section an example branch name like exampleos/x86_64/standard. Choosing the branch name for your "prod" repository is absolutely critical as client systems will reference it. It becomes an important part of your face to the world, in the same way the "master" branch in a git repository is. 
But with your "dev" repository internally, it can be very useful to use OSTree's branching concepts to represent different stages in a software delivery pipeline. Deriving from exampleos/x86_64/standard, let's say our "dev" repository contains exampleos/x86_64/buildmaster/standard. We choose the term "buildmaster" to represent something that came straight from git master. It may not be tested very much. Our next step should be to hook up a testing system (Jenkins, Buildbot, etc.) to this. When a build (commit) passes some tests, we want to "promote" that commit. Let's create a new branch called smoketested to say that some basic sanity checks pass on the complete system. This might be where human testers get involved, for example. A basic way to "promote" the buildmaster commit that passed testing like this: ostree commit -b exampleos/x86_64/smoketested/standard -s 'Passed tests' --tree=ref=aec070645fe53... Here we're generating a new commit object (perhaps include in the commit log links to build logs, etc.), but we're reusing the content from the buildmaster commit aec070645fe53 that passed the smoketests. For a more sophisticated implementation of this model, see the do-release-tags script, which includes support for things like propagating version numbers across commit promotion. We can easily generalize this model to have an arbitrary number of stages like exampleos/x86_64/stage-1-pass/standard, exampleos/x86_64/stage-2-pass/standard, etc. depending on business requirements and logic. In this suggested model, the "stages" are increasingly expensive. The logic is that we don't want to spend substantial time on e.g. network performance tests if something basic like a systemd unit file fails on bootup. Promoting content between OSTree repositories Now, we have our internal continuous delivery stream flowing, it's being tested and works. We want to periodically take the latest commit on exampleos/x86_64/stage-3-pass/standard and expose it in our "prod" repository as exampleos/x86_64/standard, with a much smaller history. We'll have other business requirements such as writing release notes (and potentially putting them in the OSTree commit message), etc. In Build Systems we saw how the pull-local command can be used to migrate content from the "build" repository (in bare-user mode) into an archive repository for serving to client systems. Following this section, we now have three repositories, let's call them repo-build, repo-dev, and repo-prod. We've been pulling content from repo-build into repo-dev (which involves gzip compression among other things since it is a format change). When using pull-local to migrate content between two archive repositories, the binary content is taken unmodified. Let's go ahead and generate a new commit in our prod repository: checksum=$(ostree --repo=repo-dev rev-parse exampleos/x86_64/stage-3-pass/standard`) ostree --repo=repo-prod pull-local repo-dev ${checksum} ostree --repo=repo-prod commit -b exampleos/x86_64/standard \ -s 'Release 1.2.3' --add-metadata-string=version=1.2.3 \ --tree=ref=${checksum} There are a few things going on here. First, we found the latest commit checksum for the "stage-3 dev", and told pull-local to copy it, without using the branch name. We do this because we don't want to expose the exampleos/x86_64/stage-3-pass/standard branch name in our "prod" repository. Next, we generate a new commit in prod that's referencing the exact binary content in dev. 
If the "dev" and "prod" repositories are on the same Unix filesystem, OSTree (like git) will make use of hard links to avoid copying any content at all - making the process very fast. Another interesting thing to notice here is that we're adding a version metadata string to the commit. This is an optional piece of metadata, but we are encouraging its use in the OSTree ecosystem of tools. Commands like ostree admin status show it by default. Derived data - static deltas and the summary file As discussed in Formats, the archive repository we use for "prod" requires one HTTP fetch per client request by default. If we're only performing a release e.g. once a week, it's appropriate to use "static deltas" to speed up client updates. So once we've used the above command to pull content from repo-dev into repo-prod, let's generate a delta against the previous commit: ostree --repo=repo-prod static-delta generate exampleos/x86_64/standard We may also want to support client systems upgrading from two commits back: ostree --repo=repo-prod static-delta generate --from=exampleos/x86_64/standard^^ --to=exampleos/x86_64/standard Generating a full permutation of deltas across all prior versions can get expensive, and there is some support in the OSTree core for static deltas which "recurse" to a parent. This can help create a model where clients download a chain of deltas. Support for this is not fully implemented yet, however. Regardless of whether or not you choose to generate static deltas, you should update the summary file: ostree --repo=repo-prod summary -u (Remember, the summary command cannot be run concurrently, so this should be triggered serially by other jobs.) There is some more information on the design of the summary file in Repo. Pruning our build and dev repositories First, the OSTree author believes you should not use OSTree as a "primary content store". The binaries in an OSTree repository should be derived from a git repository. Your build system should record proper metadata such as the configuration options used to generate the build, and you should be able to rebuild it if necessary. Art assets should be stored in a system that's designed for that (e.g. Git LFS). Another way to say this is that five years down the line, we are unlikely to care about retaining the exact binaries from an OS build on a Wednesday afternoon three years ago. We want to save space and prune our "dev" repository: ostree --repo=repo-dev prune --refs-only --keep-younger-than="6 months ago" That will truncate the history older than 6 months. Deleted commits will have "tombstone markers" added so that you know they were explicitly deleted, but all content in them (that is not referenced by a still retained commit) will be garbage collected. Licensing for this document: SPDX-License-Identifier: (CC-BY-SA-3.0 OR GFDL-1.3-or-later)
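To make the promote-and-release flow described above easier to repeat, the individual commands can be wrapped in a small script. This is only an illustrative sketch, not part of ostree or ostree-releng-scripts; the VERSION argument convention and the hard-coded branch names are assumptions you would adapt to your own pipeline:

```sh
#!/bin/sh
# Sketch: promote the latest stage-3 commit from repo-dev into repo-prod,
# generate a static delta against the previous release, and update the summary.
set -eu

VERSION="$1"                                      # e.g. 1.2.4 (assumed convention)
DEV_REF=exampleos/x86_64/stage-3-pass/standard
PROD_REF=exampleos/x86_64/standard

# Resolve the commit that passed all dev stages.
checksum=$(ostree --repo=repo-dev rev-parse "${DEV_REF}")

# Copy the binary content into prod without exposing the dev branch name.
ostree --repo=repo-prod pull-local repo-dev "${checksum}"

# Create the prod commit with release metadata.
ostree --repo=repo-prod commit -b "${PROD_REF}" \
    -s "Release ${VERSION}" --add-metadata-string=version="${VERSION}" \
    --tree=ref="${checksum}"

# Speed up client upgrades from the previous release, then refresh the summary.
ostree --repo=repo-prod static-delta generate "${PROD_REF}"
ostree --repo=repo-prod summary -u
```

Remember the caveat from the text: the summary update cannot run concurrently, so a script like this should be serialized with any other jobs touching the prod repository.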
https://ostree.readthedocs.io/en/stable/manual/repository-management/
2021-11-27T08:12:38
CC-MAIN-2021-49
1637964358153.33
[]
ostree.readthedocs.io
e-Commerce Online Event Registration Module Setup Here's how to set up your site to allow Online Event Registration for your customers. If you've correctly set up eCommerce and your payment option (PayPal Express is the simplest, or Pay Later for pay at the door), you are all set for site visitors to register for event(s). - First, you must complete the initial setup of eCommerce or have completed installation using the sample store database. - The type of 'shipping option' you activate is irrelevant, since events are not shipped. - Next, add an 'Event Registration' module and insert it on a page. There are three actions 'Show All Events', 'Calendar View' and 'Upcoming Events', each of which has a 'Default' view, with the Show All Events action also having an additional 'Headlines' view. - Create a new Event from that module. - Enter a title, description, number of seats available, event date, start time, end time, no registrations after date, price, and at least one main image for the event. - The other fields are optional and really only provide additional 'text' for the event display if added (event location, end date, file attachments, waiver, etc...). - The event now appears within the module.
http://docs.exponentcms.org/docs/2.6.0/online-event-registration-setup
2021-11-27T08:27:14
CC-MAIN-2021-49
1637964358153.33
[]
docs.exponentcms.org
How to enable Bitlocker and escrow the keys to Azure AD when using AutoPilot for standard users Welcome to our first blog post! This will be the first of many which we hope you find useful and informative when it comes to anything Windows client and Microsoft 365 Powered Device. Our mission at Microsoft has been focused on accelerating digital transformation while empowering every person and every organization on the planet to achieve more. To reflect this statement, we have been designing solutions to simplify device complexity, harness the power of the cloud, and enable new and easy ways to collaborate while enhancing the security platform. In this post, we will focus on the modern deployment and management of a device when it comes to protecting data at rest. We will show you how to enable BitLocker Disk Encryption and automate the process for an AutoPilot device that is provisioned for a standard user using the Windows 10 Fall Creators Update version 1709. A bit of history… On June 30th 2017, Microsoft Intune received an update to allow BitLocker configuration where you are able to configure disk encryption settings (article here) under the "Endpoint Protection" profile as shown below: After the device syncs to Azure AD and gets the new settings through Intune, it will prompt the end user in the system tray for further actions: Once you click on the notification, you will be presented with this dialog box: And then encryption starts. You are able to retrieve your BitLocker key by visiting and logging in with your AAD credentials and selecting your profile. You will see a list of all of your devices and a link to 'Get BitLocker Keys'. The above method works great as long as you are local admin on the device and, as you can see, it requires user intervention, which is not ideal in modern deployment where simplification is a concern for every organization. For most corporate IT organizations this is a far less than optimal solution. Pieter Wilgeven wrote a great blog here on BitLocker Encryption using AAD/MDM and it documents how to automate BitLocker Disk Encryption regardless of hardware capabilities. In his blog, you are able to download 2 Zip files (TriggerBitlocker and TriggerBitlockerUser) which are basically scripts wrapped into an MSI in order to be deployed thru Intune to any groups of users. As seen below under the Intune service in Azure, select Mobile Apps --> Apps --> click on the "+" next to Add and select "Line of Business app" from the dropdown menu, and then select the MSI file you downloaded from Pieter's blog. Finalize the configuration per your need and then assign this app to your user group. As long as the user logged in is an admin, the TriggerBitLocker.msi installs itself on the targeted user device and extracts to C:\Program Files (x86)\BitLockerTrigger where you will find 3 files (Enable_BitLocker.ps1, Enable_BitLocker.vbs, and Enable_BitLocker.xml). The VBS file kicks off the PowerShell script, and the XML file is used to create a scheduled task that runs around 2 PM. TriggerBitLockerUser.msi, however, did not work for logged-in standard users because UAC prompts were shown, so we made some improvements to get the result we are looking for. Intune enhancements in Windows 10 1709 aka Fall Creators Update/RS3 As of now, you must be admin to access BL protectors like the recovery key, and we do not enable protection until you back up the recovery key.
Recently we have added the ability to upload PowerShell scripts into the Intune Management extensions to run on Windows 10 1607 or later devices that are joined to Azure AD. With this new capability, we are able to deploy a PowerShell script running under the system context to target standard users and get successful results. We have taken the Enable_BitLocker script from Pieter's blog and made it smarter, with detailed logging for troubleshooting purposes. When running as SYSTEM, the BitLocker PowerShell module may not be implicitly loaded because the environment variable is not set, so we explicitly had to load it using "Import-Module -Name C:\Windows\SysWOW64\WindowsPowerShell\v1.0\Modules\BitLocker" and then did multiple checks/validations to query the OS volume status in order to take the appropriate action and add the required protector. Below you will find a zip file link to download, which contains the PowerShell script that you need to upload to your Azure tenant -->Microsoft Intune -->Device configuration--> PowerShell scripts (as seen above) and assign to your user group to encrypt the OS drive and escrow the key to your AAD tenant. This will not only work for admins logged on to their device but also for standard users that are provisioned thru AutoPilot. As you may or may not be aware, Windows AutoPilot allows you to make the first user to log in to a Windows device from the Out-Of-Box (OOBE) experience a standard user on the machine – in fact, it is the only way to come out of OOBE as a Standard user on the machine. Using AutoPilot to deploy/provision a device and automatically bring it under Intune (MDM) management also provides the ability to have additional administrators added to the machine the same way we do with a traditional AD joined device today. Watch for future posts on how AutoPilot deployed, modern managed devices can help you meet enterprise goals and significantly reduce costs while also providing an exceptional user experience. Future Intune enhancements coming to Windows 10 1803 / RS4 As announced at Microsoft Ignite, a built-in version of the capability mentioned above is coming in Windows 10 1803 / RS4. In the meantime, this method will help you in your modern deployment/management scenarios today and you will be using AutoPilot and enabling disk encryption for Admins/Standard Users with a seamless transition to the built-in version once it ships. You can also begin testing the new version immediately by becoming a member of the Windows Insider Preview Program. Happy Deployment and Encryption! Download: Credit: Sean McLaren and Imad Balute are experienced Technology Professionals for Modern Desktop. We are dedicated to enabling and helping enterprise customers with their digital transformation journey focusing on Modern IT around deployment, management, and security.
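To make the approach above concrete, here is a minimal, hypothetical sketch of the kind of logic such a script can contain. It is not the exact script from the downloadable zip; the encryption method, module path handling, and error handling are assumptions you should verify in your own environment before use:

```powershell
# Sketch only: enable BitLocker on the OS drive under SYSTEM context and escrow the key to Azure AD.
# Assumes Windows 10 1709+, a TPM-protected device, and the module path mentioned in the post above.
Import-Module -Name "C:\Windows\SysWOW64\WindowsPowerShell\v1.0\Modules\BitLocker" -ErrorAction Stop

$osVolume = Get-BitLockerVolume -MountPoint $env:SystemDrive

if ($osVolume.VolumeStatus -eq 'FullyDecrypted') {
    # Add a recovery password protector and start encryption
    # (XTS-AES 256 is an assumption; pick the method your policy requires).
    Add-BitLockerKeyProtector -MountPoint $env:SystemDrive -RecoveryPasswordProtector | Out-Null
    Enable-BitLocker -MountPoint $env:SystemDrive -EncryptionMethod XtsAes256 -TpmProtector -SkipHardwareTest
}

# Escrow every recovery password protector to Azure AD
# (the BackupToAAD-BitLockerKeyProtector cmdlet is available starting with Windows 10 1709).
$osVolume = Get-BitLockerVolume -MountPoint $env:SystemDrive
foreach ($protector in ($osVolume.KeyProtector | Where-Object KeyProtectorType -eq 'RecoveryPassword')) {
    BackupToAAD-BitLockerKeyProtector -MountPoint $env:SystemDrive -KeyProtectorId $protector.KeyProtectorId
}
```

Running this under the Intune Management Extension means it executes as SYSTEM, which is exactly why the explicit Import-Module call and the volume-status check shown here mirror the behavior described in the post.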
https://docs.microsoft.com/en-us/archive/blogs/showmewindows/how-to-enable-bitlocker-and-escrow-the-keys-to-azure-ad-when-using-autopilot-for-standard-users
2021-11-27T08:36:53
CC-MAIN-2021-49
1637964358153.33
[array(['https://msdnshared.blob.core.windows.net/media/2018/01/Intune_EndpointProtection-1024x792.jpg', None], dtype=object) array(['https://msdnshared.blob.core.windows.net/media/2018/01/popup_encryption_systray.jpg', None], dtype=object) array(['https://msdnshared.blob.core.windows.net/media/2018/01/encryption_dialogbox11-1024x440.jpg', None], dtype=object) array(['https://msdnshared.blob.core.windows.net/media/2018/01/Intune_App_package-1024x247.jpg', None], dtype=object) array(['https://msdnshared.blob.core.windows.net/media/2018/01/Intune_powershell-1024x247.jpg', None], dtype=object) ]
docs.microsoft.com
The TechNet Guru Awards, January 2016! All the votes are in! And below are the results for the TechNet Guru Awards, January Integration Roadmap by Steef-Jan Wiggers Abhishek Kumar: "Steef-Jan Thanks for Integration Roadmap to broad audience ." TGN: "Very good, great article to have one the TN Wiki" LG: "It is just a link to the document from Microsoft! Nothing else." Sandro Pereira: "It’s the public roadmap and a nice addition to the TechNet Article. Simple, but for sure, it deserves an article." Also worth a mention were the other entries this month: - FIM 2010: Planning security setup for accounts, groups and services - Part 3. Compact Checklist by Peter Geelen - MSFT Ed Price: "The color-coding is very helpful!" - FIM 2010: Planning security setup for accounts, groups and services - Part 5. Operational Best Practices by Peter Geelen - MSFT - FIM 2010: Planning security setup for accounts, groups and services - Part 6. References & authoritative resources by Peter Geelen - MSFT Ed Price: "Another fantastic resource you'll want to keep coming back to! " - FIM 2010: Planning security setup for accounts, groups and services - Part 7. Additional resources by Peter Geelen - MSFT Ed Price: "Great job on the Appendix A diagram!" - FIM 2010: Planning security setup for accounts, groups and services - Part 8. Glossary by Peter Geelen - MSFT Ed Price: "The glossary is a helpful resource that you'll want to return to. " Also worth a mention were the other entries this month: - MVC Dynamic Menu Creation Using AngularJS and WCF Rest by SYEDSHANU Richard Mueller: "An advanced topic with lots of code and good images. Cross-links for many of the acronyms would help." Ed Price: "Wow. Incredible depth! Great use of MSDN Gallery to host the source code!" - Windows 10 IoT Core On Raspberry Pi2: Installation And Configuration by Carmelo La Monica Ed Price: "Love the topic! Great use of images! Would benefit from See Also (wiki links) and Other Resources (external links) at the end. The Introduction and Conclusion are very helpful!" Richard Mueller: "Great images. References/links would help." - Go under the hood with Visual Studio Code by Hussain Shahbaz Khawaja Richard Mueller: "Interesting, but we need more, especially references/links." Ed Price: "Good topic, but I'd want to go deeper with instructions and examples for each tip and trick. " - Sway - Overview of the Application by Hezequias Vasconcelos JH: "Nice little introduction to Sway." Also worth a mention were the other entries this month: - SharePoint 2013: Create Reusable Workflow on Content Type using SharePoint Designer 2013 by Danish Islam Ed Price: "Great set of instructions and helpful links." Margriet Bruggeman: "Nice article about workflows." - SharePoint REST API Javascript Library by Michaelle de las Alas Margriet Bruggeman: "A little bit short but nice to the point explanation." Ed Price: "Great topic and code formatting. Could beneft from a more thorough explanation of the code and a See Also section with Wiki links at the end. Good contributions! Thanks!" Ashutosh Singh: "" This were no medals awarded this month. The following article is well written and the backend work is nice, but has been flagged as failing the technical bar for correctness for the title issue. - BackgroundTranfers; keep the data going by Dave Smits JH: "Nice and simple. Love it that the app and the backend is shown." RC: "Server side looks very interesting (but I can't verify correctness). 
Instead of hacking post processing use the BackgroundTransferCompletionGroup. There's a good overview at\#post-processing" A huge thank you to EVERYONE who contributed an article to January's competition. Hopefully we will see you ALL again in February 2016:
https://docs.microsoft.com/en-us/archive/blogs/wikininjas/the-technet-guru-awards-january-2016
2021-11-27T09:27:36
CC-MAIN-2021-49
1637964358153.33
[]
docs.microsoft.com
Huawei Cloud¶ Features¶ Virtual machine provisioning Backups Brownfield VM management and migration Hypervisor remote console Cloud sync Lifecycle management and resizing Network security group creation Network security group management Router and network creation Load balancer services Docker host management and configuration Floating IP assignment Huawei OBS buckets (create, manage, delete, and discovery) Huawei SFS (create, manage, and delete) Integrate Huawei Cloud with Morpheus¶ To integrate Huawei Cloud with Morpheus, we'll gather the following pieces of information: Account Name Identity (IAM) API URL Project Username Begin by logging into your Huawei Cloud console. If you're not currently logged in, you will be prompted to do so. Once on the console page, hover over your username in the upper-right corner of the application window and select "My Credentials". From the credentials page, we can gather the Account Name and the Project Name; record them for later when we provide the integration information to Morpheus. To gather the API endpoint URL, take a look at the complete list of endpoints. If a specific endpoint exists for your region, use it. In any other case, use the endpoint for all regions. It will be formatted like this: With this information gathered, and presuming you know the credentials for the service account you wish to use, we can move back into Morpheus-UI. Navigate to Infrastructure > Clouds and click + ADD. Scroll to Huawei Cloud and click NEXT. The information we've gathered will be plugged into the CREATE CLOUD modal. The DOMAIN ID field will accept the Account Name field we gathered. Your completed CREATE CLOUD modal will look similar to the one pictured below: After clicking NEXT, add this new Cloud to a Group or create a new Group. On finalizing the wizard, Huawei Cloud will be integrated into Morpheus and ready for provisioning. If you opted to inventory existing workloads, those will be onboarded shortly. Add/Edit Huawei Cloud Modal Fields - IDENTITY API URL The IAM API endpoint. See the integration steps above for more detail - DOMAIN ID The DOMAIN ID field takes the "Account Name" as shown on the Basic Information page of the account. See the integration steps above for more detail - PROJECT The target project name. See the integration steps above for more detail - USERNAME The service account username. See the integration steps above for more detail - PASSWORD The integration service account password. See the integration steps above for more detail - IMAGE FORMAT Select QCOW2, RAW or VMDK image type - Inventory Existing Instances Select for Morpheus to discover and sync existing VMs - Enable Hypervisor Console Hypervisor console support for openstack currently only supports novnc. Be sure the novnc proxy is configured properly in your openstack environment. Tip When using the RAW image format, you can bypass the image conversion service within the cloud leading to quicker performance. Other image formats are converted to RAW format and back when performing various actions. Using the RAW format from the start will bypass these conversion steps. Huawei Scalable File Service (SFS)¶ The Morpheus integration with Huawei Cloud includes the capability to work with Huawei Scalable File Service (SFS). SFS is shared file storage hosted on Huawei Cloud. By integrating Morpheus with Huawei Cloud you can discover, create, manage, and delete SFS servers, as well as view and work with the file shares and files contained therein.
SFS Server Discovery and Management¶ On integrating Huawei Huawei Cloud integrations (if more than one relevant integration currently exists), select from synced availability zones, and scope the storage server to specific Tenants if desired. Additionally, Huawei SFS servers can be created from the storage server list page (Infrastructure > Storage > Servers) directly in Morpheus. Click + ADD to begin and set the storage server type value to “Huawei SFS”. Just like with existing synced SFS servers, those created from Morpheus can be scoped as needed. Huawei Object Storage Service (OBS)¶ The Morpheus integration with Huawei Cloud also supports Object Storage Service (OBS). Morpheus will automatically onboard existing OBS servers and buckets shortly after completing the cloud integration. Before you can add a new OBS server from Morpheus, you must know or generate a key and secret value from the Huawei console and must provide a Huawei OBS API endpoint. Generate a Key and Secret¶ From the Huawei web console, log into the account used to integrate Huawei Cloud with Morpheus. Hover over your account name in the upper-right corner of the application window and click “My Credentials”. Select “Access Keys” from the left-hand sidebar. To create a new key, click + Create Access Key. Complete the two-factor authentication steps in the box that appears. Once the key is generated, download or record the key and store it in a safe location. The key will not be viewable or available for download again after this point. Create OBS Server in Morpheus¶ With the key and secret value in hand from the previous section, navigate to Infrastructure > Storage > Servers. Click + ADD. On changing the server type to Huawei OBS, you will see the fields for the access key and the secret key. OBS API endpoints can be found in Huawei endpoint documentation. Include those three values in the Create Server modal along with a friendly name for use in Morpheus UI. Just like with SFS objects, we can choose to scope the server to all or specific Tenants at this time. Create Huawei OBS Bucket¶ With an OBS server onboarded or created in Morpheus, you’re able to create and manage Huawei OBS buckets as needed. To create a new bucket, navigate to Infrastructure > Storage > Buckets. Click + ADD and select “Huawei OBS Bucket”. The following fields are required when creating a Huawei OBS bucket: NAME: A friendly name for use in Morpheus UI STORAGE SERVICE: Choose the OBS server to associate the new bucket with BUCKET NAME: The name of the bucket in Huawei Cloud, this must be unique STORAGE CLASS: If needed, view the discussion of storage classes in Huawei support documentation BUCKET ACL: Public Read, Public Read/Write, or Private BUCKET POLICY: Public Read, Public Read/Write, or Private STORAGE QUOTA: Set to 0 for no quota Once finished, click SAVE CHANGES
https://docs.morpheusdata.com/en/5.2.4/integration_guides/Clouds/Huawei/huawei.html
2021-11-27T08:27:19
CC-MAIN-2021-49
1637964358153.33
[array(['../../../_images/1credentials.png', '../../../_images/1credentials.png'], dtype=object) array(['../../../_images/2account_project.png', '../../../_images/2account_project.png'], dtype=object) array(['../../../_images/3identity_api_endpoints.png', '../../../_images/3identity_api_endpoints.png'], dtype=object) array(['../../../_images/4add_cloud.png', '../../../_images/4add_cloud.png'], dtype=object) array(['../../../_images/1editServer.png', '../../../_images/1editServer.png'], dtype=object) array(['../../../_images/2addServer.png', '../../../_images/2addServer.png'], dtype=object) array(['../../../_images/1createKey.png', '../../../_images/1createKey.png'], dtype=object) array(['../../../_images/2addObsServer.png', '../../../_images/2addObsServer.png'], dtype=object) array(['../../../_images/3createBucket.png', '../../../_images/3createBucket.png'], dtype=object)]
docs.morpheusdata.com
On the "CONFIGURE" tab, we're asked to set the initial connection strings into vSphere. The API URL should be in the following format: https://<URL>/sdk. The USERNAME should be in user@domain format. Once you're satisfied with your selections, click "NEXT". We have now arrived at the "GROUP" tab. In this case, we will mark the radio button to "USE EXISTING" groups if you wish to use the group we configured earlier. Just like with resource pools, we are also able to scope data stores to specific groups. This ensures that the members of each group are only able to consume the data stores they should have access to.
https://docs.morpheusdata.com/en/5.2.4/integration_guides/Clouds/vmware/vmware.html
2021-11-27T07:49:28
CC-MAIN-2021-49
1637964358153.33
[array(['../../../_images/add_cloud.png', '../../../_images/add_cloud.png'], dtype=object) array(['../../../_images/service_plans.png', '../../../_images/service_plans.png'], dtype=object) array(['../../../_images/tagging_at_provisioning.png', '../../../_images/tagging_at_provisioning.png'], dtype=object) array(['../../../_images/cloud_detail.png', '../../../_images/cloud_detail.png'], dtype=object) array(['../../../_images/1groupConfig.png', 'The new group dialog box showing a name for the group filled in'], dtype=object) array(['../../../_images/1createCloud.png', 'The list of clouds available to integrate with, vCenter is selected'], dtype=object) array(['../../../_images/2cloudConfigure.png', 'The create cloud dialog box with relevant fields filled'], dtype=object) array(['../../../_images/3advancedOptions.png', 'The advanced options section of the create cloud dialog box'], dtype=object) array(['../../../_images/4groupTab.png', 'The group tab of the create cloud dialog box'], dtype=object) array(['../../../_images/1resourcePools1.png', 'The list of synced resource pools in Morpheus'], dtype=object) array(['../../../_images/2editResourcePools.png', 'The edit resource pools dialog box'], dtype=object) array(['../../../_images/1dataStores.png', 'The list of synced data stores in Morpheus'], dtype=object) array(['../../../_images/2editDataStores.png', 'The edit data stores dialog box'], dtype=object) array(['../../../_images/1networksSection.png', 'The list of configured neworks'], dtype=object) array(['../../../_images/2addIPAM.png', 'The add IPAM integration dialog box'], dtype=object) array(['../../../_images/3addIPPool.png', 'Creating a Morpheus-type IP pool'], dtype=object) array(['../../../_images/4cloudNetworks.png', 'Viewing networks on the cloud detail page'], dtype=object) array(['../../../_images/1createInstance.png', 'Selecting an instance type to provision'], dtype=object) array(['../../../_images/2instanceConfigure.png', 'The configure tab of the create instance dialog box'], dtype=object) array(['../../../_images/3completeInstance.png', 'Confirming the instance to be provisioned'], dtype=object) array(['../../../_images/4reviewInstance.png', 'Monitoring privisioning progress on the instance detail page'], dtype=object) array(['../../../_images/1addNode.png', 'Adding a new node type'], dtype=object) array(['../../../_images/2nodeSettings1.png', 'Configuring options for the new node'], dtype=object) array(['../../../_images/3addInstanceType.png', 'Adding a new instance type'], dtype=object) array(['../../../_images/4instanceTypeSettings.png', 'Configuring the new instance type'], dtype=object) array(['../../../_images/5openInstanceType.png', 'Opening our newly created instance type'], dtype=object) array(['../../../_images/6layoutSettings.png', 'Configuring the new layout'], dtype=object) array(['../../../_images/7newInstanceSearch.png', 'Searching for our custom instance type'], dtype=object) array(['../../../_images/8newInstanceConfigure.png', 'Configuring the newlt created instance'], dtype=object) array(['../../../_images/10newInstanceConsole.png', 'Confirming creation of the new instance'], dtype=object) array(['../../../_images/1newIntegration.png', 'Adding a new automation integration'], dtype=object) array(['../../../_images/2configureIntegration.png', 'Configuring the new Ansible integration'], dtype=object) array(['../../../_images/3taskConfig.png', 'Configuring the new task'], dtype=object) array(['../../../_images/4executeTask.png', 'Executing the new task'], 
dtype=object) array(['../../../_images/5newWorkflow.png', 'Creating a workflow for our task'], dtype=object) array(['../../../_images/6automationInProvisioning.png', 'Running the new workflow on provisioning'], dtype=object)]
docs.morpheusdata.com
Tempest Configuration Guide¶ This guide is a starting point for configuring Tempest. It aims to elaborate on and explain some of the mandatory and common configuration settings and how they are used in conjunction. The source of truth on each option is the sample config file which explains the purpose of each individual option. You can see the sample config file here: Sample Configuration File Test Credentials¶ Tempest allows for configuring a set of admin credentials in the auth section, via the following parameters: admin_username admin_password admin_project_name admin_domain_name Admin credentials are not mandatory to run Tempest, but when provided they can be used to: Run tests for admin APIs Generate test credentials on the fly (see Dynamic Credentials) When Keystone uses a policy that requires domain scoped tokens for admin actions, the flag admin_domain_scope must be set to True. The admin user configured, if any, must have a role assigned to the domain to be usable. Tempest allows for configuring pre-provisioned test credentials as well. This can be done using the accounts.yaml file (see Pre-Provisioned Credentials). This file is used to specify an arbitrary number of users available to run tests with. You can specify the location of the file in the auth section in the tempest.conf file. To see the specific format used in the file please refer to the accounts.yaml.sample file included in Tempest. Keystone Connection Info¶ In order for Tempest to be able to talk to your OpenStack deployment you need to provide it with information about how it communicates with Keystone. This involves configuring the following options in the identity section: auth_version uri uri_v3 The auth_version option is used to tell Tempest whether it should be using Keystone's v2 or v3 API for communicating with Keystone. The two uri options are used to tell Tempest the URL of the Keystone endpoint. The uri option is used for Keystone v2 requests and uri_v3 is used for Keystone v3. You want to ensure that whichever version you set for auth_version has its uri option defined. Credential Provider Mechanisms¶ Tempest currently has two different internal methods for providing authentication to tests: dynamic credentials and pre-provisioned credentials. Depending on which one is in use the configuration of Tempest is slightly different. Dynamic Credentials¶ Dynamic Credentials (formerly known as Tenant isolation) was originally created to enable running Tempest in parallel. For each test class it creates a unique set of user credentials to use for the tests in the class. It can create up to three sets of username, password, and project names for a primary user, an admin user, and an alternate user. To enable and use dynamic credentials you only need to configure two things: A set of admin credentials with permissions to create users and projects. This is specified in the auth section with the admin_username, admin_project_name, admin_domain_name and admin_password options To enable dynamic credentials in the auth section with the use_dynamic_credentials option. This is also currently the default credential provider enabled by Tempest, due to its common use and ease of configuration. It is worth pointing out that depending on your cloud configuration you might need to assign a role to each of the users created by Tempest's dynamic credentials. This can be set using the tempest_roles option. It takes in a list of role names each of which will be assigned to each of the users created by dynamic credentials.
This option will not have any effect when Tempest is not configured to use dynamic credentials. When the admin_domain_scope option is set to True, provisioned admin accounts will be assigned a role on the domain configured in default_credentials_domain_name. This will make the accounts provisioned usable in a cloud where domain scoped tokens are required by Keystone for admin operations. Note that the initial pre-provisioned admin accounts, configured in tempest.conf, must have a role on the same domain as well, for Dynamic Credentials to work. Pre-Provisioned Credentials¶ For a long time using dynamic credentials was the only method available if you wanted to enable parallel execution of Tempest tests. However, this was insufficient for certain use cases because of the admin credentials requirement to create the credential sets on demand. To get around that the accounts.yaml file was introduced and with that a new internal credential provider to enable using the list of credentials instead of creating them on demand. With pre-provisioned credentials (also known as locking test accounts) each test class will reserve a set of credentials from the accounts.yaml before executing any of its tests so that each class is isolated like with dynamic credentials. To enable and use pre-provisioned credentials you need to do a few things: Create an accounts.yaml file which contains the set of pre-existing credentials to use for testing. To make sure you don't have a credentials starvation issue when running in parallel make sure you have at least two times the number of worker processes you are using to execute Tempest available in the file. (If running serially the worker count is 1.) You can check the accounts.yaml.sample file packaged in Tempest for the yaml format. Provide Tempest with the location of your accounts.yaml file with the test_accounts_file option in the auth section NOTE: Be sure to use a full path for the file; otherwise Tempest will likely not find it. Set use_dynamic_credentials = False in the auth group It is worth pointing out that each set of credentials in the accounts.yaml should have a unique project. This is required to provide proper isolation to the tests using the credentials, and failure to do this will likely cause unexpected failures in some tests. Also, ensure that the projects and users used do not have any pre-existing resources created. Tempest assumes all tenants it's using are empty and may sporadically fail if there are unexpected resources present. When the Keystone in the target cloud requires domain scoped tokens to perform admin actions, all pre-provisioned admin users must have a role assigned on the domain where test accounts are provisioned. The option admin_domain_scope is used to tell Tempest that domain scoped tokens shall be used. default_credentials_domain_name is the domain where test accounts are expected to be provisioned if no domain is specified. Note that if credentials are pre-provisioned via tempest account-generator the role on the domain will be assigned automatically for you, as long as admin_domain_scope and default_credentials_domain_name are configured properly in tempest.conf. Pre-Provisioned Credentials are also known as accounts.yaml or accounts file. Keystone Scopes & Roles Support in Tempest¶ For details on scope and roles support in Tempest, please refer to this document Compute¶ Flavors¶ For Tempest to be able to create servers you need to specify flavors that it can use to boot the servers with.
There are two options in the Tempest config for doing this: flavor_ref flavor_ref_alt Both of these options are in the compute section of the config file and take in the flavor id (not the name) from Nova. The flavor_ref option is what will be used for booting almost all of the guests; flavor_ref_alt is only used in tests where two different-sized servers are required (for example, a resize test). Using a smaller flavor is generally recommended. When larger flavors are used, the extra time required to bring up servers will likely affect the total run time and probably require tweaking timeout values to ensure tests have ample time to finish. Images¶ Just like with flavors, Tempest needs to know which images to use for booting servers. There are two options in the compute section just like with flavors: image_ref image_ref_alt Both options are expecting an image id (not name) from Nova. The image_ref option is what will be used for booting the majority of servers in Tempest. image_ref_alt is used for tests that require two images such as rebuild. If two images are not available you can set both options to the same image id and those tests will be skipped. There are also options in the scenario section for images: img_file img_container_format img_disk_format However, unlike the other image options, these are used for a very small subset of scenario tests which are uploading an image. These options are used to tell Tempest where an image file is located and describe its metadata for when it is uploaded. You first need to specify full path of the image using img_file option. If it is found then the img_container_format and img_disk_format options are used to upload that image to glance. If it’s not found, the tests requiring an image to upload will fail. It is worth pointing out that using cirros is a very good choice for running Tempest. It’s what is used for upstream testing, they boot quickly and have a small footprint. Networking¶ OpenStack has a myriad of different networking configurations possible and depending on which of the two network backends, nova-network or Neutron, you are using things can vary drastically. Due to this complexity Tempest has to provide a certain level of flexibility in its configuration to ensure it will work against any cloud. This ends up causing a large number of permutations in Tempest’s config around network configuration. Enabling Remote Access to Created Servers¶ Network Creation/Usage for Servers¶ When Tempest creates servers for testing, some tests require being able to connect those servers. Depending on the configuration of the cloud, the methods for doing this can be different. In certain configurations, it is required to specify a single network with server create calls. Accordingly, Tempest provides a few different methods for providing this information in configuration to try and ensure that regardless of the cloud’s configuration it’ll still be able to run. This section covers the different methods of configuring Tempest to provide a network when creating servers. Fixed Network Name¶ This is the simplest method of specifying how networks should be used. You can just specify a single network name/label to use for all server creations. The limitation with this is that all projects and users must be able to see that network name/label if they are to perform a network list and be able to use it. 
If no network name is assigned in the config file and none of the below alternatives are used, then Tempest will not specify a network on server creations, which depending on the cloud configuration might prevent them from booting. To set a fixed network name simply: Set the fixed_network_nameoption in the computegroup In the case that the configured fixed network name can not be found by a user network list call, it will be treated like one was not provided except that a warning will be logged stating that it couldn’t be found. Accounts File¶ If you are using an accounts file to provide credentials for running Tempest then you can leverage it to also specify which network should be used with server creations on a per project and user pair basis. This provides the necessary flexibility to work with more intricate networking configurations by enabling the user to specify exactly which network to use for which projects. You can refer to the accounts.yaml.sample file included in the Tempest repo for the syntax around specifying networks in the file. However, specifying a network is not required when using an accounts file. If one is not specified you can use a fixed network name to specify the network to use when creating servers just as without an accounts file. However, any network specified in the accounts file will take precedence over the fixed network name provided. If no network is provided in the accounts file and a fixed network name is not set then no network will be included in create server requests. If a fixed network is provided and the accounts.yaml file also contains networks this has the benefit of enabling a couple more tests which require a static network to perform operations like server lists with a network filter. If a fixed network name is not provided these tests are skipped. Additionally, if a fixed network name is provided it will serve as a fallback in case of a misconfiguration or a missing network in the accounts file. With Dynamic Credentials¶ With dynamic credentials enabled and using nova-network, your only option for configuration is to either set a fixed network name or not. However, in most cases, it shouldn’t matter because nova-network should have no problem booting a server with multiple networks. If this is not the case for your cloud then using an accounts file is recommended because it provides the necessary flexibility to describe your configuration. Dynamic credentials are not able to dynamically allocate things as necessary if Neutron is not enabled. With Neutron and dynamic credentials enabled there should not be any additional configuration necessary to enable Tempest to create servers with working networking, assuming you have properly configured the network section to work for your cloud. Tempest will dynamically create the Neutron resources necessary to enable using servers with that network. Also, just as with the accounts file, if you specify a fixed network name while using Neutron and dynamic credentials it will enable running tests which require a static network and it will additionally be used as a fallback for server creation. However, unlike accounts.yaml this should never be triggered. However, there is an option create_isolated_networks to disable dynamic credentials’s automatic provisioning of network resources. If this option is set to False you will have to either rely on there only being a single/default network available for the server creation, or use fixed_network_name to inform Tempest which network to use. 
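As an illustrative sketch of the credential and network options discussed above (the file paths, credential values, and network names here are assumptions, not defaults), a tempest.conf fragment and a matching accounts.yaml entry that pins a network might look like this:

```ini
# tempest.conf fragment (sketch; adjust names and paths to your cloud)
[auth]
use_dynamic_credentials = False
test_accounts_file = /etc/tempest/accounts.yaml

[compute]
# Fallback network used for server creation when the accounts file
# does not specify one for a given credential set.
fixed_network_name = private
```

```yaml
# accounts.yaml entry (sketch); the resources section pins the network
# this user/project pair should use when creating servers.
- username: tempest-user-1
  project_name: tempest-project-1
  password: secret
  resources:
    network: private
```

Any network named in the accounts file takes precedence over fixed_network_name, which then only serves as the fallback described above.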
SSH Connection Configuration¶ There are also several different ways to actually establish a connection and authenticate/login on the server. After a server is booted with a provided network there are still details needed to know how to actually connect to the server. The validation group gathers all the options regarding connecting to and remotely accessing the created servers. To enable remote access to servers, there are 3 options at a minimum that are used: run_validation connect_method auth_method The run_validation option is used to enable or disable ssh connectivity for all tests (with the exception of scenario tests, which do not have a flag for enabling or disabling ssh). To enable ssh connectivity this needs to be set to True. The connect_method option is used to tell Tempest what kind of IP to use for establishing a connection to the server. Two methods are available: fixed and floating, the latter being set by default. If this is set to floating, Tempest will create a floating ip for the server before attempting to connect to it. The IP for the floating ip is what is used for the connection. For the auth_method option there is currently only one valid option, keypair. With this set to keypair Tempest will create an ssh keypair and use that for authenticating against the created server. Configuring Available Services¶ OpenStack is really a constellation of several different projects which are running together to create a cloud. However, which projects you're running is not set in stone, and which services are running is up to the deployer. Tempest, however, needs to know which services are available so it can figure out which tests it is able to run and certain setup steps which differ based on the available services. The service_available section of the config file is used to set which services are available. It contains a boolean option for each service (except for Keystone, which is a hard requirement); set it to True if the service is available or False if it is not. Service Catalog¶ Each project which has its own REST API contains an entry in the service catalog. Like most things in OpenStack this is also completely configurable. However, for Tempest to be able to figure out which endpoints should get REST API calls for each service, it needs to know how that project is defined in the service catalog. There are three options for each service section to accomplish this: catalog_type endpoint_type region Setting catalog_type and endpoint_type should normally give Tempest enough information to determine which endpoint it should pull from the service catalog to use for talking to that particular service. However, if your cloud has multiple regions available and you need to specify a particular one to use a service you can set the region option in that service's section. It should also be noted that the default values for these options are set to what DevStack uses (which is a de facto standard for service catalog entries). So often nothing actually needs to be set on these options to enable communication to a particular service. It is only if you are either not using the same catalog_type as DevStack or you want Tempest to talk to a different endpoint type instead of publicURL for a service that these need to be changed. Note Tempest does not serve all kinds of fancy URLs in the service catalog. The service catalog should be in a standard format (which is going to be standardized at the Keystone level).
Tempest expects URLs in the service catalog in the following format: <base URL>/<version-info>. A URL that ends with just the version information works; a URL that adds a prefix or suffix around the version does not. Service Feature Configuration¶ OpenStack provides its deployers a myriad of different configuration options to enable anyone deploying it to create a cloud tailor-made for any individual use case. It provides options for several different backend types, databases, message queues, etc. However, the downside to this configurability is that certain operations and features aren't supported depending on the configuration. These features may or may not be discoverable from the API so the burden is often on the user to figure out what is supported by the cloud they're talking to. Besides the obvious interoperability issues with this, it also leaves Tempest in an interesting situation trying to figure out which tests are expected to work. However, Tempest tests do not rely on dynamic API discovery for a feature (assuming one exists). Instead, Tempest has to be explicitly configured as to which optional features are enabled. This is in order to prevent bugs in the discovery mechanisms from masking failures. The service feature-enabled config sections are how Tempest addresses the optional feature question. Each service that has tests for optional features contains one of these sections. The only options in them are boolean options named after the feature in question. If an option is set to false, any test which depends on that functionality will be skipped. For a complete list of all these options refer to the sample config file. API Extensions¶ The service feature-enabled sections often contain an api-extensions option (or in the case of Swift a discoverable_apis option). This is used to tell Tempest which API extensions (or configurable middleware) are used in your deployment. It has two valid config states: either it contains a single value all (which is the default), which means that every API extension is assumed to be enabled, or it is set to a list of each individual extension that is enabled for that service.
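To make the service_available and feature-enabled sections described above concrete, here is a hedged sketch of how they can look in tempest.conf. The particular services and feature names are chosen for illustration only; consult the sample config file for the options your version actually supports:

```ini
# Sketch: declare which services exist in the cloud under test.
[service_available]
cinder = True
neutron = True
swift = False

# Sketch: per-service optional features; tests depending on a feature
# set to False are skipped rather than failed.
[compute-feature-enabled]
live_migration = False

[network-feature-enabled]
# Either the single value "all" (the default) or an explicit list of extensions.
api_extensions = all
```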
https://docs.openstack.org/tempest/latest/configuration.html
2021-11-27T08:56:44
CC-MAIN-2021-49
1637964358153.33
[]
docs.openstack.org
VAGRANT SHARE!!! Vlad supports sharing of your virtual machine using the vagrant share command. To use this feature you first need to create a free HashiCorp Atlas account. Once done, you can log in to this service by typing the following command within the vlad directory. vagrant login This will prompt you for the username and password that you created when you set up the Atlas account. You can then share the box using the following command: vagrant share This will generate a temporary URL that you can then share with anyone else. This will only share HTTP addresses (on port 80) by default, so if you also want to share the HTTPS address you need to specify it using the --https flag and give it the port number 443. vagrant share --https 443
https://vlad-docs.readthedocs.io/en/latest/usage/vagrant_share/
2021-11-27T08:00:07
CC-MAIN-2021-49
1637964358153.33
[]
vlad-docs.readthedocs.io
Metrics and alerts Learn which metrics you should monitor for GES and when to sound the alarm. For information about configuring a monitoring tool, see Link to come. Metrics and alerting GES exposes default metrics about the state of the Node.js application; this includes CPU usage, memory usage, and the state of the Node.js runtime. You'll find helpful metrics in the GES Metrics subsection, which includes some basic metrics such as REST API usage, the number of created callbacks, call-in requests, and so on. These basic metrics are created as counters, which means that the values will monotonically increase over time from the beginning of a GES pod's lifespan. For more information about counters, see Metric Types in the Prometheus documentation. You can develop a solid understanding of the performance of a given GES deployment or pod by watching how these metrics change over time. The sample Prometheus expressions show you how to use the basic metrics to gain valuable insights into your callback-related activity. For information about deploying dashboards and accessing sample implementations, see Grafana dashboards and Sample implementations. Sample Prometheus expressions For more information about querying in Prometheus, see Querying Prometheus. Health metrics Health metrics, that is, those metrics that report on the status of connections from GES to dependencies such as Tenant Service (ORS), GWS, Redis, and Postgres, do not work like the metrics described above. Instead, they are implemented as a gauge that toggles between "0" and "1". For information about gauges, see the Prometheus Metric types documentation. When the connection to a service is down, the metric is "1". When the service is up, the metric is "0". Also see How alerts work. How alerts work In a Kubernetes deployment, GES relies on Prometheus and Alertmanager to generate alerts. These alerts can then be forwarded to a service of your choice (for example, PagerDuty). For information about finding sample alerts, see Sample implementations. While GES leverages Prometheus, GES also has internal functionality that manually triggers alerts when certain criteria are met. The internal alert is turned into a counter (see the Prometheus Metric types documentation) that is incremented each time the conditions to trigger the alert are met. The counter is made available on the /metrics endpoint. Use a Prometheus rule to capture the metric data and trigger the alert on Prometheus. The following example shows an alert used in an Azure deployment; note how the process watches the increase in instances of the alert being fired over time to trigger the Prometheus alert. - alert: GES_RBAC_CREATE_VQ_PROXY_ERROR annotations: summary: "There are issues managing VQ proxy objects on {{ $labels.pod }}" labels: severity: info action: email service: GES expr: increase(RBAC_CREATE_VQ_PROXY_ERROR[10m]) > 5 - alert: GES_ORS_REDIS_DOWN expr: ORS_REDIS_STATUS > 0 for: 5m labels: severity: critical action: page service: GES annotations: summary: "ORS REDIS Connection down for {{ $labels.pod }}" dashboard: "See GES Performance > Health and Liveliness to track ORS Redis Health over time" Grafana dashboards You can deploy the Grafana dashboards, included with the helm chart, when you deploy GES. Simply set the Helm value .Values.ges.grafana.enabled to true. This creates a config map to automatically deploy the dashboard.
In some cases, the dashboards might need adjustment to work appropriately with your Grafana version and overall Kubernetes setup. To make changes, unpack the helm chart .tar.gz file. Make the necessary upgrades to the grafana/ges-dashboard-configmap.yaml and grafana/ges-performance-dashboard.yaml files. Experienced users can make changes in the JSON files. Alternatively, you can use the web interface to set up the dashboard, export the JSON for the dashboard (following the Grafana dashboard export and import instructions), and then copy the JSON into the appropriate file. On a re-deploy of the Helm Charts, Grafana picks up the new dashboards. Sample implementations You can find sample implementations of alerts in the provided helm charts, in the prometheus/alerts.yaml file. Sample dashboards, embedded in config maps, can be found in the grafana/ges-dashboard.yaml and grafana/ges-performance-dashboard.yaml files. These are for the business logic and performance dashboards respectively. You might need to make some adjustments to get the alerts and dashboards working; see Grafana dashboards.
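Because the basic GES metrics are monotonically increasing counters, dashboards and alert rules usually work with their rate of change rather than the raw values. The first expression below is only an illustrative sketch — ges_rest_api_requests_total is a hypothetical metric name, not one documented on this page; substitute the counter names exposed on your deployment's /metrics endpoint. The second expression mirrors the increase(...) pattern from the sample alert above:

```promql
# Per-pod request rate of a hypothetical GES counter over the last 5 minutes.
sum by (pod) (rate(ges_rest_api_requests_total[5m]))

# Growth of an internal error counter over 10 minutes, as used for alerting.
increase(RBAC_CREATE_VQ_PROXY_ERROR[10m]) > 5
```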
https://all.docs.genesys.com/PEC-CAB/Current/CABPEGuide/Metrics
2021-11-27T08:33:27
CC-MAIN-2021-49
1637964358153.33
[]
all.docs.genesys.com
New Contributor Guide¶ First things first: Welcome to Mozilla! We’re glad to have you here, whether you’re considering volunteering as a contributor or are employed by Mozilla. Our goal here is to get you up and running to contribute as an automation developer. This guide attempts to be generic enough to be useful to most A-Team projects, but some details (especially around software you need installed) may vary among projects. Let’s get started! - About the A-Team - Accounts - Software and Tools - Finding a Project - Development Process - Commitment Curve
https://ateam-bootcamp.readthedocs.io/en/latest/guide/index.html
2021-11-27T08:30:31
CC-MAIN-2021-49
1637964358153.33
[]
ateam-bootcamp.readthedocs.io
Automated Other Emails For more information on automated emails, please review our guide: Introduction to Automated Emails For more information on announcements, please review our guide: Platform Announcements Automated Other emails are used to alert a user that the platform has made an announcement they should read. Announcements are the quickest way to share information with every user in your platform, or a specific set of users you choose. If a user does not log in on their own to the platform and see the announcement, the automated email will serve as a reminder to do so. In this article Others Emails There is one type of automated email in the Others section, the announcement email. This email is triggered whenever a new announcement is made on the platform. The email will be sent only to the users who have been selected to view the announcement. Enable Emails Each type of automated email must be enabled before it will be sent automatically. The enabled button must be toggled on for both Primary Recipient or Other Users to be Notified, if you plan on using either. Recipients There are two different types of recipients who can be named to an automated Other email: the Primary Recipient and CC'd Recipients. Primary Recipient Each automated email is sent to a Primary Recipient. Depending on the email type, the default Primary Recipient is either the individual, or the Site Admin listed in the Site General Settings found in the dashboard at Settings > General. This chart shows who is set as the default Primary Recipient by Other email type: CC'd Recipients You can also CC other email addresses to receive these messages if you wish. The addresses can be added to the CC field, separated by commas. Announcements The Announcement email is relatively straightforward. It just lets a user know a new announcement is waiting for them and asks them to login to read it. Announcement Email Template Here is the provided sample announcement email template. You are free to customize this in any way you wish, including using shortcodes. The most relevant shortcodes for announcement emails that you may want to include are [announcement_content] and [announcement_url]. User Experience of Announcement Email And here is what a user will see in their inbox:
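As a quick illustration of how the shortcodes mentioned above can be combined, a customized announcement email body might look something like the following. The wording is only a sketch, not the platform's default template, and it uses only the two shortcodes named in this article:

```html
<p>Hello,</p>
<p>A new announcement is waiting for you on the platform:</p>
<blockquote>[announcement_content]</blockquote>
<p><a href="[announcement_url]">Log in to read the full announcement</a></p>
```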
https://docs.academyofmine.com/article/175-automated-other-emails
2021-11-27T08:36:42
CC-MAIN-2021-49
1637964358153.33
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e888b922c7d3a7e9aea626e/images/6110b28c64a230081ba1d4fa/file-ARVSEiblOr.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e888b922c7d3a7e9aea626e/images/6024281f1f25b9041bebdac7/file-BuWgOQJcGt.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e888b922c7d3a7e9aea626e/images/60184338a4cefb30ae5c608a/file-Pkz5Yw3xmV.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e888b922c7d3a7e9aea626e/images/6119c1cfb37d837a3d0e3ddb/file-HeIyywV08H.png', None], dtype=object) ]
docs.academyofmine.com
Range Class Namespace: DevExpress.Xpf.Charts Assembly: DevExpress.Xpf.Charts.v20.2.dll Declaration public class Range : ChartNonVisualElement Public Class Range Inherits ChartNonVisualElement Related API Members The following members accept/return Range objects: Remarks The Range class contains range settings for an axis. An instance of the Range type can be accessed via the Axis.WholeRange and Axis2D.VisualRange properties. The options provided by the Range class allow you to limit an axis range manually, by specifying both the minimum and maximum axis value. This can be done for the axis scale type (via the Range.MinValue and Range.MaxValue properties). The values calculated for these properties can then be obtained via the corresponding Actual* property (e.g., Range.ActualMaxValue and Range.ActualMaxValueInternal).
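As a minimal sketch of how these properties fit together (the axis type and the numeric limits are assumptions chosen for illustration, not taken from this page), a whole range could be limited in code roughly like this:

```csharp
using DevExpress.Xpf.Charts;

// Sketch: limit the whole range of a numeric Y axis to 0..100.
var axisY = new AxisY2D();
axisY.WholeRange = new Range
{
    MinValue = 0,    // lower limit of the axis range
    MaxValue = 100   // upper limit of the axis range
};

// The visual (initially visible) range can be narrowed further within the whole range.
axisY.VisualRange = new Range { MinValue = 20, MaxValue = 80 };
```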
https://docs.devexpress.com/WPF/DevExpress.Xpf.Charts.Range?v=20.2
2021-11-27T08:34:00
CC-MAIN-2021-49
1637964358153.33
[]
docs.devexpress.com
Poller Packages
To define a more complex monitoring configuration, it is possible to group service configurations into polling packages. They allow you to assign different service configurations to nodes. To assign a polling package to nodes, use the Rules/Filters syntax. Each polling package can have its own Downtime Model configuration.
You can configure multiple packages, and an interface can exist in more than one package. This gives great flexibility to how the service levels will be determined for a given device.
Polling package assigned to Nodes with Rules and Filters
<package name="example1">(1)
  <filter>IPADDR != '0.0.0.0'</filter>(2)
  <include-range begin="..." end="..."/>(3)
  <include-range begin="..." end="..."/>(3)
1 Unique name of the polling package.
2 Base filter on IP address, categories or asset attributes of nodes based on Rules/Filters. The filter is evaluated first and is required. This package is used for all IP interfaces that do not have 0.0.0.0 as an assigned IP address.
3 Allows you to specify whether the configuration of services is applied on a range of IP interfaces (IPv4 or IPv6).
Instead of the include-range it is possible to add one or more specific IP interfaces:
Defining specific IP Interfaces
<specific>192.168.1.59</specific>
It is also possible to exclude IP interfaces:
Exclude IP Interfaces
<exclude-range begin="..." end="..."/>
Response Time Configuration
The definition of polling packages lets you configure similar services with different polling intervals. All the response time measurements are persisted in RRD files and require a definition. Each polling package contains an RRD definition.
RRD configuration for Polling Package example1
<rrd step="300">(1)
  <rra>RRA:AVERAGE:0.5:1:2016</rra>(2)
  <rra>RRA:AVERAGE:0.5:12:1488</rra>(3)
  <rra>RRA:AVERAGE:0.5:288:366</rra>(4)
  <rra>RRA:MAX:0.5:288:366</rra>(5)
  <rra>RRA:MIN:0.5:288:366</rra>(6)
</rrd>
1 The polling interval for all services in this polling package is reflected in the step size of 300 seconds. All services in this package have to be polled in a 5-minute interval, otherwise response time measurements are not persisted correctly.
2 1 step size is persisted 2016 times: 1 * 5 min * 2016 = 7 d, 5 min accuracy for 7 d.
3 12 steps average persisted 1488 times: 12 * 5 min * 1488 = 62 d, aggregated to 60 min for 62 d.
4 288 steps average persisted 366 times: 288 * 5 min * 366 = 366 d, aggregated to 24 h for 366 d.
5 288 steps maximum from 24 h persisted for 366 d.
6 288 steps minimum from 24 h persisted for 366 d.
The RRD configuration and the service polling interval must be aligned. In other cases, the persisted response time data is not correctly displayed in the response time graph. If you change the polling interval afterwards, you must recreate existing RRD files with the new definitions.
Overwriting
1 The polling intervals in the packages are 300 seconds and 30 seconds
2 Different polling interval for the ICMP service
3 Different retry settings for the ICMP service
4 Different timeout settings for the ICMP service
The last polling package on the service will be applied. This can be used to define a less specific catch-all filter for a default configuration. Use a more specific polling package to overwrite the default setting. In the above example, all IP interfaces in 192.168.1/24 or 2600:/64 will be monitored with ICMP with different polling, retry, and timeout settings.
The WebUI displays which polling packages are applied to the IP interface and service. The IP Interface and Service pages show which polling package and service configuration is applied for this specific service.
Figure 1. Polling Package applied to IP interface and Service
Service Patterns
Usually, the poller used to monitor a service is found by matching the poller's name with the service name.
There is an option to match a poller if an additional pattern element is specified. If so, the poller is used for all services matching the RegEx pattern. The RegEx pattern lets you specify named capture groups. There can be multiple capture groups inside a pattern, but each must have a unique name. Please note that the RegEx must be escaped or wrapped in a CDATA tag inside the configuration XML to make it a valid property.
If a poller is matched using its pattern, the parts of the service name which match the capture groups of the pattern are available as parameters to the Metadata DSL, using the context pattern and the capture group name as key.
Examples:
<pattern><![CDATA[^HTTP-(?<vhost>.*)$]]></pattern>
Matches all services with names starting with HTTP- followed by a host name. If the service is called HTTP- followed by a host name, the Metadata DSL expression ${pattern:vhost} will resolve to that host name.
<pattern><![CDATA[^HTTP-(?<vhost>.*?):(?<port>[0-9]+)$]]></pattern>
Matches all services with names starting with HTTP- followed by a hostname and a port. There will be two variables (${pattern:vhost} and ${pattern:port}), which you can use in the poller parameters.
Use the service pattern mechanism whenever there are multiple instances of a service on the same interface. By specifying a distinct service name for each instance, each service is identifiable, but there is no need to add a poller definition per service. Common use cases for such services are HTTP virtual hosts, where multiple web applications run on the same web server, or BGP session monitoring, where each router has multiple neighbors.
Run ICMP monitor configuration defined in a specific Polling Package
opennms> opennms:poll -S ICMP -P example1 10.23.42.1
The output is verbose, which lets you debug monitor configurations. Important output lines are shown below:
Important output when testing a service on the CLI
1 Service and package of this test
2 Applied service configuration from polling package for this test
3 Service monitor used for this test
4 RRD configuration for response time measurement
5 Retry and timeout settings for this test
6 Polling result for the service polled against the IP address
7 Response time
Test filters on Karaf Shell
Filters are ubiquitous in OpenNMS configurations, using the <filter> syntax. Use the Karaf shell to verify filters. Refer to IPLIKE for more information on using IPLIKE syntax.
Example: Run a filter that matches a node location and a given IP address range.
opennms
Node information displayed will have nodeId, nodeLabel, location, and optional fields like foreignId, foreignSource, and categories when they exist.
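To show how the captured groups feed back into a poller definition, here is a hedged sketch of a service entry inside a polling package; the service name, interval and the parameter keys (port, virtual-host) are illustrative assumptions, not taken from the shipped configuration. Only the ${pattern:...} expressions follow directly from the text above.
<!-- Sketch only: a per-vhost HTTP service whose parameters reuse the named capture groups. -->
<service name="HTTP-www.example.org:8080" interval="300000" user-defined="false" status="on">
  <!-- parameter keys are assumed for an HTTP-style monitor -->
  <parameter key="port" value="${pattern:port}"/>
  <parameter key="virtual-host" value="${pattern:vhost}"/>
</service>
With a poller pattern such as ^HTTP-(?<vhost>.*?):(?<port>[0-9]+)$, the two parameters resolve to www.example.org and 8080 for this service name.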
https://docs.opennms.com/meridian/2021.1.5/operation/service-assurance/polling-packages.html
2021-11-27T09:15:18
CC-MAIN-2021-49
1637964358153.33
[]
docs.opennms.com
Manage Cancel Order Requests¶
- The Admin can see a list of Cancel Requests, along with the Order Item Number, Product title, Name of the person who placed the order, the reason for cancellation, and remarks.
- The Admin can review the details and either select ‘Accept’ or ‘Reject’.
- The Admin can also filter Cancelled Orders by entering a particular Order number.
- The Admin can also perform a bulk update of cancel requests by exporting all the cancel requests to an Excel or CSV file, updating it, and uploading it back.
https://docs.spurtcommerce.com/Admin/Sales/CancelRequest.html
2021-11-27T08:43:47
CC-MAIN-2021-49
1637964358153.33
[]
docs.spurtcommerce.com
Achievements Window An explanation of achievements and the Achievements window is provided below. Displaying the Achievements Window To display the Achievements window, choose Window > Achievements in Eggplant Functional. The Achievements window in Eggplant Functional Tutorials Select Tutorials to see the available tutorials. Click the Start button to the right of a tutorial to watch it. The first panel of the tutorial opens, showing you that this is the first of however many steps are in the tutorial as shown below. To stop or exit a tutorial, click the Stop or Exit button at the bottom of the tutorial window. Exiting a tutorial returns you to the Achievements window. Note: Eggplant Functional provides the option for you to create system under test (SUT) that you can use for testing when you are following tutorials. See Creating a Tutorial SUT for more information. Achievements Achievements are listed by category. Below the list of categories, you can see the percentage of achievements you earned compared to all possible achievements. The currently selected category is "circled" the way Scripting is in the Achievements window above. When you select a category, the list of related achievements displays in the panel on the right. Any achievements you earned are highlighted in green with a check mark, like Your Wish is My Command Line and Lumberjack in the Achievements window above. Descriptions of the categories follow: - Scripting: Select this category to see the list of achievements related to writing scripts in Eggplant Functional. - Features: Select this category to see the list of achievements related to learning how to use Eggplant Functional features. - Image Capture: Select this category to see the list of achievements related to capturing images in Eggplant Functional. - Secret: Select this category to see a list of miscellaneous achievements, such as asking for help, reporting a bug, or submitting a feature request. - Do Lots: Select this category to see the list of achievements for creating large numbers of scripts and images in Eggplant Functional. Changing Achievement Notifications On Mac, Windows 7, and Windows 10, Eggplant Functional sends achievement notifications by default. The notifications are configured in your operating system settings. If you want to change or disable the notifications, follow the instructions below: Mac To change or disable notifications for Eggplant Functional achievements on Mac, follow these steps: - Open your Mac System Preferences and select Notifications. - In the Notifications Center window on the left, scroll to Eggplant and select it. - In the Eggplant alert style panel on the right, change the notification settings as desired or choose None to disable notifications. Windows 7 To change or disable notifications for Eggplant Functional achievements on Windows 7, follow these steps: - Go to Start > Control Panel > Appearance and Personalization. - From the Taskbar and Start Menu section, select Customize icons on the taskbar. - Scroll through the list of available icons and notifications until you get to Eggplant.exe. - Select your desired display option from the drop-down list: Only show notifications, Show icon and notifications, or Hide icon and notifications. - Click OK to save your preferences.
http://docs.eggplantsoftware.com/ePF/gettingstarted/epf-achievements-window.htm
2020-02-17T06:54:51
CC-MAIN-2020-10
1581875141749.3
[array(['../../Resources/Images/epf-achievements-window.png', 'The Achievements window'], dtype=object) array(['../../Resources/Images/epf-achievements-window-tutorial-panel.png', 'Tutorial panel in the Achievements window'], dtype=object) ]
docs.eggplantsoftware.com
Best Practices for Your Compute Instance
Oracle Cloud Infrastructure Compute provides bare metal compute capacity that delivers performance, flexibility, and control without compromise. It is powered by Oracle’s next generation, internet-scale infrastructure designed to help you develop and run your most demanding applications and workloads in the cloud. You can provision compute capacity through an easy-to-use web console or an API. The bare metal compute instance, once provisioned, provides you with access to the host. This gives you complete control of your instance. While you have full management authority for your instance, Oracle recommends a variety of best practices to ensure system availability and top performance.
System Resilience
User Access
If you do not want to share SSH keys, you can create additional SSH-enabled users. If you created your instance using an Oracle-provided Windows image, you can access your instance using a Remote Desktop client as the opc user. After logging in, you can add users on your instance. For more information about user access, see Adding Users on an Instance.
NTP Service
Oracle Cloud Infrastructure offers a fully managed, secure, and highly available NTP service that you can use to set the date and time of your Compute and Database instances from within your virtual cloud network (VCN). Oracle recommends that you configure your instances to use the Oracle Cloud Infrastructure NTP service. For information about how to configure instances to use this service, see Configuring the Oracle Cloud Infrastructure NTP Service for an Instance.
Fault Domains
A fault domain is a grouping of hardware and infrastructure that is distinct from other fault domains in the same availability domain. Each availability domain has three fault domains. By properly leveraging fault domains you can increase the availability of applications running on Oracle Cloud Infrastructure. See Fault Domains for more information. Your application's architecture will determine whether you should separate or group instances using fault domains.
Scenario 1: Highly Available Application Architecture
In this scenario you have a highly available application; for example, you have two web servers and a clustered database. In this scenario you should group one web server and one database node in one fault domain and the other half of each pair in another fault domain. This ensures that a failure of any one fault domain does not result in an outage for your application.
Scenario 2: Single Web Server and Database Instance Architecture
In this scenario your application architecture is not highly available; for example, you have one web server and one database instance. In this scenario both the web server and the database instance must be placed in the same fault domain. This ensures that your application will only be impacted by the failure of that single fault domain.
Customer-Managed Virtual Machine (VM) Maintenance
When an underlying infrastructure component needs to undergo maintenance, we notify you in advance of the planned maintenance downtime. To avoid this planned downtime, you have the option to reboot, or stop and restart, your instances prior to the scheduled maintenance. This makes it easy for you to control your instance downtime during the notification period. The reboot, or stop and restart, of a VM instance during the notification period is different from a normal reboot.
The reboot, or stop and start workflow will stop your instance on the existing VM host that needs maintenance and restart it on a healthy VM host. If you choose not to reboot during the notification period, then Oracle Cloud Infrastructure will reboot your VM instance before proceeding with the planned infrastructure maintenance. For information on rebooting or restarting your instance prior to planned maintenance, see Rebooting Your Virtual Machine (VM) Instance During Planned Maintenance.
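For the fault-domain scenarios above, instances are pinned to a fault domain at launch time. The following OCI CLI sketch shows the general shape of such a launch; the flag names, fault domain label, shape and all OCIDs are assumptions for illustration and should be checked against the CLI help for your tenancy and CLI version.
# Sketch only: launch one web server into a specific fault domain.
# Verify flags with: oci compute instance launch --help
oci compute instance launch \
  --availability-domain "Uocm:PHX-AD-1" \
  --fault-domain "FAULT-DOMAIN-1" \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --shape VM.Standard2.1 \
  --subnet-id ocid1.subnet.oc1.phx.exampleuniqueID \
  --image-id ocid1.image.oc1.phx.exampleuniqueID \
  --display-name web-server-1
The second web server and the second database node of Scenario 1 would be launched the same way with FAULT-DOMAIN-2, keeping each half of the pair in a different fault domain.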
https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/bestpracticescompute.htm
2020-02-17T07:24:07
CC-MAIN-2020-10
1581875141749.3
[]
docs.cloud.oracle.com
ULYSSIS takes backups of all files, databases, settings and repositories 1-4 times a week. These backups can be restored in case of severe server failure or in an emergency (if, for example, a user deletes all of their site by mistake). It is, however, more convenient for users to take their own backups when they do experimental things on their account. You can use the graphical methods described in Accessing your files to download a copy of all of your files. This copy, or certain files, can then easily be uploaded again using the same method to restore a backup. To take a backup of your database, you simply use the export function of PHPMyAdmin as described on Using PHPMyAdmin.
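If you prefer the command line over PHPMyAdmin and have shell access with database credentials, a standard mysqldump run along the following lines produces an equivalent database backup; the username, database name and host are placeholders rather than ULYSSIS-specific values.
# Dump one database to a SQL file (adjust user, database and host).
mysqldump -h localhost -u myuser -p mydatabase > mydatabase-backup.sql
# Restore it later with the mysql client.
mysql -h localhost -u myuser -p mydatabase < mydatabase-backup.sql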
https://docs.ulyssis.org/index.php?title=Making_Backups&diff=prev&oldid=792
2020-02-17T07:33:26
CC-MAIN-2020-10
1581875141749.3
[]
docs.ulyssis.org
- Enable Kerberos.
- Use Apache Atlas for dataset level lineage graphs.
- Use Apache Ranger to authorize NiFi and NiFi Registry users.
- Install the following Runtime services, at minimum:
  - Atlas
  - Ranger
  - ZooKeeper
When you have completed the CDP Private Cloud Base cluster installation, add the CFM parcel and CSD files.
https://docs.cloudera.com/cfm/2.0.4/deployment/topics/cfm-install-cdp.html
2021-01-16T05:47:10
CC-MAIN-2021-04
1610703500028.5
[]
docs.cloudera.com
7 Composing choices¶ It’s time to put everything you’ve learnt so far together into a complete and secure DAML model for asset issuance, management, transfer, and trading. This application will have capabilities similar to the one in IOU Quickstart Tutorial. In the process you will learn about a few more concepts: - DAML projects, packages and modules - Composition of transactions - Observers and stakeholders - DAML’s execution model - Privacy The model in this section is not a single DAML file, but a DAML project consisting of several files that depend on each other. DAML projects¶ DAML is organized in packages and modules. A DAML project is specified using a single daml.yaml file, and compiles into a package. Each DAML file within a project becomes a DAML module. You can start a new project with a skeleton structure using daml new project_name in the terminal. Each DAML project has a main source file, which is the entry point for the compiler. A common pattern is to have a main file called LibraryModules.daml, which simply lists all the other modules to include. A minimal project would contain just a daml.yaml file and an empty directory of source files. Take a look at the daml.yaml for this project: sdk-version: __VERSION__ name: __PROJECT_NAME__ source: daml version: 1.0.0 dependencies: - daml-prim - daml-stdlib sandbox-options: - --wall-clock-time You can generally set name and version freely to describe your project. dependencies lists package dependencies: you should always include daml-prim, and daml-stdlib gives access to the DAML standard library. You compile a DAML project by running daml build from the project root directory. This creates a dar package in .daml/dist/dist/project_name-project_version.dar. A dar file is DAML’s equivalent of a JAR file in Java: it’s the artifact that gets deployed to a ledger to load the contract model. Project structure¶ This project contains an asset holding model for transferrable, fungible assets and a separate trade workflow. The templates are structured in three modules: Intro.Asset, Intro.Asset.Role, and Intro.Asset.Trade. In addition, there are tests in modules Test.Intro.Asset, Test.Intro.Asset.Role, and Test.Intro.Asset.Trade. All but the last .-separated segment in module names correspond to paths, and the last one to a file name. The folder structure therefore looks like this: . ├── daml │ ├── Intro │ │ ├── Asset │ │ │ ├── Role.daml │ │ │ └── Trade.daml │ │ └── Asset.daml │ └── Test │ └── Intro │ ├── Asset │ │ ├── Role.daml │ │ └── Trade.daml │ └── Asset.daml └── daml.yaml Each file contains the DAML pragma and module header. For example, daml/Intro/Asset/Role.daml: module Intro.Asset.Role where You can import one module into another using the import keyword. The LibraryModules module imports all six modules: import Intro.Asset Imports always have to appear just below the module declaration. You can optionally add a list of names after the import to import only the selected names: import DA.List (sortOn, groupOn) Project overview¶ The project both changes and adds to the Iou model presented in 6 Parties and authority: Assets are fungible in the sense that they have Mergeand Splitchoices that allow the ownerto manage their holdings. Transfer proposals now need the authorities of both issuerand newOwnerto accept. This makes Assetsafer than Ioufrom the issuer’s point of view. With the Ioumodel, an issuercould end up owing cash to anyone as transfers were authorized by just ownerand newOwner. 
In this project, only parties having an AssetHoldercontract can end up owning assets. This allows the issuerto determine which parties may own their assets. The Tradetemplate adds a swap of two assets to the model. Composed choices and scenarios¶ This project showcases how you can put the Update and Scenario actions you learnt about in 6 Parties and authority to good use. For example, the Merge and Split choices each perform several actions in their consequences. - Two create actions in case of Split - One create and one archive action in case of Merge Split : SplitResult with splitQuantity : Decimal do splitAsset <- create this with quantity = splitQuantity remainder <- create this with quantity = quantity - splitQuantity return SplitResult with splitAsset remainder Merge : ContractId Asset with otherCid : ContractId Asset do other <- fetch otherCid assertMsg "Merge failed: issuer does not match" (issuer == other.issuer) assertMsg "Merge failed: owner does not match" (owner == other.owner) assertMsg "Merge failed: symbol does not match" (symbol == other.symbol) archive otherCid create this with quantity = quantity + other.quantity The return function used in Split is available in any Action context. The result of return x is a no-op containing the value x. It has an alias pure, indicating that it’s a pure value, as opposed to a value with side-effects. The return name makes sense when it’s used as the last statement in a do block as its argument is indeed the “return”-value of the do block in that case. Taking transaction composition a step further, the Trade_Settle choice on Trade composes two exercise actions: Trade_Settle : (ContractId Asset, ContractId Asset) with quoteAssetCid : ContractId Asset baseApprovalCid : ContractId TransferApproval do fetchedBaseAsset <- fetch baseAssetCid assertMsg "Base asset mismatch" (baseAsset == fetchedBaseAsset with observers = baseAsset.observers) fetchedQuoteAsset <- fetch quoteAssetCid assertMsg "Quote asset mismatch" (quoteAsset == fetchedQuoteAsset with observers = quoteAsset.observers) transferredBaseCid <- exercise baseApprovalCid TransferApproval_Transfer with assetCid = baseAssetCid transferredQuoteCid <- exercise quoteApprovalCid TransferApproval_Transfer with assetCid = quoteAssetCid return (transferredBaseCid, transferredQuoteCid) The resulting transaction, with its two nested levels of consequences, can be seen in the test_trade scenario in Test.Intro.Asset.Trade: TX #15 1970-01-01T00:00:00Z (Test.Intro.Asset.Trade:77:23) #15:0 │ known to (since): 'Alice' (#15), 'Bob' (#15) └─> 'Bob' exercises Trade_Settle on #13:1 (Intro.Asset.Trade:Trade) with quoteAssetCid = #10:1; baseApprovalCid = #14:2 children: #15:1 │ known to (since): 'Alice' (#15), 'Bob' (#15) └─> fetch #11:1 (Intro.Asset:Asset) #15:2 │ known to (since): 'Alice' (#15), 'Bob' (#15) └─> fetch #10:1 (Intro.Asset:Asset) #15:3 │ known to (since): 'USD_Bank' (#15), 'Bob' (#15), 'Alice' (#15) └─> 'Alice', 'Bob' exercises TransferApproval_Transfer on #14:2 (Intro.Asset:TransferApproval) with assetCid = #11:1 children: #15:4 │ known to (since): 'USD_Bank' (#15), 'Bob' (#15), 'Alice' (#15) └─> fetch #11:1 (Intro.Asset:Asset) #15:5 │ known to (since): 'Alice' (#15), 'USD_Bank' (#15), 'Bob' (#15) └─> 'Alice', 'USD_Bank' exercises Archive on #11:1 (Intro.Asset:Asset) #15:6 │ referenced by #17:0 │ known to (since): 'Bob' (#15), 'USD_Bank' (#15), 'Alice' (#15) └─> create Intro.Asset:Asset with issuer = 'USD_Bank'; owner = 'Bob'; symbol = "USD"; quantity = 100.0; observers = [] #15:7 │ known to 
(since): 'EUR_Bank' (#15), 'Alice' (#15), 'Bob' (#15) └─> 'Bob', 'Alice' exercises TransferApproval_Transfer on #12:1 (Intro.Asset:TransferApproval) with assetCid = #10:1 children: #15:8 │ known to (since): 'EUR_Bank' (#15), 'Alice' (#15), 'Bob' (#15) └─> fetch #10:1 (Intro.Asset:Asset) #15:9 │ known to (since): 'Bob' (#15), 'EUR_Bank' (#15), 'Alice' (#15) └─> 'Bob', 'EUR_Bank' exercises Archive on #10:1 (Intro.Asset:Asset) #15:10 │ referenced by #16:0 │ known to (since): 'Alice' (#15), 'EUR_Bank' (#15), 'Bob' (#15) └─> create Intro.Asset:Asset with issuer = 'EUR_Bank'; owner = 'Alice'; symbol = "EUR"; quantity = 90.0; observers = [] Similar to choices, you can see how the scenarios in this project are built up from each other: test_issuance = scenario do setupResult@(alice, bob, bank, aha, ahb) <- setupRoles assetCid <- submit bank do exercise aha Issue_Asset with symbol = "USD" quantity = 100.0 submit bank do asset <- fetch assetCid assert (asset == Asset with issuer = bank owner = alice symbol = "USD" quantity = 100.0 observers = [] ) return (setupResult, assetCid) In the above, the test_issuance scenario in Test.Intro.Asset.Role uses the output of the setupRoles scenario in the same module. The same line shows a new kind of pattern matching. Rather than writing setupResults <- setupRoles and then accessing the components of setupResults using _1, _2, etc., you can give them names. It’s equivalent to writing setupResults <- setupRoles case setupResults of (alice, bob, bank, aha, ahb) -> ... Just writing (alice, bob, bank, aha, ahb) <- setupRoles would also be legal, but setupResults is used in the return value of test_issuance so it makes sense to give it a name, too. The notation with @ allows you to give both the whole value as well as its constituents names in one go. DAML’s execution model¶ DAML’s execution model is fairly easy to understand, but has some important consequences. You can imagine the life of a transaction as follows: - A party submits a transaction. Remember, a transaction is just a list of actions. - The transaction is interpreted, meaning the Updatecorresponding to each action is evaluated in the context of the ledger to calculate all consequences, including transitive ones (consequences of consequences, etc.). - The views of the transaction that parties get to see (see Privacy) are calculated in a process called blinding, or projecting. - The blinded views are distributed to the parties. - The transaction is validated based on the blinded views and a consensus protocol depending on the underlying infrastructure. - If validation succeeds, the transaction is committed. The first important consequence of the above is that all transactions are committed atomically. Either a transaction is committed as a whole and for all participants, or it fails. That’s important in the context of the Trade_Settle choice shown above. The choice transfers a baseAsset one way and a quoteAsset the other way. Thanks to transaction atomicity, there is no chance that either party is left out of pocket. The second consequence, due to 2., is that the submitter of a transaction knows all consequences of their submitted transaction – there are no surprises in DAML. However, it also means that the submitter must have all the information to interpret the transaction. That’s also important in the context of Trade. 
In order to allow Bob to interpret a transaction that transfers Alice’s cash to Bob, Bob needs to know both about Alice’s Asset contract, as well as about some way for Alice to accept a transfer – remember, accepting a transfer needs the authority of issuer in this example. Observers¶ Observers are DAML’s mechanism to disclose contracts to other parties. They are declared just like signatories, but using the observer keyword, as shown in the Asset template: template Asset with issuer : Party owner : Party symbol : Text quantity : Decimal observers : [Party] where signatory issuer, owner ensure quantity > 0.0 observer observers The Asset template also gives the owner a choice to set the observers, and you can see how Alice uses it to show her Asset to Bob just before proposing the trade. You can try out what happens if she didn’t do that by removing that transaction. usdCid <- submit alice do exercise usdCid SetObservers with newObservers = [bob] Observers have guarantees in DAML. In particular, they are guaranteed to see actions that create and archive the contract on which they are an observer. Since observers are calculated from the arguments of the contract, they always know about each other. That’s why, rather than adding Bob as an observer on Alice’s AssetHolder contract, and using that to authorize the transfer in Trade_Settle, Alice creates a one-time authorization in the form of a TransferAuthorization. If Alice had lots of counterparties, she would otherwise end up leaking them to each other. Controllers declared via the controller cs can syntax are automatically made observers. Controllers declared in the choice syntax are not, as they can only be calculated at the point in time when the choice arguments are known. Privacy¶ DAML’s privacy model is based on two principles: - Parties see those actions that they have a stake in. - Every party that sees an action sees its (transitive) consequences. Item 2. is necessary to ensure that every party can independently verify the validity of every transaction they see. A party has a stake in an action if - they are a required authorizer of it - they are a signatory of the contract on which the action is performed - they are an observer on the contract, and the action creates or archives it What does that mean for the exercise tradeCid Trade_Settle action from test_trade? Alice is the signatory of tradeCid and Bob a required authorizer of the Trade_Settled action, so both of them see it. According to rule 2. above, that means they get to see everything in the transaction. The consequences contain, next to some fetch actions, two exercise actions of the choice TransferApproval_Transfer. Each of the two involved TransferApproval contracts is signed by a different issuer, which see the action on “their” contract. So the EUR_Bank sees the TransferApproval_Transfer action for the EUR Asset and the USD_Bank sees the TransferApproval_Transfer action for the USD Asset. Some DAML ledgers, like the scenario runner and the Sandbox, work on the principle of “data minimization”, meaning nothing more than the above information is distributed. That is, the “projection” of the overall transaction that gets distributed to EUR_Bank in step 4 of DAML’s execution model would consist only of the TransferApproval_Transfer and its consequences. Other implementations, in particular those on public blockchains, may have weaker privacy constraints. Divulgence¶ Note that principle 2. 
of the privacy model means that sometimes parties see contracts that they are not signatories or observers on. If you look at the final ledger state of the test_trade scenario, for example, you may notice that both Alice and Bob now see both assets, as indicated by the Xs in their respective columns: This is because the create action of these contracts are in the transitive consequences of the Trade_Settle action both of them have a stake in. This kind of disclosure is often called “divulgence” and needs to be considered when designing DAML models for privacy sensitive applications.
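Tying the project mechanics from the start of this chapter together, the command sequence below scaffolds, builds and tests such a project; the project name is illustrative, and the exact skeleton produced depends on your SDK version.
# Create a new project skeleton (the name is illustrative).
daml new intro7
cd intro7
# Compile the model into a .dar package under .daml/dist/.
daml build
# Run the scenarios defined in the project's test modules.
daml test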
https://docs.daml.com/1.4.0-snapshot.20200729.4851.0.224ab362/daml/intro/7_Composing.html
2021-01-16T06:33:08
CC-MAIN-2021-04
1610703500028.5
[array(['../../_images/divulgence.png', '../../_images/divulgence.png'], dtype=object) ]
docs.daml.com
Creating Provisional Records Under Resources -> Open Metadata Editor. In the File tab, select New -> MARC21 Bibliographic A template will be created. Edit the LDR field to fill in encoding level = 7 Put your cursor in the LDR field. In the Edit tab, select Open form editor (or click CTRL+F). Encoding level (17) -> from pull down menu, select 7 – minimal level To close form editor, Edit -> Close form editor (or click the Esc key) Edit the 008 field to fill in date and language of the material The publication date (Date 1) is the last box on the top row. Language (35-37) is at the bottom middle of the expanded form editor. English is the default. If your material is in a language other than English, choose from the pull down list. You can type the first letter to minimize scrolling. To close, Edit -> Close form editor (or click the Esc key) To add a field that is not on the template, choose Edit -> Add field which puts a blank field below where your cursor is located. Basic fields for a provisional record: *Note: the first indicator is 1 or 0, the second indicator is the number of non-filing characters. First indicator: 1 if there is a 100 field, 0 if there is no 100 field. 1 means author main entry; 0 means title main entry. Non-filing indicator: count the letters in the word you want to skip and add one more for the space after the word. Examples: The = 4 , A = 2 , An = 3, Die = 4, La = 3. **Note: the series is usually optional, but if you need to add series information, use either the 490 (unauthorized form) or the 830 (authorized form found on the series authority record). If using the 490, the first indicator is 0, second blank. If using the 830, the first indicator is blank, the second indicator is 0. When you are finished, go to File -> Save and Release Record (Ctrl+Alt+R) on the MD editor toolbar. Example: 008 ######s2018####xx######r#####000#0#eng#d 020 123123123123 $$q (cloth) 100 1 $$a Smith, John, $$d 2000- 245 14 $$a The book I wrote : $$b a novel / $$c by John Smith. 250 $$a First edition. 260 $$a Nashville : $$b Vanity Press, $$c 2018. 490 0 $$a Vanity Press special Tennessee publications ; $$v volume 77 500 $$a Vanderbilt University Special Collections copy has hand-drawn plates of author’s childhood home rendered in crayon. $$5 TNJ 830 $$a Vanity Press special Tennessee publications ; $$v v. 77. Ignore other fields provided that you don’t use. Click the Save Icon (highlighted below) when you are finished. Alma view: Examples of 1xx: 100:1 : Shakespeare, William, $$ d1564-1616. 100:1 : Christiansen, Carl, $$d born 1884. 100:1 : Streep, Meryl. 245 : first indicator 1 or 0 : The choice of the first indicator is not important — your choice will not affect anything, as long as it is a valid number for this field, either 1 or 0. If you want to be correct, use 1st indicator 1 if there is also a 1xx field on the record. Use 1st indicator 0 if there is no 1xx field. Second indicator the number of nonfiling characters/digits in the title. The choice of the 2nd indicator is important — it will effect whether the title can be searched in a browse search or not. If there is a leading article (such as: a, an, the, der, die, das, el, la, los, las, le, l’, les, il, lo, [Portuguese: o, a, os, as, um, uma] etc.), count the number of characters in the article plus the space following for the second indicator value. Example: for The, the indicator would be 4. NOTE on the 245: use the title page to transcribe the title information, not the cover unless there is no title page. 
The other parts of the 245 are listed below: Subtitle information: after the title proper, on the same line, type a space colon and subfield b ( : $$b ) followed by the subtitle. Do not capitalize the first word, unless it is a proper name or a German noun. This subfield b does not repeat. If there is more than one subtitle, just separate it with another space colon space. Statement of responsibility: after the subtitle (or after the title proper when there is no subtitle), add the author information found on the title page by typing a space slash subfield c ( / $$c ) and the statement of responsibility just as it appears on the title page. Transcribe the author’s name just as it appears including any words such as “by” “von” “edited with an introduction” etc. Examples of 245: 245:10: :$$a Dancing in the street : $$b Motown and the cultural politics of Detroit / $$cSuzanne E. Smith. 245:04: :$$a The Wall Street journal. 245:10: :$$a All that jazz / $$c Fats Waller. 245:10: :$$a Liebling Kreuzberg : $$b der Verbieter : Roman / $$cvon Horst Friedrichs. 245:00: :$$a Venetian views, Venetian blinds : $$b English fantasies of Venice / $$c edited by Manfred Pfister. 245:00: :$$a St. Petersburg : $$b multimedia album. 245:00: Young Frankenstein / $$c directed by Mel Brooks. 245:00: :$$a Infotrac Web. 245:13: :$$a “…Jill ran up the hill” : $$b a play / $$c by Jack Felldown. Publication information: 260 field, with both indicators blank, was used prior to RDA and is still OK to use. The new, preferred, field for publication information is 264 with second indicator 1. $$a type the Place as it appears on the piece. Transcribe from the language it appears in, do not translate into English. Then type space colon space $$b followed by the Publisher. Type a comma after the name of the publisher followed by subfield c $$c and the Date of publication. NOTE: You can use publication information found anywhere on your piece. If there is no place of publication listed anywhere that you can find, type in brackets [Place of publication not identified]. If you cannot find any publisher listed, type in brackets [publisher not identified] If you can’t find a date you can guess at one, if you have a good idea, and put your guess in brackets [2018?]. If you really do not know when the item may have been published, type in brackets [date of publication not identified]. Examples of 260: 260: : $$a New York : $$b Orchard, $$c 1999. 260: : $$a London : $$b Macmillan, $$c [1999?] 260: : $$a Moskva : $$b Mosty Kultury : $$b Gesharim, $$c 1999. 260: : $$a Moskva : $$b [Place of publication not identified], $$c [date of publication not identified] Series: If there is a prominent series on the piece, especially if it is numbered, you should record it. The 490 is for recording series as it appears on the piece. The 830 is for the authorized form of the series, if you know this form. If you do not know, or will not look up the authorized form of the series, use the 490 first indicator 0, second blank to record the series. Examples of series: 490:0 : $$a Fischer Taschenbucher ; $$v Nr. 4502 490:0 : $$a Modern Library of the world’s best books 830: 0: $$a Wissenschaft und Gegenwart ; $$v Nr. 10 830: 0: $$a Criterion collection ; $$v 143.
https://docs.library.vanderbilt.edu/2019/03/11/487/
2021-01-16T06:38:07
CC-MAIN-2021-04
1610703500028.5
[array(['https://docs.library.vanderbilt.edu/wp-content/uploads/sites/9/2019/03/Cat-creating-provisional-300x153.png', None], dtype=object) ]
docs.library.vanderbilt.edu
Parvenu allows you to add your entire team so they can find data as well. To add a teammate, click on Settings in the top right and choose "Manage users." Here you can add a user by email and invite them as a teammate on Parvenu. They will be able to enrich contacts and use the platform without limits.
https://docs.parvenunext.com/resources/add-team-members
2021-01-16T05:08:39
CC-MAIN-2021-04
1610703500028.5
[]
docs.parvenunext.com
Auto processing The Voice auto CDR processing is managed here. We can import CDR's to charge customers and print their register of calls on their invoices. The process is similar to the Voice -> Processing -> CDR import, but here we can import multiple CDR's together, automatically. Firstly, to import CDR's you need to navigate to Config -> Voice -> Import data source and add a CRD data source location. Simply click on the Add button located at the top right of the table: The following types of data location types can be used: - SFTP - FTP - Local Adding an SFTP data source: Adding an FTP data source: Adding a local data source: We will use a local storage/data source as an example. The following parameters need to be configured to add data sources: Title - provide a relevant name for the data source location; Data source type - select a type from the drop-down menu: SFTP - if you select "SFTP" then you will need to enter a RSA private key. CDR files are located on a FTP server; FTP - if you select "FTP" then you will need to set the same parameters as for "SFTP" but without a RSA key. CDR files are also located on a FTP server; Local - if you select "Local" then you only need to set the Title and Folder path. CDR files are located on the Splynx server; Folder path - set your path to the folder with the CDR files; Host - set your host IP address; Port - if the data source type selected is FTP or SFTP - specify the port to connect to FTP here; Login - if the data source type selected is FTP or SFTP - specify the login to connect to FTP here; Password - if the data source type selected is FTP or SFTP - specify the password to connect to FTP here. Once the data source is added, you can test the connection (if FTP or SFTP is configured), edit the data source or delete it using the buttons provided in the Actions column as depicted below: When using a FTP server as a CDR data source - make sure that connection is successful. In case of using a local storage/data source (on Splynx server), make sure that the folder with the files has the correct permissions and "splynx" is the owner We have added local storage of CDRs named "Local storage test CDRs" and we will use it in a auto CDR processing configuration. We will use the following format of CDR files(it is a very simple format so we can not use custom handlers to parse files): Very important note to take is that each file must have a unique name, because Splynx checks the name of the file and if the file with the same name is imported to Splynx, after that file was updated - Splynx will not re-load the updated file, as the file with this name was already imported. Once we have added a data source, we can then navigate to Config -> Voice -> Auto CDR processing and add an auto processing unit: Let's create an auto processing entry by clicking on the Add button located at the top right of the table: The following parameters need to be specified here: Title - provide a relevant name for the entry; Import data source - select a data source from the drop-down menu(in our case we used the local storage); File name pattern - Regex for filtering file names (uses pcre syntax): This will process all the files that have pattern entrances in the file name. Examples. We have all the CDR file names starting with "test-cdr"(eg. test-cdr-2020-08-01.csv); Import from file modification date - specify the file modification date for the import. 
In our example, to import only files for August 2020, we've specified 2020-08-01 00:00:00, but if the file with calls for July was created on 2020-08-01 00:00:01, it will be also imported; First row contains column names - enable this option, if the first row in your CDR's contains the columns names; Delimiter - select a delimeter from the drop-down menu; Type - select a type from the drop-down menu, relevant to the data you would import. In our case it's only calls; Voice provider - select the necessary voice provider; Import handler - select your handler from the drop-down menu; Launch time interval - How often the auto processing will executed; Max processing time - max time that Splynx will spend to process one file. If processing of the file takes more than the specified value - it will be ignored; Enable - when enabled, an auto processing unit will be executed every 'Launch time' interval, and if disabled - you will have to run it manually; As we have a strict and simple format of files, we have specified columns regarding to our file format. If you are using a handler - columns configuration can be ignored. After Auto CDR processing entries have been added, you can run it manually(to test how it works, after successful test results, the auto import can be enabled to do all this stuff automatically(described at the end of this document)), from Config -> Voice -> Auto CDR processing, simply click on the "Run import" button. Before running an import, we recommend checking the preview of files that will be imported: We have 4 files to be imported and all is correct, so we can start the import: After the import has completed, we can check the results by clicking on the "History" button: In our case, the first file was fully processed and all the rest completed with warnings. Simply place the mouse cursor on the "Warning" message under each file to see the number of processed/unprocessed rows: We can click on the "Show warning rows" button, to view more information about the warnings: As we can see, some calls can't be placed to the correct service with the source number = 28449988. To fix this, we have added a voice service for the customer with the number = 28449988 and direction = outgoing. Let's reprocess the warnings: After reprocessing, we have successfully imported all files. Now we can enable the auto processing to grab new files from the data source, once a day. So Splynx will automatically grab and import files with calls:
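The walkthrough above relies on a very simple, strictly column-based CDR file. As a rough illustration, a minimal CSV in that spirit might look like the lines below; the column names, order and values are assumptions for illustration, not a format Splynx requires.
source,destination,datetime,duration
28449988,27115550101,2020-08-01 09:15:02,125
28449988,27115550177,2020-08-01 10:03:41,37
28441234,27215550123,2020-08-02 14:22:10,410
Each file would then get a unique, dated name such as test-cdr-2020-08-01.csv, so that the "test-cdr" file name pattern matches it and Splynx never sees two files with the same name.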
https://docs.splynx.com/voice/auto_cdr_processing/auto_cdr_processing.md
2021-01-16T06:11:51
CC-MAIN-2021-04
1610703500028.5
[array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Ficon_data_source.png', 'import data source'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2F1.png', 'Add'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2F9.png', 'SFTP'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Fadd_source.png', 'Add source'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Flocal_storage.png', 'Add local source'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Flist_of_sources.png', 'list'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Ftest_cdr_format.png', 'File format'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Ficon_processing.png', 'icon processing'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2F3.png', 'Add'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Fadd_auto_processing.png', 'create auto processing'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Ffiles_preview.png', 'files preview'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Frun_import.png', 'run import'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Fimport_result.png', 'history'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Fwarnings.png', 'warnings'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Fshow_warning_rows.png', 'show warnings'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Fclick_reprocess.png', 'click reprocess'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Freprocessed.png', 'reprocessed rows'], dtype=object) array(['http://docs.splynx.com/images/get?path=en%2Fvoice%2Fauto_cdr_processing%2Fenable.png', 'enable'], dtype=object) ]
docs.splynx.com
[ Tcllib Table Of Contents | Tcllib Index ] tcl::transform::core(n) 1 "Reflected/virtual channel support" Table Of Contents Synopsis - package require Tcl 8.5 - package require TclOO - package require tcl::transform::core ?1? Description The tcl::transform::core package provides a TclOO class implementing common behaviour needed by virtually every reflected or virtual channel transformation (initialization, finalization). This class expects to be used as either superclass of a concrete channel class, or to be mixed into such a class. Class API - ::tcl::transform::core objectName This command creates a new transform core object with an associated global Tcl command whose name is objectName. This command may be used to invoke various operations on the object, as described in the section for the Instance API. Instance API The API of transform core instances provides only two methods, both corresponding to transform handler commands (For reference see TIP 230). They expect to be called from whichever object instance the transform core was made a part of. - objectName initialize thechannel mode This method implements standard behaviour for the initialize method of transform handlers. Using introspection it finds the handler methods supported by the instance and returns a list containing their names, as expected by the support for reflected transformation in the Tcl core. It further remembers the channel handle in an instance variable for access by sub-classes. - objectName finalize thechannel This method implements standard behaviour for the finalize method of channel handlers. It simply destroys itself. - objectName destroy Destroying the transform core instance closes the channel and transform it was initialized for, see the method initialize. When destroyed from within a call of finalize this does not happen, under the assumption that the channel and transform are being destroyed by Tcl. Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category virtchannel of the Tcllib Trackers. Please also report any ideas for enhancements you may have for either package and/or documentation.
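As a hedged illustration of how the core class is meant to be combined with a concrete transform, the sketch below defines a toy transform that upper-cases written data and passes reads through, then pushes it onto a channel. The class name, method bodies, file name and the choice to subclass rather than mix in are invented for the example; only the package names, the superclass role and the read/write handler signatures follow the text above and TIP 230.
package require Tcl 8.5
package require TclOO
package require tcl::transform::core

# Toy transform: upper-case outgoing data, pass incoming data through unchanged.
oo::class create uppercase {
    superclass tcl::transform::core

    method write {c data} {
        return [string toupper $data]
    }
    method read {c data} {
        return $data
    }
}

# Push an instance onto an open channel; initialize/finalize are inherited from the core class.
set chan [open out.txt w]
chan push $chan [uppercase new]
puts $chan "hello transform"
close $chan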
http://docs.activestate.com/activetcl/8.5/tcl/tcllib/virtchannel_core/transformcore.html
2018-10-15T13:15:02
CC-MAIN-2018-43
1539583509196.33
[]
docs.activestate.com
Advanced Remote Management (ARM) supports Windows Rugged and Android devices running the proper AirWatch Agent and Remote Management service. ARM supports the following platforms. - Windows Mobile/CE running .NET 2.0+ with the AirWatch Agent v6.0.40 installed. - Android devices with the AirWatch Agent v7.0 and greater installed. You must also download the required Advanced Remote Management CAB or APK from Accessing Other Documents.
https://docs.vmware.com/en/VMware-AirWatch/9.2/vmware-airwatch-guides-92/GUID-AW92-RMv4_Supported_Platforms.html
2018-10-15T12:27:53
CC-MAIN-2018-43
1539583509196.33
[]
docs.vmware.com
Tcl8.6.7/Tk8.6.7 Documentation > Tk Commands, version 8.6.7 > grab - grab — Confine pointer and keyboard events to a window sub-tree - SYNOPSIS - DESCRIPTION - grab ?-global? window - grab current ?window? - grab release window - grab set ?-global? window - grab status window - WARNING - BUGS - EXAMPLE - SEE ALSO - KEYWORDS NAMEgrab — Confine pointer and keyboard events to a window sub-tree SYNOPSISgrab ?-global? window grab option ?arg arg ...? DESCRIPTIONThis command implements window's tree, button presses and releases and mouse motion events are reported to window,: -It is very easy to use global grabs to render a display completely unusable (e.g. by setting a grab on a widget which does not respond to events and not providing any mechanism for releasing the grab). Take extreme care when using them! BUGSIt took an incredibly complex and gross implementation to produce the simple grab effect described above. Given the current implementation, it is does not exist. EXAMPLESetbusy KEYWORDSgrab, keyboard events, pointer events, window
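The Setbusy example referenced above boils down to grabbing all events with a throwaway widget while a long operation runs. The sketch below is an illustrative reconstruction in that spirit, not the original listing; the procedure name, the .busy frame and the two-second stand-in task are invented for the example.
# Route all events to an empty frame with a watch cursor while a script runs.
proc setBusy {script} {
    frame .busy -cursor watch
    place .busy -relwidth 1.0 -relheight 1.0
    update
    grab set .busy
    uplevel 1 $script
    grab release .busy
    destroy .busy
}

# Usage: keep the UI "busy" for the duration of a slow task.
setBusy {
    after 2000   ;# stand-in for a long-running operation
}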
http://docs.activestate.com/activetcl/8.6/tcl/TkCmd/grab.html
2018-10-15T12:59:40
CC-MAIN-2018-43
1539583509196.33
[]
docs.activestate.com
InstallShield Limited Edition Note This article applies to Visual Studio 2015. If you're looking for Visual Studio 2017 documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2017. Download it here. By using InstallShield Limited Edition, you can create a setup file and distribute it to users so that they can install a desktop application or component without being connected to a network. InstallShield Limited Edition is free for users Visual Studio Professional and Enterprise editions. It replaces Windows Installer technology, which Visual Studio no longer supports. As an alternative, you can distribute applications and components by using ClickOnce, which requires network connectivity. See ClickOnce Security and Deployment. Note You can continue using Windows Installer projects created in earlier versions of Visual Studio by installing the Visual Studio Installer Projects Extension. See Visual Studio Installer Projects Extension. To enable InstallShield Limited Edition On the menu bar, choose File, New, Project. In the New Project dialog box, expand the Other Project Types node, and then choose the Setup and Deployment node. In the template list, choose Enable InstallShield Limited Edition, and then choose the OK button. In the browser window that opens, read the instructions, and then choose the Go to the download web site link.
https://docs.microsoft.com/en-us/visualstudio/deployment/installshield-limited-edition?view=vs-2015
2018-10-15T13:43:49
CC-MAIN-2018-43
1539583509196.33
[]
docs.microsoft.com
Contents IT Business Management Previous Topic Next Topic Create a budget key manually ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Create a budget key manually Create a budget key manually when you want to specify specific segment records that comprise the key, as allowed by the budget definition. Before you beginRole required: budget_admin Procedure Navigate to Financial Planning > Administration > Budget keys. Click New. Fill out the form fields (see table). Click Submit. Table 1. Account Code form fields Field Description Name Descriptive name for the account code. Segments The segments associated with this budget key. These segments are set in the budget model in the Segment Relationships related list. Select a value for the root segment identified in the budget definition. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/istanbul-it-business-management/page/product/it-finance/task/t_Create_A_Budget_Key.html
2018-10-15T13:22:46
CC-MAIN-2018-43
1539583509196.33
[]
docs.servicenow.com
© 2005-2017 The original authors. - Preface - I. Introduction - 1. What is Spring Web Services? - 2. Why Contract First? - 3. Writing Contract-First Web Services - 3.1. Introduction - 3.2. Messages - 3.3. Data Contract - 3.4. Service contract - 3.5. Creating the project - 3.6. Implementing the Endpoint - 3.7. Publishing the WSDL - II. Reference - 4. Shared components - 5. Creating a Web service with Spring-WS - 5.1. Introduction - 5.2. The MessageDispatcher - 5.3. Transports - 5.4. Endpoints - 5.5. Endpoint mappings - 5.6. Handling Exceptions - 5.7. Server-side testing - 6. Using Spring Web Services on the Client - 6.1. Introduction - 6.2. Using the client-side API - 6.3. Client-side testing - 7. Securing your Web services with Spring-WS - 7.1. Introduction - 7.2. XwsSecurityInterceptor - 7.3. Wss4jSecurityInterceptor - III. Other Resources Preface focuses on creating these document-driven Web services. Spring Web Services facilitates contract-first SOAP service development, allowing for the creation of flexible web services using one of the many ways to manipulate XML payloads. Spring-WS provides a powerful message dispatching framework, a WS-Security solution that integrates with your existing application security solution, and a Client-side API that follows the familiar Spring template pattern. I. Introduction 1. What is Spring Web Services? 1.1. Introduction Spring: 1.1.1. Powerful mappings You can distribute incoming XML requests to any object, depending on message payload, SOAP Action header, or an XPath expression. 1.1.2. XML API support Incoming XML messages can be handled not only with standard JAXP APIs such as DOM, SAX, and StAX, but also JDOM, dom4j, XOM, or even marshalling technologies. 1.1.3. Flexible XML Marshalling Spring Web Services builds on the Object/XML Mapping module in the Spring Framework, which supports JAXB 1 and 2, Castor, XMLBeans, JiBX, and XStream. 1.1.4. Reuses your Spring expertise Spring-WS uses Spring application contexts for all configuration, which should help Spring developers get up-to-speed nice and quickly. Also, the architecture of Spring-WS resembles that of Spring-MVC. 1.1.5. Supports WS-Security WS-Security allows you to sign SOAP messages, encrypt and decrypt them, or authenticate against them. 1.1.6. Integrates with Spring Security The WS-Security implementation of Spring Web Services provides integration with Spring Security. This means you can use your existing Spring Security configuration for your SOAP service as well. 1.2. Runtime environment Spring Web Services requires a standard Java 7 Runtime Environment. Java 8 is also supported. Spring-WS is built on Spring Framework 4.0.9, but higher versions are supported.and SoapMessageinterfaces, and higher. 1.3. Supported standards Spring Web Services supports the following standards: SOAP 1.1 and 1.2 WSDL 1.1 and 2.0 (XSD-based generation only supported for WSDL 1.1) WS-I Basic Profile 1.0, 1.1, 1.2 and 2.0 WS-Addressing 1.0 and the August 2004 draft SOAP Message Security 1.1, Username Token Profile 1.1, X.509 Certificate Token Profile 1.1, SAML Token Profile 1.1, Kerberos Token Profile 1.1, Basic Security Profile 1.1 2. Why Contract First? 2.1. Introduction When. 2.2. Object/XML Impedance Mismatch]. 2.2.1. XSD extensions. 2.2.2. Unportable types. 2.2.3. Cyclic graphs. 2.3. Contract-first versus Contract-last Besides the Object/XML Mapping issues mentioned in the previous section, there are other reasons for preferring a contract-first development style. 2.3.1. Fragility. 2.3.2. Performance. 
2.3.3. Reusability. 2.3.4. Versioning. 3. Writing Contract-First Web Services 3.1. Introduction. The second part focuses on implementing this contract using Spring-WS . The most important thing when doing contract-first Web service development is to try and think in terms of XML. This means that Java-language concepts are of lesser importance. It is the XML that is sent across the wire, and you should focus on that. The fact that Java is used to implement the Web service is an implementation detail. An important detail, but a detail nonetheless. In this tutorial, we will define a Web service that is created by a Human Resources department. Clients can send holiday request forms to this service to book a holiday. 3.2. Messages In this section, we will focus on the actual XML messages that are sent to and from the Web service. We will start out by determining what these messages look like. 3.2.1. Holiday In the scenario, we have to deal with holiday requests, so it makes sense to determine what a holiday looks like in XML: <Holiday xmlns=""> <StartDate>2006-07-03</StartDate> <EndDate>2006-07-07</EndDate> </Holiday> 3.2.2. Employee There is also the notion of an employee in the scenario. Here is what it looks like in XML: <Employee xmlns=""> <Number>42</Number> <FirstName>Arjen</FirstName> <LastName>Poutsma</LastName> </Employee> We have used the same namespace as before. If this <Employee/> element could be used in other scenarios, it might make sense to use a different namespace, such as. 3.2.3. HolidayRequest Both the holiday and employee element can be put in a <HolidayRequest/>: <HolidayRequest xmlns=""> <Holiday> <StartDate>2006-07-03</StartDate> <EndDate>2006-07-07</EndDate> </Holiday> <Employee> <Number>42</Number> <FirstName>Arjen</FirstName> <LastName>Poutsma</LastName> </Employee> </HolidayRequest> The order of the two elements does not matter: <Employee/> could have been the first element just as well. What is important is that all of the data is there. In fact, the data is the only thing that is important: we are taking a data-driven approach. 3.3. Data Contract Now that we have seen some examples of the XML data that we will use, it makes sense to formalize this into a schema. This data contract defines the message format we accept. There are four different ways of defining such a contract for XML: DTDs have limited namespace support, so they are not suitable for Web services. Relax NG and Schematron certainly are easier than XML Schema. Unfortunately, they are not so widely supported across platforms. We will use XML Schema. By far the easiest way to create an XSD is to infer it from sample documents. Any good XML editor or Java IDE offers this functionality. Basically, these tools use some sample XML documents, and generate a schema from it that validates them all. The end result certainly needs to be polished up, but it’s a great starting point. Using the sample described above, we end up with the following generated schema: :schema> This generated schema obviously can be improved. The first thing to notice is that every type has a root-level element declaration. This means that the Web service should be able to accept all of these elements as data. This is not desirable: we only want to accept a <HolidayRequest/>. By removing the wrapping element tags (thus keeping the types), and inlining the results, we can accomplish this. 
<xs:schema xmlns: :schema> The schema still has one problem: with a schema like this, you can expect the following messages to validate: <HolidayRequest xmlns=""> <Holiday> <StartDate>this is not a date</StartDate> <EndDate>neither is this</EndDate> </Holiday> PlainText Section qName:lineannotation level:4, chunks:[<, !-- ... --, >] attrs:[:] </HolidayRequest> Clearly, we must make sure that the start and end date are really dates. XML Schema has an excellent built-in date type which we can use. We also change the NCName s to string s. Finally, we change the sequence in <HolidayRequest/> to all. This tells the XML parser that the order of <Holiday/> and <Employee/> is not significant. Our final XSD now looks like this: <xs:schema xmlns: <xs:element <xs:complexType> <xs:all> <xs:element (1) <xs:element (1) </xs:all> </xs:complexType> </xs:element> <xs:complexType <xs:sequence> <xs:element (2) <xs:element (2) </xs:sequence> </xs:complexType> <xs:complexType <xs:sequence> <xs:element <xs:element (3) <xs:element (3) </xs:sequence> </xs:complexType> </xs:schema> We store this file as hr.xsd. 3.4. Service contract A service contract is generally expressed as a WSDL file. Note that in Spring-WS, writing the WSDL by hand is not required. Based on the XSD and some conventions, Spring-WS can create the WSDL for you, as explained in the section entitled Implementing the Endpoint. You can skip to the next section if you want to; the remainder of this section will show you how to write your own WSDL by hand. We start our WSDL with the standard preamble, and by importing our existing XSD. To separate the schema from the definition, we will use a separate namespace for the WSDL definitions:. <wsdl:definitions xmlns: <wsdl:types> <xsd:schema xmlns: <xsd:import </xsd:schema> </wsdl:types> Next, we add our messages based on the written schema types. We only have one message: one with the <HolidayRequest/> we put in the schema: <wsdl:message <wsdl:part </wsdl:message> We add the message to a port type as an operation: <wsdl:portType <wsdl:operation <wsdl:input </wsdl:operation> </wsdl:portType> That finished the abstract part of the WSDL (the interface, as it were), and leaves the concrete part. The concrete part consists of a binding, which tells the client how to invoke the operations you’ve just defined; and a service, which tells it where to invoke it. Adding a concrete part is pretty standard: just refer to the abstract part you defined previously, make sure you use document/literal for the soap:binding elements ( rpc/encoded is deprecated), pick a soapAction for the operation (in this case, but any URI will do), and determine the location URL where you want request to come in (in this case): <wsdl:definitions xmlns: <wsdl:types> <xsd:schema xmlns: <xsd:import </xsd:schema> </wsdl:types> <wsdl:message (2) <wsdl:part (3) </wsdl:message> <wsdl:portType (4) <wsdl:operation <wsdl:input (2) </wsdl:operation> </wsdl:portType> <wsdl:binding (4)(5) <soap:binding (7) <wsdl:operation <soap:operation (8) <wsdl:input <soap:body (6) </wsdl:input> </wsdl:operation> </wsdl:binding> <wsdl:service <wsdl:port (5) <soap:address (9) </wsdl:port> </wsdl:service> </wsdl:definitions> This is the final WSDL. We will describe how to implement the resulting schema and WSDL in the next section. 3.5. Creating the project In this section, we will be using Maven3 to create the initial project structure for us. Doing so is not required, but greatly reduces the amount of code we have to write to setup our HolidayService. 
The following command creates a Maven3 web application project for us, using the Spring-WS archetype (that is, project template):

mvn archetype:generate -DarchetypeGroupId=org.springframework.ws \
  -DarchetypeArtifactId=spring-ws-archetype \
  -DarchetypeVersion= \
  -DgroupId=com.mycompany.hr \
  -DartifactId=holidayService

This command will create a new directory called holidayService. In this directory, there is a 'src/main/webapp' directory, which will contain the root of the WAR file. You will find the standard web application deployment descriptor 'WEB-INF/web.xml' here, which defines a Spring-WS MessageDispatcherServlet and maps all incoming requests to this servlet.

<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
         version="2.4">

    <display-name>MyCompany HR Holiday Service</display-name>

    <!-- take special notice of the name of this servlet -->
    <servlet>
        <servlet-name>spring-ws</servlet-name>
        <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
    </servlet>

    <servlet-mapping>
        <servlet-name>spring-ws</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>

</web-app>

In addition to the above 'WEB-INF/web.xml' file, you will also need another, Spring-WS-specific configuration file, named 'WEB-INF/spring-ws-servlet.xml'. This file contains all of the Spring-WS-specific beans such as EndPoints, WebServiceMessageReceivers, and suchlike, and is used to create a new Spring container. The name of this file is derived from the name of the attendant servlet (in this case 'spring-ws') with '-servlet.xml' appended to it. So if you defined a MessageDispatcherServlet with the name 'dynamite', the name of the Spring-WS-specific configuration file would be 'WEB-INF/dynamite-servlet.xml'. (The contents of the 'WEB-INF/spring-ws-servlet.xml' file for this example are shown later in this tutorial.) Once you have created the project structure, you can put the schema and WSDL from the previous section into the 'WEB-INF/' folder.

3.6. Implementing the Endpoint

In Spring-WS, you will implement Endpoints to handle incoming XML messages. An endpoint is typically created by annotating a class with the @Endpoint annotation. In this endpoint class, you will create one or more methods that handle incoming requests. The method signatures can be quite flexible: you can include just about any sort of parameter type related to the incoming XML message, as will be explained later.

3.6.1. Handling the XML Message

In this sample application, we are going to use JDOM 2 to handle the XML message. We are also using XPath, because it allows us to select particular parts of the XML JDOM tree, without requiring strict schema conformance.
package com.mycompany.hr.ws; import java.text.ParseException; import java.text.SimpleDateFormat; import java.util.Arrays; import java.util.Date;2.Element; import org.jdom2.JDOMException; import org.jdom2.Namespace; import org.jdom2.filter.Filters; import org.jdom2.xpath.XPathExpression; import org.jdom2.xpath.XPathFactory; @Endpoint (1) public class HolidayEndpoint { private static final String NAMESPACE_URI = ""; private XPathExpression<Element> startDateExpression; private XPathExpression<Element> endDateExpression; private XPathExpression<Element> firstNameExpression; private XPathExpression<Element> lastNameExpression; private HumanResourceService humanResourceService; @Autowired (2) public HolidayEndpoint(HumanResourceService humanResourceService) throws JDOMException { this.humanResourceService = humanResourceService; Namespace namespace = Namespace.getNamespace("hr", NAMESPACE_URI); XPathFactory xPathFactory = XPathFactory.instance(); startDateExpression = xPathFactory.compile("//hr:StartDate", Filters.element(), null, namespace); endDateExpression = xPathFactory.compile("//hr:EndDate", Filters.element(), null, namespace); firstNameExpression = xPathFactory.compile("//hr:FirstName", Filters.element(), null, namespace); lastNameExpression = xPathFactory.compile("//hr:LastName", Filters.element(), null, namespace); } @PayloadRoot(namespace = NAMESPACE_URI, localPart = "HolidayRequest") (3) public void handleHolidayRequest(@RequestPayload Element holidayRequest) throws Exception {(4) Date startDate = parseDate(startDateExpression, holidayRequest); Date endDate = parseDate(endDateExpression, holidayRequest); String name = firstNameExpression.evaluateFirst(holidayRequest).getText() + " " + lastNameExpression.evaluateFirst(holidayRequest).getText(); humanResourceService.bookHoliday(startDate, endDate, name); } private Date parseDate(XPathExpression<Element> expression, Element element) throws ParseException { Element result = expression.evaluateFirst(element); if (result != null) { SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd"); return dateFormat.parse(result.getText()); } else { throw new IllegalArgumentException("Could not evaluate [" + expression + "] on [" + element + "]"); } } } Using JDOM is just one of the options to handle the XML: other options include DOM, dom4j, XOM, SAX, and StAX, but also marshalling techniques like JAXB, Castor, XMLBeans, JiBX, and XStream, as is explained in the next chapter. We chose JDOM because it gives us access to the raw XML, and because it is based on classes (not interfaces and factory methods as with W3C DOM and dom4j), which makes the code less verbose. We use XPath because it is less fragile than marshalling technologies: we don’t care for strict schema conformance, as long as we can find the dates and the name. Because we use JDOM, we must add some dependencies to the Maven pom.xml, which is in the root of our project directory. Here is the relevant section of the POM: <dependencies> <dependency> <groupId>org.springframework.ws</groupId> <artifactId>spring-ws-core</artifactId> <version></version> </dependency> <dependency> <groupId>jdom</groupId> <artifactId>jdom</artifactId> <version>2.0.1</version> </dependency> <dependency> <groupId>jaxen</groupId> <artifactId>jaxen</artifactId> <version>1.1</version> </dependency> </dependencies> Here is how we would configure these classes in our spring-ws-servlet.xml Spring XML configuration file, by using component scanning. 
We also instruct Spring-WS to use annotation-driven endpoints, with the <sws:annotation-driven> element. <beans xmlns="" xmlns: <context:component-scan <sws:annotation-driven/> </beans> 3.6.2. Routing the Message to the Endpoint As part of writing the endpoint, we also used the @PayloadRoot annotation to indicate which sort of messages can be handled by the handleHolidayRequest method. In Spring-WS, this process is the responsibility of an EndpointMapping. Here we route messages based on their content, by using a PayloadRootAnnotationMethodEndpointMapping. The annotation used above: @PayloadRoot(namespace = "", localPart = "HolidayRequest") basically means that whenever an XML message is received with the namespace and the HolidayRequest local name, it will be routed to the handleHolidayRequest method. By using the <sws:annotation-driven> element in our configuration, we enable the detection of the @PayloadRoot annotations. It is possible (and quite common) to have multiple, related handling methods in an endpoint, each of them handling different XML messages. There are also other ways to map endpoints to XML messages, which will be described in the next chapter. 3.6.3. Providing the Service and Stub implementation Now that we have the Endpoint, we need HumanResourceService and its implementation for use by HolidayEndpoint. package com.mycompany.hr.service; import java.util.Date; public interface HumanResourceService { void bookHoliday(Date startDate, Date endDate, String name); } For tutorial purposes, we will use a simple stub implementation of the HumanResourceService. package com.mycompany.hr.service; import java.util.Date; import org.springframework.stereotype.Service; @Service (1) public class StubHumanResourceService implements HumanResourceService { public void bookHoliday(Date startDate, Date endDate, String name) { System.out.println("Booking holiday for [" + startDate + "-" + endDate + "] for [" + name + "] "); } } 3.7. Publishing the WSDL Finally, we need to publish the WSDL. As stated in Service contract, we don’t need to write a WSDL ourselves; Spring-WS can generate one for us based on some conventions. Here is how we define the generation: <sws:dynamic-wsdl (5) <sws:xsd (2) </sws:dynamic-wsdl> <init-param> <param-name>transformWsdlLocations</param-name> <param-value>true</param-value> </init-param> You can create a WAR file using mvn install. If you deploy the application (to Tomcat, Jetty, etc.), and point your browser at this location, you will see the generated WSDL. This WSDL is ready to be used by clients, such as soapUI, or other SOAP frameworks. That concludes this tutorial. The tutorial code can be found in the full distribution of Spring-WS. The next step would be to look at the echo sample application that is part of the distribution. After that, look at the airline sample, which is a bit more complicated, because it uses JAXB, WS-Security, Hibernate, and a transactional service layer. Finally, you can read the rest of the reference documentation. II. Reference 4. Shared components In this chapter, we will explore the components which are shared between client- and server-side Spring-WS development. These interfaces and classes represent the building blocks of Spring-WS, so it is important to understand what they do, even if you do not use them directly. 4.1. Web service messages 4.1.1. WebServiceMessage. 4.1.2. SoapMessage. 4.1.3. Message Factories. SaajSoapMessageFactory" /> AxiomSoapMessageF. SOAP 1.1 or 1.2. 4.1.4. MessageContext. 4.2. TransportContext(); 4.3. 
Handling XML With XPath One of the best ways to handle XML is to use XPath. Quoting [effective-xml], item 35:. Spring Web Services has two ways to use XPath within your application: the faster XPathExpression or the more flexible XPathTemplate. 4.3.1. XPathExpression()); } }); PlainText Section qName:lineannotation level:5, chunks:[// do something with list of Contact objects] attrs:[:] } } Similar to mapping rows in Spring JDBC’s RowMapper, each result node is mapped using an anonymous inner class. In this case, we create a Contact object, which we use later on. 4.3.2. XPathTemplate } } 4.4. Message Logging and Tracing When developing or debugging a Web service, it can be quite useful to look at the content of a (SOAP) message when it arrives, or just before it is sent. Spring Web Services offer this functionality, via the standard Commons Logging interface.] ... 5. Creating a Web service with Spring-WS 5.1. Introduction Spring. 5.2. The MessageDispatcherdelegates MessageDispatcherServlet. 5.3. Transports Spring Web Services supports multiple transport protocols. The most common is the HTTP transport, for which a custom servlet is supplied, but it is also possible to send messages over JMS, and even email. 5.3.1. MessageDispatcherServlet ‘/WEB-INF/spring-ws-servlet.xml’. This file will contain all of the Spring Web Services beans such as endpoints, marshallers and suchlike. As an alternative for web.xml, if you are running on a Servlet 3+ environment, you can configure Spring-WS programmatically. For this purpose, Spring-WS provides a number of abstract base classes that extend the WebApplicationInitializer interface found in the Spring Framework. If you are also using @Configuration classes for your bean definitions, you are best of extending the AbstractAnnotationConfigMessageDispatcherServletInitializer, like so: public class MyServletInitializer extends AbstractAnnotationConfigMessageDispatcherServletInitializer { @Override protected Class<?>[] getRootConfigClasses() { return new Class[]{MyRootConfig.class}; } @Override protected Class<?>[] getServletConfigClasses() { return new Class[]{MyEndpointConfig.class}; } } In the example above, we tell Spring that endpoint bean definitions can be found in the MyEndpointConfig class (which is a @Configuration class). Other bean definitions (typically services, repositories, etc.) can be found in the MyRootConfig class. By default, the AbstractAnnotationConfigMessageDispatcherServletInitializer maps the servlet to two patterns: /services and *.wsdl, though this can be changed by overriding the getServletMappings() method. For more details on the programmatic configuration of the MessageDispatcherServlet, refer to the Javadoc of AbstractMessageDispatcherServletInitializer and AbstractAnnotationConfigMessageDispatcherServletInitializer. Automatic WSDL exposure ‘id’ attribute, because this will be used when exposing the WSDL. <sws:static-wsdl Or as @Bean method in a @Configuration class: @Bean public SimpleWsdl11Definition orders() { return new SimpleWsdl11Definition(new ClassPathResource("orders.wsdl")); } The WSDL defined in the ‘orders.wsdl’ file on the classpath can then be accessed via GET requests to a URL of the following form (substitute the host, port and servlet context path as appropriate). Another nice feature of the MessageDispatcherServlet (or more correctly the WsdlDefinitionHandlerAdapter) is that it is able to transform the value of the ‘location’ of all the WSDL that it exposes to reflect the URL of the incoming request. 
Please note that this ‘location’ transformation feature is off by default.To switch this feature on, you just need to specify an initialization parameter to the MessageDispatcherServlet, like so: <web-app> </servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> </web-app> If you use the AbstractAnnotationConfigMessageDispatcherServletInitializer, enabling transformation is as simple as overriding the isTransformWsdlLocations() method to return true. Publishing the WSDL. The next application context snippet shows how to create such a dynamic WSDL file: <sws:dynamic-wsdl <sws:xsd </sws:dynamic-wsdl> Or, as @Bean method: @Bean public DefaultWsdl11Definition orders() { DefaultWsdl11Definition definition = new DefaultWsdl11Definition(); definition.setPortTypeName("Orders"); definition.setLocationUri(""); definition.setSchema(new SimpleXsdSchema(new ClassPathResource("echo.xsd"))); return definition; }. The DefaultWsdl11Definition (and therefore, the <dynamic-wsdl> tag). 5.3.2. Wiring up Spring-WS in a DispatcherServlet As an alternative to the MessageDispatcherServlet, you can wire up a MessageDispatcher in a standard, Spring-Web MVC DispatcherServlet. By default, the DispatcherServlet can only delegate to Controllers, but we can instruct it to delegate to a MessageDispatcher by adding a WebServiceMessageReceiverHandlerAdapter to the servlet’s web application context: .method.annotation.RequestMappingHandlerAdapter"/> </beans> Note that by explicitly adding the WebServiceMessageReceiverHandlerAdapter, the dispatcher servlet does not load the default adapters, and is unable to handle standard Spring-MVC @Controllers. Therefore, we add the RequestMapping> 5.3.3. JMS transport"/> <.4. Email transport"/> <.5. Embedded HTTP Server transportler`s.> 5.3.6. XMPP transport: ="messagingReceiver" class="org.springframework.ws.transport.xmpp.XmppMessageReceiver"> <property name="messageFactory" ref="messageFactory"/> <property name="connection" ref="connection"/> <property name="messageReceiver" ref="messageDispatcher"/> <.7. MTOM MTOM is the mechanism of sending binary data to and from Web Services. You can look at how to implement this with Spring WS through the MTOM sample. 5.4. Endpoints (1) public class AnnotationOrderEndpoint { private final OrderService orderService; @Autowired (2) public AnnotationOrderEndpoint(OrderService orderService) { this.orderService = orderService; } @PayloadRoot(localPart = "order", namespace = "") (5) public void order(@RequestPayload Element orderElement) { (3) Order order = createOrder(orderElement); orderService.createOrder(order); } @PayloadRoot(localPart = "orderRequest", namespace = "") (5) @ResponsePayload public Order getOrder(@RequestPayload OrderRequest orderRequest, SoapHeader header) { (4)> Or, if you are using @Configuration classes instead of Spring XML, you can annotate your configuration class with @EnableWs, like so: @EnableWs @Configuration public class EchoConfig { // @Bean definitions go here } To customize the @EnableWs configuration, you can implement WsConfigurer, or better yet extend the WsConfigurerAdapter. 
For instance: @Configuration @EnableWs @ComponentScan(basePackageClasses = { MyConfiguration.class }) public class MyConfiguration extends WsConfigurerAdapter { @Override public void addInterceptors(List<EndpointInterceptor> interceptors) { interceptors.add(new MyInterceptor()); } @Override public void addArgumentResolvers(List<MethodArgumentResolver> argumentResolvers) { argumentResolvers.add(new MyArgumentResolver()); } // More overridden methods ... } In the next couple of sections, a more elaborate description of the @Endpoint programming model is given. Note that all abstract base classes provided in Spring-WS are thread safe, unless otherwise indicated in the class-level Javadoc. 5.4.1. @Endpoint handling methods. Handling method parameters message.. @XPathParam:orderRequest/@id") int orderId) { Order order = orderService.getOrder(orderId); // create Source fromor Boolean doubleor Double String Node NodeList In addition to this list, you can use any type that can be converted from a String by a Spring 3 conversion service. Handling method return types. 5.5. Endpoint mappingsapping`s. For example, there could be a custom endpoint mapping that chooses an endpoint not only based on the contents of a message, but also on a specific SOAP header (or indeed multiple SOAP headers). Most endpoint mappings inherit from the AbstractEndpointMapping, which offers an ‘interceptors’ property, which is the list of interceptors to use. EndpointInterceptors are discussed in Intercepting requests - the EndpointInterceptor interface. Additionally, there is the ‘defaultEndpoint’, which is the default endpoint to use when this endpoint mapping does not result in a matching endpoint. As explained in. 5.5.1. WS-Addressing. AnnotationActionEndpointMapping @Endpoint handling methods and URIs and Transports. 5.5.2. Intercepting requests - the EndpointInterceptor interface. When using @Configuration classes, you can extend from WsConfigurerAdapter to add interceptors. Like so: @Configuration @EnableWs public class MyWsConfiguration extends WsConfigurerAdapter { @Override public void addInterceptors(List<EndpointInterceptor> interceptors) { interceptors.add(new MyPayloadRootInterceptor()); } } XwsSecurityInterceptor. PayloadLoggingInterceptor and SoapEnvelopeLogging> Both of these interceptors have two properties: ‘logRequest’ and ‘logResponse’, which can be set to false to disable logging for either request or response messages. Of course, you could use the WsConfigurerAdapter approach, as described above, for the PayloadLoggingInterceptor as well. PayloadValidatingInterceptor> Of course, you could use the WsConfigurerAdapter approach, as described above, for the PayloadValidatingInterceptor as well. PayloadTransformingInterceptor. Of course, you could use the WsConfigurerAdapter approach, as described above, for the PayloadTransformingInterceptor as well. 5.6. Handling Exceptions. 5.6.1. SoapFaultMappingExceptionRes. 5.6.2. SoapFaultAnnotationExceptionResolver> 5.7. Server-side testing.. 5.7.1. Writing server-side integration tests Spring Web Services 2.0 introduced support for creating endpoint integration tests. In this context, an endpoint is class handles (SOAP) messages instanceimplementations provided in RequestCreators(which can be statically imported). Set up response expectations by calling andExpect(ResponseMatcher), possibly by using the default ResponseMatcherimplementations provided in ResponseMatchers(which can be statically imported). 
Multiple expectations can be set up by chaining andExpect(ResponseMatcher)calls. Consider, for example, this simple Web service endpoint class: import org.springframework.ws.server.endpoint.annotation.Endpoint; import org.springframework.ws.server.endpoint.annotation.RequestPayload; import org.springframework.ws.server.endpoint.annotation.ResponsePayload; @Endpoint (1) public class CustomerEndpoint { @ResponsePayload (2) public CustomerCountResponse getCustomerCount( (2) @RequestPayload CustomerCountRequest request) { (2); (1) import static org.springframework.ws.test.server.RequestCreators.*; (1) import static org.springframework.ws.test.server.ResponseMatchers.*; (1) @RunWith(SpringJUnit4ClassRunner.class) (2) @ContextConfiguration("spring-ws-servlet.xml") (2) public class CustomerEndpointIntegrationTest { @Autowired private ApplicationContext applicationContext; (3) private MockWebServiceClient mockClient; @Before public void createClient() { mockClient = MockWebServiceClient.createClient(applicationContext); (4) } @Test public void customerEndpoint()>"); mockClient.sendRequest(withPayload(requestPayload)). (5) andExpect(payload(responsePayload)); (5) } } 5.7.2. RequestCreator and RequestCreators. 5.7.3. ResponseMatcher and ResponseMatchersError`s response matchers provided by ResponseMatchers, refer to the class level Javadoc. 6. Using Spring Web Services on the Client 6.1. Introduction Spring-WS provides a client-side Web service API that allows for consistent, XML-driven access to Web services. It also caters for the use of marshallers and unmarshallers so that your service tier code can deal exclusively with Java objects. The org.springframework.ws.client.core package provides the core functionality for using the client-side access API. It contains template classes that simplify the use of Web services, much like the core Spring JdbcTemplate does for JDBC. The design principle common to Spring template classes is to provide helper methods to perform common operations, and for more sophisticated usage, delegate to user implemented callback interfaces. The Web service template follows the same design. The classes offer various convenience methods for the sending and receiving of XML messages, marshalling objects to XML before sending, and allows for multiple transport options. 6.2. Using the client-side API 6.2.1. WebServiceTemplate The WebServiceTemplate is the core class for client-side Web service access in Spring-WS. It contains methods for sending Source objects, and receiving response messages as either Source or Result. Additionally, it can marshal objects to XML before sending them across a transport, and unmarshal any response XML into an object again. URIs and Transports The WebServiceTemplate class uses an URI as the message destination. You can either set a defaultUri property on the template itself, or supply an URI explicitly when calling a method on the template. The URI will be resolved into a WebServiceMessageSender, which is responsible for sending the XML message across a transport layer. You can set one or more message senders using the messageSender or messageSenders properties of the WebServiceTemplate class. HTTP transports There are two implementations of the WebServiceMessageSender interface for sending messages via HTTP. The default implementation is the HttpUrlConnectionMessageSender, which uses the facilities provided by Java itself. The alternative is the HttpComponentsMessageSender, which uses the Apache HttpComponents HttpClient. 
Use the latter if you need more advanced and easy-to-use functionality (such as authentication, HTTP connection pooling, and so forth). To use the HTTP transport, either set the defaultUri to something like, or supply the uri parameter for one of the methods. The following example shows how the default configuration can be used for HTTP transports: ="defaultUri" value=""/> </bean> </beans> The following example shows how override the default configuration, and to use Apache HttpClient to authenticate using HTTP authentication: <bean id="webServiceTemplate" class="org.springframework.ws.client.core.WebServiceTemplate"> <constructor-arg <property name="messageSender"> <bean class="org.springframework.ws.transport.http.HttpComponentsMessageSender"> <property name="credentials"> <bean class="org.apache.http.auth.UsernamePasswordCredentials"> <constructor-arg </bean> </property> </bean> </property> <property name="defaultUri" value=""/> </bean> JMS transport For sending messages over JMS, Spring Web Services provides the JmsMessageSender. This class uses the facilities of the Spring framework to transform the WebServiceMessage into a JMS Message, send it on its way on a Queue or Topic, and receive a response (if any). To use the JmsMessageSender, you need to set the defaultUri or uri parameter to a JMS URI, which - at a minimum - consists of the jms: prefix and a destination name. Some examples of JMS URIs are: jms:SomeQueue, jms:SomeTopic?priority=3&deliveryMode=NON_PERSISTENT, and jms:RequestQueue?replyToName=ResponseName. For more information on this URI syntax, refer to the class level Javadoc of the JmsMessageSender. By default, the JmsMessageSender send JMS BytesMessage, but this can be overriden to use TextMessages by using the messageType parameter on the JMS URI. For example: jms:Queue?messageType=TEXT_MESSAGE. Note that BytesMessages are the preferred type, because TextMessages do not support attachments and character encodings reliably. The following example shows how to use the JMS transport in combination with an ActiveMQ connection factory: <beans> <bean id="messageFactory" class="org.springframework.ws.soap.saaj.SaajSoapMessageFactory"/> <bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"> <property name="brokerURL" value="vm://localhost?broker.persistent=false"/> </bean> <bean id="webServiceTemplate" class="org.springframework.ws.client.core.WebServiceTemplate"> <constructor-arg <property name="messageSender"> <bean class="org.springframework.ws.transport.jms.JmsMessageSender"> <property name="connectionFactory" ref="connectionFactory"/> </bean> </property> <property name="defaultUri" value="jms:RequestQueue?deliveryMode=NON_PERSISTENT"/> </bean> </beans> Spring Web Services also provides an email transport, which can be used to send web service messages via SMTP, and retrieve them via either POP3 or IMAP. The client-side email functionality is contained in the MailMessageSender class. This class creates an email message from the request WebServiceMessage, and sends it via SMTP. It then waits for a response message to arrive in the incoming POP3 or IMAP server. To use the MailMessageSender, set the defaultUri or uri parameter to a mailto URI. Here are some URI examples: mailto:[email protected], and mailto:[email protected]?subject=SOAP%20Test. 
Make sure that the message sender is properly configured with a transportUri, which indicates the server to use for sending requests (typically a SMTP server), and a storeUri, which indicates the server to poll for responses (typically a POP3 or IMAP server). The following example shows how to use the email transport: ="messageSender"> <bean class="org.springframework.ws.transport.mail.MailMessageSender"> <property name="from" value="Spring-WS SOAP Client <[email protected]>"/> <property name="transportUri" value="smtp://client:[email protected]"/> <property name="storeUri" value="imap://client:[email protected]/INBOX"/> </bean> </property> <property name="defaultUri" value="mailto:[email protected]?subject=SOAP%20Test"/> </bean> </beans> XMPP transport Spring Web Services 2.0 introduced an XMPP (Jabber) transport, which can be used to send and receive web service messages via XMPP. The client-side XMPP functionality is contained in the XmppMessageSender class. This class creates an XMPP message from the request WebServiceMessage, and sends it via XMPP. It then listens for a response message to arrive. To use the XmppMessageSender, set the defaultUri or uri parameter to a xmpp URI, for example xmpp:[email protected]. The sender also requires an XMPPConnection to work, which can be conveniently created using the org.springframework.ws.transport.xmpp.support.XmppConnectionFactoryBean. The following example shows how to use the xmpp transport: ="webServiceTemplate" class="org.springframework.ws.client.core.WebServiceTemplate"> <constructor-arg <property name="messageSender"> <bean class="org.springframework.ws.transport.xmpp.XmppMessageSender"> <property name="connection" ref="connection"/> </bean> </property> <property name="defaultUri" value="xmpp:[email protected]"/> </bean> </beans> Message factories In addition to a message sender, the WebServiceTemplate requires a Web service message factory. There are two message factories for SOAP: SaajSoapMessageFactory and AxiomSoapMessageFactory. If no message factory is specified (via the messageFactory property), Spring-WS will use the SaajSoapMessageFactory by default. 6.2.2. Sending and receiving a WebServiceMessage The WebServiceTemplate contains many convenience methods to send and receive web service messages. There are methods that accept and return a Source and those that return a Result. Additionally, there are methods which marshal and unmarshal objects to XML. Here is an example that sends a simple XML message to a Web service. 
import java.io.StringReader; import javax.xml.transform.stream.StreamResult; import javax.xml.transform.stream.StreamSource; import org.springframework.ws.WebServiceMessageFactory; import org.springframework.ws.client.core.WebServiceTemplate; import org.springframework.ws.transport.WebServiceMessageSender; public class WebServiceClient { private static final StringHello Web Service World</message>"; private final WebServiceTemplate webServiceTemplate = new WebServiceTemplate(); public void setDefaultUri(String defaultUri) { webServiceTemplate.setDefaultUri(defaultUri); } // send to the configured default URI public void simpleSendAndReceive() { StreamSource source = new StreamSource(new StringReader(MESSAGE)); StreamResult result = new StreamResult(System.out); webServiceTemplate.sendSourceAndReceiveToResult(source, result); } // send to an explicit URI public void customSendAndReceive() { StreamSource source = new StreamSource(new StringReader(MESSAGE)); StreamResult result = new StreamResult(System.out); webServiceTemplate.sendSourceAndReceiveToResult("", source, result); } } <beans xmlns=""> <bean id="webServiceClient" class="WebServiceClient"> <property name="defaultUri" value=""/> </bean> </beans> The above example uses the WebServiceTemplate to send a hello world message to the web service located at (in the case of the simpleSendAndReceive() method), and writes the result to the console. The WebServiceTemplate is injected with the default URI, which is used because no URI was supplied explicitly in the Java code. Please note that the WebServiceTemplate class is thread-safe once configured (assuming that all of it’s dependencies are thread-safe too, which is the case for all of the dependencies that ship with Spring-WS), and so multiple objects can use the same shared WebServiceTemplate instance if so desired. The WebServiceTemplate exposes a zero argument constructor and messageFactory/ messageSender bean properties which can be used for constructing the instance (using a Spring container or plain Java code). Alternatively, consider deriving from Spring-WS’s WebServiceGatewaySupport convenience base class, which exposes convenient bean properties to enable easy configuration. (You do not have to extend this base class… it is provided as a convenience class only.) 6.2.3. Sending and receiving POJOs - marshalling and unmarshalling In order to facilitate the sending of plain Java objects, the WebServiceTemplate has a number of send(..) methods that take an Object as an argument for a message’s data content. The method marshalSendAndReceive(..) in the WebServiceTemplate class delegates the conversion of the request object to XML to a Marshaller, and the conversion of the response XML to an object to an Unmarshaller. (For more information about marshalling and unmarshaller, refer to the Spring documentation.) By using the marshallers, your application code can focus on the business object that is being sent or received and not be concerned with the details of how it is represented as XML. In order to use the marshalling functionality, you have to set a marshaller and unmarshaller with the marshaller/ unmarshaller properties of the WebServiceTemplate class. 6.2.4. WebServiceMessageCallback To accommodate the setting of SOAP headers and other settings on the message, the WebServiceMessageCallback interface gives you access to the message after it has been created, but before it is sent. 
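The callback contract itself is small. The sketch below shows its general shape; the method name and checked exceptions match the interface as commonly documented, but treat the exact signature as something to verify against the Javadoc of org.springframework.ws.client.core.WebServiceMessageCallback:

public interface WebServiceMessageCallback {

    // invoked with the message after it has been created, but before it is sent
    void doWithMessage(WebServiceMessage message) throws IOException, TransformerException;
}

Any state the callback needs (for instance, a header value to set or a Source to write) is typically captured from the enclosing method, as the examples below do.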
The example below demonstrates how to set the SOAP Action header on a message that is created by marshalling an object.

public void marshalWithSoapActionHeader(MyObject o) {

    webServiceTemplate.marshalSendAndReceive(o, new WebServiceMessageCallback() {

        public void doWithMessage(WebServiceMessage message) {
            ((SoapMessage)message).setSoapAction("");
        }
    });
}

WS-Addressing

In addition to the server-side WS-Addressing support, Spring Web Services also has support for this specification on the client-side. For setting WS-Addressing headers on the client, you can use the org.springframework.ws.soap.addressing.client.ActionCallback. This callback takes the desired Action header as a parameter. It also has constructors for specifying the WS-Addressing version, and a To header. If not specified, the To header will default to the URL of the connection being made. Here is an example of setting the Action header:

webServiceTemplate.marshalSendAndReceive(o, new ActionCallback(""));

6.2.5. WebServiceMessageExtractor

The WebServiceMessageExtractor interface is a low-level callback interface that allows you to have full control over the process to extract an Object from a received WebServiceMessage. The WebServiceTemplate will invoke the extractData(..) method on a supplied WebServiceMessageExtractor while the underlying connection to the serving resource is still open. The following example illustrates the WebServiceMessageExtractor in action:

public void marshalWithSoapActionHeader(final Source s) {
    final Transformer transformer = transformerFactory.newTransformer();
    webServiceTemplate.sendAndReceive(new WebServiceMessageCallback() {
        public void doWithMessage(WebServiceMessage message) {
            transformer.transform(s, message.getPayloadResult());
        }
    },
    new WebServiceMessageExtractor() {
        public Object extractData(WebServiceMessage message) throws IOException {
            // do your own transforms with message.getPayloadResult()
            // or message.getPayloadSource()
            return null;
        }
    });
}

6.3. Client-side testing

When it comes to testing your Web service clients (i.e. classes that use the WebServiceTemplate to access a Web service), there are two possible approaches:

Write Unit Tests, which simply mock away the WebServiceTemplate class, WebServiceOperations interface, or the complete client class. The advantage of this approach is that it’s quite easy to accomplish; the disadvantage is that you are not really testing the exact content of the XML messages that are sent over the wire, especially when mocking out the entire client class.

6.3.1. Writing client-side integration tests

Spring Web Services 2.0 introduced support for creating Web service client integration tests. In this context, a client is a class that uses the WebServiceTemplate to access a Web service. The integration test support lives in the org.springframework.ws.test.client package. The core class in that package is the MockWebServiceServer. The typical usage of the MockWebServiceServer is:

Create a MockWebServiceServer instance by calling MockWebServiceServer.createServer(WebServiceTemplate), MockWebServiceServer.createServer(WebServiceGatewaySupport), or MockWebServiceServer.createServer(ApplicationContext).

Set up request expectations by calling expect(RequestMatcher), possibly by using the default RequestMatcher implementations provided in RequestMatchers (which can be statically imported). Multiple expectations can be set up by chaining andExpect(RequestMatcher) calls.
Create an appropriate response message by calling andRespond(ResponseCreator), possibly by using the default ResponseCreatorimplementations provided in ResponseCreators(which can be statically imported). Use the WebServiceTemplateas normal, either directly of through client code. Call MockWebServiceServer.verify()to make sure that all expectations have been met. Consider, for example, this Web service client class: import org.springframework.ws.client.core.support.WebServiceGatewaySupport; public class CustomerClient extends WebServiceGatewaySupport { (1) public int getCustomerCount() { CustomerCountRequest request = new CustomerCountRequest(); (2) request.setCustomerName("John Doe"); CustomerCountResponse response = (CustomerCountResponse) getWebServiceTemplate().marshalSendAndReceive(request); (3); (1) import static org.springframework.ws.test.client.RequestMatchers.*; (1) import static org.springframework.ws.test.client.ResponseCreators.*; (1) @RunWith(SpringJUnit4ClassRunner.class) (2) @ContextConfiguration("integration-test.xml") (2) public class CustomerClientIntegrationTest { @Autowired private CustomerClient client; (3) private MockWebServiceServer mockServer; (4) @Before public void createServer() throws Exception { mockServer = MockWebServiceServer.createServer(client); } @Test public void customerClient()>"); mockServer.expect(payload(requestPayload)).andRespond(withPayload(responsePayload));(5) int result = client.getCustomerCount(); (6) assertEquals(10, result); (6) mockServer.verify(); (7) } } 6.3.2. RequestMatcher and RequestMatchers To verify whether the request message meets certain expectations, the MockWebServiceServer uses the RequestMatcher strategy interface. The contract defined by this interface is quite simple: public interface RequestMatcher { void match(URI uri, WebServiceMessage request) throws IOException, AssertionError; } You can write your own implementations of this interface, throwing AssertionError`s when the message does not meet your expectations, but you certainly do not have to. The `RequestMatchers class provides standard RequestMatcher implementations for you to use in your tests. You will typically statically import this class. The RequestMatchers class provides the following request matchers: You can set up multiple request expectations by chaining andExpect() calls, like so: mockServer.expect(connectionTo("")). andExpect(payload(expectedRequestPayload)). andExpect(validPayload(schemaResource)). andRespond(...); For more information on the request matchers provided by RequestMatchers, refer to the class level Javadoc. 6.3.3. ResponseCreator and ResponseCreators When the request message has been verified and meets the defined expectations, the MockWebServiceServer will create a response message for the WebServiceTemplate to consume. The server uses the ResponseCreator strategy interface for this purpose: public interface ResponseCreator { WebServiceMessage createResponse(URI uri, WebServiceMessage request, WebServiceMessageFactory messageFactory) throws IOException; } Once again you can write your own implementations of this interface, creating a response message by using the message factory, but you certainly do not have to, as the ResponseCreators class provides standard ResponseCreator implementations for you to use in your tests. You will typically statically import this class. The ResponseCreators class provides the following responses: For more information on the request matchers provided by RequestMatchers, refer to the class level Javadoc. 
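If none of the standard creators fit a scenario, the ResponseCreator contract shown above is small enough to implement directly. The following sketch is illustrative only: the class name and the canned payload are invented for the example, and it simply writes a fixed XML payload into a response message built with the supplied message factory:

import java.io.IOException;
import java.io.StringReader;
import java.net.URI;

import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;

import org.springframework.ws.WebServiceMessage;
import org.springframework.ws.WebServiceMessageFactory;
import org.springframework.ws.test.client.ResponseCreator;

public class FixedPayloadResponseCreator implements ResponseCreator {

    // canned response payload, for illustration only
    private static final String PAYLOAD =
            "<customerCountResponse xmlns='http://springframework.org/spring-ws'>" +
            "<customerCount>10</customerCount></customerCountResponse>";

    public WebServiceMessage createResponse(URI uri, WebServiceMessage request,
            WebServiceMessageFactory messageFactory) throws IOException {
        // create an empty response message and copy the canned XML into its payload
        WebServiceMessage response = messageFactory.createWebServiceMessage();
        try {
            TransformerFactory.newInstance().newTransformer().transform(
                    new StreamSource(new StringReader(PAYLOAD)), response.getPayloadResult());
        } catch (TransformerException ex) {
            throw new IOException("Could not write response payload", ex);
        }
        return response;
    }
}

An instance of such a class can then be passed to andRespond(..) in place of one of the ResponseCreators factory methods.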
7. Securing your Web services with Spring-WS

7.1. Introduction

This chapter explains how to add WS-Security aspects to your Web services. We will focus on the three different areas of WS-Security, namely:

Authentication. This is the process of determining whether a principal is who they claim to be. In this context, a "principal" generally means a user, device or some other system which can perform an action in your application.

Digital signatures. The digital signature of a message is a piece of information based on both the document and the signer’s private key. It is created through the use of a hash function and a private signing function (encrypting with the signer’s private key).

Encryption and Decryption. Encryption is the process of transforming data into a form that is impossible to read without the appropriate key. It is mainly used to keep information hidden from anyone for whom it is not intended. Decryption is the reverse of encryption; it is the process of transforming encrypted data back into a readable form.

All of these three areas are implemented using the XwsSecurityInterceptor or Wss4jSecurityInterceptor, which we will describe in XwsSecurityInterceptor and Wss4jSecurityInterceptor, respectively.

7.2. XwsSecurityInterceptor

The XwsSecurityInterceptor is an EndpointInterceptor (see Intercepting requests - the EndpointInterceptor interface) that is based on SUN’s XML and Web Services Security package (XWSS). This WS-Security implementation is part of the Java Web Services Developer Pack (Java WSDP). Like any other endpoint interceptor, it is defined in the endpoint mapping (see Endpoint mappings). This means that you can be selective about adding WS-Security support: some endpoint mappings require it, while others do not.

The XwsSecurityInterceptor requires a security policy file to operate. This XML file tells the interceptor what security aspects to require from incoming SOAP messages, and what aspects to add to outgoing messages. The basic format of the policy file will be explained in the following sections, but you can find a more in-depth tutorial here. You can set the policy with the policyConfiguration property, which requires a Spring resource. The policy file can contain multiple elements, e.g. require a username token on incoming messages, and sign all outgoing messages. It contains a SecurityConfiguration element as root (not a JAXRPCSecurity element).

Additionally, the security interceptor requires one or more CallbackHandlers to operate. These handlers are used to retrieve certificates, private keys, validate user credentials, etc. Spring-WS offers handlers for most common security concerns, e.g. authenticating against a Spring Security authentication manager, signing outgoing messages based on an X509 certificate. The following sections will indicate what callback handler to use for which security concern. You can set the callback handlers using the callbackHandler or callbackHandlers property.

Here is an example that shows how to wire the XwsSecurityInterceptor up:

<beans>
    <bean id="wsSecurityInterceptor" class="org.springframework.ws.soap.security.xwss.XwsSecurityInterceptor">
        <property name="policyConfiguration" value="classpath:securityPolicy.xml"/>
        <property name="callbackHandlers">
            <list>
                <ref bean="certificateHandler"/>
                <ref bean="authenticationHandler"/>
            </list>
        </property>
    </bean>
    ...
</beans>

This interceptor is configured using the securityPolicy.xml file on the classpath. It uses two callback handlers which are defined further on in the file.
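If you use Java configuration rather than XML, the same interceptor can be registered through the WsConfigurerAdapter hook shown earlier in Endpoint mappings. The sketch below is an assumed equivalent of the XML example above: the setPolicyConfiguration setter is inferred from the policyConfiguration property, and the callback handler wiring is only hinted at in a comment because the individual handlers are introduced in the sections that follow.

import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.ws.config.annotation.EnableWs;
import org.springframework.ws.config.annotation.WsConfigurerAdapter;
import org.springframework.ws.server.EndpointInterceptor;
import org.springframework.ws.soap.security.xwss.XwsSecurityInterceptor;

@Configuration
@EnableWs
public class WsSecurityConfiguration extends WsConfigurerAdapter {

    @Bean
    public XwsSecurityInterceptor wsSecurityInterceptor() {
        XwsSecurityInterceptor interceptor = new XwsSecurityInterceptor();
        // mirrors the policyConfiguration property from the XML example above
        interceptor.setPolicyConfiguration(new ClassPathResource("securityPolicy.xml"));
        // the callbackHandler/callbackHandlers property would be set here as well,
        // once the handlers from the following sections have been defined
        return interceptor;
    }

    @Override
    public void addInterceptors(List<EndpointInterceptor> interceptors) {
        // applies the security interceptor to the mapped endpoints
        interceptors.add(wsSecurityInterceptor());
    }
}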
7.2.1. Keystores For most cryptographic operations, you will use the standard java.security.KeyStore objects. These operations include certificate verification, message signing, signature verification, and encryption, but excludes username and time-stamp verification. This section aims to give you some background knowledge on keystores, and the Java tools that you can use to store keys and certificates in a keystore file. This information is mostly not related to Spring-WS, but to the general cryptographic features of Java. The java.security.KeyStore class represents a storage facility for cryptographic keys and certificates. It can contain three different sort of elements: Private Keys. These keys are used for self-authentication. The private key is accompanied by certificate chain for the corresponding public key. Within the field of WS-Security, this accounts to message signing and message decryption. Symmetric Keys. Symmetric (or secret) keys are used for message encryption and decryption as well. The difference being that both sides (sender and recipient) share the same, secret key. Trusted certificates. These X509 certificates are called a trusted certificate because the keystore owner trusts that the public key in the certificates indeed belong to the owner of the certificate. Within WS-Security, these certificates are used for certificate validation, signature verification, and encryption. KeyTool Supplied with your Java Virtual Machine is the keytool program, a key and certificate management utility. You can use this tool to create new keystores, add new private keys and certificates to them, etc. It is beyond the scope of this document to provide a full reference of the keytool command, but you can find a reference here , or by giving the command keytool -help on the command line. KeyStoreFactoryBean To easily load a keystore using Spring configuration, you can use the KeyStoreFactoryBean. It has a resource location property, which you can set to point to the path of the keystore to load. A password may be given to check the integrity of the keystore data. If a password is not given, integrity checking is not performed. <bean id="keyStore" class="org.springframework.ws.soap.security.support.KeyStoreFactoryBean"> <property name="password" value="password"/> <property name="location" value="classpath:org/springframework/ws/soap/security/xwss/test-keystore.jks"/> </bean> KeyStoreCallbackHandler To use the keystores within a XwsSecurityInterceptor, you will need to define a KeyStoreCallbackHandler. This callback has three properties with type keystore: ( keyStore, trustStore, and symmetricStore). The exact stores used by the handler depend on the cryptographic operations that are to be performed by this handler. For private key operation, the keyStore is used, for symmetric key operations the symmetricStore, and for determining trust relationships, the trustStore. The following table indicates this: Additionally, the KeyStoreCallbackHandler has a privateKeyPassword property, which should be set to unlock the private key(s) contained in the`keyStore`. If the symmetricStore is not set, it will default to the keyStore. If the key or trust store is not set, the callback handler will use the standard Java mechanism to load or create it. Refer to the JavaDoc of the KeyStoreCallbackHandler to know how this mechanism works. 
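As plain-Java background, the java.security.KeyStore that the KeyStoreFactoryBean produces for you can also be loaded by hand; the snippet below only illustrates what the factory bean does under the covers, and the file name and password are placeholders:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;

public final class KeyStoreLoader {

    public static KeyStore load() throws Exception {
        // "JKS" is the traditional Java keystore type created by keytool
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (InputStream in = Files.newInputStream(Paths.get("test-keystore.jks"))) {
            // the password is used to verify the integrity of the keystore data
            keyStore.load(in, "password".toCharArray());
        }
        return keyStore;
    }
}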
For instance, if you want to use the KeyStoreCallbackHandler to validate incoming certificates or signatures, you would use a trust store, like so: > If you want to use it to decrypt incoming certificates or sign outgoing messages, you would use a key store, like so: > The following sections will indicate where the KeyStoreCallbackHandler can be used, and which properties to set for particular cryptographic operations. 7.2.2. Authentication As stated in the introduction, authentication is the task of determining whether a principal is who they claim to be. Within WS-Security, authentication can take two forms: using a username and password token (using either a plain text password or a password digest), or using a X509 certificate. Plain Text Username Authentication The simplest form of username authentication uses*plain text passwords*. In this scenario, the SOAP message will contain a UsernameToken element, which itself contains a Username element and a Password element which contains the plain text password. Plain text authentication can be compared to the Basic Authentication provided by HTTP servers. To require that every incoming message contains a UsernameToken with a plain text password, the security policy file should contain a RequireUsernameToken element, with the passwordDigestRequired attribute set to`false`. PlainTextPasswordRequest to the registered handlers. Within Spring-WS, there are three classes which handle this particular callback. SimplePasswordValidationCallbackHandler The simplest password validation handler is the SimplePasswordValidationCallbackHandler. This handler validates passwords against an in-memory Properties object, which you can specify using the users property, like so: <bean id="passwordValidationHandler" class="org.springframework.ws.soap.security.xwss.callback.SimplePasswordValidationCallbackHandler"> <property name="users"> <props> <prop key="Bert">Ernie</prop> </props> </property> </bean> In this case, we are only allowing the user "Bert" to log in using the password "Ernie". SpringPlainTextPasswordValidationCallbackHandler The SpringPlainTextPasswordValidationCallbackHandler uses Spring Security to authenticate users. It is beyond the scope of this document to describe Spring Security, but suffice it to say that it is a full-fledged security framework. You can read more about it in the Spring Security reference documentation. The SpringPlainTextPasswordValidationCallbackHandler requires an AuthenticationManager to operate. It uses this manager to authenticate against a UsernamePasswordAuthenticationToken that it creates. If authentication is successful, the token is stored in the SecurityContextHolder. You can set the authentication manager using the `authenticationManager`property: <beans> <bean id="springSecurityHandler" class="org.springframework.ws.soap.security.xwss.callback.SpringPlainTextPasswordValidationCallbackHandler"> <property name="authenticationManager" ref="authenticationManager"/> </bean> <bean id="authenticationManager" class="org.springframework.security.providers.ProviderManager"> <property name="providers"> <bean class="org.springframework.security.providers.dao.DaoAuthenticationProvider"> <property name="userDetailsService" ref="userDetailsService"/> </bean> </property> </bean> <bean id="userDetailsService" class="com.mycompany.app.dao.UserDetailService" /> ... 
</beans> JaasPlainTextPasswordValidationCallbackHandler The JaasPlainTextPasswordValidationCallbackHandler is based on the standard Java Authentication and Authorization Service. It is beyond the scope of this document to provide a full introduction into JAAS, but there is a good tutorial available. The JaasPlainTextPasswordValidationCallbackHandler requires only a loginContextName to operate. It creates a new JAAS LoginContext using this name, and handles the standard JAAS NameCallback and PasswordCallback using the username and password provided in the SOAP message. This means that this callback handler integrates with any JAAS LoginModule that fires these callbacks during the login() phase, which is standard behavior. You can wire up a JaasPlainTextPasswordValidationCallbackHandler as follows: <bean id="jaasValidationHandler" class="org.springframework.ws.soap.security.xwss.callback.jaas.JaasPlainTextPasswordValidationCallbackHandler"> <property name="loginContextName" value="MyLoginModule" /> </bean> In this case, the callback handler uses the LoginContext named "MyLoginModule". This module should be defined in your jaas.config file, as explained in the abovementioned tutorial. Digest Username Authentication When using password digests, the SOAP message also contains a UsernameToken element, which itself contains a Username element and a Password element. The difference is that the password is not sent as plain text, but as a digest. The recipient compares this digest to the digest he calculated from the known password of the user, and if they are the same, the user is authenticated. It can be compared to the Digest Authentication provided by HTTP servers. To require that every incoming message contains a UsernameToken element with a password digest, the security policy file should contain a RequireUsernameToken element, with the passwordDigestRequired attribute set to`true`. Additionally, the nonceRequired should be set to`true`: DigestPasswordRequest to the registered handlers. Within Spring-WS, there are two classes which handle this particular callback. SimplePasswordValidationCallbackHandler The SimplePasswordValidationCallbackHandler can handle both plain text passwords as well as password digests. It is described in SimplePasswordValidationCallbackHandler. SpringDigestPasswordValidationCallbackHandler The SpringDigestPasswordValidationCallbackHandler requires an Spring Security UserDetailService to operate. It uses this service to retrieve.xwss.callback.SpringDigestPasswordValidationCallbackHandler"> <property name="userDetailsService" ref="userDetailsService"/> </bean> <bean id="userDetailsService" class="com.mycompany.app.dao.UserDetailService" /> ... </beans> Certificate Authentication A more secure way of authentication uses X509 certificates. In this scenerario, the SOAP message contains a`BinarySecurityToken`, which contains a Base 64-encoded version of a X509 certificate. The certificate is used by the recipient to authenticate. The certificate stored in the message is also used to sign the message (see Verifying Signatures). To make sure that all incoming SOAP messages carry a`BinarySecurityToken`, the security policy file should contain a RequireSignature element. This element can further carry other elements, which will be covered in Verifying Signatures. You can find a reference of possible child elements here. <xwss:SecurityConfiguration xmlns: ... <xwss:RequireSignature ... 
</xwss:SecurityConfiguration> When a message arrives that carries no certificate, the XwsSecurityInterceptor will return a SOAP Fault to the sender. If it is present, it will fire a CertificateValidationCallback. There are three handlers within Spring-WS which handle this callback for authentication purposes. KeyStoreCallbackHandler The KeyStoreCallbackHandler uses a standard Java keystore to validate certificates. This certificate validation process consists of the following steps: . First, the handler will check whether the certificate is in the private keyStore. If it is, it is valid. If the certificate is not in the private keystore, the handler will check whether the current date and time are within the validity period given in the certificate. If they are not, the certificate is invalid; if it is, it will continue with the final step. Finally, a certification path for the certificate is created. This basically means that the handler will determine whether the certificate has been issued by any of the certificate authorities in the`trustStore`. If a certification path can be built successfully, the certificate is valid. Otherwise, the certificate is not. To use the KeyStoreCallbackHandler for certificate validation purposes, you will most likely set only> Using this setup, the certificate that is to be validated must either be in the trust store itself, or the trust store must contain a certificate authority that issued the certificate. SpringCertificateValidationCallbackHandler The SpringCertificateValidationCallbackHandler requires an Spring Security AuthenticationManager to operate. It uses this manager to authenticate against a X509AuthenticationToken that it creates. The configured authentication manager is expected to supply a provider which can handle this token (usually an instance of X509AuthenticationProvider). If authentication is succesful, the token is stored in the SecurityContextHolder. You can set the authentication manager using the authenticationManager property: <beans> <bean id="springSecurityCertificateHandler" class="org.springframework.ws.soap.security.xwss.callback.SpringCertificateValidationCallbackHandler"> <property name="authenticationManager" ref="authenticationManager"/> </bean> <bean id="authenticationManager" class="org.springframework.security.providers.ProviderManager"> <property name="providers"> <bean class="org.springframework.ws.soap.security.x509.X509AuthenticationProvider"> <property name="x509AuthoritiesPopulator"> <bean class="org.springframework.ws.soap.security.x509.populator.DaoX509AuthoritiesPopulator"> <property name="userDetailsService" ref="userDetailsService"/> </bean> </property> </bean> </property> </bean> <bean id="userDetailsService" class="com.mycompany.app.dao.UserDetailService" /> ... </beans> In this case, we are using a custom user details service to obtain authentication details based on the certificate. Refer to the Spring Security reference documentation for more information about authentication against X509 certificates. JaasCertificateValidationCallbackHandler The JaasCertificateValidationCallbackHandler requires a loginContextName to operate. It creates a new JAAS LoginContext using this name and with the X500Principal of the certificate. This means that this callback handler integrates with any JAAS LoginModule that handles X500 principals. 
You can wire up a JaasCertificateValidationCallbackHandler as follows: <bean id="jaasValidationHandler" class="org.springframework.ws.soap.security.xwss.callback.jaas.JaasCertificateValidationCallbackHandler"> <property name="loginContextName" value="MyLoginModule"/> </bean> In this case, the callback handler uses the LoginContext named "MyLoginModule". This module should be defined in your jaas.config file, and should be able to authenticate against X500 principals. 7.2.3. Digital Signatures The digital signature of a message is a piece of information based on both the document and the signer's private key. There are two main tasks related to signatures in WS-Security: verifying signatures and signing messages. Verifying Signatures Just like certificate-based authentication, a signed message contains a BinarySecurityToken, which contains the certificate used to sign the message. Additionally, it contains a SignedInfo block, which indicates what part of the message was signed. To make sure that all incoming SOAP messages carry a BinarySecurityToken, the security policy file should contain a RequireSignature element: <xwss:SecurityConfiguration xmlns: <xwss:RequireSignature /> </xwss:SecurityConfiguration> If the signature is not present, the XwsSecurityInterceptor will return a SOAP Fault to the sender. If it is present, it will fire a SignatureVerificationKeyCallback to the registered handlers. Within Spring-WS, there is one class which handles this particular callback: the KeyStoreCallbackHandler. For signature verification, the handler uses the trustStore property: <beans> <bean id="keyStoreHandler" class="org.springframework.ws.soap.security.xwss.callback.KeyStoreCallbackHandler"> <property name="trustStore" ref="trustStore"/> </bean> <bean id="trustStore" class="org.springframework.ws.soap.security.support.KeyStoreFactoryBean"> <property name="location" value="classpath:org/springframework/ws/soap/security/xwss/test-truststore.jks"/> <property name="password" value="changeit"/> </bean> </beans> Signing Messages When signing a message, the XwsSecurityInterceptor adds the BinarySecurityToken to the message, and a SignedInfo block, which indicates what part of the message was signed. To sign all outgoing SOAP messages, the security policy file should contain a Sign element: <xwss:SecurityConfiguration xmlns: <xwss:Sign /> </xwss:SecurityConfiguration> The XwsSecurityInterceptor will fire a SignatureKeyCallback to the registered handlers when signing messages. For adding signatures, the handler uses the keyStore property. Additionally, you must set the privateKeyPassword property to unlock the private key used for signing. 7.2.4. Encryption and Decryption When encrypting, the message is transformed into a form that can only be read with the appropriate key. The message can be decrypted to reveal the original, readable message. Decryption To decrypt incoming SOAP messages, the security policy file should contain a RequireEncryption element. This element can further carry an EncryptionTarget element which indicates which part of the message should be encrypted, and a SymmetricKey to indicate that a shared secret instead of the regular private key should be used to decrypt the message. You can read a description of the other elements here. <xwss:SecurityConfiguration xmlns: <xwss:RequireEncryption /> </xwss:SecurityConfiguration> If an incoming message is not encrypted, the XwsSecurityInterceptor will return a SOAP Fault to the sender. If it is encrypted, it will fire a DecryptionKeyCallback to the registered handlers. Within Spring-WS, there is one class which handles this particular callback: the KeyStoreCallbackHandler. KeyStoreCallbackHandler As described in KeyStoreCallbackHandler, the KeyStoreCallbackHandler uses a java.security.KeyStore for handling various cryptographic callbacks, including decryption. For decryption, the handler uses the keyStore property. Additionally, you must set the privateKeyPassword property to unlock the private key used for decryption. For decryption based on symmetric keys, it will use the symmetricStore.
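To make the decryption configuration above concrete, the following is a minimal sketch of a KeyStoreCallbackHandler wired for decryption. The bean names, the keystore location (classpath:keystore.jks), and the passwords are illustrative assumptions rather than values taken from the original example:

<beans>
    <!-- Handles DecryptionKeyCallback by looking up the private key in the keystore below -->
    <bean id="decryptionHandler" class="org.springframework.ws.soap.security.xwss.callback.KeyStoreCallbackHandler">
        <property name="keyStore" ref="keyStore"/>
        <!-- password that unlocks the private key inside the keystore -->
        <property name="privateKeyPassword" value="changeit"/>
    </bean>
    <!-- Loads the keystore that contains the decryption private key -->
    <bean id="keyStore" class="org.springframework.ws.soap.security.support.KeyStoreFactoryBean">
        <property name="location" value="classpath:keystore.jks"/>
        <property name="password" value="changeit"/>
    </bean>
</beans>

As noted above, the same handler can additionally be given a symmetricStore when messages are encrypted with a shared secret instead of a private key.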
> Encryption To encrypt outgoing SOAP messages, the security policy file should contain a Encrypt element. This element can further carry a EncryptionTarget element which indicates which part of the message should be encrypted, and a SymmetricKey to indicate that a shared secret instead of the regular public key should be used to encrypt the message. You can read a description of the other elements here. <xwss:SecurityConfiguration xmlns: <xwss:Encrypt /> </xwss:SecurityConfiguration> The XwsSecurityInterceptor will fire a EncryptionKeyCallback to the registered handlers in order to retrieve the encryption information. Within Spring-WS, there is one class which handled this particular callback: the KeyStoreCallbackHandler. KeyStoreCallbackHandler As described in KeyStoreCallbackHandler, the KeyStoreCallbackHandler uses a java.security.KeyStore for handling various cryptographic callbacks, including encryption. For encryption based on public keys, the handler uses the trustStore property. For encryption based on symmetric keys, it will use the`symmetricStore`. > 7.2.5. Security Exception Handling When a securement or validation action fails, the XwsSecurityInterceptor will throw a WsSecuritySecurementException or WsSecurityValidationException respectively. These exceptions bypass the standard exception handling mechanism, but are handled in the interceptor itself. WsSecuritySecurementException exceptions are handled in the handleSecurementException method of the XwsSecurityInterceptor. By default, this method will simply log an error, and stop further processing of the message. Similarly, WsSecurityValidationException exceptions are handled in the handleValidationException method of the XwsSecurityInterceptor. By default, this method will create a SOAP 1.1 Client or SOAP 1.2 Sender Fault, and send that back as a response. 7.3. Wss4jSecurityInterceptor The Wss4jSecurityInterceptor is an EndpointInterceptor (see Intercepting requests - the EndpointInterceptor interface) that is based on Apache’s WSS4J. WSS4J implements the following standards: OASIS Web Serives Security: SOAP Message Security 1.0 Standard 200401, March 2004 Username Token profile V1.0 X.509 Token Profile V1.0 This interceptor supports messages created by the AxiomSoapMessageFactory and the SaajSoapMessageFactory. 7.3.1. Configuring Wss4jSecurityInterceptor WSS4J uses no external configuration file; the interceptor is entirely configured by properties. The validation and securement actions executed by this interceptor are specified via validationActions and securementActions properties, respectively. Actions are passed as a space-separated strings. Here is an example configuration: <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="validationActions" value="UsernameToken Encrypt"/> ... <property name="securementActions" value="Encrypt"/> ... </bean> Validation actions are: Securement actions are: The order of the actions is significant and is enforced by the interceptor. The interceptor will reject an incoming SOAP message if its security actions were performed in a different order than the one specified by`validationActions`. 7.3.2. Handling Digital Certificates For cryptographic operations requiring interaction with a keystore or certificate handling (signature, encryption and decryption operations), WSS4J requires an instance of`org.apache.ws.security.components.crypto.Crypto`. 
Crypto instances can be obtained from WSS4J’s CryptoFactory or more conveniently with the Spring-WS`CryptoFactoryBean`. CryptoFactoryBean Spring-WS provides a convenient factory bean, CryptoFactoryBean that constructs and configures Crypto instances via strong-typed properties (prefered) or through a Properties object. By default, CryptoFactoryBean returns instances of org.apache.ws.security.components.crypto.Merlin. This can be changed by setting the cryptoProvider property (or its equivalent org.apache.ws.security.crypto.provider string property). Here is a simple example configuration: <bean class="org.springframework.ws.soap.security.wss4j.support.CryptoFactoryBean"> <property name="keyStorePassword" value="mypassword"/> <property name="keyStoreLocation" value="file:/path_to_keystore/keystore.jks"/> </bean> 7.3.3. Authentication Validating Username Token Spring-WS provides a set of callback handlers to integrate with Spring Security. Additionally, a simple callback handler SimplePasswordValidationCallbackHandler is provided to configure users and passwords with an in-memory Properties object. Callback handlers are configured via Wss4jSecurityInterceptor’s `validationCallbackHandler property. SimplePasswordValidationCallbackHandler SimplePasswordValidationCallbackHandler validates plain text and digest username tokens against an in-memory Properties object. It is configured as follows: <bean id="callbackHandler" class="org.springframework.ws.soap.security.wss4j.callback.SimplePasswordValidationCallbackHandler"> <property name="users"> <props> <prop key="Bert">Ernie</prop> </props> </property> </bean> SpringSecurityPasswordValidationCallbackHandler The SpringSecurityPasswordValidationCallbackHandler validates plain text and digest passwords using a Spring Security UserDetailService to operate. It uses this service to retrieve the (digest of ).wss4j.callback.SpringDigestPasswordValidationCallbackHandler"> <property name="userDetailsService" ref="userDetailsService"/> </bean> <bean id="userDetailsService" class="com.mycompany.app.dao.UserDetailService" /> ... </beans> Adding Username Token Adding a username token to an outgoing message is as simple as adding UsernameToken to the securementActions property of the Wss4jSecurityInterceptor and specifying securementUsername and`securementPassword`. The password type can be set via the securementPasswordType property. Possible values are PasswordText for plain text passwords or PasswordDigest for digest passwords, which is the default. The following example generates a username token with a digest password: <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="securementActions" value="UsernameToken"/> <property name="securementUsername" value="Ernie"/> <property name="securementPassword" value="Bert"/> </bean> If plain text password type is chosen, it is possible to instruct the interceptor to add Nonce and/or Created elements using the securementUsernameTokenElements property. The value must be a list containing the desired elements' names separated by spaces (case sensitive). 
The next example generates a username token with a plain text password, a Nonce and a Created element: <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="securementActions" value="UsernameToken"/> <property name="securementUsername" value="Ernie"/> <property name="securementPassword" value="Bert"/> <property name="securementPasswordType" value="PasswordText"/> <property name="securementUsernameTokenElements" value="Nonce Created"/> </bean> Certificate Authentication As certificate authentication is akin to digital signatures, WSS4J handles it as part of the signature validation and securement. Specifically, the securementSignatureKeyIdentifier property must be set to DirectReference in order to instruct WSS4J to generate a BinarySecurityToken element containing the X509 certificate and to include it in the outgoing message. The certificate’s name and password are passed through the securementUsername and securementPassword properties respectively. See the next example: <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="securementActions" value="Signature"/> <property name="securementSignatureKeyIdentifier" value="DirectReference"/> <property name="securementUsername" value="mycert"/> <property name="securementPassword" value="certpass"/> <property name="securementSignatureCrypto"> <bean class="org.springframework.ws.soap.security.wss4j.support.CryptoFactoryBean"> <property name="keyStorePassword" value="123456"/> <property name="keyStoreLocation" value="classpath:/keystore.jks"/> </bean> </property> </bean> For the certificate validation, regular signature validation applies: <bean> At the end of the validation, the interceptor will automatically verify the validity of the certificate by delegating to the default WSS4J implementation. If needed, this behavior can be changed by redefining the verifyCertificateTrust method. For more details, please refer to Digital Signatures. 7.3.4. Security Timestamps This section describes the various timestamp options available in the Wss4jSecurityInterceptor. Validating Timestamps To validate timestamps add Timestamp to the validationActions property. It is possible to override timestamp semantics specified by the initiator of the SOAP message by setting timestampStrict to true and specifying a server-side time to live in seconds (defaults to 300) via the timeToLive property [3] . In the following example, the interceptor will limit the timestamp validity window to 10 seconds, rejecting any valid timestamp token outside that window: <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="validationActions" value="Timestamp"/> <property name="timestampStrict" value="true"/> <property name="timeToLive" value="10"/> </bean> Adding Timestamps Adding Timestamp to the securementActions property generates a timestamp header in outgoing messages. The timestampPrecisionInMilliseconds property specifies whether the precision of the generated timestamp is in milliseconds. The default value is`true`. <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="securementActions" value="Timestamp"/> <property name="timestampPrecisionInMilliseconds" value="true"/> </bean> 7.3.5. Digital Signatures This section describes the various signature options available in the Wss4jSecurityInterceptor. 
Verifying Signatures To instruct the`Wss4jSecurityInterceptor`, validationActions must contain the Signature action. Additionally, the validationSignatureCrypto property must point to the keystore containing the public certificates of the initiator: <bean id="wsSecurityInterceptor"> Signature action to the`securementActions`. The alias and the password of the private key to use are specified by the securementUsername and securementPassword properties respectively. securementSignatureCrypto must point to the keystore containing the private key: <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="securementActions" value="Signature"/> <property name="securementUsername" value="mykey"/> <property name="securementPassword" value="123456"/> <property name="securementSignatureCrypto"> <bean class="org.springframework.ws.soap.security.wss4j.support.CryptoFactoryBean"> <property name="keyStorePassword" value="123456"/> <property name="keyStoreLocation" value="classpath:/keystore.jks"/> </bean> </property> </bean> Furthermore, the signature algorithm can be defined via the securementSignatureAlgorithm. The key identifier type to use can be customized via the securementSignatureKeyIdentifier property. Only IssuerSerial and DirectReference are valid for signature. securementSignatureParts property controls which part of the message shall be signed. The value of this property is a list of semi-colon separated element names that identify the elements to sign. The general form of a signature part is {}{namespace}Element [4] . The default behavior is to sign the SOAP body. As an example, here is how to sign the echoResponse element in the Spring Web Services echo sample: <property name="securementSignatureParts" value="{}{}echoResponse"/> To specify an element without a namespace use the string Null as the namespace name (case sensitive). If there is no other element in the request with a local name of Body then the SOAP namespace identifier can be empty ( {}). Signature Confirmation Signature confirmation is enabled by setting enableSignatureConfirmation to true. Note that signature confirmation action spans over the request and the response. This implies that secureResponse and validateRequest must be set to true (which is the default value) even if there are no corresponding security actions. <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="validationActions" value="Signature"/> <property name="enableSignatureConfirmation" value="true"/> <property name="validationSignatureCrypto"> <bean class="org.springframework.ws.soap.security.wss4j.support.CryptoFactoryBean"> <property name="keyStorePassword" value="123456"/> <property name="keyStoreLocation" value="file:/keystore.jks"/> </bean> </property> </bean> 7.3.6. Encryption and Decryption This section describes the various encryption and descryption options available in the Wss4jSecurityInterceptor. Decryption Decryption of incoming SOAP messages requires Encrypt action be added to the validationActions property. The rest of the configuration depends on the key information that appears in the message [5] . To decrypt messages with an embedded encypted symmetric key ( xenc:EncryptedKey element), validationDecryptionCrypto needs to point to a keystore containing the decryption private key. 
Additionally, validationCallbackHandler has to be injected with a org.springframework.ws.soap.security.wss4j.callback.KeyStoreCallbackHandler specifying the key’s password: <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="validationActions" value="Encrypt"/> <property name="validationDecryptionCrypto"> <bean class="org.springframework.ws.soap.security.wss4j.support.CryptoFactoryBean"> <property name="keyStorePassword" value="123456"/> <property name="keyStoreLocation" value="classpath:/keystore.jks"/> </bean> </property> <property name="validationCallbackHandler"> <bean class="org.springframework.ws.soap.security.wss4j.callback.KeyStoreCallbackHandler"> <property name="privateKeyPassword" value="mykeypass"/> </bean> </property> </bean> To support decryption of messages with an embedded key name ( ds:KeyName element), configure a KeyStoreCallbackHandler that points to the keystore with the symmetric secret key. The property symmetricKeyPassword indicates the key’s password, the key name being the one specified by ds:KeyName element: <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="validationActions" value="Encrypt"/> <property name="validationCallbackHandler"> <bean class="org.springframework.ws.soap.security.wss4j.callback.KeyStoreCallbackHandler"> <property name="keyStore"> <bean class="org.springframework.ws.soap.security.support.KeyStoreFactoryBean"> <property name="location" value="classpath:keystore.jks"/> <property name="type" value="JCEKS"/> <property name="password" value="123456"/> </bean> </property> <property name="symmetricKeyPassword" value="mykeypass"/> </bean> </property> </bean> Encryption Adding Encrypt to the securementActions enables encryption of outgoing messages. The certifacte’s alias to use for the encryption is set via the securementEncryptionUser property. The keystore where the certificate reside is accessed using the securementEncryptionCrypto property. As encryption relies on public certificates, no password needs to be passed. <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="securementActions" value="Encrypt"/> <property name="securementEncryptionUser" value="mycert"/> <property name="securementEncryptionCrypto"> <bean class="org.springframework.ws.soap.security.wss4j.support.CryptoFactoryBean"> <property name="keyStorePassword" value="123456"/> <property name="keyStoreLocation" value="file:/keystore.jks"/> </bean> </property> </bean> Encryption can be customized in several ways: The key identifier type to use is defined by`securementEncryptionKeyIdentifier`. Possible values are`IssuerSerial`, X509KeyIdentifier, DirectReference, Thumbprint, SKIKeyIdentifier or`EmbeddedKeyName`. If the EmbeddedKeyName type is chosen, you need to specify the secret key to use for the encryption. The alias of the key is set via the securementEncryptionUser property just as for the other key identifier types. However, WSS4J requires a callback handler to fetch the secret key. Thus, securementCallbackHandler must be provided with a KeyStoreCallbackHandler pointing to the appropriate keystore. By default, the ds:KeyName element in the resulting WS-Security header takes the value of the securementEncryptionUser property. To indicate a different name, set the securementEncryptionEmbeddedKeyName with the desired value. 
In the next example, the outgoing message will be encrypted with a key aliased secretKey whereas myKey will appear in ds:KeyName element: <bean class="org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor"> <property name="securementActions" value="Encrypt"/> <property name="securementEncryptionKeyIdentifier" value="EmbeddedKeyName"/> <property name="securementEncryptionUser" value="secretKey"/> <property name="securementEncryptionEmbeddedKeyName" value="myKey"/> <property name="securementCallbackHandler"> <bean class="org.springframework.ws.soap.security.wss4j.callback.KeyStoreCallbackHandler"> <property name="symmetricKeyPassword" value="keypass"/> <property name="keyStore"> <bean class="org.springframework.ws.soap.security.support.KeyStoreFactoryBean"> <property name="location" value="file:/keystore.jks"/> <property name="type" value="jceks"/> <property name="password" value="123456"/> </bean> </property> </bean> </property> </bean> The securementEncryptionKeyTransportAlgorithm property defines which algorithm to use to encrypt the generated symmetric key. Supported values are, which is the default, and. The symmetric encryption algorithm to use can be set via the securementEncryptionSymAlgorithm property. Supported values are (default value),,,. Finally, the securementEncryptionParts property defines which parts of the message will be encrypted. The value of this property is a list of semi-colon separated element names that identify the elements to encrypt. An encryption mode specifier and a namespace identification, each inside a pair of curly brackets, may precede each element name. The encryption mode specifier is either {Content} or {Element} [6] . The following example identifies the echoResponse from the echo sample: <property name="securementEncryptionParts" value="{Content}{}echoResponse"/> Be aware that the element name, the namespace identifier, and the encryption modifier are case sensitive. The encryption modifier and the namespace identifier can be omitted. In this case the encryption mode defaults to Content and the namespace is set to the SOAP namespace. To specify an element without a namespace use the value Null as the namespace name (case sensitive). If no list is specified, the handler encrypts the SOAP Body in Content mode by default. 7.3.7. Security Exception Handling The exception handling of the Wss4jSecurityInterceptor is identical to that of the XwsSecurityInterceptor. See Security Exception Handling for more information. III. Other Resources Bibliography [waldo-94] Jim Waldo, Ann Wollrath, and Sam Kendall. A Note on Distributed Computing. Springer Verlag. 1994 [alpine] Steve Loughran & Edmund Smith. Rethinking the Java SOAP Stack. May 17, 2005. © 2005 IEEE Telephone Laboratories, Inc. [effective-enterprise-java] Ted Neward. Scott Meyers. Effective Enterprise Java. Addison-Wesley. 2004 [effective-xml] Elliotte Rusty Harold. Scott Meyers. Effective XML. Addison-Wesley. 2004
https://docs.spring.io/spring-ws/docs/3.0.3.RELEASE/reference/
2018-10-15T12:53:56
CC-MAIN-2018-43
1539583509196.33
[array(['images/spring-deps.png', 'spring deps'], dtype=object) array(['images/sequence.png', 'sequence'], dtype=object)]
docs.spring.io
What Is IAM? AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.. Topics Video Introduction to IAM AWS Training and Certification provides a 10-minute video introduction to IAM: Introduction to AWS Identity and Access Management IAM Features IAM gives you the following features: - Shared access to your AWS account You can grant other people permission to administer and use resources in your AWS account without having to share your password or access key. - Granular permissions You can grant different permissions to different people for different resources. For example, you might allow some users complete access to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and other AWS services. For other users, you can allow read-only access to just some S3 buckets, or permission to administer just some EC2 instances, or to access your billing information but nothing else. - Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. -. - Identity federation You can allow users who already have passwords elsewhere—for example, in your corporate network or with an internet identity provider—to get temporary access to your AWS account. - Identity information for assurance If you use AWS CloudTrail, you receive log records that include information about those who made requests for resources in your account. That information is based on IAM identities. - PCI DSS Compliance IAM. - Integrated with many AWS services For a list of AWS services that work with IAM, see AWS Services That Work with IAM. - Eventually Consistent IAM, like many other AWS services, is eventually consistent. IAM achieves high availability by replicating data across multiple servers within Amazon's data centers around the world. If a request to change some data is successful, the change is committed and safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in a separate initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have been propagated before production workflows depend on them. For more information, see Changes that I make are not always immediately visible. - Free to use AWS Identity and Access Management (IAM) and AWS Security Token Service (AWS STS) are features of your AWS account offered at no additional charge. You are charged only when you access other AWS services using your IAM users or AWS STS temporary security credentials. For information about the pricing of other AWS products, see the Amazon Web Services pricing page. Accessing IAM You can work with AWS Identity and Access Management in any of the following ways. - AWS Management Console The console is a browser-based interface to manage IAM and AWS resources. For more information about accessing IAM through the console, see The IAM Console and Sign-in Page. 
For a tutorial that guides you through using the console, see Creating Your First IAM Admin User and Group. - AWS Command Line Tools You can use the AWS command line tools to issue commands at your system's command line to perform IAM and AWS tasks. Using the command line can be faster and more convenient than the console. The command line tools are also useful if you want to build scripts that perform AWS tasks. - AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to IAM and AWS. For example, the SDKs take care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. - IAM HTTPS API You can access IAM and AWS programmatically by using the IAM HTTPS API, which lets you issue HTTPS requests directly to the service. When you use the HTTPS API, you must include code to digitally sign requests using your credentials. For more information, see Calling the API by Making HTTP Query Requests and the IAM API Reference.
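To make the command line option concrete, here is a short, hypothetical AWS CLI session that creates a user, grants it an AWS managed read-only policy, and lists the users in the account. The user name is an invented example; the commands themselves are standard IAM CLI operations:

# create a new IAM user (the name is an arbitrary example)
aws iam create-user --user-name example-user

# attach the AWS managed ReadOnlyAccess policy to that user
aws iam attach-user-policy --user-name example-user --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

# confirm the user now appears in the account
aws iam list-users

The same operations are available through the SDKs and the IAM HTTPS API; the CLI is simply the quickest way to experiment with them.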
https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
2018-10-15T13:01:05
CC-MAIN-2018-43
1539583509196.33
[]
docs.aws.amazon.com
modules Though Meteor 1.2 introduced support for many new ECMAScript 2015 features, one of the most notable omissions was ES2015 import and export syntax. Meteor 1.3 fills that gap with a fully standards-compliant module system that works on both the client and the server, solves multiple long-standing problems for Meteor applications (such as controlling file load order), and yet maintains full backwards compatibility with existing Meteor code. This document explains the usage and key features of the new module system. Enabling modules We think you’re going to love the new module system, and that’s why it will be installed by default for all new apps and packages. Nevertheless, the modules package is totally optional, and it will be up to you to add it to existing apps and/or packages. For apps, this is as easy as meteor add modules, or (even better) meteor add ecmascript, since the ecmascript package implies the modules package. For packages, you can enable modules by adding api.use('modules') to the Package.onUse or Package.onTest sections of your package.js file. Now, you might be wondering what good the modules package is without the ecmascript package, since ecmascript enables import and export syntax. By itself, the modules package provides the CommonJS require and exports primitives that may be familiar if you’ve ever written Node code, and the ecmascript package simply compiles import and export statements to CommonJS. The require and export primitives also allow Node modules to run within Meteor application code without modification. Furthermore, keeping modules separate allows us to use require and exports in places where using ecmascript is tricky, such as the implementation of the ecmascript package itself. While the modules package is useful by itself, we very much encourage using the ecmascript package (and thus import and export) instead of using require and exports directly. If you need convincing, here’s a presentation that explains the differences: Basic syntax ES2015 Although there are a number of different variations of import and export syntax, this section describes the essential forms that everyone should know. First, you can export any named declaration on the same line where it was declared: // exporter.js export var a = ...; export let b = ...; export const c = ...; export function d() { ... } export function* e() { ... } export class F { ... } These declarations make the variables a, b, c (and so on) available not only within the scope of the exporter.js module, but also to other modules that import from exporter.js. If you prefer, you can export variables by name, rather than prefixing their declarations with the export keyword: // exporter.js function g() { ... 
} let h = g(); // At the end of the file export { g, h }; All of these exports are named, which means other modules can import them using those names: // importer.js import { a, c, F, h } from './exporter'; new F(a, c).method(h); If you’d rather use different names, you’ll be glad to know export and import statements can rename their arguments: // exporter.js export { g as x }; g(); // Same as calling `y()` in importer.js // importer.js import { x as y } from './exporter'; y(); // Same as calling `g()` in exporter.js As with CommonJS module.exports, it is possible to define a single default export: // exporter.js export default any.arbitrary(expression); This default export may then be imported without curly braces, using any name the importing module chooses: // importer.js import Value from './exporter'; // Value is identical to the exported expression Unlike CommonJS module.exports, the use of default exports does not prevent the simultaneous use of named exports. Here is how you can combine them: // importer.js import Value, { a, F } from './exporter'; In fact, the default export is conceptually just another named export whose name happens to be “default”: // importer.js import { default as Value, a, F } from './exporter'; These examples should get you started with import and export syntax. For further reading, here is a very detailed explanation by Axel Rauschmayer of every variation of import and export syntax. CommonJS You don’t need to use the ecmascript package or ES2015 syntax in order to use modules. Just like Node.js in the pre-ES2015 days, you can use require and module.exports—that’s what the import and export statements are compiling into, anyway. ES2015 import lines like these: import { AccountsTemplates } from 'meteor/useraccounts:core'; import '../imports/startup/client/routes.js'; can be written with CommonJS like this: var UserAccountsCore = require('meteor/useraccounts:core'); require('../imports/startup/client/routes.js'); and you can access AccountsTemplates via UserAccountsCore.AccountsTemplates. Note that files don’t need a module.exports if they’re required like routes.js is in this example, without assignment to any variable. The code in routes.js will simply be included and executed in place of the above require statement. ES2015 export statements like these: export const insert = new ValidatedMethod({ ... }); export default incompleteCountDenormalizer; can be rewritten to use CommonJS module.exports: module.exports.insert = new ValidatedMethod({ ... }); module.exports.default = incompleteCountDenormalizer; You can also simply write exports instead of module.exports if you prefer. If you need to require from an ES2015 module with a default export, you can access the export with require('package').default. There is a case where you might need to use CommonJS, even if your project has the ecmascript package: if you want to conditionally include a module. import statements must be at top-level scope, so they cannot be within an if block. If you’re writing a common file, loaded on both client and server, you might want to import a module in only one or the other environment: if (Meteor.isClient) { require('./client-only-file.js'); } Note that dynamic calls to require() (where the name being required can change at runtime) cannot be analyzed correctly and may result in broken client bundles. This is also discussed in the guide. CoffeeScript CoffeeScript has been a first-class supported language since Meteor’s early days. 
Even though today we recommend ES2015, we still intend to support CoffeeScript fully. As of CoffeeScript 1.11.0, CoffeeScript supports import and export statements natively. Make sure you are using the latest version of the CoffeeScript package in your project to get this support. New projects created today will get this version with meteor add coffeescript. Make sure you don’t forget to include the ecmascript and modules packages: meteor add ecmascript. (The modules package is implied by ecmascript.) CoffeeScript import syntax is nearly identical to the ES2015 syntax you see above: import { Meteor } from 'meteor/meteor' import { SimpleSchema } from 'meteor/aldeed:simple-schema' import { Lists } from './lists.coffee' You can also use traditional CommonJS syntax with CoffeeScript. Modular application structure Before the release of Meteor 1.3, the only way to share values between files in an application was to assign them to global variables or communicate through shared variables like Session (variables which, while not technically global, sure do feel syntactically identical to global variables). With the introduction of modules, one module can refer precisely to the exports of any other specific module, so global variables are no longer necessary. If you are familiar with modules in Node, you might expect modules not to be evaluated until the first time you import them. However, because earlier versions of Meteor evaluated all of your code when the application started, and we care about backwards compatibility, eager evaluation is still the default behavior. If you would like a module to be evaluated lazily (in other words: on demand, the first time you import it, just like Node does it), then you should put that module in an imports/ directory (anywhere in your app, not just the root directory), and include that directory when you import the module: import {stuff} from './imports/lazy'. Note: files contained by node_modules/ directories will also be evaluated lazily (more on that below). Lazy evaluation will very likely become the default behavior in a future version of Meteor, but if you want to embrace it as fully as possible in the meantime, we recommend putting all your modules inside either client/imports/ or server/imports/ directories, with just a single entry point for each architecture: client/main.js and server/main.js. The main.js files will be evaluated eagerly, giving your application a chance to import modules from the imports/ directories. Modular package structure If you are a package author, in addition to putting api.use('modules') or api.use('ecmascript') in the Package.onUse section of your package.js file, you can also use a new API called api.mainModule to specify the main entry point for your package: Package.describe({ name: 'my-modular-package' }); Npm.depends({ moment: '2.10.6' }); Package.onUse((api) => { api.use('modules'); api.mainModule('server.js', 'server'); api.mainModule('client.js', 'client'); api.export('Foo'); }); Now server.js and client.js can import other files from the package source directory, even if those files have not been added using the api.addFiles function. When you use api.mainModule, the exports of the main module are exposed globally as Package['my-modular-package'], along with any symbols exported by api.export, and thus become available to any code that imports the package. 
In other words, the main module gets to decide what value of Foo will be exported by api.export, as well as providing other properties that can be explicitly imported from the package: // In an application that uses 'my-modular-package': import { Foo as ExplicitFoo, bar } from 'meteor/my-modular-package'; console.log(Foo); // Auto-imported because of `api.export`. console.log(ExplicitFoo); // Explicitly imported, but identical to `Foo`. console.log(bar); // Exported by server.js or client.js, but not auto-imported. Note that the import is from 'meteor/my-modular-package', not from 'my-modular-package'. Meteor package identifier strings must include the prefix meteor/... to disambiguate them from npm packages. Finally, since this package is using the new modules package, and the package Npm.depends on the “moment” npm package, modules within the package can import moment from 'moment' on both the client and the server. This is great news, because previous versions of Meteor allowed npm imports only on the server, via Npm.require. Lazy loading modules from a package Packages can also specify a lazy main module: Package.onUse(function (api) { api.mainModule("client.js", "client", { lazy: true }); }); This means the client.js module will not be evaluated during app startup unless/until another module imports it, and will not even be included in the client bundle if no importing code is found. To import a method named exportedPackageMethod, simply: import { exportedPackageMethod } from "meteor/<package name>"; Note: Packages with lazymain modules cannot use api.exportto export global symbols to other packages/apps. Also, prior to Meteor 1.4.4.2 it is neccessary to explicitly name the file containing the module: import "meteor/<package name>/client.js". Local node_modules Before Meteor 1.3, the contents of node_modules directories in Meteor application code were completely ignored. When you enable modules, those useless node_modules directories suddenly become infinitely more useful: meteor create modular-app cd modular-app mkdir node_modules npm install moment echo "import moment from 'moment';" >> modular-app.js echo 'console.log(moment().calendar());' >> modular-app.js meteor When you run this app, the moment library will be imported on both the client and the server, and both consoles will log output similar to: Today at 7:51 PM. Our hope is that the possibility of installing Node modules directly within an app will reduce the need for npm wrapper packages such as. A version of the npm command comes bundled with every Meteor installation, and (as of Meteor 1.3) it’s quite easy to use: meteor npm ... is synonymous with npm ..., so meteor npm install moment will work in the example above. (Likewise, if you don’t have a version of node installed, or you want to be sure you’re using the exact same version of node that Meteor uses, meteor node ... is a convenient shortcut.) That said, you can use any version of npm that you happen to have available. Meteor’s module system only cares about the files installed by npm, not the details of how npm installs those files. File load order Before Meteor 1.3, the order in which application files were evaluated was dictated by a set of rules described in the Application Structure - Default file load order section of the Meteor Guide. These rules could become frustrating when one file depended on a variable defined by another file, particularly when the first file was evaluated after the second file. 
Thanks to modules, any load-order dependency you might imagine can be resolved by adding an import statement. So if a.js loads before b.js because of their file names, but a.js needs something defined by b.js, then a.js can simply import that value from b.js: // a.js import { bThing } from './b'; console.log(bThing, 'in a.js'); // b.js export var bThing = 'a thing defined in b.js'; console.log(bThing, 'in b.js'); Sometimes a module doesn’t actually need to import anything from another module, but you still want to be sure the other module gets evaluated first. In such situations, you can use an even simpler import syntax: // c.js import './a'; console.log('in c.js'); No matter which of these modules is imported first, the order of the console.log calls will always be: console.log(bThing, 'in b.js'); console.log(bThing, 'in a.js'); console.log('in c.js');
https://docs.meteor.com/packages/modules.html
2018-10-15T12:21:19
CC-MAIN-2018-43
1539583509196.33
[]
docs.meteor.com
Size Struct
Definition
C++/CLI: public value class Size : IFormattable
C#: [System.ComponentModel.TypeConverter(typeof(System.Windows.SizeConverter))] [Serializable] public struct Size : IFormattable
F#: type Size = struct interface IFormattable
Visual Basic: Public Structure Size Implements IFormattable
- Inheritance: Object -> ValueType -> Size
- Attributes: TypeConverterAttribute, SerializableAttribute
- Implements: IFormattable
Examples
The following example demonstrates how to use a Size structure in code.
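The example itself is not reproduced on this page; as a stand-in, here is a small, hypothetical C# snippet showing typical use of the structure. The values and names are invented, and the code assumes a reference to the WindowsBase assembly, which contains System.Windows.Size:

using System;
using System.Windows;

class SizeExample
{
    static void Main()
    {
        // Create a Size with an explicit width and height (both doubles).
        Size size = new Size(100.5, 80);

        // Width and Height can be read and reassigned.
        size.Width += 20;
        Console.WriteLine($"Width={size.Width}, Height={size.Height}");

        // Size.Empty is a sentinel value whose IsEmpty property returns true.
        Size empty = Size.Empty;
        Console.WriteLine($"Is empty: {empty.IsEmpty}");

        // ToString() renders the size as "Width,Height", e.g. "120.5,80".
        Console.WriteLine(size.ToString());
    }
}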
https://docs.microsoft.com/en-us/dotnet/api/system.windows.size?view=netframework-4.7.2
2018-10-15T12:43:30
CC-MAIN-2018-43
1539583509196.33
[]
docs.microsoft.com
Topics covered in this article: - Multi-Currency - Tax - Exporting Negative Expenses - Authentication Error When Connecting or Syncing - Expensify Not Displaying Customers/Projects - Error Adding an Employee Email Address in Sage Intacct - Not Seeing User Application Subscriptions - Not Seeing the 'Time & Expenses' Module - Reports Going to 'Submitted' or 'Draft' State Instead of 'Approved' When Exported to Sage Intacct - How to Restrict Non-reimbursable Expenses From Exporting to Intacct - Why Can't I Export Without a Category Selected? - Where Can I Find My Expenses? Multi-Currency When multi-currency is enabled in your Sage Intacct account, we will export the output currency that is set in your Expensify policy. If the vendor or account in Sage Intacct is in a different currency, Intacct will do the conversion using the Intacct daily rate. The only known issue with having multi-currency enabled in Sage Intacct is when exporting as charge card transactions. These cannot be exported at the top-level so you will need to select an entity in the configuration of your Expensify policy by going to Settings > Policies > Group > [Policy Name] > Connections > Configure. Tax We are currently unable to support tax export with our Sage Intacct integration. Expensify Not Displaying Customers/Projects This is most likely a permissions issue. The resolution is to verify that the web services user (the user that "owns" the Sage Intacct connection in Expensify) has "Read-Only" permissions to the Accounts Receivable module in Sage Intacct. To do that, go to Company > Users > Subscriptions (for the web services user) > Permissions (for AR) > select the Read Only radio button option > Save. Then sync the Sage Intacct connection in Expensify (Settings > Policies > Group > [Policy Name] > Connections > Sync Now). Exporting Negative Expenses In general, you can export negative expenses successfully to Intacct regardless of which Export Option you choose. The one thing to keep in mind is that if you have Expense Reports selected as your export option, the total of the report can not be negative. Authentication Error If you're receiving this error, first make sure that you're using the credentials for your xmlgateway_expensify web services user when attempting to connect your policy to Intacct. If you've ensured that your credentials are correct, you likely need to add Expensify to your company's Web Services authorizations. To do this, go to Company > Company Info > Security in Intacct and click Edit. Next, scroll down to Web Services authorizations and add "expensify" as a Sender ID: Save this change and then try syncing or connecting your policy again. Error Adding an Employee Email Address in Sage Intacct You have two different Employees with the same email address set up. To resolve this, delete the duplicate employee in Sage Intacct then sync the connection in Expensify by going to Settings > Policies > Group > [Policy Name] > Connections > Sync Now. An employee email address cannot be added in the employee listing within in Sage Intacct (there is no email field). Instead, you need to add/edit the employee email address by going to Company > Contacts and searching for the employee in the list. Not Seeing User Application Subscriptions In step one of the Sage Intacct setup guide, when you create a user in Sage Intacct and hit Save, you should be taken to a menu called User Application Subscriptions where you can set the user's permissions. 
If you don't see this menu, then you'll need to create an Expensify "role". To do this in Sage Intacct: - Go to Company > Roles > Add - Name the role "Expensify" - Give the role the following permissions: - Administration: All - Company: Read-only - Cash management: All - Time & Expense: All - Projects: Read-only (only required if you're going to be using Projects and Customers) After you've done this, click Save. Next, create the web services user with the following: - User ID: “xmlgateway_expensify" - Last name and First name: "Expensify" - User type: "Business" - Admin privileges: "Full" - Status: "Active" - Web services only: this box should be checked Finally, select the "Permissions" tab and assign the new "Expensify" role to the new web services user you just created. Hit Save. Not Seeing the 'Time & Expenses' Module In a role-based permissions setup, log into Sage Intacct and navigate to: - Company > Roles > Subscriptions (for your user role in Sage Intacct) - Check "Time & Expenses" - Click "Save" Charge Card Configuration is Missing You'll see this error if you're attempting to export non-reimbursable (company card) expenses to Sage Intacct and you haven't yet set up a credit card account (or credit card accounts). We export non-reimbursable expenses as charge card transactions, so it's required that you have these set up. To set up charge card accounts in Sage Intacct: - Head to to to Cash Management > Open Setup > "+" Charge Card Accounts - Mandatory fields should be: - ID (this is the name that will appear in the dropdown in Expensify, so use a name that you'll recognize!), - Type - Payment Method (Credit) - Expiration (you'll still need to fill this out but it doesn't matter in the case that you just have one roll up card account) - Credit-card offset account (this is the account that's credited when the expense posts) - Default location (location you want transactions associated with), - Vendor ID (likely the bank/card vendor that you'll be paying) - Save - After this you'll go to Expensify > Settings > Policies > Group > [Policy Name] > Connections > Configure > Export > select the account that you'll want to use > "Save" Note: If you have multiple credit card accounts, you'll need to follow the instructions on this page for configuring those (it's a slightly different process in Expensify). Reports Going to 'Submitted' or 'Draft' State Instead of 'Approved' When Exported to Sage Intacct Go to Sage Intacct > Time & Expenses > Configure Time & Expenses > uncheck "Enable expense report approval." After this, in Expensify go to Settings > Policies > Group > [Policy Name] > Connections > Sync Now _10<< > Configure. For example, if you select specific Credit Card accounts for your cards in Domain Control, you need to be sure to select "Credit Card" as your non-reimbursable export option: << How to Restrict Non-reimbursable Expenses From Exporting to Intacct If you don't want to export any non-reimbursable expenses from Expensify to Intacct, you can take the following steps: Go to Settings > Policies > Group > [Policy Name] > Connections > Configure and select Vendor Bills as your non-reimbursable export option. In Intacct, we need to set up a Smart Rule. To do this go to Customization/Platform Services > Objects. In the list find and click AP Bill. In the header click Smart Rules. Click New Smart Rule Select AP Bill from the dropdown list and click Next. Select the type "Error" Under Events select Add. 
Use the condition: right({!APBILL.RECORDID!}, 2) != "NR" The error message is up to you. This will populate in Expensify when a report is exported with non-reimbursable expenses. Click Next. The Smart Rule ID needs to start with a letter and can contain numbers, letters, and underscores only. The description field is optional. Click Save and you're done! Expense reports containing non-reimbursable expenses will no longer export to Intacct. Because the formula used is specific to the bill numbers used by Expensify, this won't affect any other vendor bill entries as long as the bill number doesn't end with "NR". Why Can't I Export Without a Category Selected? When connecting to Sage Intacct, the chart of accounts is pulled in to be used as categories on expenses. Each expense is required to have a category selected within Expensify in order to export, and each category has to be imported from Sage Intacct. Still looking for answers? Search our Community for more content on this topic!
https://docs.expensify.com/articles/1304595-sage-intacct-faq
2019-06-16T04:40:26
CC-MAIN-2019-26
1560627997731.69
[array(['https://downloads.intercomcdn.com/i/o/56540467/df80a9d09f5cca3b439d0e5d/Expensify_-_Policy_Editor-4.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/95364630/1a1e174002bf6caf6c89e78b/2019-01-07_11-47-05.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/95365036/1be900e1dcb1b42a54d8d597/2019-01-07_11-48-25.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/61734844/fc6b05ae4d2b5b71c5895f14/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/61734751/ea2df8c5cd0444f360991b6c/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/56530291/846a5e97ea8e1202b3f28f7b/SYNC+NOW.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/56530450/5549c94a8b3a21d93de6b12b/http_%252F%252Fstatic1.squarespace.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/56530555/a110b87e980957a3ada42a85/http_%252F%252Fstatic1.squarespace.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/95365370/766643e2fa4e6b517f8a8abc/2019-01-07_11-50-18.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/56530775/6c1f5abae34b2fef6fa90625/SYNC+NOW.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/57122298/8841c83e4b279f420a9f7c1c/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66667930/bc48061dfbf7f24c47ed5261/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/95367619/b4dee8bdd73b4c632a870a27/2019-01-07_11-52-39.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/56530919/e263a6c53a93aa714d701717/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/95367930/21b3e2625e189599c431d34f/2019-01-07_12-00-04.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66668400/0735e110c0e444dd0683a6f1/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/44493187/de0604ac95bf622762703514/Expensify_-_Policy_Editor+3.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/44493203/a3d773189183cb91aec87fa8/Web_Services_DEMO+6.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/44493240/d62412b516a9fbee81b701a3/Web_Services_DEMO+2.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/44491708/9974fc5bedddea687bd4c763/Web_Services_DEMO+2.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/44493256/ba14a9c16c8ce85577767c22/Web_Services_DEMO+3.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/44493260/0b82802692128d75d9aafa66/Web_Services_DEMO+4.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/44493266/85b675a903db26b17115c4ee/Web_Services_DEMO+5.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/56532303/d9c52d9664cd4a4b7c124411/Cortney_Ofstad_Expenses_to_2017-08-09_-_Expensify_-_Report.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66668557/a436696106f8724b8506d6a7/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/56532389/f563abbf07feceab041b0d3a/Expensify_-_Policy_Editor.png', None], dtype=object) ]
docs.expensify.com
ProxCP KVM Templates KVM Templates Overview ProxCP offers support for KVM creation and rebuilding from templates. Pre-built ProxCP KVM templates are available on our template distribution site. Proxmox does not currently offer a mechanism to share KVM templates between Proxmox servers. Therefore, you need to download our KVM templates onto every Proxmox node individually. If you followed our recommendation to assign VMID ranges to Proxmox nodes, the VMIDs of your KVM templates should be below the node's VMID range. - Continuing our example with node1 (VMID range 1000-2999), the KVM template VMIDs on node1 should be less than 1000 - For node2 (VMID range 3000-4999), the KVM template VMIDs should be less than 3000 In the ProxCP KVM Templates Admin menu, you can set the template VMID on a per-node level. It is possible that the same KVM template will have a different VMID on different Proxmox nodes. As of writing, all ProxCP Linux KVM templates support Cloud-Init. The ProxCP Windows KVM templates do not support Cloud-Init. KVM Templates Installation - Download the desired template from our distribution site to your Proxmox server (e.g., with wget) - Run the following command on your Proxmox server qmrestore {template file}.vma.lzo {VMID} The restored VM should appear in Proxmox as a template with all correct settings already. You may need to delete and re-create the cloud-init drive depending on the Proxmox VMID.
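If you manage several nodes, the download-and-restore step can be scripted. The sketch below is not part of ProxCP; it simply wraps the qmrestore command shown above and enforces the convention that template VMIDs sit below the node's assigned VMID range. The node names and ranges follow the article's example, and the file path and VMID are invented for illustration.

```python
import subprocess

# Per-node VMID ranges, following the article's example
# (node1 serves 1000-2999, node2 serves 3000-4999).
VMID_RANGES = {"node1": (1000, 2999), "node2": (3000, 4999)}

def restore_template(node: str, template_file: str, template_vmid: int) -> None:
    """Restore a downloaded KVM template after checking the VMID convention."""
    range_start, _ = VMID_RANGES[node]
    if template_vmid >= range_start:
        raise ValueError(
            f"Template VMID {template_vmid} should be below {node}'s range start ({range_start})"
        )
    # Equivalent to running: qmrestore <template file> <VMID>
    subprocess.run(["qmrestore", template_file, str(template_vmid)], check=True)

# Hypothetical usage on node1: restore a template as VMID 900.
# restore_template("node1", "/root/proxcp-debian.vma.lzo", 900)
```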
https://docs.proxcp.com/index.php?title=ProxCP_KVM_Templates
2019-06-16T05:44:38
CC-MAIN-2019-26
1560627997731.69
[]
docs.proxcp.com
An Act to amend 15.05 (1) (c), 15.34 (2) (a) and 17.20 (1) of the statutes; Relating to: the appointment and term of service of the secretary of natural resources and vacancies on the Natural Resources Board. Bill Text (PDF) Wisconsin Ethics Commission information 2017 Senate Bill 171 - S - Natural Resources and Energy
https://docs-preview.legis.wisconsin.gov/2017/proposals/ab157
2019-06-16T06:07:59
CC-MAIN-2019-26
1560627997731.69
[]
docs-preview.legis.wisconsin.gov
- Console software The Provisioning Services Console can be installed on any machine that can communicate with the Provisioning Services database. The Console installation includes the Boot Device Management utility. Note If you are upgrading from the current product version, the Console software is removed when the Provisioning Server software is removed. Upgrading from earlier versions may not remove the Console software automatically. - Run the appropriate platform-specific install option: PVS_Console.exe or PVS_Console_x64.exe. - Click Next on the Welcome screen. The Product License Agreement appears. - Accept the terms in the license agreement, then click Next to continue. The Customer Information dialog appears. - Type or select your user name and organization name in the appropriate text boxes. - Enable the appropriate application user radio button, then click Next. The Destination Folder dialog appears. - Click Change, then enter the folder name or navigate to the folder where the software should be installed, or click Next to install the Console to the default folder. The Setup Type dialog appears. - Select the appropriate radio button: - Complete - Installs all components and options on this computer (default). - Custom - Choose which components to install and where to install those components. Re-run the installer to install additional components at a later time, or re-run it on a different computer to install selected components on a separate computer.
https://docs.citrix.com/en-us/provisioning/7-15/install/install-task3-install-console.html
2019-06-16T06:43:31
CC-MAIN-2019-26
1560627997731.69
[]
docs.citrix.com
Installing and verifying prerequisites¶ Verify Java Version¶ Ensure that you are running Java 1.8. To check, run the following command at the command prompt and make sure that the version displayed is Java 1.8: java -version The command above should output something like this: java version "1.8.0_91" Verify JAVA_HOME environment variable is set correctly¶ Make sure that you have a JAVA_HOME environment variable that points to the root of the JDK install directory. To check the value set for JAVA_HOME, enter the following command at the command prompt: For Unix/Linux Systems: env | grep JAVA_HOME For Windows Systems: set JAVA_HOME How to set the JAVA_HOME environment variable¶ To set JAVA_HOME on a Unix/Linux System - Add a line such as export JAVA_HOME=<path to your JDK> to your shell profile (for example, ~/.bash_profile), then reload the profile or open a new terminal. To set JAVA_HOME on a Windows System - Do one of the following: - Windows 7 – Right click My Computer and select Properties > Advanced - Windows 10 - Type advanced system settings in the search box (beside the Windows start button) and click on the match - Click the Environment Variables button - Under System Variables, click New - In the Variable Name field, enter: JAVA_HOME - In the Variable Value field, enter your JDK installation path - Click on OK and Apply Changes as prompted Note For Windows users, the path specified in your JAVA_HOME variable should not contain spaces. If the path contains spaces, use the shortened path name. For example, C:\Progra~1\Java\jdk1.8.0_91 Note For Windows users on 64-bit systems: - Progra~1 = Program Files - Progra~2 = Program Files(x86) OS X extra prerequisite¶ For OS X users, the latest openssl formula needs to be installed via Homebrew: brew install openssl Linux prerequisite¶ For Linux users, some of the scripts use lsof. Please note that some Linux distributions do not come with lsof pre-installed, so it may need to be installed. To install lsof for Debian-based Linux distros: apt-get install lsof To install lsof for RedHat-based Linux distros: yum install lsof The library libncurses5 is required for running the restore script. You may get the following error when running the restore script without the libncurses5 library installed: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory To install the library libncurses5, use the following commands: On Debian-based Linux distros: sudo apt-get install libncurses5-dev libncursesw5-dev On RHEL, CentOS: sudo yum install ncurses-devel On Fedora 22 and newer versions: sudo dnf install ncurses-devel Windows prerequisite¶ Windows users on older operating systems may experience issues when Crafter CMS starts up MongoDB and may see the following error: The program can't start because api-ms-win-crt-runtime-l1-1-0.dll is missing from your computer. Try reinstalling the program to fix this problem. For MongoDB to start up properly, a Microsoft update may be needed for older operating systems, including: - Windows 7 - Windows Server 2012 R2 - Windows Server 2012 To install the update, download the Universal C Runtime update from Microsoft. When the update is installed, please try to start Crafter CMS again.
Another issue Windows users may experience when Crafter CMS starts up MongoDB is the following error in the logs: Error creating bean with name ‘crafter.profileRepository’ defined in class path resource [crafter/profile/services-context.xml]: Invocation of init method failed; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector Users may also see a Windows dialog with the following message: The code execution cannot proceed because VCRUNTIME140.dll was not found. Reinstalling the program may fix this problem. For MongoDB to start up properly, Visual Studio C++ Redistributable 2015 needs to be installed, or repaired if any of the required DLLs are corrupted. You can download Visual Studio C++ Redistributable 2015 from Microsoft. When finished installing, please restart Windows. To Install a Production Environment¶ To install a production environment, see the section Setting up a Crafter CMS production environment. To Install a Development Environment¶ To install a development environment, see the section on installing Crafter CMS from the zip download or the section on installing Crafter CMS from an archive built by the Gradle environment builder in the Quick Start Guide. To learn more about the developer workflow, see Introduction to the Developer Workflow with Crafter CMS.
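To close out this page, the Java checks at the top (JAVA_HOME and the java -version output) can also be scripted. This is just a convenience sketch in plain Python, not something shipped with Crafter CMS; adjust the expected version string if your target Crafter CMS release supports a newer JDK.

```python
import os
import shutil
import subprocess

def check_prerequisites() -> None:
    """Script the manual checks above: JAVA_HOME set and a Java 1.8 runtime on the PATH."""
    java_home = os.environ.get("JAVA_HOME")
    if java_home and os.path.isdir(java_home):
        print(f"JAVA_HOME = {java_home}")
    else:
        print("JAVA_HOME is not set or does not point to an existing directory")

    if shutil.which("java") is None:
        print("No 'java' executable found on the PATH")
        return

    # 'java -version' writes its banner to stderr, not stdout.
    result = subprocess.run(["java", "-version"], capture_output=True, text=True)
    banner = result.stderr.splitlines()[0] if result.stderr else ""
    print(banner)
    if '"1.8' not in banner:
        print("Warning: this Crafter CMS version expects a Java 1.8 runtime")

check_prerequisites()
```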
https://docs.craftercms.org/en/3.0/system-administrators/activities/installing-and-verifying-prereq.html
2019-06-16T05:51:13
CC-MAIN-2019-26
1560627997731.69
[]
docs.craftercms.org
The Max for Live Building Tools Live Pack contains a set of fully functional Audio Effect devices that provide examples of various kinds of operations and processes. You can use these patches in series in your Live Session, or edit and recombine them to make your own devices. Live Suite owners can download this Live Pack by choosing Your Account > Your Packs when logged in to the Ableton website and clicking on the download button for the Live Pack. To install the Live Pack, double-click on the .alp file. The pack can also be retrieved using the Available Packs link in the Places folder of the Live browser. Finding the Audio Effect Building Blocks - Click on the arrow to the left of the folder marked M4L Building Tools to show the available folders. Select the Max Audio Effects folder from within the Building Blocks folder to see the devices available. Overview of the Audio Effect Building Blocks - Max AutoPanner : An auto-panning utility
https://docs.cycling74.com/max8/vignettes/live_audiobuildingblocks
2019-06-16T05:29:34
CC-MAIN-2019-26
1560627997731.69
[array(['/static/max8/images/777e6f32250f2c03c1b32825cc07839a.png', None], dtype=object) ]
docs.cycling74.com
Now we will set things up to draw the character's lines. To enable lines in the level, we need to perform three main steps: 1) adding a post-process (PP) volume to the level, 2) applying a PP line material to the PP volume, and 3) turning on the Render CustomDepth Pass switch of the character (or actor). 1) Adding a PP volume to the level First, we add a PP volume to the level by drag-and-drop. To make it affect the whole scene, we make it unbound. 2) Applying a PP line material to the PP volume Now we will apply a PP line material to the PP volume. Cartoon Rendering Pack provides a sample PP line material instance, but in most cases, to control line width, amount, and colors, it is better to make a new material instance to be used in the level. We will make PP_CRP_silhouette_and_crease_tutorial in the Materials folder by duplicating PP_CharCelShading_silhouette_and_crease_Inst in CartoonRenderingPack\LineDrawing\Materials\MaterialInstances. Then we put the PP line material into a Blendables slot of the PP volume. To do so, we first need to add an element to the Blendables array. Then we set the element to "Asset Reference." Then we assign the PP line material PP_CRP_silhouette_and_crease_tutorial to the slot. 3) Turning on the Render CustomDepth Pass switch of the character (or actor) Still, we cannot see the line drawing of the character. This is because the PP line material only works with actors whose Render CustomDepth Pass switch is turned on. We now select the character and turn on the Render CustomDepth Pass switch. Now we can see the line drawing of the character.
https://docs.jiffycrew.com/docs/cartoon-rendering-pack/getting-started/step-6-drawing-lines-of-the-character
2019-06-16T05:02:37
CC-MAIN-2019-26
1560627997731.69
[array(['https://i1.wp.com/docs.jiffycrew.com/wp-content/uploads/2017/01/img_58759dc48e8e8.png?w=835', None], dtype=object) ]
docs.jiffycrew.com
1 Introduction Mendix Studio Pro contains a lot of out-of-the-box widgets such as data grids and snippets. However, if you want to extend your application with more widgets and modules (for example, the Forgot Password module), simple charts, an Excel importer, and other features, you need to add content from the Mendix App Store. The App Store contains many useful and reusable widgets and modules created by Mendix as well as by our partners and community. Mendix delivers a robust platform for the rapid development of apps. To make your development move even more quickly, you can use content from the App Store. In addition to downloading content from the App Store, you can upload items you have developed to share and help the whole Mendix community.
https://docs.mendix.com/developerportal/app-store/
2019-06-16T04:28:14
CC-MAIN-2019-26
1560627997731.69
[]
docs.mendix.com
- Mindray patient monitor - Model: iMec 8 - 8.4" LCD display with colour LED background lighting - Resolution: 800 x 600 pixels - Optional touch screen - With an integrated lithium-ion battery (operating time up to 2 hours with a fully charged battery) - Battery or mains operated - Optional integrated thermal printer - Carrying handle for easy mobile use - Data storage unit for up to 48 hours of trend curves, 120 hours of graphic and tabular trends, 1000 IBP readings and 100 alarm events - Arrhythmia analysis - Shortcut keys for fast access to key functions - Defibrillation-proof - Fan-free design reduces airborne contamination and prevents noise interference - Various connections, including VGA output connection and RJ45 (to update software) - Mobile stand, wall and bed mount available as optional extras - Dimensions: 268 x 210 x 114mm - Weight: 2.6kg The iMec 8 can be used for monitoring the following parameters: - ECG - Heart rate - Respiration - NIBP (non-invasive blood pressure) - Pulse rate - Temperature - SpO2 Package Includes: - Lithium-ion battery - Mains adapter - NIBP adult inflatable cuff (standard) and tubing - ECG leads - Mindray SpO2 sensor - Temperature probe - User manual
https://tools4docs.com/products/Mindray-iMec-8-Patient-Monitor.html
2019-06-16T04:56:55
CC-MAIN-2019-26
1560627997731.69
[]
tools4docs.com
clickV Syntax: the clickV or clickV() Example: if the clickV > the bottom of stack window then beep Use the clickV function to find out where the user last clicked. Value: The clickV function returns a positive integer. The returned value is the vertical distance in pixels from the top of the stack window to the point where the user last clicked. The clickV returns the position of the mouse as it was when the mouseDown action occurred. This function is equal to item 2 of the clickLoc function.
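The same idea, remembering the vertical coordinate of the most recent click so it can be queried later, can be illustrated outside LiveCode. The snippet below is a Python/Tkinter analogue of clickV for readers more familiar with Python; it is not LiveCode code and is not part of this dictionary entry.

```python
import tkinter as tk

last_click = {"h": None, "v": None}  # rough analogue of clickLoc / clickH / clickV

def on_click(event):
    # event.y is measured in pixels from the top of the window,
    # so it plays the role of "the clickV" here.
    last_click["h"], last_click["v"] = event.x, event.y
    print("clickV analogue:", last_click["v"])

root = tk.Tk()
root.geometry("400x300")
root.bind("<Button-1>", on_click)
root.mainloop()
```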
http://docs.runrev.com/Function/clickV
2018-06-18T05:43:23
CC-MAIN-2018-26
1529267860089.11
[]
docs.runrev.com
The path of a single file or directory can be passed to --include verbatim to restore only that item. Don't forget to umount after quitting! Mounting repositories via FUSE is not possible on OpenBSD, Solaris/illumos and Windows.
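For repeated single-path restores, the command line described here can be wrapped in a small script. The sketch below only shells out to the restic CLI; the repository path, target directory, and example file path are placeholders, and restic will prompt for the repository password unless RESTIC_PASSWORD (or a password file) is configured.

```python
import subprocess

REPO = "/srv/restic-repo"        # placeholder repository location
TARGET = "/tmp/restore-work"     # placeholder restore target

def restore_single_path(path: str, snapshot: str = "latest") -> None:
    """Restore one file or directory by passing its path to --include verbatim."""
    subprocess.run(
        ["restic", "-r", REPO, "restore", snapshot,
         "--target", TARGET, "--include", path],
        check=True,
    )

# Example (illustrative path): restore a single file from the latest snapshot.
# restore_single_path("/home/user/work/important.txt")
```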
http://restic.readthedocs.io/en/latest/050_restore.html
2018-06-18T05:18:10
CC-MAIN-2018-26
1529267860089.11
[]
restic.readthedocs.io
The development cycle for the next major release starts immediately after the previous one has been shipped. Bugfix/point releases (if any) address only serious bugs and never contain new features. The versions of CIDER and cider-nrepl are always kept in sync. If you're tracking the master branch of CIDER, you should also be tracking the master branch of cider-nrepl.
https://cider.readthedocs.io/en/latest/about/release_policy/
2018-06-18T05:39:53
CC-MAIN-2018-26
1529267860089.11
[]
cider.readthedocs.io
You can enter users' Google Play store credentials in order to display an app description and icon in the management console and in the Worx Store. Google store credentials are mandatory when you configure App Controller to connect to Device Manager and when you configure an app for an Android mobile link in the management console. You can enter any value for the user name and password. You need to enter the device ID that is associated with the account. The device ID is only used to download the information to Device Manager. On your Android device, you can obtain the device ID by entering *#*#8255#*#* on your phone pad.
https://docs.citrix.com/pt-br/xenmobile/8-7/xmob-appc-manage-wrapper-f-con/xmob-appc-add-applications-wrapper-con/xmob-appc-mobile-apps-landing-page-con/xmob-appc-mobile-apps-google-play-tsk.html
2018-06-18T05:42:24
CC-MAIN-2018-26
1529267860089.11
[]
docs.citrix.com
Check Against Do Not Call List OCS reads all records from the table that is referenced in the gsw_donotcall_list Table Access and populates separate tables in memory with the unique values from the phone and customer_id fields. The tables in memory mirror the DNC List in the database. OCS checks these tables in memory during a Do Not Call predial check. If OCS finds a phone number in the Do Not Call table in memory during a predial check, it applies the DNC restriction to the phone number and does not check the customer ID. The phone number has a higher priority than the customer ID if they are both in a dialing record. The following figure illustrates the predial check process.
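The predial check itself amounts to two set lookups with the priority described above: the phone number is checked first, and the customer ID is only consulted if the phone number is not on the list. The following Python sketch is not Genesys code; it just illustrates that logic with made-up values.

```python
# In-memory mirrors of the DNC list, as populated from the gsw_donotcall_list table.
dnc_phones = {"5551230001", "5551230002"}        # sample values
dnc_customer_ids = {"CUST-77", "CUST-91"}        # sample values

def predial_check(record: dict) -> bool:
    """Return True if the record may be dialed, False if the DNC restriction applies."""
    # The phone number has the higher priority: if it is on the list,
    # the customer ID is not checked at all.
    if record.get("phone") in dnc_phones:
        return False
    if record.get("customer_id") in dnc_customer_ids:
        return False
    return True

print(predial_check({"phone": "5551230001", "customer_id": "CUST-00"}))  # False
print(predial_check({"phone": "5559999999", "customer_id": "CUST-77"}))  # False
print(predial_check({"phone": "5559999999", "customer_id": "CUST-00"}))  # True
```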
https://docs.genesys.com/Documentation/OU/latest/Dep/CheckAgainstDNCList
2018-06-18T05:58:40
CC-MAIN-2018-26
1529267860089.11
[]
docs.genesys.com
Azure Active Directory Pass-through Authentication: Current limitations Important Azure Active Directory (Azure AD) Pass-through Authentication is a free feature, and you don't need any paid editions of Azure AD to use it. Pass-through Authentication is only available in the world-wide instance of Azure AD, and not on the Microsoft Azure Germany cloud or the Microsoft Azure Government cloud. Supported scenarios The following scenarios are fully supported: - User sign-ins to all web browser-based applications. - User sign-ins to Office applications that support modern authentication: Office 2016, and Office 2013 with modern authentication. - User sign-ins to Outlook clients using legacy protocols such as Exchange ActiveSync, SMTP, POP and IMAP. - User sign-ins to Skype for Business clients that support modern authentication, including online and hybrid topologies. Learn more about supported topologies here. - Azure AD domain joins for Windows 10 devices. - App passwords for Multi-Factor Authentication. Unsupported scenarios The following scenarios are not supported: - User sign-ins to legacy Office client applications, excluding Outlook (see Supported scenarios above): Office 2010, and Office 2013 without modern authentication. Organizations are encouraged to switch to modern authentication, if possible. Modern authentication allows for Pass-through Authentication support. It also helps you secure your user accounts by using conditional access features, such as Azure Multi-Factor Authentication. - Access to calendar sharing and free/busy information in Exchange hybrid environments on Office 2010 only. - User sign-ins to Skype for Business client applications without modern authentication. - User sign-ins to PowerShell version 1.0. We recommend that you use PowerShell version 2.0. - The Apple Device Enrollment Program (Apple DEP) using the iOS Setup Assistant does not support modern authentication. This will fail to enroll Apple DEP devices into Intune for managed domains using Pass-through Authentication. Consider using the Company Portal app as an alternative. Important As a workaround for unsupported scenarios only, enable Password Hash Synchronization on the Optional features page in the Azure AD Connect wizard. When users sign into applications listed in the "unsupported scenarios" section, those specific sign-in requests are not handled by Pass-through Authentication Agents, and therefore will not be recorded in Pass-through Authentication logs. Note Enabling password hash synchronization gives you the option to fail over authentication if your on-premises infrastructure is disrupted. This failover from Pass-through Authentication to password hash synchronization is not automatic; you need to switch the sign-in method manually using Azure AD Connect.
https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-pass-through-authentication-current-limitations
2018-06-18T06:03:14
CC-MAIN-2018-26
1529267860089.11
[]
docs.microsoft.com
Puppet Enterprise 2015.3 Release Notes This page describes new features and general improvements in the latest Puppet Enterprise (PE) release. For more information about this release, see Known Issues and Security and Bug Fixes. New Features and Improvements in PE 2015.3.1 PE 2015.3.1 is a release to fix a critical bug. For more information, review the release notes security documentation. New Features in PE 2015.3 Application Orchestration PE 2015.3 introduces Application Orchestration, a new Puppet app that provides Puppet language extensions and command-line tools to help you configure and manage multi-service and multi-node applications. For more information, refer to the Application Orchestration documentation. Puppet Enterprise Client Tools The Puppet Enterprise client tools package collects a set of command line tools that let you access Puppet Enterprise services from a workstation that may or may not be managed by Puppet. The package includes clients for the following services: - Puppet Orchestrator: Tools that provide the interface to the Application Orchestration service. These tools include puppet-job and puppet-app. - Puppet Access: A client used to authenticate yourself to the PE RBAC token-based authentication service so that you can use other capabilities and APIs. - Razor: The client for Razor, the provisioning application for deploying bare metal systems. For instructions on installing the client tools, see Installing the PE Client Tools Package. Token-Based Access You can now access PE key features and service APIs using authentication tokens. Authentication tokens are generated per user and are tied to the user permissions configured through Role-Based Access Control (RBAC). The new Puppet Access command line tool allows you to quickly generate and manage authentication tokens. For more information, see Token-Based Authentication. Compile Masters Supported in Monolithic Installations PE users with large (or growing) infrastructures can now add compile masters to monolithic installations. Previously compile masters were only supported in split installations. For instructions on installing compile masters, refer to the compile master installation documentation. For more information about installation sizes and hardware recommendations, see the system requirements. New Agent Platform Support This PE release includes support for the following agent platforms: - Mac OS X 10.11 (El Capitan) (x86_64) - Fedora 22 (i386 and x86_64) Run Puppet From the PE Console You can now run Puppet on a specific node at any time from the PE console. The Run Puppet button is available in the node details view. You can use it to immediately test changes without having to log in to the node. For more information, see Running Puppet on a Node. Code Manager and File Sync Code Manager and file sync work together with r10k to automate the management and deployment of your new Puppet code. Push your code updates to your Git repository, and then Puppet creates environments, installs modules, and deploys and syncs the new code to your masters, so that all your servers start running the new code at the same time, without interrupting agent runs. File sync, which is part of Puppet Server, creates a new code staging directory (/etc/puppetlabs/code-staging) so that it can sync your Puppet code to all masters. If you use Code Manager or file sync, you'll place your code into this new directory.
If you are not using Code Manager or file sync, continue putting code into your usual codedir. See Code Manager for more information about how to get started with these code management tools. To learn more about file sync and the staging directory, see About File Sync. Improvements in PE 2015.3 Permissions in the Operators and Viewers Roles Can Now Be Changed In previous releases of PE, the Operators and Viewers roles in Role-Based Access Control (RBAC) came with a default set of permissions that could not be changed. In this release, the default permissions for the Operators and Viewers roles can be changed by any user with the User roles - Edit permission. For more information on user roles, see Creating and Managing Users and User Roles. Status API No Longer in Tech Preview Status endpoints were added in PE 2015.2, but they were considered to have a tech preview status. These endpoints are now fully supported and are no longer in tech preview. Unsigned Certificates is Now a Separate Page in the PE UI In previous releases, certificates were signed in the PE UI from the Unsigned certificates tab, which was part of the Inventory page. In PE 2015.3, Unsigned certificates has been moved from a tab to a full page to make it easier to access. To sign certificates in the PE UI, go to Nodes > Unsigned certificates. Configuration Management Improvements With this release, Configuration Management has the following improvements: - Ability to filter on tags to view resources by module and to locate exported resources when using the node graph. - Code coordinates on the node graph link to a resource's event details. - Restored reporting on unchanged events for audit purposes. - New paginator in Reports that better handles large numbers of records. Filebucket Resource No Longer Created By Default Puppet Enterprise used to create a filebucket resource by default, which was not necessary and led to problems for customers. As of PE 2015.3, we no longer create a file bucket and we set a resource default of backup => false for all files. If you want to enable the file bucket for your deployment, see the filebucket type reference. Sort Capabilities Added to Event Details All Event tables can now be sorted. Reports Can Be Filtered You can now filter reports by fact value or report run status. r10k Creates a Record of the Last Code Deployment If the code deployed by r10k is copied to another location without the cached repos, it becomes impossible to use Git to interact with the repository and see which version of code r10k deployed. R10k now creates a .r10k-deploy.json file that records the time and SHA of the last code deployment. r10k puppetfile install Supports New Options The r10k puppetfile install subcommand was able to set a custom puppetfile path and moduledir location via environment variables to match librarian-puppet, but they didn't match the librarian-puppet semantics. The r10k puppetfile install subcommand now supports command line flags to set these options. r10k Data Format Output Improved The r10k deploy display output format was non-standard. It has been improved and now defaults to YAML. New Supported Modules Available Azure The azure module allows you to drive the Microsoft Azure API using Puppet code. Use Puppet to create, stop, restart, and destroy Virtual Machines, meaning you can manage even more of your infrastructure as code. Powershell The powershell module adds a Powershell provider that can execute Powershell commands.
This module is particularly helpful if you need to run PowerShell commands but don’t know the details about how PowerShell is executed, since you can technically run PowerShell commands in Puppet without the module. WSUS The Windows Server Update Service (WSUS) lets Windows administrators manage operating system updates using their own servers instead of Microsoft’s Windows Update servers. The wsus_client module configures Puppet agents to schedule update downloads and installations from a WSUS server, manage user access to update settings, and configure automatic updates. Deprecations and Removed Features in PE 2015.3 Accounts Module Removed The built-in pe_accounts module has been removed from Puppet Enterprise. It is replaced by the new accounts module, which is available for download from the Puppet Forge. The module manages resources related to login and service accounts. Windows 2003 Removed Windows 2003 and 2003r2 are no longer supported in Puppet Enterprise.
http://docs.puppetlabs.com/pe/latest/release_notes.html
2016-02-06T00:02:21
CC-MAIN-2016-07
1454701145578.23
[]
docs.puppetlabs.com
Difference between revisions of "Jumla" From Joomla! Documentation Redirect page Revision as of 19:22, 21 September 2012 (view source)Tom Hutchison (Talk | contribs)m (redirect to actual page not a redirect to a redirect page)← Older edit Latest revision as of 14:38,]] [[Category:Landing Pages]] [[Category:Landing Pages]] Latest revision as of 14:38, 15 September 2013 Portal:Beginners#What is Joomla.21.3F Retrieved from ‘’ Category: Landing Pages
https://docs.joomla.org/index.php?title=Jumla&diff=prev&oldid=103650
2016-02-06T01:47:38
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
The process for preparing your storage device is as follows. Individual steps are described in detail in the following topics. Print the PDF file from the CreateJob response and tape the barcode portion to your device. Be especially careful to attach the right barcode to the right device. You submit a separate job request for each device, and each job request generates a unique barcode. Important Attach the barcode securely to your device, with tape on all four sides. Do not shrink the barcode or obscure it in any way. AWS Import/Export uses the signature barcode to validate your storage device before starting the data load. If the barcode is separated from your device, we cannot validate it and we will return your device without performing the data load. Secure your device within the shipping box to prevent shifting in transit. For example, wrapping your device in bubble wrap will help prevent shifting and give added protection from damage.
http://docs.aws.amazon.com/AWSImportExport/latest/DG/PackingEBSJobs.html
2016-02-06T00:11:39
CC-MAIN-2016-07
1454701145578.23
[]
docs.aws.amazon.com
Screen menu item manager topnmenu 15.png From Joomla! Documentation. Screenshot of the Joomla! 1.5 Menu Item Manager Top Menu screen. Original file: 995 × 450 pixels, 21 KB, image/png. No pages link to this file.
https://docs.joomla.org/index.php?title=File:Screen_menu_item_manager_topnmenu_15.png&oldid=1754
2016-02-06T01:22:25
CC-MAIN-2016-07
1454701145578.23
[array(['/images/thumb/4/46/Screen_menu_item_manager_topnmenu_15.png/800px-Screen_menu_item_manager_topnmenu_15.png', 'File:Screen menu item manager topnmenu 15.png'], dtype=object) ]
docs.joomla.org
Difference between revisions of "Glossary" From Joomla! Documentation Latest revision as of 17:41, 27 February 2014 (208 P) - [×] Glossary definitions/pt-br (5 P) - [×] Glossary definitions/ru (28 P) - [×] Glossary definitions/ja (11 P) C - [×] Category Management (2 P) - [×] Category Management/en (empty) Pages in category ‘Glossary’ The following 111 pages are in this category, out of 111 total.
https://docs.joomla.org/index.php?title=Category:Glossary&diff=109888&oldid=62282
2016-02-06T01:06:50
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Changes related to "Finding module positions on any given page" ← Finding module positions on any given page This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20140317211329&target=Finding_module_positions_on_any_given_page
2016-02-06T01:26:11
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
The Request Context¶ This document describes the behavior in Flask 0.7 which is mostly in line with the old behavior but has some small, subtle differences. It is recommended that you read The Application Context chapter first. Diving into Context Locals¶ Say you have a utility function that returns the URL the user should be redirected to. Imagine it would always redirect to the URL's next parameter, or the HTTP referrer, or the index page:

from flask import request, url_for

def redirect_url():
    return request.args.get('next') or \
           request.referrer or \
           url_for('index')

As you can see, it accesses the request object, so it needs a request context to be bound. For testing, you can create one yourself and push it:

>>> ctx = app.test_request_context('/?next=http://example.com/')
>>> ctx.push()

From that point onwards you can work with the request object:

>>> redirect_url()
u'http://example.com/'

Until you call pop:

>>> ctx.pop()

How the Context Works¶ If you look into how the Flask WSGI application internally works, you will find a piece of code that looks very much like this:

def wsgi_app(self, environ):
    with self.request_context(environ):
        try:
            response = self.full_dispatch_request()
        except Exception as e:
            response = self.make_response(self.handle_exception(e))
        return response(environ, start_response)

When the request context is popped at the end of the with block, the application's teardown_request() functions are also executed. Another thing of note is that the request context will automatically also create an application context when it's pushed and there is no application context for that application so far. Callbacks and Errors¶ What happens if an error occurs in Flask during request processing? This particular behavior changed in 0.7 because we wanted to make it easier to understand what is actually happening. The new behavior is quite simple: - Before each request, before_request() functions are executed. If one of these functions returns a response, the other functions are no longer called. In any case, however, the return value is treated as a replacement for the view's return value. - If the before_request() functions did not return a response, the regular request handling kicks in and the view function that was matched has the chance to return a response. - The return value of the view is then converted into an actual response object and handed over to the after_request() functions, which have the chance to replace it or modify it in place. - At the end of the request, the teardown_request() functions are executed; this always happens, even if an unhandled exception was raised. Teardown Callbacks¶ The teardown callbacks are special callbacks in that they are executed at a different point. Strictly speaking they are independent of the actual request handling, as they are bound to the lifecycle of the RequestContext object. When the request context is popped, the teardown_request() functions are called. This is important to know if the life of the request context is prolonged by using the test client in a with statement or when using the request context from the command line:

with app.test_client() as client:
    resp = client.get('/foo')
    # the teardown functions are still not called at that point
    # even though the response ended and you have the response
    # object in your hand

# only when the code reaches this point the teardown functions
# are called. Alternatively the same thing happens if another
# request was triggered from the test client

It's easy to see the behavior from the command line:

>>> app = Flask(__name__)
>>> @app.teardown_request
... def teardown_request(exception=None):
...     print 'this runs after request'
...
>>> ctx = app.test_request_context()
>>> ctx.push()
>>> ctx.pop()
this runs after request
>>>

Notes On Proxies¶ Some of the objects provided by Flask are proxies to other objects.
The reason behind this is that these proxies are shared between threads and they have to dispatch to the actual object bound to a thread behind the scenes as necessary. Most of the time you don't have to care about that, but there are some exceptions where it is good to know that this object is an actual proxy: - The proxy objects do not fake their inherited types, so if you want to perform actual instance checks, you have to do that on the instance that is being proxied (see _get_current_object below). - If the object reference is important (for example for sending Signals). If you need to get access to the underlying object that is proxied, you can use the _get_current_object() method:

app = current_app._get_current_object()
my_signal.send(app)

Context Preservation on Error¶ Whether or not an error occurs, at the end of the request the request context is popped and all data associated with it is destroyed. During development, however, that can be problematic, as you might want to have the information around for a longer time in case an exception occurred. In Flask 0.6 and earlier in debug mode, if an exception occurred, the request context was not popped so that the interactive debugger could still provide you with important information. Starting with Flask 0.7 you have finer control over that behavior by setting the PRESERVE_CONTEXT_ON_EXCEPTION configuration variable. By default it's linked to the setting of DEBUG. If the application is in debug mode the context is preserved; in production mode it's not.
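To make the proxy discussion concrete, here is a small, self-contained sketch written against a current Flask install and Python 3 (rather than the Flask 0.7 / Python 2 syntax used in the text above). It shows that the proxy and the unwrapped application object are distinct, and that checks against the real application are done on the object returned by _get_current_object().

```python
from flask import Flask, current_app

app = Flask(__name__)

with app.app_context():
    # current_app is a proxy object; _get_current_object() returns
    # the real application instance bound to this context.
    real_app = current_app._get_current_object()
    print(type(current_app).__name__)   # the proxy class, not Flask
    print(isinstance(real_app, Flask))  # True
    print(real_app is app)              # True
```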
https://azcv.readthedocs.io/en/stable/reqcontext.html
2022-09-24T18:56:36
CC-MAIN-2022-40
1664030333455.97
[]
azcv.readthedocs.io