doc_id: string (36 chars)
metadata: dict
input: string (28 – 4.3k chars)
output: string (11 – 5.45k chars)
66ca15f6-8510-439d-a408-5afb8e6091e8
{ "language": "AsciiDoc" }
```asciidoc
= Changelog

== Version 0.4.0

Date: unreleased

- Add encode/decode functions to the JWS/JWT implementation. Instead of returning a plain value, they return a monadic either, which allows granular error reporting instead of something like nil that is not very useful. The previous sign/unsign functions are kept for backward compatibility but may be removed in the future.
- Rename parameter `maxage` to `max-age` on the jws implementation. This change introduces a small backward incompatibility.
- Add "compact" signing implementation as a replacement for the django-based one.
- Django-based generic signing is removed.
- Update buddy-core version to 0.4.0

== Version 0.3.0

Date: 2014-01-18

- First version split from the monolithic buddy package.
- No changes from the original version.
```
Update changelog with release date of 0.4.0
```asciidoc
= Changelog

== Version 0.4.0

Date: 2015-02-22

- Add encode/decode functions to the JWS/JWT implementation. Instead of returning a plain value, they return a monadic either, which allows granular error reporting instead of something like nil that is not very useful. The previous sign/unsign functions are kept for backward compatibility but may be removed in the future.
- Rename parameter `maxage` to `max-age` on the jws implementation. This change introduces a small backward incompatibility.
- Add "compact" signing implementation as a replacement for the django-based one.
- Django-based generic signing is removed.
- Update buddy-core version to 0.4.0

== Version 0.3.0

Date: 2014-01-18

- First version split from the monolithic buddy package.
- No changes from the original version.
```
1141c4f3-b760-42b3-aef6-cb2291a71690
{ "language": "AsciiDoc" }
```asciidoc
== Concourse pipeline

Ensure that you've set up the spring-boot target and can log in

[source]
----
$ fly -t spring-boot login -n spring-boot -c https://ci.spring.io
----

The pipeline can be deployed using the following command:

[source]
----
$ fly -t spring-boot set-pipeline -p spring-boot -c ci/pipeline.yml -l ci/parameters.yml
----

NOTE: This assumes that you have credhub integration configured with the appropriate secrets.
```
Rename CI pipeline to include the version in its name
```asciidoc
== Concourse pipeline

Ensure that you've set up the spring-boot target and can log in

[source]
----
$ fly -t spring-boot login -n spring-boot -c https://ci.spring.io
----

The pipeline can be deployed using the following command:

[source]
----
$ fly -t spring-boot set-pipeline -p spring-boot-2.2.x -c ci/pipeline.yml -l ci/parameters.yml
----

NOTE: This assumes that you have credhub integration configured with the appropriate secrets.
```
8c01c016-86a1-4aab-bb41-c4743bc00f2d
{ "language": "AsciiDoc" }
```asciidoc
==== Dependency Upgrades

GORM 7.0 supports a minimum version of Java 8, Hibernate 5.3.x and Spring 5.2.x.

Each of these underlying components may have changes that require altering your application. These changes are beyond the scope of this documentation.

==== Package Restructuring and Deprecations

Previously deprecated classes have been deleted from this release and, in order to support Java 11 modules in the future, some package re-structuring has occurred.

==== Changes to Proxy Handling

GORM no longer creates custom proxy factories nor automatically unwraps Hibernate proxies. This makes it more consistent with the way regular Hibernate behaves and reduces the complexity required at the framework level. You may need to alter `instanceof` checks or manually unwrap proxies in certain cases.

==== Module `grails-validation` Deprecated and Removed

In GORM 6.x the `grails-validation` module was deprecated and replaced by `grails-datastore-gorm-validation`. Deprecated interfaces were maintained for backwards compatibility. In GORM 7.0 these deprecated classes have been removed and all dependencies on `grails-validation` removed.
```
Add note on transaction requirements
```asciidoc
==== Dependency Upgrades

GORM 7.0 supports a minimum version of Java 8, Hibernate 5.3.x and Spring 5.2.x.

Each of these underlying components may have changes that require altering your application. These changes are beyond the scope of this documentation.

==== Package Restructuring and Deprecations

Previously deprecated classes have been deleted from this release and, in order to support Java 11 modules in the future, some package re-structuring has occurred.

==== Changes to Proxy Handling

GORM no longer creates custom proxy factories nor automatically unwraps Hibernate proxies. This makes it more consistent with the way regular Hibernate behaves and reduces the complexity required at the framework level. You may need to alter `instanceof` checks or manually unwrap proxies in certain cases.

==== Module `grails-validation` Deprecated and Removed

In GORM 6.x the `grails-validation` module was deprecated and replaced by `grails-datastore-gorm-validation`. Deprecated interfaces were maintained for backwards compatibility. In GORM 7.0 these deprecated classes have been removed and all dependencies on `grails-validation` removed.

==== Transactions Now Required for all Operations

Previous versions of Hibernate allowed read operations to be executed without the presence of a declared transaction. Hibernate 5.2 and above require the presence of an active transaction. If you see a `javax.persistence.TransactionRequiredException` exception, it means your method lacks a `@Transactional` annotation around it.
```
144a7c25-17d1-45f4-80a7-8a058c40ac53
{ "language": "AsciiDoc" }
```asciidoc
TinkerPop3
----------

image:https://raw.githubusercontent.com/tinkerpop/tinkerpop3/master/docs/static/images/tinkerpop3-splash.png[TinkerPop3]

Documentation
~~~~~~~~~~~~~

* link:http://tinkerpop.com/docs/current/[user documentation]
* link:http://tinkerpop.com/javadocs/current/[javadoc]

Building and Testing
~~~~~~~~~~~~~~~~~~~~

* Build Project: `mvn clean install`
* Build AsciiDocs: `mvn process-resources -Dasciidoc`
* Deploy AsciiDocs: `mvn deploy -Dasciidoc`
* Deploy JavaDocs: `mvn deploy -Djavadoc`
* Travis Deploy Docs: merge/push to `publish-docs` branch
* Integration Tests: `mvn clean verify -DskipIntegrationTests=false`
* Performance Tests: `mvn clean verify -DskipPerformanceTests=false`
* Gatling Load Tests: `mvn clean test -Pgatling -Dhost={hostname}`

Get Started
~~~~~~~~~~~

[source,bash]
----
$ bin/gremlin.sh

         \,,,/
         (o o)
-----oOOo-(3)-oOOo-----
gremlin>
----
```
Add docs for uploading zip archives.
```asciidoc
TinkerPop3
----------

image:https://raw.githubusercontent.com/tinkerpop/tinkerpop3/master/docs/static/images/tinkerpop3-splash.png[TinkerPop3]

Documentation
~~~~~~~~~~~~~

* link:http://tinkerpop.com/docs/current/[user documentation]
* link:http://tinkerpop.com/javadocs/current/[javadoc]

Building and Testing
~~~~~~~~~~~~~~~~~~~~

* Build Project: `mvn clean install`
* Build AsciiDocs: `mvn process-resources -Dasciidoc`
* Deploy AsciiDocs: `mvn deploy -Dasciidoc`
* Deploy JavaDocs: `mvn deploy -Djavadoc`
* Deploy Zip Archives: `mvn deploy -Darchives`
* Travis Deploy Docs: merge/push to `publish-docs` branch
* Integration Tests: `mvn clean verify -DskipIntegrationTests=false`
* Performance Tests: `mvn clean verify -DskipPerformanceTests=false`
* Gatling Load Tests: `mvn clean test -Pgatling -Dhost={hostname}`

Get Started
~~~~~~~~~~~

[source,bash]
----
$ bin/gremlin.sh

         \,,,/
         (o o)
-----oOOo-(3)-oOOo-----
gremlin>
----
```
a6c3effd-a7cf-48ea-9d3a-2383b4c4be04
{ "language": "AsciiDoc" }
```asciidoc
[[file-descriptors]]
=== File Descriptors

[NOTE]
This is only relevant for Linux and macOS and can be safely ignored if running Elasticsearch on Windows. On Windows the JVM uses an https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx[API] limited only by available resources.

Elasticsearch uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open file descriptors for the user running Elasticsearch to 65,536 or higher.

For the `.zip` and `.tar.gz` packages, set <<ulimit,`ulimit -n 65536`>> as root before starting Elasticsearch, or set `nofile` to `65536` in <<limits.conf,`/etc/security/limits.conf`>>.

RPM and Debian packages already default the maximum number of file descriptors to 65536 and do not require further configuration.

You can check the `max_file_descriptors` configured for each node using the <<cluster-nodes-stats>> API, with:

[source,js]
--------------------------------------------------
GET _nodes/stats/process?filter_path=**.max_file_descriptors
--------------------------------------------------
// CONSOLE
```
Document JVM option MaxFDLimit for macOS ()
```asciidoc
[[file-descriptors]]
=== File Descriptors

[NOTE]
This is only relevant for Linux and macOS and can be safely ignored if running Elasticsearch on Windows. On Windows the JVM uses an https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx[API] limited only by available resources.

Elasticsearch uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open file descriptors for the user running Elasticsearch to 65,536 or higher.

For the `.zip` and `.tar.gz` packages, set <<ulimit,`ulimit -n 65536`>> as root before starting Elasticsearch, or set `nofile` to `65536` in <<limits.conf,`/etc/security/limits.conf`>>.

On macOS, you must also pass the JVM option `-XX:-MaxFDLimit` to Elasticsearch in order for it to make use of the higher file descriptor limit.

RPM and Debian packages already default the maximum number of file descriptors to 65536 and do not require further configuration.

You can check the `max_file_descriptors` configured for each node using the <<cluster-nodes-stats>> API, with:

[source,js]
--------------------------------------------------
GET _nodes/stats/process?filter_path=**.max_file_descriptors
--------------------------------------------------
// CONSOLE
```
8e92c3fd-6a3c-4ec5-a93c-7eed71dd91f7
{ "language": "AsciiDoc" }
```asciidoc [id="persistent-storage-csi"] = Configuring CSI volumes include::modules/common-attributes.adoc[] :context: persistent-storage-csi toc::[] The Container Storage Interface (CSI) allows {product-title} to consume storage from storage back ends that implement the link:https://github.com/container-storage-interface/spec[CSI interface] as persistent storage. [IMPORTANT] ==== {product-title} does not ship with any CSI drivers. It is recommended to use the CSI drivers provided by link:https://kubernetes-csi.github.io/docs/drivers.html[community or storage vendors]. Installation instructions differ by driver, and are found in each driver's documentation. Follow the instructions provided by the CSI driver. ==== include::modules/persistent-storage-csi-architecture.adoc[leveloffset=+1] include::modules/persistent-storage-csi-external-controllers.adoc[leveloffset=+2] include::modules/persistent-storage-csi-driver-daemonset.adoc[leveloffset=+2] include::modules/persistent-storage-csi-drivers-supported.adoc[leveloffset=+1] include::modules/persistent-storage-csi-dynamic-provisioning.adoc[leveloffset=+1] include::modules/persistent-storage-csi-mysql-example.adoc[leveloffset=+1] ``` Remove note that says that no CSI drivers are installed
```asciidoc [id="persistent-storage-csi"] = Configuring CSI volumes include::modules/common-attributes.adoc[] :context: persistent-storage-csi toc::[] The Container Storage Interface (CSI) allows {product-title} to consume storage from storage back ends that implement the link:https://github.com/container-storage-interface/spec[CSI interface] as persistent storage. include::modules/persistent-storage-csi-architecture.adoc[leveloffset=+1] include::modules/persistent-storage-csi-external-controllers.adoc[leveloffset=+2] include::modules/persistent-storage-csi-driver-daemonset.adoc[leveloffset=+2] include::modules/persistent-storage-csi-drivers-supported.adoc[leveloffset=+1] include::modules/persistent-storage-csi-dynamic-provisioning.adoc[leveloffset=+1] include::modules/persistent-storage-csi-mysql-example.adoc[leveloffset=+1] ```
9628091d-3a0c-4184-83f1-197fc4ca4c48
{ "language": "AsciiDoc" }
```asciidoc
// .basic
WARNING: Watch out for dragons!

// .basic_multiline
NOTE: An admonition paragraph draws the reader's attention to auxiliary information. Its purpose is determined by the label at the beginning of the paragraph.

// .block
[IMPORTANT]
====
While werewolves are hardy community members, keep in mind some dietary concerns.
====

// .block_with_title
[IMPORTANT]
.Feeding the Werewolves
====
While werewolves are hardy community members, keep in mind some dietary concerns.
====

// .block_with_id_and_role
[IMPORTANT, id=werewolve, role=member]
====
While werewolves are hardy community members, keep in mind some dietary concerns.
====
```
Add full set of examples for each admonition type
```asciidoc
// .note
NOTE: This is a note.

// .note_with_title
.Title of note
NOTE: This is a note with title.

// .note_with_id_and_role
[#note-1.yellow]
NOTE: This is a note with id and role.

// .note_block
[NOTE]
====
This is a note with complex content.

* It contains a list.
====

// .tip
TIP: This is a tip.

// .tip_with_title
.Title of tip
TIP: This is a tip with title.

// .tip_with_id_and_role
[#tip-1.blue]
TIP: This is a tip with id and role.

// .tip_block
[TIP]
====
This is a tip with complex content.

* It contains a list.
====

// .important
IMPORTANT: This is an important notice.

// .important_with_title
.Title of important notice
IMPORTANT: This is an important notice with title.

// .important_with_id_and_role
[#important-1.red]
IMPORTANT: This is an important notice with id and role.

// .important_block
[IMPORTANT]
====
This is an important notice with complex content.

* It contains a list.
====

// .caution
CAUTION: This is a caution.

// .caution_with_title
.Title of caution
CAUTION: This is a caution with title.

// .caution_with_id_and_role
[#caution-1.red]
CAUTION: This is a caution with id and role.

// .caution_block
[CAUTION]
====
This is a caution with complex content.

* It contains a list.
====

// .warning
WARNING: This is a warning.

// .warning_with_title
.Title of warning
WARNING: This is a warning with title.

// .warning_with_id_and_role
[#warning-1.red]
WARNING: This is a warning with id and role.

// .warning_block
[WARNING]
====
This is a warning with complex content.

* It contains a list.
====
```
de6fe72b-d2c4-490c-b9a1-94ef493874e2
{ "language": "AsciiDoc" }
```asciidoc
// Module included in the following assemblies:
//
//* registry/configuring_registry_storage-azure.adoc

[id="registry-configuring-storage-azure-user-infra_{context}"]
= Configuring registry storage for Azure

During installation, your cloud credentials are sufficient to create Azure Blob Storage, and the Registry Operator automatically configures storage.

.Prerequisites

* A cluster on Azure with user-provisioned infrastructure.
* To configure registry storage for Azure, provide Registry Operator cloud credentials.
* For Azure storage the secret is expected to contain one key:
** `REGISTRY_STORAGE_AZURE_ACCOUNTKEY`

.Procedure

. Create an link:https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal[Azure storage container].

. Fill in the storage configuration in `configs.imageregistry.operator.openshift.io/cluster`:
+
----
$ oc edit configs.imageregistry.operator.openshift.io/cluster

storage:
  azure:
    accountName: <account-name>
    container: <container-name>
----
```
Update storage account name in Azure registry storage configuration
```asciidoc
// Module included in the following assemblies:
//
//* registry/configuring_registry_storage-azure.adoc

[id="registry-configuring-storage-azure-user-infra_{context}"]
= Configuring registry storage for Azure

During installation, your cloud credentials are sufficient to create Azure Blob Storage, and the Registry Operator automatically configures storage.

.Prerequisites

* A cluster on Azure with user-provisioned infrastructure.
* To configure registry storage for Azure, provide Registry Operator cloud credentials.
* For Azure storage the secret is expected to contain one key:
** `REGISTRY_STORAGE_AZURE_ACCOUNTKEY`

.Procedure

. Create an link:https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal[Azure storage container].

. Fill in the storage configuration in `configs.imageregistry.operator.openshift.io/cluster`:
+
----
$ oc edit configs.imageregistry.operator.openshift.io/cluster

storage:
  azure:
    accountName: <storage-account-name>
    container: <container-name>
----
```
64cf1862-9b6a-4458-b718-6a7fc70b7e10
{ "language": "AsciiDoc" }
```asciidoc
= OmniJ Release Process

== Main Release Process

. Update `CHANGELOG.adoc`
. Set versions
.. `gradle.properties`
.. omnij-dsl `ExtensionModule`
.. `README.adoc`
. Commit version bump and changelog.
. Full build, test
.. `./gradlew clean jenkinsBuild regTest`
. Tag: `git tag -a v0.x.y -m "Release 0.x.y"`
. Push: `git push --tags origin master`
. Publish to Bintray:
.. `./gradlew bintrayUpload`
.. Confirm publish of artifacts in Bintray Web UI.
. Update Github https://github.com/OmniLayer/OmniJ/releases[OmniJ Releases] page.
. TBD: Update github-pages site (not set up yet)

== Announcements

. Not yet.

== After release

. Set versions back to -SNAPSHOT
.. `build.gradle`
.. omnij-dsl `ExtensionModule`
.. *Not* `README.adoc` -- it should match release version
. Commit and push to master
```
Use `buildCI` target rather than `jenkinsBuild`
```asciidoc
= OmniJ Release Process

== Main Release Process

. Update `CHANGELOG.adoc`
. Set versions
.. `gradle.properties`
.. omnij-dsl `ExtensionModule`
.. `README.adoc`
. Commit version bump and changelog.
. Full build, test
.. `./gradlew clean buildCI regTest`
. Tag: `git tag -a v0.x.y -m "Release 0.x.y"`
. Push: `git push --tags origin master`
. Publish to Bintray:
.. `./gradlew bintrayUpload`
.. Confirm publish of artifacts in Bintray Web UI.
. Update Github https://github.com/OmniLayer/OmniJ/releases[OmniJ Releases] page.
. TBD: Update github-pages site (not set up yet)

== Announcements

. Not yet.

== After release

. Set versions back to -SNAPSHOT
.. `build.gradle`
.. omnij-dsl `ExtensionModule`
.. *Not* `README.adoc` -- it should match release version
. Commit and push to master
```
55d31990-562b-4582-997f-2a4a8fc33e84
{ "language": "AsciiDoc" }
```asciidoc
= Reactor Netty
Stephane Maldini;
:appversion: 0.5.0 (wiki)
ifndef::host-github[:ext-relative: {outfilesuffix}]
{appversion}
:doctype: book
:icons: font

//include::net.adoc[]
include::tcp-server.adoc[]
include::tcp-client.adoc[]
include::http-server.adoc[]
include::http-client.adoc[]
```
Add myself to the list with authors for the reference documentation
```asciidoc
= Reactor Netty
Stephane Maldini <https://twitter.com/smaldini[@smaldini]> ; Violeta Georgieva <https://twitter.com/violeta_g_g[@violeta_g_g]>
:appversion: 0.5.0 (wiki)
ifndef::host-github[:ext-relative: {outfilesuffix}]
{appversion}
:doctype: book
:icons: font

//include::net.adoc[]
include::tcp-server.adoc[]
include::tcp-client.adoc[]
include::http-server.adoc[]
include::http-client.adoc[]
```
75caa0f2-8692-4a08-bae1-dd6f47f28fdb
{ "language": "AsciiDoc" }
```asciidoc [role="xpack"] [[security-api-change-password]] === Change Password API The Change Password API enables you to submit a request to change the password of a user. ==== Request `POST _xpack/security/user/_password` + `POST _xpack/security/user/<username>/_password` ==== Path Parameters `username`:: (string) The user whose password you want to change. If you do not specify this parameter, the password is changed for the current user. ==== Request Body `password` (required):: (string) The new password value. ==== Authorization Every user can change their own password. Users with the `manage_security` privilege can change passwords of other users. ==== Examples The following example updates the password for the `elastic` user: [source,js] -------------------------------------------------- POST _xpack/security/user/elastic/_password { "password": "x-pack-test-password" } -------------------------------------------------- // CONSOLE A successful call returns an empty JSON structure. [source,js] -------------------------------------------------- {} -------------------------------------------------- // TESTRESPONSE ``` Fix formatting in change password API
```asciidoc [role="xpack"] [[security-api-change-password]] === Change Password API The Change Password API enables you to submit a request to change the password of a user. ==== Request `POST _xpack/security/user/_password` + `POST _xpack/security/user/<username>/_password` ==== Path Parameters `username`:: (string) The user whose password you want to change. If you do not specify this parameter, the password is changed for the current user. ==== Request Body `password` (required):: (string) The new password value. ==== Authorization Every user can change their own password. Users with the `manage_security` privilege can change passwords of other users. ==== Examples The following example updates the password for the `elastic` user: [source,js] -------------------------------------------------- POST _xpack/security/user/elastic/_password { "password": "x-pack-test-password" } -------------------------------------------------- // CONSOLE A successful call returns an empty JSON structure. [source,js] -------------------------------------------------- {} -------------------------------------------------- // TESTRESPONSE ```
da120b95-1ff9-41eb-b6a3-ed03209e0752
{ "language": "AsciiDoc" }
```asciidoc
# Janitor

image:https://travis-ci.org/techdev-solutions/janitor.svg?branch=master["Build Status",link="https://travis-ci.org/techdev-solutions/janitor"]

An application to perform cleanup work using the https://getpocket.com[Pocket API]. Review your existing items via the web interface and have Janitor archive old items once per day.

## API Documentation

The documentation for the Kotlin API bindings can be found https://techdev-solutions.github.io/janitor/pocket-api/[here].

## Tutorial (German)

Kotlin: Ein Tutorial für Einsteiger @ JAXenter::

* https://jaxenter.de/kotlin-tutorial-48156[Part 1]
* https://jaxenter.de/kotlin-ein-tutorial-fuer-einsteiger-teil-2-48587[Part 2]
* https://jaxenter.de/kotlin-ein-tutorial-fuer-einsteiger-teil-3-48967[Part 3]

## Screenshots

### Item Overview

image:images/items.png?raw=true[Item Overview]

### Error View

image:images/error.png?raw=true[Error View]
```
Add link to 4th part of JAXenter tutorial
```asciidoc
# Janitor

image:https://travis-ci.org/techdev-solutions/janitor.svg?branch=master["Build Status",link="https://travis-ci.org/techdev-solutions/janitor"]

An application to perform cleanup work using the https://getpocket.com[Pocket API]. Review your existing items via the web interface and have Janitor archive old items once per day.

## API Documentation

The documentation for the Kotlin API bindings can be found https://techdev-solutions.github.io/janitor/pocket-api/[here].

## Tutorial (German)

Kotlin: Ein Tutorial für Einsteiger @ JAXenter::

* https://jaxenter.de/kotlin-tutorial-48156[Part 1]
* https://jaxenter.de/kotlin-ein-tutorial-fuer-einsteiger-teil-2-48587[Part 2]
* https://jaxenter.de/kotlin-ein-tutorial-fuer-einsteiger-teil-3-48967[Part 3]
* https://jaxenter.de/kotlin-ein-tutorial-fuer-einsteiger-teil-4-49160[Part 4]

## Screenshots

### Item Overview

image:images/items.png?raw=true[Item Overview]

### Error View

image:images/error.png?raw=true[Error View]
```
ca7ffc18-ed4f-42ff-a11e-229ebdb2a609
{ "language": "AsciiDoc" }
```asciidoc
= Groovy API
:ref: http://www.elastic.co/guide/en/elasticsearch/reference/current
:java: http://www.elastic.co/guide/en/elasticsearch/client/java-api/current
:version: 5.0.0-alpha5

[preface]
== Preface

This section describes the http://groovy-lang.org/[Groovy] API elasticsearch provides. All elasticsearch APIs are executed using a <<client,GClient>>, and are completely asynchronous in nature (they either accept a listener, or return a future).

The Groovy API is a wrapper on top of the {java}[Java API] exposing it in a groovier manner. The execution options for each API follow a similar manner and are covered in <<anatomy>>.

[[maven]]
=== Maven Repository

The Groovy API is hosted on http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22elasticsearch-groovy%22[Maven Central].

For example, you can define the latest version in your `pom.xml` file:

["source","xml",subs="attributes"]
--------------------------------------------------
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-groovy</artifactId>
    <version>{version}</version>
</dependency>
--------------------------------------------------

include::anatomy.asciidoc[]

include::client.asciidoc[]

include::index_.asciidoc[]

include::get.asciidoc[]

include::delete.asciidoc[]

include::search.asciidoc[]
```
Fix version constant in Groovy API docs
```asciidoc
= Groovy API
:ref: http://www.elastic.co/guide/en/elasticsearch/reference/current
:java: http://www.elastic.co/guide/en/elasticsearch/client/java-api/current
:version: 5.1.0

[preface]
== Preface

This section describes the http://groovy-lang.org/[Groovy] API elasticsearch provides. All elasticsearch APIs are executed using a <<client,GClient>>, and are completely asynchronous in nature (they either accept a listener, or return a future).

The Groovy API is a wrapper on top of the {java}[Java API] exposing it in a groovier manner. The execution options for each API follow a similar manner and are covered in <<anatomy>>.

[[maven]]
=== Maven Repository

The Groovy API is hosted on http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22elasticsearch-groovy%22[Maven Central].

For example, you can define the latest version in your `pom.xml` file:

["source","xml",subs="attributes"]
--------------------------------------------------
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-groovy</artifactId>
    <version>{version}</version>
</dependency>
--------------------------------------------------

include::anatomy.asciidoc[]

include::client.asciidoc[]

include::index_.asciidoc[]

include::get.asciidoc[]

include::delete.asciidoc[]

include::search.asciidoc[]
```
2793490c-59e2-4398-974c-3944056254bc
{ "language": "AsciiDoc" }
```asciidoc
= Minimal-J

Java - but small.

image::doc/frontends.png[]

Minimal-J applications are

* Responsive to use on every device
* Straightforward to specify and implement and therefore
* Easy to plan and manage

=== Idea

Business applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules for how an application should behave and how it should be implemented.

Minimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.

== Technical Features

* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.
* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.
* Small: The minimalj.jar is still < 1MB
* Very few dependencies
* Applications can run standalone (like SpringBoot)

== Documentation

* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.
* link:doc/topics.adoc[Tutorial and examples] Information for developers.
* link:doc/release_notes.adoc[Release Notes]

== Hello World

How to implement Hello World in Minimal-J:

video::0VHz7gv6TpA[youtube]

=== Contact

* Bruno Eberhard, mailto:[email protected][[email protected]]
```
Include Hello World Youtube video
```asciidoc
= Minimal-J

Java - but small.

image::doc/frontends.png[]

Minimal-J applications are

* Responsive to use on every device
* Straightforward to specify and implement and therefore
* Easy to plan and manage

=== Idea

Business applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules for how an application should behave and how it should be implemented.

Minimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.

== Technical Features

* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.
* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.
* Small: The minimalj.jar is still < 1MB
* Very few dependencies
* Applications can run standalone (like SpringBoot)

== Documentation

* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.
* link:doc/topics.adoc[Tutorial and examples] Information for developers.
* link:doc/release_notes.adoc[Release Notes]

== Hello World

How to implement Hello World in Minimal-J:

link:_includes/ex-video.adoc[]

video::0VHz7gv6TpA[youtube]

=== Contact

* Bruno Eberhard, mailto:[email protected][[email protected]]
```
a3ed1dc8-35d1-48cb-963f-600b07f4a8ad
{ "language": "AsciiDoc" }
```asciidoc
== Prerequisites

* Java 1.8 or newer
* A Java web app project that will serve your REST requests.
* A JAX-RS 2.0 container, such as Jersey 2.x.
* Cayenne 4.0.M4 or newer. Mapping your database and starting Cayenne ServerRuntime is outside the scope of this document. Please refer to the http://cayenne.apache.org/docs/4.0/cayenne-guide/index.html[corresponding Cayenne docs].
```
Update cayenne version mentioned in the documentation
```asciidoc
== Prerequisites

* Java 1.8 or newer
* A Java web app project that will serve your REST requests.
* A JAX-RS 2.0 container, such as Jersey 2.x.
* Cayenne 4.0 or newer. Mapping your database and starting Cayenne ServerRuntime is outside the scope of this document. Please refer to the http://cayenne.apache.org/docs/4.0/cayenne-guide/index.html[corresponding Cayenne docs].
```
4901b081-8ba2-428d-8b62-439ce00d479b
{ "language": "AsciiDoc" }
```asciidoc
# AsciiDoc mode for CodeMirror

This repository contains the AsciiDoc mode for CodeMirror.

## Installation

```
$ npm install codemirror-asciidoc
```

## Usage

```js
var codemirror = require("codemirror/lib/codemirror"),
    codemirror_asciidoc = require("codemirror-asciidoc/lib/asciidoc");

codemirror.fromTextArea(document.getElementById("editor"), {
  lineNumbers: true,
  lineWrapping: true,
  mode: "asciidoc"
});
```

## License

BSD

## Credits

The AsciiDoc mode for CodeMirror was generated from the AsciiDoc mode for Ace using the https://github.com/espadrine/ace2cm[ace2cm] project by https://github.com/espadrine[Thaddee Tyl].
```
Simplify import path for modules
```asciidoc
# AsciiDoc mode for CodeMirror

This repository contains the AsciiDoc mode for CodeMirror.

## Installation

```
$ npm install codemirror-asciidoc
```

## Usage

```js
var codemirror = require("codemirror"),
    codemirror_asciidoc = require("codemirror-asciidoc");

codemirror.fromTextArea(document.getElementById("editor"), {
  lineNumbers: true,
  lineWrapping: true,
  mode: "asciidoc"
});
```

## License

BSD

## Credits

The AsciiDoc mode for CodeMirror was generated from the AsciiDoc mode for Ace using the https://github.com/espadrine/ace2cm[ace2cm] project by https://github.com/espadrine[Thaddee Tyl].
```
84941920-2df3-40b5-9b4e-51fbc1c436b9
{ "language": "AsciiDoc" }
```asciidoc
[[prerequisites-openshift]]
.Prerequisites

* To install {ProductName}, the OpenShift client tools are required. You can download the OpenShift Origin client from link:https://github.com/openshift/origin/releases[OpenShift Origin^]. {ProductName} has been tested to work with the latest stable release of the OpenShift Origin Client.

* An OpenShift cluster is required. If you do not have an OpenShift cluster available, see link:https://github.com/minishift/minishift[Minishift^] for an example of how to run a local instance of OpenShift on your machine.

* A method to generate certificates is required. This guide uses link:https://www.openssl.org/[OpenSSL^].
```
Update documentation regarding using Service Catalog on Minishift
```asciidoc
[[prerequisites-openshift]]
.Prerequisites

* To install {ProductName}, the OpenShift client tools are required. You can download the OpenShift Origin client from link:https://github.com/openshift/origin/releases[OpenShift Origin^]. {ProductName} has been tested to work with the latest stable release of the OpenShift Origin Client.

* An OpenShift cluster is required. If you do not have an OpenShift cluster available, see link:https://github.com/minishift/minishift[Minishift^] for an example of how to run a local instance of OpenShift on your machine.

* If you want to install {ProductName} on Minishift and want to use Service Catalog, you need to explicitly enable it during the start-up, like

[options="nowrap"]
----
MINISHIFT_ENABLE_EXPERIMENTAL="y" minishift start --extra-clusterup-flags "--service-catalog"
----

* A method to generate certificates is required. This guide uses link:https://www.openssl.org/[OpenSSL^].
```
5caa25b4-a3e2-4c73-b813-309dee689b0e
{ "language": "AsciiDoc" }
```asciidoc
== Concourse pipeline

Ensure that you've have a spring-boot target logged in

[source]
----
$ fly -t spring-boot login -n spring-boot -c https://ci.spring-io
----

The pipeline can be deployed using the following command:

[source]
----
$ fly -t spring-boot set-pipeline -p spring-boot-2.0.x -c ci/pipeline.yml -l ci/parameters.yml
----

NOTE: This assumes that you have credhub integration configured with the appropriate secrets.
```
Fix typo in CI readme
```asciidoc
== Concourse pipeline

Ensure that you've set up the spring-boot target and can log in

[source]
----
$ fly -t spring-boot login -n spring-boot -c https://ci.spring.io
----

The pipeline can be deployed using the following command:

[source]
----
$ fly -t spring-boot set-pipeline -p spring-boot-2.0.x -c ci/pipeline.yml -l ci/parameters.yml
----

NOTE: This assumes that you have credhub integration configured with the appropriate secrets.
```
5f3293ef-ba7f-4fed-a955-1a67f8117280
{ "language": "AsciiDoc" }
```asciidoc
= Changelog

This changelog contains major and/or breaking changes to Kakoune between released versions.

== Development version

* `%sh{...}` strings are not reparsed automatically anymore, they need to go through an explicit `evaluate-commands`
* The `-allow-override` switch from `define-command` has been renamed `-override`.
* The search prompt uses buffer word completion so that fuzzy completion can be used to quickly search for a buffer word.
* The `wrap` highlighter can accept a new `-marker <marker_text>` switch.

== Kakoune 2018.04.13

First official Kakoune release.
```
Update Changelog to describe the list syntax overhaul
```asciidoc
= Changelog

This changelog contains major and/or breaking changes to Kakoune between released versions.

== Development version

This version contains a significant overhaul of various Kakoune features that can break user configuration. This was a necessary change to make Kakoune command model cleaner and more robust.

* `%sh{...}` strings are not reparsed automatically anymore, they need to go through an explicit `evaluate-commands`
* The `-allow-override` switch from `define-command` has been renamed `-override`.
* The search prompt uses buffer word completion so that fuzzy completion can be used to quickly search for a buffer word.
* The `wrap` highlighter can accept a new `-marker <marker_text>` switch.
* The command line syntax has changed to support robust escaping, see <<command-parsing#,`:doc command-parsing`>>.
* Various lists (options, registers...) in Kakoune are now written using the command line syntax:
- `set-register` now take an arbitrary number of parameters and sets the register to multiple strings. `%reg` expands to a list of strings.
- `%opt` expands list options as list of strings.
- selection descs are whitespaces separated instead of `:` separated

== Kakoune 2018.04.13

First official Kakoune release.
```
0a790ada-ccac-4431-a3bc-3bd0b2086e32
{ "language": "AsciiDoc" }
```asciidoc
[[new]]
= What's New in Spring Security 6.0

Spring Security 6.0 provides a number of new features. Below are the highlights of the release.

== Breaking Changes

* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support. Use the OpenSaml 4 Support instead.
* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`. Instead use data storage to encrypt values.
* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default
* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package. Reorganize imports.
* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages. Reorganize imports.
* https://github.com/spring-projects/spring-security/issues/11026[gh-11026] - Use `RequestAttributeSecurityContextRepository` instead of `NullSecurityContextRepository`
* https://github.com/spring-projects/spring-security/pull/11887[gh-11827] - Change default authority for `oauth2Login()`
```
Update What's New for 6.0
```asciidoc
[[new]]
= What's New in Spring Security 6.0

Spring Security 6.0 provides a number of new features. Below are the highlights of the release.

== Breaking Changes

* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support. Use the OpenSaml 4 Support instead.
* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`. Instead use data storage to encrypt values.
* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default
* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package. Reorganize imports.
* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages. Reorganize imports.
* https://github.com/spring-projects/spring-security/issues/11026[gh-11026] - Use `RequestAttributeSecurityContextRepository` instead of `NullSecurityContextRepository`
* https://github.com/spring-projects/spring-security/pull/11887[gh-11827] - Change default authority for `oauth2Login()`
* https://github.com/spring-projects/spring-security/issues/10347[gh-10347] - Remove `UsernamePasswordAuthenticationToken` check in `BasicAuthenticationFilter`
```
a53dc53e-1943-4901-a4f4-51c4bd15926d
{ "language": "AsciiDoc" }
```asciidoc
TinkerPop3
----------

image:https://raw.githubusercontent.com/tinkerpop/tinkerpop3/master/docs/static/images/tinkerpop3-splash.png[TinkerPop3]

Documentation
~~~~~~~~~~~~~

* link:http://tinkerpop.com/docs/current/[user documentation]
* link:http://tinkerpop.com/javadocs/current/[javadoc]

Building
~~~~~~~~

* Build Project: `mvn clean install`
* Build AsciiDocs: `mvn process-resources -Dasciidoc`
* Deploy AsciiDocs: `mvn deploy -Dasciidoc`
* Deploy JavaDocs: `mvn deploy -Djavadoc`

Get Started
~~~~~~~~~~~

[source,bash]
----
$ bin/gremlin.sh

         \,,,/
         (o o)
-----oOOo-(3)-oOOo-----
gremlin>
----
```
Add some build/test commands to readme.
```asciidoc
TinkerPop3
----------

image:https://raw.githubusercontent.com/tinkerpop/tinkerpop3/master/docs/static/images/tinkerpop3-splash.png[TinkerPop3]

Documentation
~~~~~~~~~~~~~

* link:http://tinkerpop.com/docs/current/[user documentation]
* link:http://tinkerpop.com/javadocs/current/[javadoc]

Building and Testing
~~~~~~~~~~~~~~~~~~~~

* Build Project: `mvn clean install`
* Build AsciiDocs: `mvn process-resources -Dasciidoc`
* Deploy AsciiDocs: `mvn deploy -Dasciidoc`
* Deploy JavaDocs: `mvn deploy -Djavadoc`
* Travis Deploy Docs: merge/push to `publish-docs` branch
* Integration Tests: `mvn clean verify -DskipIntegrationTests=false`
* Performance Tests: `mvn clean verify -DskipPerformanceTests=false`
* Gatling Load Tests: `mvn clean test -Pgatling -Dhost={hostname}`

Get Started
~~~~~~~~~~~

[source,bash]
----
$ bin/gremlin.sh

         \,,,/
         (o o)
-----oOOo-(3)-oOOo-----
gremlin>
----
```
1a9fcdc5-edcd-4e15-a42a-7190da9622b8
{ "language": "AsciiDoc" }
```asciidoc
= Getting Started

First you need to Download the Camel distribution; or you could grab the Source and try Building it yourself.

Then come back here, and you might want to read the following documentation before continuing:

* Longer xref:book-getting-started.adoc[Getting Started Guide]
* Find out about xref:components:eips:enterprise-integration-patterns.adoc[Enterprise Integration Patterns] and how to implement them with Camel
* Review the xref:architecture.adoc[Architecture guide] to see how to build Routes using the xref:dsl.adoc[DSL].

== Working with CamelContexts and RouteBuilders

To get started with Camel:

1. Create a xref:camelcontext.adoc[CamelContext].
2. Add whatever routing rules you wish using the DSL and `RouteBuilder` or using XML DSL.
3. Start the Camel context.

When your application is closing, you may wish to stop the context.

When you are ready, why not xref:walk-through-an-example.adoc[Walk through an Example]? And then continue the walk xref:walk-through-another-example.adoc[Walk through another example].
```
Update walk through link text
```asciidoc
= Getting Started

First you need to Download the Camel distribution; or you could grab the Source and try Building it yourself.

Then come back here, and you might want to read the following documentation before continuing:

* Longer xref:book-getting-started.adoc[Getting Started Guide]
* Find out about xref:components:eips:enterprise-integration-patterns.adoc[Enterprise Integration Patterns] and how to implement them with Camel
* Review the xref:architecture.adoc[Architecture guide] to see how to build Routes using the xref:dsl.adoc[DSL].

== Working with CamelContexts and RouteBuilders

To get started with Camel:

1. Create a xref:camelcontext.adoc[CamelContext].
2. Add whatever routing rules you wish using the DSL and `RouteBuilder` or using XML DSL.
3. Start the Camel context.

When your application is closing, you may wish to stop the context.

When you are ready, why not xref:walk-through-an-example.adoc[walk through an example]? And then xref:walk-through-another-example.adoc[walk through another example].
```
d5fe2127-678c-4c7d-be17-34ed8a694544
{ "language": "AsciiDoc" }
```asciidoc
[[file-descriptors]]
=== File Descriptors

[NOTE]
This is only a problem for Linux and macOS and can be safely ignored if running Elasticsearch on Windows.

Elasticsearch uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open file descriptors for the user running Elasticsearch to 65,536 or higher.

For the `.zip` and `.tar.gz` packages, set <<ulimit,`ulimit -n 65536`>> as root before starting Elasticsearch, or set `nofile` to `65536` in <<limits.conf,`/etc/security/limits.conf`>>.

RPM and Debian packages already default the maximum number of file descriptors to 65536 and do not require further configuration.

You can check the `max_file_descriptors` configured for each node using the <<cluster-nodes-stats>> API, with:

[source,js]
--------------------------------------------------
GET _nodes/stats/process?filter_path=**.max_file_descriptors
--------------------------------------------------
// CONSOLE
```
Reword note about windows and FDs
```asciidoc
[[file-descriptors]]
=== File Descriptors

[NOTE]
This is only relevant for Linux and macOS and can be safely ignored if running Elasticsearch on Windows. On Windows the JVM uses an https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx[API] limited only by available resources.

Elasticsearch uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open file descriptors for the user running Elasticsearch to 65,536 or higher.

For the `.zip` and `.tar.gz` packages, set <<ulimit,`ulimit -n 65536`>> as root before starting Elasticsearch, or set `nofile` to `65536` in <<limits.conf,`/etc/security/limits.conf`>>.

RPM and Debian packages already default the maximum number of file descriptors to 65536 and do not require further configuration.

You can check the `max_file_descriptors` configured for each node using the <<cluster-nodes-stats>> API, with:

[source,js]
--------------------------------------------------
GET _nodes/stats/process?filter_path=**.max_file_descriptors
--------------------------------------------------
// CONSOLE
```
75ae8603-da23-46d5-b178-13e9e650be7a
{ "language": "AsciiDoc" }
```asciidoc
// Module included in the following assemblies:
//
// assembly-configure-address-spaces-addresses-cli-kube.adoc
// assembly-configure-address-spaces-addresses-cli-oc.adoc

[id='create-address-space-cli-{context}']
= Creating an address space

.Procedure

. Create an address space definition:
+
[source,yaml,options="nowrap"]
.link:resources/standard-address-space.yaml[standard-address-space.yaml]
----
include::../common/standard-address-space.yaml[]
----

. Create the address space:
+
[options="nowrap",subs="attributes"]
----
{cmdcli} create -f standard-address-space.yaml
----
+
The address space is ready for use when `.status.isReady` field is set to `true`.

. Check the status of the address space:
+
[options="nowrap",subs="attributes"]
----
{cmdcli} get addressspace myspace -o jsonpath={.status.isReady}
----

. Retrieve console URL:
+
[options="nowrap",subs="attributes"]
----
{cmdcli} get addressspace myspace -o jsonpath={.status.endpointStatuses[?(@.name==\'console\')].host}
----
```
Add additional steps for creating address space
```asciidoc
// Module included in the following assemblies:
//
// assembly-configure-address-spaces-addresses-cli-kube.adoc
// assembly-configure-address-spaces-addresses-cli-oc.adoc

[id='create-address-space-cli-{context}']
= Creating an address space

.Procedure

ifeval::["{cmdcli}" == "oc"]
. Log in as a messaging tenant:
+
[source,yaml,options="nowrap"]
----
{cmdcli} login -u developer
----

. Create the project for the messaging application:
+
[source,yaml,options="nowrap"]
----
{cmdcli} new-project myapp
----
endif::[]

. Create an address space definition:
+
[source,yaml,options="nowrap"]
.link:resources/standard-address-space.yaml[standard-address-space.yaml]
----
include::../common/standard-address-space.yaml[]
----

. Create the address space:
+
[options="nowrap",subs="attributes"]
----
{cmdcli} create -f standard-address-space.yaml
----
+
The address space is ready for use when `.status.isReady` field is set to `true`.

. Check the status of the address space:
+
[options="nowrap",subs="attributes"]
----
{cmdcli} get addressspace myspace -o jsonpath={.status.isReady}
----

. Retrieve console URL:
+
[options="nowrap",subs="attributes"]
----
{cmdcli} get addressspace myspace -o jsonpath={.status.endpointStatuses[?(@.name==\'console\')].host}
----
```
78a20e68-c56e-4d9d-9ea9-e2f143c2eac1
{ "language": "AsciiDoc" }
```asciidoc
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc

[id="change-default-storage-class_{context}"]
= Changing the default StorageClass

If you are using GCE and AWS, use the following process to change the default StorageClass. This process assumes you have two StorageClasses defined, `gp2` and `standard`, and you want to change the default StorageClass from `gp2` to `standard`.

. List the StorageClass:
+
----
$ oc get storageclass

NAME                 TYPE
gp2 (default)        kubernetes.io/aws-ebs <1>
standard             kubernetes.io/gce-pd
----
<1> `(default)` denotes the default StorageClass.

. Change the value of the annotation `storageclass.kubernetes.io/is-default-class` to `false` for the default StorageClass:
+
----
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
----

. Make another StorageClass the default by adding or modifying the annotation as `storageclass.kubernetes.io/is-default-class=true`.
+
----
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
----

. Verify the changes:
+
----
$ oc get storageclass

NAME                 TYPE
gp2                  kubernetes.io/aws-ebs
standard (default)   kubernetes.io/gce-pd
----
```
Fix bz1729334: remove GCE storage reference
```asciidoc
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc

[id="change-default-storage-class_{context}"]
= Changing the default StorageClass

If you are using AWS, use the following process to change the default StorageClass. This process assumes you have two StorageClasses defined, `gp2` and `standard`, and you want to change the default StorageClass from `gp2` to `standard`.

. List the StorageClass:
+
----
$ oc get storageclass

NAME                 TYPE
gp2 (default)        kubernetes.io/aws-ebs <1>
standard             kubernetes.io/aws-ebs
----
<1> `(default)` denotes the default StorageClass.

. Change the value of the annotation `storageclass.kubernetes.io/is-default-class` to `false` for the default StorageClass:
+
----
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
----

. Make another StorageClass the default by adding or modifying the annotation as `storageclass.kubernetes.io/is-default-class=true`.
+
----
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
----

. Verify the changes:
+
----
$ oc get storageclass

NAME                 TYPE
gp2                  kubernetes.io/aws-ebs
standard (default)   kubernetes.io/aws-ebs
----
```
c78839f5-61c9-46af-b8aa-40d975acad70
{ "language": "AsciiDoc" }
```asciidoc :title: Natural habitat :parent: Traits and environment == Natural habitatz C. porcellus is not found naturally in the wild; it is likely descended from some closely related species of cavies, such as C. aperea, C. fulgida, and C. tschudii, which are still commonly found in various regions of South America. Some species of cavy identified in the 20th century, such as C. anolaimae and C. guianae, may be domestic guinea pigs that have become feral by reintroduction into the wild. Wild cavies are found on grassy plains and occupy an ecological niche similar to that of cattle. They are social, living in the wild in small groups which consist of several females (sows), a male (boar), and the young (which in a break with the preceding porcine nomenclature are called pups). They move together in groups (herds) eating grass or other vegetation, and do not store food. While they do not burrow or build nests, they frequently seek shelter in the burrows of other animals, as well as in crevices and tunnels formed by vegetation. They tend to be most active during dawn and dusk, when it is harder for predators to spot them. ``` Fix header in test source
```asciidoc :title: Natural habitat :parent: Traits and environment == Natural habitat C. porcellus is not found naturally in the wild; it is likely descended from some closely related species of cavies, such as C. aperea, C. fulgida, and C. tschudii, which are still commonly found in various regions of South America. Some species of cavy identified in the 20th century, such as C. anolaimae and C. guianae, may be domestic guinea pigs that have become feral by reintroduction into the wild. Wild cavies are found on grassy plains and occupy an ecological niche similar to that of cattle. They are social, living in the wild in small groups which consist of several females (sows), a male (boar), and the young (which in a break with the preceding porcine nomenclature are called pups). They move together in groups (herds) eating grass or other vegetation, and do not store food. While they do not burrow or build nests, they frequently seek shelter in the burrows of other animals, as well as in crevices and tunnels formed by vegetation. They tend to be most active during dawn and dusk, when it is harder for predators to spot them. ```
35984a20-9a9d-4770-a3fb-c9bbc2ff1662
{ "language": "AsciiDoc" }
```asciidoc
= AsciidoctorJ Conversion Process Overview

Before starting to write your first extension, some basic understanding of how Asciidoctor treats the document is helpful. As any language processing tool, the process can be roughly split into three steps:

. Parsing: the raw sources content is read and analyzed to generate the internal representation, the AST (_abstract syntax tree_).
. Processing: the AST is processed. For example to detect possible errors, add automatically generated content (toc), etc.
. Output generation: once the final AST is set, it's again processed to generate the desired output. For example, a sub-section of the AST representing a title with a paragraph will be converted into its correspondent HTML or PDF output.

NOTE: Some liberty is taken to make the process easier to understand. In reality, Asciidoctor has implementation details that divert from the 3 steps above.

The different extension types are called in different steps of the conversion process in the following order:

. IncludeProcessors are called in an arbitrary and changeable order during processing and are called whenever an `include::` is found.
. Preprocessors are called just before parsing.
. BlockMacroProcessors are called during processing in the order that they appear in the document.
. BlockProcessors are called during processing in the order that they appear in the document.
. Treeprocessors are called right before processing.
. InlineMacroProcessors are called during processing in the order that they appear in the document.
. Postprocessors are called before output generation.
. DocinfoProcessors are called in an arbitrary and changeable order during processing.
```
Clarify that BlockMacro- and BlockProcessors are called in the same stage
```asciidoc
= AsciidoctorJ Conversion Process Overview

Before starting to write your first extension, some basic understanding of how Asciidoctor treats the document is helpful. As any language processing tool, the process can be roughly split into three steps:

. Parsing: the raw sources content is read and analyzed to generate the internal representation, the AST (_abstract syntax tree_).
. Processing: the AST is processed. For example to detect possible errors, add automatically generated content (toc), etc.
. Output generation: once the final AST is set, it's again processed to generate the desired output. For example, a sub-section of the AST representing a title with a paragraph will be converted into its correspondent HTML or PDF output.

NOTE: Some liberty is taken to make the process easier to understand. In reality, Asciidoctor has implementation details that divert from the 3 steps above.

The different extension types are called in different steps of the conversion process in the following order:

. IncludeProcessors are called in an arbitrary and changeable order during processing and are called whenever an `include::` is found.
. Preprocessors are called just before parsing.
. BlockMacroProcessors and BlockProcessors are called during processing in the order that they appear in the document.
. Treeprocessors are called right before processing.
. InlineMacroProcessors are called during processing in the order that they appear in the document.
. Postprocessors are called before output generation.
. DocinfoProcessors are called in an arbitrary and changeable order during processing.
```
cee1d524-186b-4267-aac8-f2559881e9c0
{ "language": "AsciiDoc" }
```asciidoc
ifndef::imagesdir[:imagesdir: ../images]
= generateDeck

image::ea/Manual/generateDeck.png[]

This task makes use of the https://github.com/asciidoctor/asciidoctor-reveal.js/[asciidoctor-reveal.js] backend to render your documents HTML based presentation.

This task is best used together with the <<exportPPT>> task. Create a PowerPoint presentation and enrich it with reveal.js slide definitions in AsciiDoc within the speaker notes.

== Source

.build.gradle
[source,groovy]
----
include::../../../build.gradle[tags=generateDeck]
----
```
Fix missing part of sentence
```asciidoc
ifndef::imagesdir[:imagesdir: ../images]
= generateDeck

image::ea/Manual/generateDeck.png[]

This task makes use of the https://github.com/asciidoctor/asciidoctor-reveal.js/[asciidoctor-reveal.js] backend to render your documents into a HTML based presentation.

This task is best used together with the <<exportPPT>> task. Create a PowerPoint presentation and enrich it with reveal.js slide definitions in AsciiDoc within the speaker notes.

== Source

.build.gradle
[source,groovy]
----
include::../../../build.gradle[tags=generateDeck]
----
```
06a9bc60-88d8-48c3-a170-35b8f2ee34a6
{ "language": "AsciiDoc" }
```asciidoc # byteslice ## Status image:https://travis-ci.org/rlespinasse/byteslice.svg?branch=master["Build Status", link="https://travis-ci.org/rlespinasse/byteslice"] image:https://coveralls.io/repos/github/rlespinasse/byteslice/badge.svg?branch=master["Coverage Status", link="https://coveralls.io/github/rlespinasse/byteslice?branch=master"] ``` Add how-to section and documentation site link
```asciidoc = byteslice image:https://travis-ci.org/rlespinasse/byteslice.svg?branch=master["Build Status", link="https://travis-ci.org/rlespinasse/byteslice"] image:https://coveralls.io/repos/github/rlespinasse/byteslice/badge.svg?branch=master["Coverage Status", link="https://coveralls.io/github/rlespinasse/byteslice?branch=master"] == How to === test it with coverage [source,shell] ----- $ go test -coverprofile=/tmp/byteslice-cover . ok github.com/rlespinasse/byteslice 0.007s coverage: 100.0% of statements ----- === launch the benchmark [source,shell] ----- $ go test -bench=. . goos: ... goarch: ... pkg: github.com/rlespinasse/byteslice BenchmarkRBitState/get_low_bit_of_high_nibble-8 2000000000 1.99 ns/op BenchmarkRBitState/get_low_bit-8 2000000000 1.96 ns/op ... PASS ok github.com/rlespinasse/byteslice 158.442s ----- == Documentation Site See https://godoc.org/github.com/rlespinasse/byteslice ```
1d576416-f53f-47bf-90f9-ed66d405a78c
{ "language": "AsciiDoc" }
```asciidoc [[xpack-rollup]] = Rolling up historical data [partintro] -- beta[] Keeping historical data around for analysis is extremely useful but often avoided due to the financial cost of archiving massive amounts of data. Retention periods are thus driven by financial realities rather than by the usefulness of extensive historical data. The Rollup feature in {xpack} provides a means to summarize and store historical data so that it can still be used for analysis, but at a fraction of the storage cost of raw data. * <<rollup-overview, Overview>> * <<rollup-getting-started,Getting Started>> * <<rollup-api-quickref, API Quick Reference>> * <<rollup-understanding-groups,Understanding Rollup Grouping>> * <<rollup-search-limitations,Limitations of Rollup Search>> -- include::overview.asciidoc[] include::api-quickref.asciidoc[] include::rollup-getting-started.asciidoc[] include::understanding-groups.asciidoc[] include::rollup-search-limitations.asciidoc[]``` Mark Rollup as Experimental instead of Beta
```asciidoc [[xpack-rollup]] = Rolling up historical data [partintro] -- experimental[] Keeping historical data around for analysis is extremely useful but often avoided due to the financial cost of archiving massive amounts of data. Retention periods are thus driven by financial realities rather than by the usefulness of extensive historical data. The Rollup feature in {xpack} provides a means to summarize and store historical data so that it can still be used for analysis, but at a fraction of the storage cost of raw data. * <<rollup-overview, Overview>> * <<rollup-getting-started,Getting Started>> * <<rollup-api-quickref, API Quick Reference>> * <<rollup-understanding-groups,Understanding Rollup Grouping>> * <<rollup-search-limitations,Limitations of Rollup Search>> -- include::overview.asciidoc[] include::api-quickref.asciidoc[] include::rollup-getting-started.asciidoc[] include::understanding-groups.asciidoc[] include::rollup-search-limitations.asciidoc[]```
e46600d5-e5ae-4871-be54-ddb526bd2ac2
{ "language": "AsciiDoc" }
```asciidoc // Module included in the following assemblies: // // * operators/understanding-olm/olm-understanding-olm.adoc [id="olm-metrics_{context}"] = Exposed metrics The Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based {product-title} cluster monitoring stack. .Metrics exposed by OLM [cols="2a,8a",options="header"] |=== |Name |Description |`csv_count` |Number of CSVs successfully registered. |`install_plan_count` |Number of InstallPlans. |`subscription_count` |Number of Subscriptions. |`csv_upgrade_count` |Monotonic count of CatalogSources. |=== ``` Update OLM metrics for 4.3
```asciidoc
// Module included in the following assemblies:
//
// * operators/understanding-olm/olm-understanding-olm.adoc

[id="olm-metrics_{context}"]
= Exposed metrics

The Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based {product-title} cluster monitoring stack.

.Metrics exposed by OLM
[cols="2a,8a",options="header"]
|===
|Name
|Description

|`catalog_source_count`
|Number of CatalogSources.

|`csv_abnormal`
|When reconciling a CSV, present whenever a CSV version is in any state other than `Succeeded`. Includes the `name`, `namespace`, `phase`, `reason`, and `version` labels. A Prometheus alert is created when this metric is present.

|`csv_count`
|Number of CSVs successfully registered.

|`csv_succeeded`
|When reconciling a CSV, represents whether a CSV version is in a `Succeeded` state (value `1`) or not (value `0`). Includes the `name`, `namespace`, and `version` labels.

|`csv_upgrade_count`
|Monotonic count of CSV upgrades.

|`install_plan_count`
|Number of InstallPlans.

|`subscription_count`
|Number of Subscriptions.

|`subscription_sync_total`
|Monotonic count of Subscription syncs. Includes the `channel`, `installed` CSV, and Subscription `name` labels.
|===
```
845e02b5-00c4-42cd-bb00-26e097f9f1da
{ "language": "AsciiDoc" }
```asciidoc
= conoha/roles/dokku

Role to install dokku 0.5.7.

. After finishing the installation, access `http://d.10sr.mydns.jp` and finish the setup.

== Deploying Apps

To deploy, something like the following must be done for each machine:

----
cat .ssh/id_rsa.pub | ssh conoha 'sudo sshcommand acl-add dokku dokkudeploy'
----

This should be automated somehow.

=== Add Apps

.Add python-getting-started app
----
alias dokku="ssh -t dokku@conoha"
cd ../../dokku-apps/python-getting-started
dokku apps:create python-getting-started
git remote add dokku dokku@conoha:python-getting-started
git push dokku master
----

== Refs

* http://dokku.viewdocs.io/dokku/application-deployment/
* https://gist.github.com/10sr/cf8b84cf16f2e67f5dac
```
Add url to access apps
```asciidoc
= conoha/roles/dokku

Role to install dokku 0.5.7.

. After finishing the installation, access `http://d.10sr.mydns.jp` and finish the setup.

== Deploying Apps

To deploy, something like the following must be done for each machine:

----
cat .ssh/id_rsa.pub | ssh conoha 'sudo sshcommand acl-add dokku dokkudeploy'
----

This should be automated somehow.

=== Add Apps

.Add python-getting-started app
----
alias dokku="ssh -t dokku@conoha"
cd ../../dokku-apps/python-getting-started
dokku apps:create python-getting-started
git remote add dokku dokku@conoha:python-getting-started
git push dokku master
----

This app will then be available at http://python-getting-started.d.10sr.mydns.jp

== Refs

* http://dokku.viewdocs.io/dokku/application-deployment/
* https://gist.github.com/10sr/cf8b84cf16f2e67f5dac
```
7fa8359a-50f7-4769-9bdb-82ad786b4e98
{ "language": "AsciiDoc" }
```asciidoc = Keyword argument functions now also accept maps Fogus 2021-03-18 :jbake-type: post ifdef::env-github,env-browser[:outfilesuffix: .adoc] To date, Clojure’s support for keyword arguments forces programmers to choose between creating APIs that better support people (accepting keyword args) or APIs that better support programs (by taking a map of those args). Introduced in Clojure 1.11, a function specified to take keyword arguments may be passed a single map instead of or in addition to (and following) the key/value pairs. When a lone map is passed, it is used outright for destructuring, else a trailing map is added to the map built from the preceding key/values via `conj`. For example, a function that takes a sequence and optional keyword arguments and returns a vector containing the values is defined as: [source,clojure] ---- (defn destr [& {:keys [a b] :as opts}] [a b opts]) (destr :a 1) ->[1 nil {:a 1}] (destr {:a 1 :b 2}) ->[1 2 {:a 1 :b 2}] ---- In Clojure 1.11 the call to `destr` accepts a mixture of key/value pairs and/or a lone (or trailing) map benefitting both programmer and program. This enhancement is available now in `org.clojure/clojure "1.11.0-alpha1"`. ``` Fix typo: change 'and' to 'of'
```asciidoc = Keyword argument functions now also accept maps Fogus 2021-03-18 :jbake-type: post ifdef::env-github,env-browser[:outfilesuffix: .adoc] To date, Clojure’s support for keyword arguments forces programmers to choose between creating APIs that better support people (accepting keyword args) or APIs that better support programs (by taking a map of those args). Introduced in Clojure 1.11, a function specified to take keyword arguments may be passed a single map instead of or in addition to (and following) the key/value pairs. When a lone map is passed, it is used outright for destructuring, else a trailing map is added to the map built from the preceding key/values via `conj`. For example, a function that takes a sequence of optional keyword arguments and returns a vector containing the values is defined as: [source,clojure] ---- (defn destr [& {:keys [a b] :as opts}] [a b opts]) (destr :a 1) ->[1 nil {:a 1}] (destr {:a 1 :b 2}) ->[1 2 {:a 1 :b 2}] ---- In Clojure 1.11 the call to `destr` accepts a mixture of key/value pairs and/or a lone (or trailing) map benefitting both programmer and program. This enhancement is available now in `org.clojure/clojure "1.11.0-alpha1"`. ```
3db2e117-4094-4755-853e-c292e9d8e331
{ "language": "AsciiDoc" }
```asciidoc = Light Table David Nolen 2016-08-02 :type: tools :toc: macro :icons: font [Light Table]http://www.lighttable.com is an extensible IDE that offers instant evaluation of your code, realtime feedback, and a ClojureScript plugin ecosystem. To get started, check out this concise http://docs.lighttable.com/#start[introduction] or the [full tutorial]http://docs.lighttable.com/tutorials/full/. Once you feel comfortable navigating Light Table's interface and using basic commands, it is advisable to install the official Paredit plugin. You can do so via the integrated plugin manager: open the command bar and look for `Plugins: Show plugin manager`. ``` Adjust links to lighttable and full tutorial
```asciidoc = Light Table David Nolen 2016-08-02 :type: tools :toc: macro :icons: font http://www.lighttable.com[Light Table] is an extensible IDE that offers instant evaluation of your code, realtime feedback, and a ClojureScript plugin ecosystem. To get started, check out this concise http://docs.lighttable.com/#start[introduction] or the http://docs.lighttable.com/tutorials/full/[full tutorial]. Once you feel comfortable navigating Light Table's interface and using basic commands, it is advisable to install the official Paredit plugin. You can do so via the integrated plugin manager: open the command bar and look for `Plugins: Show plugin manager`. ```
1d918ee7-07f8-4f20-ba5b-da510f90e4b7
{ "language": "AsciiDoc" }
```asciidoc [[breaking-changes-5.3]] == Breaking changes in 5.3 [[breaking_53_packaging_changes]] [float] === Packaging changes [float] ==== Logging configuration Previously Elasticsearch exposed a single system property (`es.logs`) that included the absolute path to the configured logs directory, and the prefix of the filenames used for the various logging files (the main log file, the deprecation log, and the slow logs). This property has been replaced in favor of three properties: - `es.logs.base_path`: the absolute path to the configured logs directory - `es.logs.cluster_name`: the default prefix of the filenames used for the various logging files - `es.logs.node_name`: exposed if `node.name` is configured for inclusion in the filenames of the various logging files (if you prefer) The property `es.logs` is deprecated and will be removed in Elasticsearch 6.0.0. ``` Fix list formatting in 5.3 migration docs
```asciidoc [[breaking-changes-5.3]] == Breaking changes in 5.3 [[breaking_53_packaging_changes]] [float] === Packaging changes [float] ==== Logging configuration Previously Elasticsearch exposed a single system property (`es.logs`) that included the absolute path to the configured logs directory, and the prefix of the filenames used for the various logging files (the main log file, the deprecation log, and the slow logs). This property has been replaced in favor of three properties: * `es.logs.base_path`: the absolute path to the configured logs directory * `es.logs.cluster_name`: the default prefix of the filenames used for the various logging files * `es.logs.node_name`: exposed if `node.name` is configured for inclusion in the filenames of the various logging files (if you prefer) The property `es.logs` is deprecated and will be removed in Elasticsearch 6.0.0. ```
9a92f8d6-ff56-4f7b-83d7-c5abf98cfd10
{ "language": "AsciiDoc" }
```asciidoc = {project-name} :author: {project-author} :revnumber: {project-version} :toclevels: 10 include::_links.adoc[] :leveloffset: 1 include::introduction.adoc[] include::configuration.adoc[] include::modules.adoc[] ``` Add apidoc links to guide
```asciidoc = {project-name} :author: {project-author} :revnumber: {project-version} :toclevels: 10 include::_links.adoc[] :leveloffset: 1 include::introduction.adoc[] include::configuration.adoc[] include::modules.adoc[] = Links link:api/index.html[Javadoc, window="_blank"] link:api-src/index.html[API Sources, window="_blank"] ```
7b781d14-634e-4ad7-bf78-e4d539ce3dd2
{ "language": "AsciiDoc" }
```asciidoc
[[analysis-truncate-tokenfilter]]
=== Truncate Token Filter

The `truncate` token filter can be used to truncate tokens into a specific length. This can come in handy with keyword (single token) based mapped fields that are used for sorting in order to reduce memory usage.

It accepts a `length` parameter which controls the number of characters to truncate to; it defaults to `10`.
```
Update truncate token filter to not mention the keyword tokenizer
```asciidoc
[[analysis-truncate-tokenfilter]]
=== Truncate Token Filter

The `truncate` token filter can be used to truncate tokens into a specific length.

It accepts a `length` parameter which controls the number of characters to truncate to; it defaults to `10`.
```
62ae81a0-c006-44c0-b60f-a2eea012bf2c
{ "language": "AsciiDoc" }
```asciidoc James Glasbrenner's dotfiles ============================ James Glasbrenner <[email protected]> July 27, 2017 My dotfiles repo. Bootstrapping ------------- Stow 2.3.0 is included in this repo for bootstrapping purposes. To stow stow after cloning this repository to `$HOME/.dotfiles`, run .bash ---------------------------------------------- PERL5LIB=$HOME/.dotfiles/stow/stow/lib/perl5 \ $HOME/.dotfiles/stow/stow/bin/stow \ -d $HOME/.dotfiles/stow \ -t /home/glasbren/.local \ stow ---------------------------------------------- License ------- All content is licensed under the terms of link:LICENSE[The Unlicense License]. ``` Fix source code block formatting
```asciidoc James Glasbrenner's dotfiles ============================ James Glasbrenner <[email protected]> July 27, 2017 My dotfiles repo. Bootstrapping ------------- Stow 2.3.0 is included in this repo for bootstrapping purposes. To stow stow after cloning this repository to `$HOME/.dotfiles`, run [source,bash] ---------------------------------------------- PERL5LIB=$HOME/.dotfiles/stow/stow/lib/perl5 \ $HOME/.dotfiles/stow/stow/bin/stow \ -d $HOME/.dotfiles/stow \ -t /home/glasbren/.local \ stow ---------------------------------------------- License ------- All content is licensed under the terms of link:LICENSE[The Unlicense License]. ```
d29577d9-1a10-4a87-890b-961545626174
{ "language": "AsciiDoc" }
```asciidoc
[[query-periodic-commit]]
Using Periodic Commit
=====================

NOTE: See <<cypherdoc-importing-csv-files-with-cypher>> on how to import data from CSV files.

Updating very large amounts of data (e.g. when importing) with a single Cypher query may fail due to memory constraints.

For these situations *only*, Cypher provides the global +USING PERIODIC COMMIT+ query hint for updating queries. Periodic Commit tracks the number of updates performed by a query (creating a node, setting a property, etc.). Whenever the number of updates reaches a limit, the current transaction is committed and replaced with a newly opened transaction.

Using periodic commit will prevent running out of memory when updating large amounts of data. However, it will also break transactional isolation and thus it should only be used where needed.

:leveloffset: 2

include::periodic-commit-without-update-limit.asciidoc[]

include::periodic-commit-with-update-limit.asciidoc[]

include::import-using-periodic-commit.asciidoc[]
```
Remove links to the periodic commit reference
```asciidoc
[[query-periodic-commit]]
Using Periodic Commit
=====================

NOTE: See <<cypherdoc-importing-csv-files-with-cypher>> on how to import data from CSV files.

Updating very large amounts of data (e.g. when importing) with a single Cypher query may fail due to memory constraints.

For these situations *only*, Cypher provides the global +USING PERIODIC COMMIT+ query hint for updating queries. Periodic Commit tracks the number of updates performed by a query (creating a node, setting a property, etc.). Whenever the number of updates reaches a limit, the current transaction is committed and replaced with a newly opened transaction.

Using periodic commit will prevent running out of memory when updating large amounts of data. However, it will also break transactional isolation and thus it should only be used where needed.

:leveloffset: 2

include::import-using-periodic-commit.asciidoc[]
```
08268532-a126-468d-a63f-beacaf450587
{ "language": "AsciiDoc" }
```asciidoc = Griffon 2.0.0.RC1 Released Andres Almiray 2014-07-29 :jbake-type: post :jbake-status: published :category: news :idprefix: == Griffon 2.0.0.RC1 Released The Griffon team is happy to announce the release of Griffon 2.0.0.RC1! This is the first release candidate of Griffon 2.0.0. If all goes according to planf the next release will be 2.0.0 final. The following list summarizes the changes brought by this release: * Groovy support upgraded to Groovy 2.3.5. * Several build plugins updated to their latest versions. * Addition of a master pom file for application projects. * Enable +mavenLocal()+ on gradle-griffon plugins by default. * more content added to the link:../guide/2.0.0.RC1/index.html[Griffon Guide]. We look forward to your feedback. Please report any problems you find to the Griffon User list, or better yet, file a bug at http://jira.codehaus.org/browse/griffon Remember you can also contact the team on Twitter: http://twitter.com/theaviary[@theaviary]. Many thanks to all who contributed to this release! The Griffon Team ``` Update latest news on site project
```asciidoc
= Griffon 2.0.0.RC1 Released
Andres Almiray
2014-07-29
:jbake-type: post
:jbake-status: published
:category: news
:idprefix:

== Griffon 2.0.0.RC1 Released

The Griffon team is happy to announce the release of Griffon 2.0.0.RC1!

This is the first release candidate of Griffon 2.0.0. If all goes according to plan, the next release will be 2.0.0 final in a few weeks' time.

The following list summarizes the changes brought by this release:

* Groovy support upgraded to Groovy 2.3.5.
* Several build plugins updated to their latest versions.
* Addition of a master pom file for application projects.
* Enable +mavenLocal()+ on gradle-griffon plugins by default.
* Lots of updates applied to Lazybones application templates.
* More content added to the link:../guide/2.0.0.RC1/index.html[Griffon Guide].

We look forward to your feedback. Please report any problems you find to the Griffon User list, or better yet, file a bug at http://jira.codehaus.org/browse/griffon

Remember you can also contact the team on Twitter: http://twitter.com/theaviary[@theaviary].

Many thanks to all who contributed to this release!

The Griffon Team
```
89fd1c63-f2f4-435d-aff9-6cd2a8ca1781
{ "language": "AsciiDoc" }
```asciidoc
[[monitoring]]

== Monitoring {ProductName} on OpenShift

{ProductName} comes with addons for Prometheus and Grafana for monitoring the service. Cluster-admin privileges are required for Prometheus to monitor pods in the cluster.

=== Deploying Prometheus

.Procedure

. Create Prometheus deployment
+
[options="nowrap"]
----
oc create -f ./openshift/addons/prometheus.yaml -n enmasse
----

. Grant cluster-reader privileges to Prometheus service account
+
[options="nowrap"]
----
oc policy add-role-to-user cluster-reader system:serviceaccount:enmasse:prometheus-server
----

=== Deploying Grafana

.Procedure

. Create Grafana deployment
+
[options="nowrap"]
----
oc create -f ./openshift/addons/grafana.yaml -n enmasse
----

. Expose Grafana service
+
[options="nowrap"]
----
oc expose service grafana
----

Grafana accepts the username 'admin' and password 'admin' by default. See the link:https://prometheus.io/docs/visualization/grafana/#creating-a-prometheus-data-source[Prometheus Documentation] on how to connect Grafana to Prometheus. Use `prometheus.enmasse.svc.cluster.local` as the prometheus hostname.
```
Use correct command to add cluster-reader rolebinding
```asciidoc
[[monitoring]]

== Monitoring {ProductName} on OpenShift

{ProductName} comes with addons for Prometheus and Grafana for monitoring the service. Cluster-admin privileges are required for Prometheus to monitor pods in the cluster.

=== Deploying Prometheus

.Procedure

. Create Prometheus deployment
+
[options="nowrap"]
----
oc create -f ./openshift/addons/prometheus.yaml -n enmasse
----

. Grant cluster-reader privileges to Prometheus service account
+
[options="nowrap"]
----
oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:enmasse:prometheus-server
----

=== Deploying Grafana

.Procedure

. Create Grafana deployment
+
[options="nowrap"]
----
oc create -f ./openshift/addons/grafana.yaml -n enmasse
----

. Expose Grafana service
+
[options="nowrap"]
----
oc expose service grafana
----

Grafana accepts the username 'admin' and password 'admin' by default. See the link:https://prometheus.io/docs/visualization/grafana/#creating-a-prometheus-data-source[Prometheus Documentation] on how to connect Grafana to Prometheus. Use `prometheus.enmasse.svc.cluster.local` as the prometheus hostname.
```
f1b19336-9bf6-4b7b-8bb5-e51683dea431
{ "language": "AsciiDoc" }
```asciidoc = M17n TODO ``` Add section on how to install MIM files
```asciidoc
= M17n

https://www.nongnu.org/m17n/[m17n] refers to a project for enhanced multilingualization on Linux. The output target generates `.mim` files, which are used as configuration files for https://en.wikipedia.org/wiki/Intelligent_Input_Bus[IBUS].

== Project-level configuration and properties

NOTE: For a list of all supported properties of the X11 target files, see <<TargetX11>> in the reference section below.

== Layout-level configuration and properties

Currently none.

== How to install a new locale on X11

Assumes Ubuntu 18.04.

. Make sure the Gnome settings app is not open
. Copy the new `.mim` file(s) to either `~/.m17n.d/` (user local) or `/usr/share/m17n/` (global)
. Make sure they have the correct permissions set, i.e. `644` like the other files
. Restart all IBus daemons (according to https://askubuntu.com/a/656243[this question]), e.g. using `pgrep ibus | xargs kill`
. Open Gnome settings
.. Go to "Region & Language"
.. Under "Input Sources", press the "+" button
.. In the modal window, select your newly added language variant. Note that it might be grouped by the main language, e.g., you might need to click on "Swedish (Sweden)" first, and then select the specific variant like "Swedish (test (m17n))".
.. Confirm with "Add"
.. You can test the keyboard layout by clicking on the "eye" icon next to the language in the list
. The language should show up in the global menu bar's language selection.
```
181b2c91-da0f-4d67-971d-4c64a53c36a7
{ "language": "AsciiDoc" }
```asciidoc [[important-settings]] == Important Elasticsearch configuration While Elasticsearch requires very little configuration, there are a number of settings which need to be considered before going into production. The following settings *must* be considered before going to production: * <<path-settings,Path settings>> * <<cluster.name,Cluster name>> * <<node.name,Node name>> * <<network.host,Network host>> * <<discovery-settings,Discovery settings>> * <<heap-size,Heap size>> * <<heap-dump-path,Heap dump path>> * <<gc-logging,GC logging>> include::important-settings/path-settings.asciidoc[] include::important-settings/cluster-name.asciidoc[] include::important-settings/node-name.asciidoc[] include::important-settings/network-host.asciidoc[] include::important-settings/discovery-settings.asciidoc[] include::important-settings/heap-size.asciidoc[] include::important-settings/heap-dump-path.asciidoc[] include::important-settings/gc-logging.asciidoc[] ``` Add error file docs to important settings
```asciidoc [[important-settings]] == Important Elasticsearch configuration While Elasticsearch requires very little configuration, there are a number of settings which need to be considered before going into production. The following settings *must* be considered before going to production: * <<path-settings,Path settings>> * <<cluster.name,Cluster name>> * <<node.name,Node name>> * <<network.host,Network host>> * <<discovery-settings,Discovery settings>> * <<heap-size,Heap size>> * <<heap-dump-path,Heap dump path>> * <<gc-logging,GC logging>> include::important-settings/path-settings.asciidoc[] include::important-settings/cluster-name.asciidoc[] include::important-settings/node-name.asciidoc[] include::important-settings/network-host.asciidoc[] include::important-settings/discovery-settings.asciidoc[] include::important-settings/heap-size.asciidoc[] include::important-settings/heap-dump-path.asciidoc[] include::important-settings/gc-logging.asciidoc[] include::important-settings/error-file.asciidoc[] ```
1d1d9dc9-6267-4ae3-9b00-1d3b58a5d199
{ "language": "AsciiDoc" }
```asciidoc // Module included in the following assemblies: // // * cli_reference/openshift_developer_cli/creating-a-single-component-application-with-odo [id="creating-a-nodejs-application-with-odo_{context}"] = Creating a Node.js application with {odo-title} .Procedure . Add a component of the type Node.js to your application: + ---- $ odo create nodejs ---- + NOTE: By default, the latest image is used. You can also explicitly supply an image version by using `odo create openshift/nodejs:8`. . Push the initial source code to the component: + ---- $ odo push ---- + Your component is now deployed to {product-title}. . Create a URL and add an entry in the local configuration file as follows: + ---- $ odo url create --port 8080 ---- + . Push the changes. This creates a URL on the cluster. + ---- $ odo push ---- + . List the URLs to check the desired URL for the component. + ---- $ odo url list ---- + . View your deployed application using the generated URL. + ---- $ curl <URL> ----``` Update step for deploying an odo single application
```asciidoc // Module included in the following assemblies: // // * cli_reference/openshift_developer_cli/creating-a-single-component-application-with-odo [id="creating-a-nodejs-application-with-odo_{context}"] = Creating a Node.js application with {odo-title} .Procedure . Change the current directory to the front-end directory: + ---- $ cd <directory-name> ---- . Add a component of the type Node.js to your application: + ---- $ odo create nodejs ---- + NOTE: By default, the latest image is used. You can also explicitly supply an image version by using `odo create openshift/nodejs:8`. . Push the initial source code to the component: + ---- $ odo push ---- + Your component is now deployed to {product-title}. . Create a URL and add an entry in the local configuration file as follows: + ---- $ odo url create --port 8080 ---- + . Push the changes. This creates a URL on the cluster. + ---- $ odo push ---- + . List the URLs to check the desired URL for the component. + ---- $ odo url list ---- + . View your deployed application using the generated URL. + ---- $ curl <URL> ---- ```
50fb9980-ea6f-4be7-85e6-361b11fc4a46
{ "language": "AsciiDoc" }
```asciidoc = flexy-pool Author <[email protected]> v1.0.0, 2014-02-25 :toc: :imagesdir: images :homepage: http://vladmihalcea.com/ == Introduction The flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand. This is very handy since most connection pools offer a limited set of dynamic configuration strategies. == Features (Goals) * extensive connection pool support(Bitronix TM, C3PO, DBCP) * statistics support ** connection acquiring time histogram ** retries attempts histogram ** minimum CP size histogram ** maximum CP size histogram ** average CP size histogram``` Add more goals for metrics
```asciidoc = flexy-pool Author <[email protected]> v1.0.0, 2014-02-25 :toc: :imagesdir: images :homepage: http://vladmihalcea.com/ == Introduction The flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand. This is very handy since most connection pools offer a limited set of dynamic configuration strategies. == Features (Goals) * extensive connection pool support(Bitronix TM, C3PO, DBCP) * statistics support ** source connection acquiring time histogram ** total connection acquiring time histogram ** retries attempts histogram ** minimum CP size histogram ** maximum CP size histogram ** average connection request count histogram```
6f394e87-9be7-463b-97c4-ca2cd639aa62
{ "language": "AsciiDoc" }
```asciidoc [role="xpack"] [[xpack-sql]] = SQL Access :sql-tests: {xes-repo-dir}/../../qa/sql :sql-specs: {sql-tests}/src/main/resources :jdbc-tests: {sql-tests}/src/main/java/org/elasticsearch/xpack/qa/sql/jdbc :security-tests: {sql-tests}/security/src/test/java/org/elasticsearch/xpack/qa/sql/security [partintro] -- X-Pack includes a SQL feature to execute SQL against Elasticsearch indices and return tabular results. There are four main components: <<sql-rest,REST API>>:: Accepts SQL in a JSON document, executes it, and returns the results. <<sql-translate,Translate API>>:: Accepts SQL in a JSON document and translates it into a native Elasticsearch query and returns that. <<sql-cli,CLI>>:: Command line application that connects to Elasticsearch to execute SQL and print tabular results. <<sql-jdbc,JDBC>>:: A JDBC driver for Elasticsearch. -- include::getting-started.asciidoc[] include::endpoints/index.asciidoc[] include::functions/index.asciidoc[] include::language/index.asciidoc[] :jdbc-tests!: ``` Mark SQL feature as experimental
```asciidoc [role="xpack"] [[xpack-sql]] = SQL Access :sql-tests: {xes-repo-dir}/../../qa/sql :sql-specs: {sql-tests}/src/main/resources :jdbc-tests: {sql-tests}/src/main/java/org/elasticsearch/xpack/qa/sql/jdbc :security-tests: {sql-tests}/security/src/test/java/org/elasticsearch/xpack/qa/sql/security [partintro] -- experimental[] X-Pack includes a SQL feature to execute SQL against Elasticsearch indices and return tabular results. There are four main components: <<sql-rest,REST API>>:: Accepts SQL in a JSON document, executes it, and returns the results. <<sql-translate,Translate API>>:: Accepts SQL in a JSON document and translates it into a native Elasticsearch query and returns that. <<sql-cli,CLI>>:: Command line application that connects to Elasticsearch to execute SQL and print tabular results. <<sql-jdbc,JDBC>>:: A JDBC driver for Elasticsearch. -- include::getting-started.asciidoc[] include::endpoints/index.asciidoc[] include::functions/index.asciidoc[] include::language/index.asciidoc[] :jdbc-tests!: ```
1780cbe8-7d53-49ab-b0ef-07948e7e39ce
{ "language": "AsciiDoc" }
```asciidoc = Cloud Enablement CCT module This repository serves as a temporary public link:https://github.com/concrt/concreate[Concreate] module for building JBoss Openshift container images. This repository is an enhanced copy of shell scripts used to build JBoss images for OpenShift. ``` Add a mini-faq and see-also URIs
```asciidoc = Cloud Enablement CCT module This repository serves as a temporary public link:https://github.com/concrt/concreate[Concreate] module for building JBoss Openshift container images. This repository is an enhanced copy of shell scripts used to build JBoss images for OpenShift. == FAQ === Why the name? *cct* (container configuration tool) was the name of a precursor tool to *concreate*. This module bundle was originally created for use with *cct* but has outlived it. === Temporary? We are in the process of refactoring the modules. The end-goal is likely to be splitting this repository up into separate modules aligned by product. == See also * https://github.com/jboss-container-images[Red Hat Middleware Container Images] * http://registry.access.redhat.com/[Red Hat Container Catalog] * https://www.openshift.com/[Red Hat OpenShift] ```
f0697682-1f84-4800-9b2b-7a3a0ebe5105
{ "language": "AsciiDoc" }
```asciidoc = Continuous integration :awestruct-layout: normalBase :showtitle: We use Jenkins for continuous integration. *Show https://hudson.jboss.org/hudson/job/drools/[the Jenkins job].* Keep the build blue! ``` Update public Jenkins job URL
```asciidoc
= Continuous integration
:awestruct-layout: normalBase
:showtitle:

We use Jenkins for continuous integration.

*Show https://jenkins-kieci.rhcloud.com/[the Jenkins jobs].* These are mirrors of Red Hat internal Jenkins jobs.

Keep the builds green!
```
2bc7a285-8ff6-4aa9-80ec-56040cb68d52
{ "language": "AsciiDoc" }
```asciidoc = Minimal-J Java - but small. image::doc/frontends.png[] Minimal-J applications are * Responsive to use on every device * Straight forward to specify and implement and therefore * Easy to plan and manage === Idea Business applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules how an application should behave and how it should be implemented. Minimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user. == Technical Features * Independent of the used UI technology. Implementations for Web / Mobile / Desktop. * ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported. * Small: The minimalj.jar is still < 1MB * Very few dependencies * Applications can run standalone (like SpringBoot) == Documentation * link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications. * link:doc/topics.adoc[Tutorial and examples] Informations for developers. * link:doc/release_notes.adoc[Release Notes] === Contact * Bruno Eberhard, mailto:[email protected][[email protected]] ``` Include Hello World Youtube video
```asciidoc
= Minimal-J

Java - but small.

image::doc/frontends.png[]

Minimal-J applications are

* Responsive to use on every device
* Straightforward to specify and implement and therefore
* Easy to plan and manage

=== Idea

Business applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules how an application should behave and how it should be implemented.

Minimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.

== Technical Features

* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.
* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.
* Small: The minimalj.jar is still < 1MB
* Very few dependencies
* Applications can run standalone (like SpringBoot)

== Documentation

* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.
* link:doc/topics.adoc[Tutorial and examples] Information for developers.
* link:doc/release_notes.adoc[Release Notes]

== Hello World

How to implement Hello World in Minimal-J:

image::http://img.youtube.com/vi/0VHz7gv6TpA/0.jpg[Minimal-J - Hello World, link="http://www.youtube.com/watch?v=0VHz7gv6TpA"]

=== Contact

* Bruno Eberhard, mailto:[email protected][[email protected]]
```
3480d0cb-40b7-493b-b1ff-c63dbfe059e2
{ "language": "AsciiDoc" }
```asciidoc [id="persistent-storage-using-csi"] = Configuring CSI volumes include::modules/common-attributes.adoc[] :context: persistent-storage-csi toc::[] The Container Storage Interface (CSI) allows {product-title} to consume storage from storage backends that implement the link:https://github.com/container-storage-interface/spec[CSI interface] as persistent storage. [IMPORTANT] ==== {product-title} does not ship with any CSI drivers. It is recommended to use the CSI drivers provided by link:https://kubernetes-csi.github.io/docs/drivers.html[community or storage vendors]. Installation instructions differ by driver, and are found in each driver's documentation. Follow the instructions provided by the CSI driver. {product-title} {product-version} supports version 1.1.0 of the link:https://github.com/container-storage-interface/spec[CSI specification]. ==== include::modules/persistent-storage-csi-architecture.adoc[leveloffset=+1] include::modules/persistent-storage-csi-external-controllers.adoc[leveloffset=+2] include::modules/persistent-storage-csi-driver-daemonset.adoc[leveloffset=+2] include::modules/persistent-storage-csi-dynamic-provisioning.adoc[leveloffset=+1] include::modules/persistent-storage-csi-mysql-example.adoc[leveloffset=+1] ``` Remove CSI spec ref in note
```asciidoc [id="persistent-storage-using-csi"] = Configuring CSI volumes include::modules/common-attributes.adoc[] :context: persistent-storage-csi toc::[] The Container Storage Interface (CSI) allows {product-title} to consume storage from storage back ends that implement the link:https://github.com/container-storage-interface/spec[CSI interface] as persistent storage. [IMPORTANT] ==== {product-title} does not ship with any CSI drivers. It is recommended to use the CSI drivers provided by link:https://kubernetes-csi.github.io/docs/drivers.html[community or storage vendors]. Installation instructions differ by driver, and are found in each driver's documentation. Follow the instructions provided by the CSI driver. ==== include::modules/persistent-storage-csi-architecture.adoc[leveloffset=+1] include::modules/persistent-storage-csi-external-controllers.adoc[leveloffset=+2] include::modules/persistent-storage-csi-driver-daemonset.adoc[leveloffset=+2] include::modules/persistent-storage-csi-dynamic-provisioning.adoc[leveloffset=+1] include::modules/persistent-storage-csi-mysql-example.adoc[leveloffset=+1] ```
34c411ba-ef7f-4b12-b611-fd92365ce12a
{ "language": "AsciiDoc" }
```asciidoc [[development]] == Development Github repository: {datasource-proxy}``` Add how to build documentation
```asciidoc
[[development]]
== Development

Github repository: {datasource-proxy}

=== Build Documentation

[source,sh]
----
> ./mvnw asciidoctor:process-asciidoc@output-html
----
```
5e7260f6-1d52-44f5-a72f-26c2c209e5ca
{ "language": "AsciiDoc" }
```asciidoc [[release-notes]] = JUnit 5 Release Notes Stefan Bechtold; Sam Brannen; Johannes Link; Matthias Merdes; Marc Philipp; Christian Stein // :docinfodir: ../docinfos :docinfo: private-head :numbered!: // This document contains the _change log_ for all JUnit 5 releases since 5.0 GA. Please refer to the <<../user-guide/index.adoc#user-guide,User Guide>> for comprehensive reference documentation for programmers writing tests, extension authors, and engine authors as well as build tool and IDE vendors. include::../link-attributes.adoc[] include::release-notes-5.1.0-M2.adoc[] include::release-notes-5.1.0-RC1.adoc[] include::release-notes-5.1.0-M1.adoc[] include::release-notes-5.0.3.adoc[] include::release-notes-5.0.2.adoc[] include::release-notes-5.0.1.adoc[] include::release-notes-5.0.0.adoc[] ``` Fix order of includes for Release Notes
```asciidoc [[release-notes]] = JUnit 5 Release Notes Stefan Bechtold; Sam Brannen; Johannes Link; Matthias Merdes; Marc Philipp; Christian Stein // :docinfodir: ../docinfos :docinfo: private-head :numbered!: // This document contains the _change log_ for all JUnit 5 releases since 5.0 GA. Please refer to the <<../user-guide/index.adoc#user-guide,User Guide>> for comprehensive reference documentation for programmers writing tests, extension authors, and engine authors as well as build tool and IDE vendors. include::../link-attributes.adoc[] include::release-notes-5.1.0-RC1.adoc[] include::release-notes-5.1.0-M2.adoc[] include::release-notes-5.1.0-M1.adoc[] include::release-notes-5.0.3.adoc[] include::release-notes-5.0.2.adoc[] include::release-notes-5.0.1.adoc[] include::release-notes-5.0.0.adoc[] ```
fa84ec8c-bb41-4c1d-8f1c-3dd3f5e7ecb8
{ "language": "AsciiDoc" }
```asciidoc == Setting up a development environment on Ubuntu 16.04 (Xenial) Install development dependencies: $ sudo apt-get install python-pip python-pyscard libykpers-1-1 libu2f-host0 Setup the repository: $ git clone --recursive https://github.com/Yubico/yubikey-manager.git $ cd yubikey-manager Install in editable mode with pip (from root of repository): $ sudo pip install -e . Run the app: $ ykman --help To update once installed, just make sure the repo is up to date: $ git pull $ git submodule update To uninstall, run: $ sudo pip uninstall yubikey-manager ``` Add documentation about running tests
```asciidoc == Setting up a development environment on Ubuntu 16.04 (Xenial) Install development dependencies: $ sudo apt-get install python-pip python-pyscard libykpers-1-1 libu2f-host0 Setup the repository: $ git clone --recursive https://github.com/Yubico/yubikey-manager.git $ cd yubikey-manager Install in editable mode with pip (from root of repository): $ sudo pip install -e . Run the app: $ ykman --help To update once installed, just make sure the repo is up to date: $ git pull $ git submodule update To uninstall, run: $ sudo pip uninstall yubikey-manager === Unit tests To run unit tests: $ python setup.py test === Integration tests WARNING: ONLY run these on a dedicated developer key, as it will permanently delete data on the device! To run integration tests: $ INTEGRATION_TESTS=TRUE python setup.py test ```
91081c81-97bf-48ac-a7ca-42d117da5b38
{ "language": "AsciiDoc" }
```asciidoc :gfdl-enabled: OsmoTRX User Manual ==================== Pau Espin Pedrol <[email protected]> include::../common/chapters/preface.adoc[] include::chapters/overview.adoc[] include::chapters/running.adoc[] include::../common/chapters/control_if.adoc[] include::chapters/control.adoc[] include::../common/chapters/vty.adoc[] include::../common/chapters/logging.adoc[] include::chapters/counters.adoc[] include::chapters/configuration.adoc[] include::../common/chapters/port_numbers.adoc[] include::../common/chapters/bibliography.adoc[] include::../common/chapters/glossary.adoc[] include::../common/chapters/gfdl.adoc[] ``` Introduce chapter trx_if.adoc and add it to OsmoTRX and OsmoBTS
```asciidoc :gfdl-enabled: OsmoTRX User Manual ==================== Pau Espin Pedrol <[email protected]> include::../common/chapters/preface.adoc[] include::chapters/overview.adoc[] include::chapters/running.adoc[] include::../common/chapters/control_if.adoc[] include::chapters/control.adoc[] include::../common/chapters/vty.adoc[] include::../common/chapters/logging.adoc[] include::chapters/counters.adoc[] include::chapters/configuration.adoc[] include::../common/chapters/trx_if.adoc[] include::../common/chapters/port_numbers.adoc[] include::../common/chapters/bibliography.adoc[] include::../common/chapters/glossary.adoc[] include::../common/chapters/gfdl.adoc[] ```
51ef3773-a0e4-4caf-bb95-44bcd984b6b4
{ "language": "AsciiDoc" }
```asciidoc [[misc]] == Misc * https://github.com/elasticsearch/puppet-elasticsearch[Puppet]: Elasticsearch puppet module. * http://github.com/elasticsearch/cookbook-elasticsearch[Chef]: Chef cookbook for Elasticsearch * https://github.com/medcl/salt-elasticsearch[SaltStack]: SaltStack Module for Elasticsearch * http://www.github.com/neogenix/daikon[daikon]: Daikon Elasticsearch CLI * https://github.com/Aconex/scrutineer[Scrutineer]: A high performance consistency checker to compare what you've indexed with your source of truth content (e.g. DB) * https://www.wireshark.org/[Wireshark]: Protocol dissection for Zen discovery, HTTP and the binary protocol * https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin[Readonly REST]: High performance access control for Elasticsearch native REST API. ``` Add Pes Plugin to the plugin page
```asciidoc [[misc]] == Misc * https://github.com/elasticsearch/puppet-elasticsearch[Puppet]: Elasticsearch puppet module. * http://github.com/elasticsearch/cookbook-elasticsearch[Chef]: Chef cookbook for Elasticsearch * https://github.com/medcl/salt-elasticsearch[SaltStack]: SaltStack Module for Elasticsearch * http://www.github.com/neogenix/daikon[daikon]: Daikon Elasticsearch CLI * https://github.com/Aconex/scrutineer[Scrutineer]: A high performance consistency checker to compare what you've indexed with your source of truth content (e.g. DB) * https://www.wireshark.org/[Wireshark]: Protocol dissection for Zen discovery, HTTP and the binary protocol * https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin[Readonly REST]: High performance access control for Elasticsearch native REST API. * https://github.com/kodcu/pes[Pes]: A pluggable elastic query DSL builder for Elasticsearch ```
1ed50017-659a-4eea-8b77-c1545b33b493
{ "language": "AsciiDoc" }
```asciidoc = Configuration notes: bryn.justinwflory.com Configuration or changelog notes for Bryn. == Firewall * *Services*: ** SSH (22) ** HTTP (80) ** DNS (52) ** HTTPS (443) ** MySQL * *Ports*: ** `8086/tcp`: InfluxDB ** `60597/udp`: Mosh == XenForo * `sitemaps/` and `temp/` must have 0777 permissions… ** `temp/` is for the image proxy ``` Add wp_cli update in progress scenario to logbook
```asciidoc = Configuration notes: bryn.justinwflory.com Configuration or changelog notes for Bryn. == Firewall * *Services*: ** SSH (22) ** HTTP (80) ** DNS (52) ** HTTPS (443) ** MySQL * *Ports*: ** `8086/tcp`: InfluxDB ** `60597/udp`: Mosh == WordPress === wp_cli cannot upgrade WordPress because another update is in progress `wp option get core_updater.lock`:: Check if WordPress is holding a version lock. If a number is returned, the version lock is on. `wp option delete core_updater.lock`:: Break a version lock on WordPress. Verify an update is NOT in progress before deleting (check processes, visit admin panel). https://wordpress.stackexchange.com/questions/224989/get-rid-of-another-update-is-currently-in-progress[Reference] == XenForo * `sitemaps/` and `temp/` must have 0777 permissions… ** `temp/` is for the image proxy ```
9ebe8b1c-77be-4407-8027-6f00eb3a7efe
{ "language": "AsciiDoc" }
```asciidoc = beeLīn Java / JVM This is the start of a DNSChain client for the JVM. == Install Prerequisites Make sure you have JDK 7 or later installed. WARNING: **beeLin-java** is using HTTPS to query the DNSChain server at https://api.dnschain.net[api.dnschain.net] which uses a https://cert.startcom.org[StartCom] SSL certificate. By default, Java installations do not trust the StartCom certificate authority and your HTTPS connection to `api.dnschain.net` may fail. For more detail and instructions on how to install the StartCom certificate in your JDK/JRE see http://wernerstrydom.com/2014/01/14/installing-startcom-certificate-authority-certificates-java/[Installing StartCom Certificate Authority Certificates in Java]. == Build and run tests ./gradlew build You can view the test results in `beelinj-lib/build/reports/tests/index.html` Of course you probably want to install https://www.jetbrains.com/idea/download/[IntelliJ IDEA] to do this. ``` Add how to run the browser via Gradle.
```asciidoc = beeLīn Java / JVM This is the start of a DNSChain client for the JVM. == Install Prerequisites Make sure you have JDK 7 or later installed. WARNING: **beeLin-java** is using HTTPS to query the DNSChain server at https://api.dnschain.net[api.dnschain.net] which uses a https://cert.startcom.org[StartCom] SSL certificate. By default, Java installations do not trust the StartCom certificate authority and your HTTPS connection to `api.dnschain.net` may fail. For more detail and instructions on how to install the StartCom certificate in your JDK/JRE see http://wernerstrydom.com/2014/01/14/installing-startcom-certificate-authority-certificates-java/[Installing StartCom Certificate Authority Certificates in Java]. == Build and run tests ./gradlew build You can view the test results in `beelinj-lib/build/reports/tests/index.html` Of course you probably want to install https://www.jetbrains.com/idea/download/[IntelliJ IDEA] to do this. == Build and run the bare-bones browser ./gradlew :beelinj-browser-fx:run ```
67fe1ccf-9466-488f-8d0d-078331c6e8cf
{ "language": "AsciiDoc" }
```asciidoc = Installers :toc: :toc-placement!: toc::[] [[intro]] Introduction ------------ CMake has the ability to create installers for multiple platforms using a program called link:https://cmake.org/Wiki/CMake:CPackPackageGenerators[CPack]. CPack includes the ability to create Linux RPM, deb and gzip distributions of both binaries and source code. It also includes teh ability to create NSIS files for Microsoft Windows. ``` Fix typo. teh -> the
```asciidoc = Installers :toc: :toc-placement!: toc::[] [[intro]] Introduction ------------ CMake has the ability to create installers for multiple platforms using a program called link:https://cmake.org/Wiki/CMake:CPackPackageGenerators[CPack]. CPack includes the ability to create Linux RPM, deb and gzip distributions of both binaries and source code. It also includes the ability to create NSIS files for Microsoft Windows. ```
69cde7e9-32af-4436-a370-b2984ad8f39c
{ "language": "AsciiDoc" }
```asciidoc // Do not edit this file (e.g. go instead to src/main/asciidoc) Spring Cloud Build is a common utility project for Spring Cloud to use for plugin and dependency management. == Building and Deploying To install locally: ---- $ mvn install ---- and to deploy snapshots to repo.spring.io: ---- $ mvn install -DaltSnapshotDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-snapshot-local ---- for a RELEASE build use ---- $ mvn install -DaltReleaseDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-release-local ---- and for Maven Central use ---- $ mvn install -P central -DaltReleaseDeploymentRepository=sonatype-nexus-staging::default::https://oss.sonatype.org/service/local/staging/deploy/maven2 ---- (the "central" profile is available for all projects in Spring Cloud and it sets up the gpg jar signing, and the repository has to be specified separately for this project because it is a parent of the starter parent which users inturn have as their own parent). ``` Update readme for mavn deploy
```asciidoc // Do not edit this file (e.g. go instead to src/main/asciidoc) Spring Cloud Build is a common utility project for Spring Cloud to use for plugin and dependency management. == Building and Deploying To install locally: ---- $ mvn install ---- and to deploy snapshots to repo.spring.io: ---- $ mvn deploy -DaltSnapshotDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-snapshot-local ---- for a RELEASE build use ---- $ mvn deploy -DaltReleaseDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-release-local ---- and for Maven Central use ---- $ mvn deploy -P central -DaltReleaseDeploymentRepository=sonatype-nexus-staging::default::https://oss.sonatype.org/service/local/staging/deploy/maven2 ---- (the "central" profile is available for all projects in Spring Cloud and it sets up the gpg jar signing, and the repository has to be specified separately for this project because it is a parent of the starter parent which users in turn have as their own parent). ```
fbea0096-88dd-493d-9947-55fddfff9ed6
{ "language": "AsciiDoc" }
```asciidoc = Spring Session Rob Winch Spring Session aims to provide a common infrastructure for managing sessions. This provides many benefits including: * Accessing a session from any environment (i.e. web, messaging infrastructure, etc) * In a web environment ** Support for clustering in a vendor neutral way ** Pluggable strategy for determining the session id ** Easily keep the HttpSession alive when a WebSocket is active = Spring Session Project Site You can find the documentation, issue management, support, samples, and guides for using Spring Session at http://spring.io/spring-session/ = License Spring Session is Open Source software released under the http://www.apache.org/licenses/LICENSE-2.0.html[Apache 2.0 license]. ``` Add projects subdomain in project page URL
```asciidoc = Spring Session Rob Winch Spring Session aims to provide a common infrastructure for managing sessions. This provides many benefits including: * Accessing a session from any environment (i.e. web, messaging infrastructure, etc) * In a web environment ** Support for clustering in a vendor neutral way ** Pluggable strategy for determining the session id ** Easily keep the HttpSession alive when a WebSocket is active = Spring Session Project Site You can find the documentation, issue management, support, samples, and guides for using Spring Session at http://projects.spring.io/spring-session/ = License Spring Session is Open Source software released under the http://www.apache.org/licenses/LICENSE-2.0.html[Apache 2.0 license]. ```
c8bf8516-7340-4c4f-80b9-21f4fddcef7b
{ "language": "AsciiDoc" }
```asciidoc [[release-notes-5.7.0-M1]] == 5.7.0-M1 *Date of Release:* ❓ *Scope:* ❓ For a complete list of all _closed_ issues and pull requests for this release, consult the link:{junit5-repo}+/milestone/44?closed=1+[5.7 M1] milestone page in the JUnit repository on GitHub. [[release-notes-5.7.0-M1-junit-platform]] === JUnit Platform ==== Bug Fixes * ❓ ==== Deprecations and Breaking Changes * ❓ ==== New Features and Improvements * The number of containers and tests excluded by post discovery filters based on their tags is now logged, along with the exclusion reasons. [[release-notes-5.7.0-M1-junit-jupiter]] === JUnit Jupiter ==== Bug Fixes * ❓ ==== Deprecations and Breaking Changes * ❓ ==== New Features and Improvements * The Javadoc for the `provideTestTemplateInvocationContexts()` method in `TestTemplateInvocationContextProvider` has been aligned with the actual implementation. Providers are now officially allowed to return an empty stream, and the error message when all provided streams are empty is now more helpful. [[release-notes-5.7.0-M1-junit-vintage]] === JUnit Vintage ==== Bug Fixes * ❓ ==== Deprecations and Breaking Changes * ❓ ==== New Features and Improvements * ❓ ``` Update release notes for Testable annotation.
```asciidoc [[release-notes-5.7.0-M1]] == 5.7.0-M1 *Date of Release:* ❓ *Scope:* ❓ For a complete list of all _closed_ issues and pull requests for this release, consult the link:{junit5-repo}+/milestone/44?closed=1+[5.7 M1] milestone page in the JUnit repository on GitHub. [[release-notes-5.7.0-M1-junit-platform]] === JUnit Platform ==== Bug Fixes * ❓ ==== Deprecations and Breaking Changes * ❓ ==== New Features and Improvements * The number of containers and tests excluded by post discovery filters based on their tags is now logged, along with the exclusion reasons. * The annotation `@Testable` may now be applied _directly_ to fields. [[release-notes-5.7.0-M1-junit-jupiter]] === JUnit Jupiter ==== Bug Fixes * ❓ ==== Deprecations and Breaking Changes * ❓ ==== New Features and Improvements * The Javadoc for the `provideTestTemplateInvocationContexts()` method in `TestTemplateInvocationContextProvider` has been aligned with the actual implementation. Providers are now officially allowed to return an empty stream, and the error message when all provided streams are empty is now more helpful. [[release-notes-5.7.0-M1-junit-vintage]] === JUnit Vintage ==== Bug Fixes * ❓ ==== Deprecations and Breaking Changes * ❓ ==== New Features and Improvements * ❓ ```
75139bea-59d3-43f5-8189-395eacd095dc
{ "language": "AsciiDoc" }
```asciidoc [[breaking-changes]] = Breaking changes [partintro] -- This section discusses the changes that you need to be aware of when migrating your application from one version of Elasticsearch to another. As a general rule: * Migration between major versions -- e.g. `1.x` to `2.x` -- requires a <<restart-upgrade,full cluster restart>>. * Migration between minor versions -- e.g. `1.x` to `1.y` -- can be performed by <<rolling-upgrades,upgrading one node at a time>>. See <<setup-upgrade>> for more info. -- include::migrate_1_x.asciidoc[] include::migrate_1_0.asciidoc[] ``` Add missing link to the 2.0 migration guide.
```asciidoc [[breaking-changes]] = Breaking changes [partintro] -- This section discusses the changes that you need to be aware of when migrating your application from one version of Elasticsearch to another. As a general rule: * Migration between major versions -- e.g. `1.x` to `2.x` -- requires a <<restart-upgrade,full cluster restart>>. * Migration between minor versions -- e.g. `1.x` to `1.y` -- can be performed by <<rolling-upgrades,upgrading one node at a time>>. See <<setup-upgrade>> for more info. -- include::migrate_2_0.asciidoc[] include::migrate_1_x.asciidoc[] include::migrate_1_0.asciidoc[] ```
ed314749-25d6-4e24-ba23-0c79b34ece2d
{ "language": "AsciiDoc" }
```asciidoc
= Code Valet

== Problem

The link:https://jenkins.io[Jenkins] project faces a challenge unusual to other contemporary CI/CD tools in that it is downloaded and executed on a user's machine(s) rather than as a service.

While this offers quite a lot of flexibility to the end-user, it puts Jenkins developers at a disadvantage for a number of reasons:

. There is no direct return line of feedback from where their code is executed. No error reports, etc.
. There is a significant lag between developers releasing code and it being adopted/used.

== Solution

A free service which provides basic CI/CD functions with Jenkins as the core platform. With a regularly updated "Jenkins distribution" consisting of many of the key plugins which people are used to, built from `master`, and rolled through the Code Valet cluster:

. Blue Ocean
. GitHub Branch Source
. Pipeline
. ???
``` Add a current thought dump on the architecture
```asciidoc
= Code Valet

== Problem

The link:https://jenkins.io[Jenkins] project faces a challenge unusual to other contemporary CI/CD tools in that it is downloaded and executed on a user's machine(s) rather than as a service.

While this offers quite a lot of flexibility to the end-user, it puts Jenkins developers at a disadvantage for a number of reasons:

. There is no direct return line of feedback from where their code is executed. No error reports, etc.
. There is a significant lag between developers releasing code and it being adopted/used.

== Solution

A free service which provides basic CI/CD functions with Jenkins as the core platform. With a regularly updated "Jenkins distribution" consisting of many of the key plugins which people are used to, built from `master`, and rolled through the Code Valet cluster:

. Blue Ocean
. GitHub Branch Source
. Pipeline
. ???

== Architecture

WARNING: This is most definitely not set in stone

=== Control Plane

A Kubernetes cluster which will act as the control plane

=== Management

A webapp which acts as the primary controller, user interface for non-Jenkins specific tasks:

. Surfacing logs from the Jenkins masters
.. Searchable logs for exceptions, etc
. Listing statistics from the other running Jenkins masters
.. Number of Pipelines active

=== Jenkins

Each GitHub user will have a per-user allocated Jenkins master which executes as a container in this control plane. For example: `rtyler.codevalet.io` would contain all the Pipelines which can be found inside of `https://github.com/rtyler`

To start, new Jenkins masters will be configured manually (beta testing!) and users will have access primarily through link:https://jenkins.io/projects/blueocean[Blue Ocean], but will **not** be given administrative access to the Jenkins master.

Groovy scripting will set up some VM templates which will provide dynamically provisioned agents in a cloud provider such as Azure, with a fixed quota, i.e. 2 active machines per user.
```
ccc57ec3-65c8-4abd-b92b-24e0bffcc6d9
{ "language": "AsciiDoc" }
```asciidoc MrTweety Analytic ================= image:https://travis-ci.org/kubahorak/mrtweety-analytic.svg?branch=master["Build Status", link="https://travis-ci.org/kubahorak/mrtweety-analytic"] Analyses tweets in realtime to figure out the most popular hashtags. Uses https://spark.apache.org[Apache Spark] and a https://github.com/spark-packages/dstream-twitter[Twitter DStream] to retrieve top 5 tweets in a sliding window of 5 minutes. This result is stored in a JSON file and a small JS app is producing a small website of this data. Everything is included into one Docker container together with Nginx. Just run ./gradlew buildImage To use the Spark module alone, build it like so: ./gradlew :spark:shadowJar Optionally you can change the default path of the result JSON file by setting the environment variable `RESULT_FILENAME`. Finally submit the "fat jar" to your Spark for execution: spark-submit --class "cz.zee.mrtweety.analytic.spark.Application" --master local[4] build/libs/spark-all.jar (c) 2016 Jakub Horák ``` Add example setting environment variable.
```asciidoc MrTweety Analytic ================= image:https://travis-ci.org/kubahorak/mrtweety-analytic.svg?branch=master["Build Status", link="https://travis-ci.org/kubahorak/mrtweety-analytic"] Analyses tweets in realtime to figure out the most popular hashtags. Uses https://spark.apache.org[Apache Spark] and a https://github.com/spark-packages/dstream-twitter[Twitter DStream] to retrieve top 5 tweets in a sliding window of 5 minutes. This result is stored in a JSON file and a small JS app is producing a small website of this data. Everything is included into one Docker container together with Nginx. Just run ./gradlew buildImage To use the Spark module alone, build it like so: ./gradlew :spark:shadowJar Optionally you can change the default path of the result JSON file by setting the environment variable `RESULT_FILENAME`. export RESULT_FILENAME=/tmp/analytic.json Finally submit the "fat jar" to your Spark for execution: spark-submit --class "cz.zee.mrtweety.analytic.spark.Application" --master local[4] build/libs/spark-all.jar (c) 2016 Jakub Horák ```
b428a11e-f074-4ca1-957c-00a9a1edebcc
{ "language": "AsciiDoc" }
```asciidoc
= conoha/

Ansible playbook for my conoha instance.

== Current setups

|====
|Memory |1GB
|Disk |SSD 50GB
|OS |Ubuntu 14.04 trusty
|====

== Pre configurations

* Add normal user: `adduser NAME`
** Add `.ssh/authorized_keys` file for the user
** Add the user to `sudo` group: `gpasswd -a NAME sudo`

== Usage

Issue

----
ansible-playbook ansible.yml --ask-become-pass
----
``` Add about updating mydns entry
```asciidoc
= conoha/

Ansible playbook for my conoha instance.

== Current setups

|====
|Memory |1GB
|Disk |SSD 50GB
|OS |Ubuntu 14.04 trusty
|====

== Pre configurations

* Add normal user: `adduser NAME`
** Add `.ssh/authorized_keys` file for the user
** Add the user to `sudo` group: `gpasswd -a NAME sudo`

== Usage

Issue

----
ansible-playbook ansible.yml --ask-become-pass
----

Optionally, when you want to update the mydns entry:

----
ansible-playbook ansible.yml --extra-vars=mydns_password=xxxx
----
```
2d06b572-403a-4162-8bf9-5067c1275134
{ "language": "AsciiDoc" }
```asciidoc = Overview {product-author} {product-version} :data-uri: :icons: :experimental: Use these topics to discover the different link:../architecture/core_concepts/builds_and_image_streams.html#source-build[S2I (Source-to-Image)], database, and other Docker images that are available for OpenShift users. ifdef::openshift-enterprise[] Red Hat's official container images are provided in the Red Hat Registry at https://registry.access.redhat.com[registry.access.redhat.com]. OpenShift's supported S2I, database, and Jenkins images are provided in the https://access.redhat.com/search/#/container-images?q=openshift3&p=1&sort=relevant&rows=12&srch=any&documentKind=ImageRepository[*openshift3* repository] in the Red Hat Registry. For example, `registry.access.redhat.com/openshift3/nodejs-010-rhel7` for the Node.js image. The xPaaS middleware images are provided in their respective product repositories on the Red Hat Registry, but suffixed with a *-openshift*. For example, `registry.access.redhat.com/jboss-eap-6/eap64-openshift` for the JBoss EAP image. endif::[] ``` Make image support blurb appear for Dedicated.
```asciidoc = Overview {product-author} {product-version} :data-uri: :icons: :experimental: Use these topics to discover the different link:../architecture/core_concepts/builds_and_image_streams.html#source-build[S2I (Source-to-Image)], database, and other Docker images that are available for OpenShift users. ifdef::openshift-enterprise,openshift-dedicated[] Red Hat's official container images are provided in the Red Hat Registry at https://registry.access.redhat.com[registry.access.redhat.com]. OpenShift's supported S2I, database, and Jenkins images are provided in the https://access.redhat.com/search/#/container-images?q=openshift3&p=1&sort=relevant&rows=12&srch=any&documentKind=ImageRepository[*openshift3* repository] in the Red Hat Registry. For example, `registry.access.redhat.com/openshift3/nodejs-010-rhel7` for the Node.js image. The xPaaS middleware images are provided in their respective product repositories on the Red Hat Registry, but suffixed with a *-openshift*. For example, `registry.access.redhat.com/jboss-eap-6/eap64-openshift` for the JBoss EAP image. endif::[] ```
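The record above turns the image-support blurb on for two builds at once via AsciiDoc's conditional preprocessor: attribute names separated by commas inside `ifdef::[]` keep the block when *any* of them is set (a `+` separator would instead require all of them). A minimal sketch with hypothetical attribute names:

```asciidoc
ifdef::flavor-a,flavor-b[]
This paragraph is rendered when either `flavor-a` or `flavor-b` is defined.
endif::[]
```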
a60c66f0-374e-4b74-8be8-84a5cc889486
{ "language": "AsciiDoc" }
```asciidoc = Hawkular Android Client This repository contains the source code for the Hawkular Android application. == License * http://www.apache.org/licenses/LICENSE-2.0.html[Apache Version 2.0] == Building ifdef::env-github[] [link=https://travis-ci.org/hawkular/android-client] image:https://travis-ci.org/hawkular/android-client.svg["Build Status", link="https://travis-ci.org/hawkular/android-client"] endif::[] You will need JDK 1.7+ installed. Gradle, Android SDK and all dependencies will be downloaded automatically. ---- $ ./gradlew clean assembleDebug ---- ``` Change Travis badge regarding changed repository path.
```asciidoc = Hawkular Android Client This repository contains the source code for the Hawkular Android application. == License * http://www.apache.org/licenses/LICENSE-2.0.html[Apache Version 2.0] == Building ifdef::env-github[] [link=https://travis-ci.org/hawkular/hawkular-android-client] image:https://travis-ci.org/hawkular/hawkular-android-client.svg["Build Status", link="https://travis-ci.org/hawkular/hawkular-android-client"] endif::[] You will need JDK 1.7+ installed. Gradle, Android SDK and all dependencies will be downloaded automatically. ---- $ ./gradlew clean assembleDebug ---- ```
76d56f61-ff9a-4839-9326-f497807b25d9
{ "language": "AsciiDoc" }
```asciidoc Domotica ======== A simple webapp for a home automation system based around a Siemens S7-300 PLC, using libs7comm-python. It's intended to be used from smartphones. Note that this is not a generic solution. It's an implementation for one specific house. Requirements ============ * Django * python-daemon ``` Document that we require python-rrdtool
```asciidoc Domotica ======== A simple webapp for a home automation system based around a Siemens S7-300 PLC, using libs7comm-python. It's intended to be used from smartphones. Note that this is not a generic solution. It's an implementation for one specific house. Requirements ============ * Django * python-daemon * python-rrdtool ```
d3fd3c7e-4e61-4297-afa3-f3eb1b93330f
{ "language": "AsciiDoc" }
```asciidoc [[release-notes]] == Release Notes :numbered!: include::release-notes-5.1.0-M1.adoc[] include::release-notes-5.0.2.adoc[] include::release-notes-5.0.1.adoc[] include::release-notes-5.0.0.adoc[] include::release-notes-5.0.0-RC3.adoc[] include::release-notes-5.0.0-RC2.adoc[] include::release-notes-5.0.0-RC1.adoc[] include::release-notes-5.0.0-M6.adoc[] include::release-notes-5.0.0-M5.adoc[] include::release-notes-5.0.0-M4.adoc[] include::release-notes-5.0.0-M3.adoc[] include::release-notes-5.0.0-M2.adoc[] include::release-notes-5.0.0-M1.adoc[] include::release-notes-5.0.0-ALPHA.adoc[] :numbered: ``` Include 5.1.0-M2 release notes in main document
```asciidoc [[release-notes]] == Release Notes :numbered!: include::release-notes-5.1.0-M2.adoc[] include::release-notes-5.1.0-M1.adoc[] include::release-notes-5.0.2.adoc[] include::release-notes-5.0.1.adoc[] include::release-notes-5.0.0.adoc[] include::release-notes-5.0.0-RC3.adoc[] include::release-notes-5.0.0-RC2.adoc[] include::release-notes-5.0.0-RC1.adoc[] include::release-notes-5.0.0-M6.adoc[] include::release-notes-5.0.0-M5.adoc[] include::release-notes-5.0.0-M4.adoc[] include::release-notes-5.0.0-M3.adoc[] include::release-notes-5.0.0-M2.adoc[] include::release-notes-5.0.0-M1.adoc[] include::release-notes-5.0.0-ALPHA.adoc[] :numbered: ```
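Release-notes records like this one lean on AsciiDoc's `include::` directive, which splices the named file into the document at that point, so adding a release is a one-line insert at the top of the list; the `:numbered!:`/`:numbered:` pair suspends section numbering around the block. A minimal sketch with hypothetical file names:

```asciidoc
:numbered!:

include::release-notes-1.1.0.adoc[]
include::release-notes-1.0.0.adoc[]

:numbered:
```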
2387f20b-eb23-4b5b-8322-1c346abf50c7
{ "language": "AsciiDoc" }
```asciidoc
[appendix]
[[faq]]
= Frequently Asked Questions (FAQ)

[qanda]
What is the difference between Neo4j OGM and Spring Data Neo4j (SDN)?::
Spring Data Neo4j (SDN) uses the OGM under the covers. It's like Spring Data JPA, where JPA/Hibernate underlie it. Most of the power of SDN actually comes from the OGM.
``` Add label class hierarchy information to FAQ
```asciidoc
[appendix]
[[faq]]
= Frequently Asked Questions (FAQ)

[qanda]
What is the difference between Neo4j OGM and Spring Data Neo4j (SDN)?::
Spring Data Neo4j (SDN) uses the OGM under the covers. It's like Spring Data JPA, where JPA/Hibernate underlie it. Most of the power of SDN actually comes from the OGM.

How are labels generated when using inheritance?::
All concrete classes generate a label, abstract classes and interfaces do not.
If any kind of class or interface gets annotated with @NodeEntity or @NodeEntity(label="customLabel"), it will generate a label.
Any class annotated with @Transient will not generate a label.
```
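The FAQ above is an AsciiDoc labeled list promoted to Q&A styling by the `[qanda]` attribute: each question ends in `::` and the following line becomes its answer. A minimal sketch with made-up content:

```asciidoc
[qanda]
What does the trailing `::` mark?::
The end of a question; the text on the next line is rendered as the answer.
```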
4f565efc-e6f4-4851-b305-62fa9722cb93
{ "language": "AsciiDoc" }
```asciidoc = Building Mac OS X client = OS X licensing means we can't just simply provide a build vm via vagrant as we've done for linux. Creating a vagrant-ready VM is fairly simple though. == Client Build VM == Install link:https://www.vagrantup.com/[Vagrant], buy VMware Fusion, and buy the link:https://www.vagrantup.com/vmware[Vagrant VMWare Fusion plugin]. It's possible to get this to work with VirtualBox, which is free, but it's missing some useful features such as shared folders. Create a new link:http://ilostmynotes.blogspot.com/2015/02/building-os-x-vagrant-vmware-fusion-vm.html[vagrant-ready OS X VM in VMWare Fusion]. The essentials are: * Commandline Tools for XCode * PackageMaker.app for building .pkg's * VMWare Tools * Vagrant ssh and sudo access ``` Add note about using mountain lion as base version
```asciidoc
= Building Mac OS X client =

OS X licensing means we can't just simply provide a build vm via vagrant as we've done for linux. Creating a vagrant-ready VM is fairly simple though.

== Client Build VM ==

Install link:https://www.vagrantup.com/[Vagrant], buy VMware Fusion, and buy the link:https://www.vagrantup.com/vmware[Vagrant VMWare Fusion plugin]. It's possible to get this to work with VirtualBox, which is free, but it's missing some useful features such as shared folders.

Create a new link:http://ilostmynotes.blogspot.com/2015/02/building-os-x-vagrant-vmware-fusion-vm.html[vagrant-ready OS X VM in VMWare Fusion]. We use OS X 10.8 (Mountain Lion) as a base so we can install on systems at least that old; the build scripts are tested and work on that version. The essentials are:

* Commandline Tools for XCode
* PackageMaker.app for building .pkg's
* VMWare Tools
* Vagrant ssh and sudo access
```
f4719de0-f601-4a67-88d3-596e93a56452
{ "language": "AsciiDoc" }
```asciidoc [[data-rollup-transform]] = Roll up or transform your data [partintro] -- {es} offers the following methods for manipulating your data: ifdef::permanently-unreleased-branch[] * <<xpack-rollup>> + A rollup aggregates an index's time series data and stores the results in new read-only index. For example, you can roll up hourly data into daily or weekly summaries. endif::[] ifndef::permanently-unreleased-branch[] * <<xpack-rollup,Rolling up your historical data>> + include::rollup/index.asciidoc[tag=rollup-intro] endif::[] * <<transforms,Transforming your data>> + include::transform/transforms.asciidoc[tag=transform-intro] -- include::rollup/index.asciidoc[] include::transform/index.asciidoc[] ``` Remove ifdefs for rollup refactor
```asciidoc [[data-rollup-transform]] = Roll up or transform your data [partintro] -- {es} offers the following methods for manipulating your data: * <<xpack-rollup,Rolling up your historical data>> + include::rollup/index.asciidoc[tag=rollup-intro] * <<transforms,Transforming your data>> + include::transform/transforms.asciidoc[tag=transform-intro] -- include::rollup/index.asciidoc[] include::transform/index.asciidoc[] ```
ceb0523a-cb63-4fbc-a0d6-ef9fb57362c1
{ "language": "AsciiDoc" }
```asciidoc = Vert.x-Web image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-web["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-web/"] image:https://img.shields.io/maven-central/v/io.vertx/vertx-web.svg["Maven Central"] image:https://bestpractices.coreinfrastructure.org/projects/540/badge["CII Best Practices",link="https://bestpractices.coreinfrastructure.org/projects/540"] Vert.x-Web is a set of building blocks for building web applications with Vert.x. Think of it as a Swiss Army Knife for building modern, scalable, web apps. Please see the in source asciidoc link:vertx-web/src/main/asciidoc/index.adoc[documentation] or the main http://vertx.io/docs/#web[documentation] on the web-site for a full description of Vert.x-Web: == Template engines Template engine implementations are in the template engine sub-project. ``` Remove link to in-source docs
```asciidoc = Vert.x-Web image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-web["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-web/"] image:https://img.shields.io/maven-central/v/io.vertx/vertx-web.svg["Maven Central"] image:https://bestpractices.coreinfrastructure.org/projects/540/badge["CII Best Practices",link="https://bestpractices.coreinfrastructure.org/projects/540"] Vert.x-Web is a set of building blocks for building web applications with Vert.x. Think of it as a Swiss Army Knife for building modern, scalable, web apps. Please see the main documentation on the web-site for a full description: * https://vertx.io/docs/#web[Web-site documentation] == Template engines Template engine implementations are in the template engine sub-project. ```
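The badges in these READMEs are ordinary AsciiDoc `image:` macros: the first positional attribute is the alt text, and the named `link` attribute wraps the image in a hyperlink. A minimal sketch with placeholder URLs:

```asciidoc
image:https://example.org/build/status.svg["Build Status", link="https://example.org/build"]
```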
1940b5bb-bfda-4509-aec3-af19d7de03a0
{ "language": "AsciiDoc" }
```asciidoc = java-libbitcoinconsensus image:https://travis-ci.org/dexX7/java-libbitcoinconsensus.svg["Build Status", link="https://travis-ci.org/dexX7/java-libbitcoinconsensus"] A https://github.com/java-native-access/jna[JNA] binding and Java wrapper for https://github.com/bitcoin/bitcoin/blob/master/doc/shared-libraries.md#bitcoinconsensus[libbitcoinconsensus]. See the http://dexx7.github.io/java-libbitcoinconsensus/[Javadoc] for an overview of this project. WARNING: This software is EXPERIMENTAL software. USE AT YOUR OWN RISK. == Setup git clone https://github.com/dexX7/java-libbitcoinconsensus.git cd java-libbitcoinconsensus/ == Building ./gradlew build == Testing ./gradlew test NOTE: `libbitcoinconsensus` must be available in the class path or as system library. ``` Add notes for Mac OS X
```asciidoc = java-libbitcoinconsensus image:https://travis-ci.org/dexX7/java-libbitcoinconsensus.svg["Build Status", link="https://travis-ci.org/dexX7/java-libbitcoinconsensus"] A https://github.com/java-native-access/jna[JNA] binding and Java wrapper for https://github.com/bitcoin/bitcoin/blob/master/doc/shared-libraries.md#bitcoinconsensus[libbitcoinconsensus]. See the http://dexx7.github.io/java-libbitcoinconsensus/[Javadoc] for an overview of this project. WARNING: This software is EXPERIMENTAL software. USE AT YOUR OWN RISK. == Setup git clone https://github.com/dexX7/java-libbitcoinconsensus.git cd java-libbitcoinconsensus/ == Building ./gradlew build == Testing ./gradlew test NOTE: `libbitcoinconsensus` must be available in the class path or as system library. == Platform Notes === Mac OS X On Mac OS X you can put `libbitcoinconsensus.dylib` in `src/main/resources/`. ```
ae6eabf0-a093-4eb3-b0f5-3b9a3e8efd24
{ "language": "AsciiDoc" }
```asciidoc = asciidoctor-chrome-extension, AsciiDoc.js Chrome Extension Guillaume Grossetie :sources: https://github.com/Mogztter/asciidoctor-chrome-extension :license: https://github.com/Mogztter/asciidoctor-chrome-extension/blob/master/LICENSE This project uses https://github.com/asciidoctor/asciidoctor.js[Asciidoctor.js] to render AsciiDoc as HTML inside Chrome! == Copyright Copyright (C) 2013 Guillaume Grossetie. Free use of this software is granted under the terms of the MIT License. See the {license}[LICENSE] file for details.``` Update AsciiDoc.js to Asciidoctor in the title and the repository URL (asciidoctor org)
```asciidoc = asciidoctor-chrome-extension, Asciidoctor Chrome Extension Guillaume Grossetie :sources: https://github.com/asciidoctor/asciidoctor-chrome-extension :license: https://github.com/asciidoctor/asciidoctor-chrome-extension/blob/master/LICENSE This project uses https://github.com/asciidoctor/asciidoctor.js[Asciidoctor.js] to render AsciiDoc as HTML inside Chrome! == Copyright Copyright (C) 2013 Guillaume Grossetie. Free use of this software is granted under the terms of the MIT License. See the {license}[LICENSE] file for details.```
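The header rewrite above works because `:sources:` and `:license:` are AsciiDoc attribute entries; every `{license}` reference below the header expands to the attribute's value, so a repository move only touches one line. A minimal sketch with a placeholder URL:

```asciidoc
:repo: https://example.org/project

See the {repo}[project page] for details.
```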
ed08d374-433c-46be-8fd1-671ae7bf6073
{ "language": "AsciiDoc" }
```asciidoc = Starter POMs Spring Boot Starters are a set of convenient dependency descriptors that you can include in your application. You get a one-stop-shop for all the Spring and related technology that you need without having to hunt through sample code and copy paste loads of dependency descriptors. For example, if you want to get started using Spring and JPA for database access just include the `spring-boot-starter-data-jpa` dependency in your project, and you are good to go. For complete details see the http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#using-boot-starter-poms[reference documentation] == Community Contributions If you create a starter for a technology that is not already in the standard list we can list it here. Just send a pull request for this page. |=== | Name | Location | https://code.google.com/p/wro4j/[WRO4J] | https://github.com/sbuettner/spring-boot-autoconfigure-wro4j |=== ``` Add codecentric Spring Batch Starter Info
```asciidoc = Starter POMs Spring Boot Starters are a set of convenient dependency descriptors that you can include in your application. You get a one-stop-shop for all the Spring and related technology that you need without having to hunt through sample code and copy paste loads of dependency descriptors. For example, if you want to get started using Spring and JPA for database access just include the `spring-boot-starter-data-jpa` dependency in your project, and you are good to go. For complete details see the http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#using-boot-starter-poms[reference documentation] == Community Contributions If you create a starter for a technology that is not already in the standard list we can list it here. Just send a pull request for this page. |=== | Name | Location | https://code.google.com/p/wro4j/[WRO4J] | https://github.com/sbuettner/spring-boot-autoconfigure-wro4j | http://projects.spring.io/spring-batch/[Spring Batch] (Advanced usage) | https://github.com/codecentric/spring-boot-starter-batch-web |=== ```
905357b1-f3cc-4f24-9656-40d7dfd31613
{ "language": "AsciiDoc" }
```asciidoc = Reactor Netty Stephane Maldini; :appversion: 0.5.0 (wiki) ifndef::host-github[:ext-relative: {outfilesuffix}] {appversion} :doctype: book :icons: font include::net.adoc[] include::net-tcp.adoc[] include::net-http.adoc[] include::net-e2e.adoc[] ``` Fix warning in asciidoc rendering
```asciidoc = Reactor Netty Stephane Maldini; :appversion: 0.5.0 (wiki) ifndef::host-github[:ext-relative: {outfilesuffix}] {appversion} :doctype: book :icons: font include::net.adoc[] include::net-tcp.adoc[] include::net-http.adoc[] ```
39833775-5ec4-4ded-88f9-74ee95add82a
{ "language": "AsciiDoc" }
```asciidoc [[important-settings]] == Important Elasticsearch configuration While Elasticsearch requires very little configuration, there are a number of settings which need to be considered before going into production. The following settings *must* be considered before going to production: * <<path-settings,Path settings>> * <<cluster.name,Cluster name>> * <<node.name,Node name>> * <<network.host,Network host>> * <<discovery-settings,Discovery settings>> * <<heap-size,Heap size>> * <<heap-dump-path,Heap dump path>> * <<gc-logging,GC logging>> include::important-settings/path-settings.asciidoc[] include::important-settings/cluster-name.asciidoc[] include::important-settings/node-name.asciidoc[] include::important-settings/network-host.asciidoc[] include::important-settings/discovery-settings.asciidoc[] include::important-settings/heap-size.asciidoc[] include::important-settings/heap-dump-path.asciidoc[] include::important-settings/gc-logging.asciidoc[] ``` Add error file docs to important settings
```asciidoc [[important-settings]] == Important Elasticsearch configuration While Elasticsearch requires very little configuration, there are a number of settings which need to be considered before going into production. The following settings *must* be considered before going to production: * <<path-settings,Path settings>> * <<cluster.name,Cluster name>> * <<node.name,Node name>> * <<network.host,Network host>> * <<discovery-settings,Discovery settings>> * <<heap-size,Heap size>> * <<heap-dump-path,Heap dump path>> * <<gc-logging,GC logging>> include::important-settings/path-settings.asciidoc[] include::important-settings/cluster-name.asciidoc[] include::important-settings/node-name.asciidoc[] include::important-settings/network-host.asciidoc[] include::important-settings/discovery-settings.asciidoc[] include::important-settings/heap-size.asciidoc[] include::important-settings/heap-dump-path.asciidoc[] include::important-settings/gc-logging.asciidoc[] include::important-settings/error-file.asciidoc[] ```
13afe488-9532-4b30-b180-a98b6e5f4dd9
{ "language": "AsciiDoc" }
```asciidoc
= Getting support for Micrometer

== GitHub issues

We choose not to use GitHub issues for general usage questions and support, preferring to use issues solely for the tracking of bugs and enhancements. If you have a general usage question please do not open a GitHub issue, but use one of the other channels described below.

If you are reporting a bug, please help to speed up problem diagnosis by providing as much information as possible. Ideally, that would include a small sample project that reproduces the problem.

== Stack Overflow

The Micrometer community monitors the https://stackoverflow.com/tags/micrometer[`micrometer`] tag on Stack Overflow. Before asking a question, please familiarize yourself with Stack Overflow's https://stackoverflow.com/help/how-to-ask[advice on how to ask a good question].

== Slack

If you want to discuss something or have a question that isn't suited to Stack Overflow, try the Micrometer community chat in the http://slack.micrometer.io/[micrometer-metrics channel on Slack].
``` Update link to the Slack invite
```asciidoc
= Getting support for Micrometer

== GitHub issues

We choose not to use GitHub issues for general usage questions and support, preferring to use issues solely for the tracking of bugs and enhancements. If you have a general usage question please do not open a GitHub issue, but use one of the other channels described below.

If you are reporting a bug, please help to speed up problem diagnosis by providing as much information as possible. Ideally, that would include a small sample project that reproduces the problem.

== Stack Overflow

The Micrometer community monitors the https://stackoverflow.com/tags/micrometer[`micrometer`] tag on Stack Overflow. Before asking a question, please familiarize yourself with Stack Overflow's https://stackoverflow.com/help/how-to-ask[advice on how to ask a good question].

== Slack

If you want to discuss something or have a question that isn't suited to Stack Overflow, try the Micrometer community chat in the https://join.slack.com/t/micrometer-metrics/shared_invite/zt-ewo3kcs0-Ji3aOAqTxnjYPEFBBI5HqQ[micrometer-metrics Slack].
```
e917653e-37ee-4b6a-bdb7-93608bc07a5c
{ "language": "AsciiDoc" }
```asciidoc [[new]] = What's New in Spring Security 5.8 Spring Security 5.8 provides a number of new features. Below are the highlights of the release. * https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs * https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation * https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection * https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method ``` Update What's New for 5.8
```asciidoc [[new]] = What's New in Spring Security 5.8 Spring Security 5.8 provides a number of new features. Below are the highlights of the release. * https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs * https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation * https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection * https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method * https://github.com/spring-projects/spring-security/pull/11232[gh-11232] - `ClientRegistrations#rest` defines 30s connect and read timeouts ```
f53ceb6c-2c6b-4f2d-8a86-c969698366ca
{ "language": "AsciiDoc" }
```asciidoc = Logbook: horizon.justinwflory.com Running logbook of documentation notes for Horizon. == Firewall _See https://github.com/jwflory/infrastructure/blob/master/roles/firewalld/tasks/main.yml[jwflory/infrastructure firewalld Ansible role]._ == WordPress === wp_cli cannot upgrade WordPress because another update is in progress `wp option get core_updater.lock`:: Check if WordPress is holding a version lock. If a number is returned, the version lock is on. `wp option delete core_updater.lock`:: Break a version lock on WordPress. Verify an update is NOT in progress before deleting (check processes, visit admin panel). https://wordpress.stackexchange.com/questions/224989/get-rid-of-another-update-is-currently-in-progress[Reference] == XenForo * `sitemaps/` and `temp/` must have 0777 permissions… ** `temp/` is for the image proxy ``` Add 'failed to create directory' WordPress scenario
```asciidoc
= Logbook: horizon.justinwflory.com

Running logbook of documentation notes for Horizon.

== Firewall

_See https://github.com/jwflory/infrastructure/blob/master/roles/firewalld/tasks/main.yml[jwflory/infrastructure firewalld Ansible role]._

== WordPress

=== Updating from admin dashboard does not work: "failed to create directory"

The web server cannot read or write to files in the web directory. Make sure the `php-fpm` user is a member of the `www-admin` group and that the folder is group-writable.

Why does this happen? Normally it is your web server user, like `nginx` or `httpd`, that needs access. But in my case, `nginx` is spinning off `php-fpm` processes to run PHP web applications, so it is the `php-fpm` user, not the web server, that reads and writes the files. Look closer at https://github.com/jwflory/infrastructure[jwflory/infrastructure] for additional context.

=== wp_cli cannot upgrade WordPress because another update is in progress

`wp option get core_updater.lock`::
Check if WordPress is holding a version lock. If a number is returned, the version lock is on.

`wp option delete core_updater.lock`::
Break a version lock on WordPress. Verify an update is NOT in progress before deleting (check processes, visit admin panel).

https://wordpress.stackexchange.com/questions/224989/get-rid-of-another-update-is-currently-in-progress[Reference]

== XenForo

* `sitemaps/` and `temp/` must have 0777 permissions…
** `temp/` is for the image proxy
```
852a1fa8-89f9-4260-869a-c9b8f2064b69
{ "language": "AsciiDoc" }
```asciidoc [[release-notes]] == JUnit 5 Release Notes include::link-attributes.adoc[] :numbered!: include::release-notes-5.1.0-M2.adoc[] include::release-notes-5.1.0-M1.adoc[] include::release-notes-5.0.2.adoc[] include::release-notes-5.0.1.adoc[] include::release-notes-5.0.0.adoc[] include::release-notes-5.0.0-RC3.adoc[] include::release-notes-5.0.0-RC2.adoc[] include::release-notes-5.0.0-RC1.adoc[] include::release-notes-5.0.0-M6.adoc[] include::release-notes-5.0.0-M5.adoc[] include::release-notes-5.0.0-M4.adoc[] include::release-notes-5.0.0-M3.adoc[] include::release-notes-5.0.0-M2.adoc[] include::release-notes-5.0.0-M1.adoc[] include::release-notes-5.0.0-ALPHA.adoc[] :numbered: ``` Include header and footer in Release Notes
```asciidoc [[release-notes]] = JUnit 5 Release Notes Stefan Bechtold; Sam Brannen; Johannes Link; Matthias Merdes; Marc Philipp; Christian Stein // :docinfodir: docinfos :docinfo: private-head :numbered!: // include::link-attributes.adoc[] include::release-notes-5.1.0-M2.adoc[] include::release-notes-5.1.0-M1.adoc[] include::release-notes-5.0.2.adoc[] include::release-notes-5.0.1.adoc[] include::release-notes-5.0.0.adoc[] include::release-notes-5.0.0-RC3.adoc[] include::release-notes-5.0.0-RC2.adoc[] include::release-notes-5.0.0-RC1.adoc[] include::release-notes-5.0.0-M6.adoc[] include::release-notes-5.0.0-M5.adoc[] include::release-notes-5.0.0-M4.adoc[] include::release-notes-5.0.0-M3.adoc[] include::release-notes-5.0.0-M2.adoc[] include::release-notes-5.0.0-M1.adoc[] include::release-notes-5.0.0-ALPHA.adoc[] ```
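The header added above follows AsciiDoc's document-header layout: the author line sits immediately under the `=` title with authors separated by semicolons, and `:docinfo:` pulls extra HTML head content from companion docinfo files (the `private-head` value and commented-out `:docinfodir:` are specific to that setup). A minimal sketch:

```asciidoc
= Example Reference
Ada Lovelace; Alan Turing
:docinfo: private-head
```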
39068850-8c08-49d3-9d08-d52815ae3280
{ "language": "AsciiDoc" }
```asciidoc [[misc]] == Misc * https://github.com/electrical/puppet-elasticsearch[Puppet]: Elasticsearch puppet module. * http://github.com/elasticsearch/cookbook-elasticsearch[Chef]: Chef cookbook for Elasticsearch * https://github.com/tavisto/elasticsearch-rpms[elasticsearch-rpms]: RPMs for elasticsearch. * http://www.github.com/neogenix/daikon[daikon]: Daikon Elasticsearch CLI * https://github.com/Aconex/scrutineer[Scrutineer]: A high performance consistency checker to compare what you've indexed with your source of truth content (e.g. DB) ``` Update link to puppet module and remove link to other RPM repo as we have our own.
```asciidoc [[misc]] == Misc * https://github.com/elasticsearch/puppet-elasticsearch[Puppet]: Elasticsearch puppet module. * http://github.com/elasticsearch/cookbook-elasticsearch[Chef]: Chef cookbook for Elasticsearch * http://www.github.com/neogenix/daikon[daikon]: Daikon Elasticsearch CLI * https://github.com/Aconex/scrutineer[Scrutineer]: A high performance consistency checker to compare what you've indexed with your source of truth content (e.g. DB) ```
d07923fb-26c8-495a-a6fc-b07e82d64d9c
{ "language": "AsciiDoc" }
```asciidoc = JoinFaces Documentation Marcelo Fernandes <[email protected]>; Lars Grefer <[email protected]> :toc: left == Using JoinFaces === System Requirements === Maven Integration === Gradle Integration``` Add Example projects and compatibility matrix
```asciidoc
= JoinFaces Documentation
Marcelo Fernandes <[email protected]>; Lars Grefer <[email protected]>
:toc: left
:sectnums:
:sectanchors:

== Using JoinFaces

.Official example projects
|===
|Build tool \ packaging |`jar` |`war`

|https://maven.apache.org/[Maven] |https://github.com/joinfaces/joinfaces-maven-jar-example[maven-jar-example] |https://github.com/joinfaces/joinfaces-maven-war-example[maven-war-example]

|https://gradle.org/[Gradle] |https://github.com/joinfaces/joinfaces-gradle-jar-example[gradle-jar-example] |https://github.com/joinfaces/joinfaces-gradle-war-example[gradle-war-example]
|===

=== System Requirements

.System Requirements
|===
|JoinFaces |Java |Spring Boot |JSF

|3.3 |1.8 to 11 |2.1 |2.0 to 2.3

|3.0 to 3.2 |1.8 |2.0 |2.0 to 2.3

|2.x |1.6 to 1.8 |1.x |2.0 to 2.2
|===

These are the combinations we have tested and expect to work, but depending on the features you are using, other combinations might work, too.

=== Maven Integration

=== Gradle Integration

== Features
```
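The compatibility matrix above uses AsciiDoc's `|===`-delimited tables; a first row placed on the line directly after the opening delimiter, or an explicit `options="header"`, becomes the header row. A minimal sketch:

```asciidoc
[cols="2,1", options="header"]
|===
|Build tool |Packaging

|Maven |`war`
|Gradle |`jar`
|===
```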
0a26e264-334e-4f41-9723-c765f2d9c756
{ "language": "AsciiDoc" }
```asciidoc = Instructions Follow these steps to setup Docker on your machine. * Go to https://www.docker.com/products/overview#/install_the_platform * Download the distributable for your system type * Follow the installation instructions After installation continue with Exercise 1 to verify that your setup is running correctly. = Troubleshooting during installation *Windows Home Edition* If you run Windows Home Edition and/or get a message about Hyper-V being disabled, you have to install https://www.docker.com/products/docker-toolbox[Docker Toolbox] instead because your version of Windows does not support the "Docker for Windows"-product. *Hardware Virtualization disabled* If you encounter the message "VT-x/AMD-v has to be enabled", please enable "Intel Virtualization" in the BIOS of your system. This is required for running virtual machines. Detailed instructions can be found http://www.howtogeek.com/213795/how-to-enable-intel-vt-x-in-your-computers-bios-or-uefi-firmware/[here] under "Access the BIOS or UEFI Firmware". ``` Make windows caveats a note
```asciidoc = Instructions Follow these steps to setup Docker on your machine. * Go to https://www.docker.com/products/overview#/install_the_platform * Download the distributable for your system type * Follow the installation instructions After installation continue with Exercise 1 to verify that your setup is running correctly. [NOTE] *Windows Home Edition* + If you run Windows Home Edition and/or get a message about Hyper-V being disabled, you have to install https://www.docker.com/products/docker-toolbox[Docker Toolbox] instead because your version of Windows does not support the "Docker for Windows"-product. + *Hardware Virtualization disabled* + If you encounter the message "VT-x/AMD-v has to be enabled", please enable "Intel Virtualization" in the BIOS of your system. This is required for running virtual machines. Detailed instructions can be found http://www.howtogeek.com/213795/how-to-enable-intel-vt-x-in-your-computers-bios-or-uefi-firmware/[here] under "Access the BIOS or UEFI Firmware". ```
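The `[NOTE]` style used above attaches the admonition to the next paragraph and keeps the later paragraphs inside it via trailing `+` continuation lines; a delimited block does the same without the continuations. A minimal sketch:

```asciidoc
[NOTE]
====
First caveat, as its own paragraph.

Second caveat, still inside the note.
====
```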
1a1e2e45-5808-4fd3-8a20-eb1d66a532c5
{ "language": "AsciiDoc" }
```asciidoc
==== Step 2: Install the repository and Minion package

Connect with _SSH_ to your remote _RHEL_ system where you need a _Minion_ to be installed.

.Install the Yum repository
[source, shell]
----
yum -y https://yum.opennms.org/repofiles/opennms-repo-stable-rhel7.noarch.rpm
rpm --import https://yum.opennms.org/OPENNMS-GPG-KEY
----

.Install the Minion package
[source, bash]
----
yum -y install opennms-minion
----

The following packages will be automatically installed:

* _opennms-minion_: The Minion meta package
* _opennms-minion-container_: The _Karaf_ OSGi container with _Minion_ branding and additional management extensions
* _opennms-minion-features-core_: Core utilities and services required by the _Minion_ features
* _opennms-minion-features-default_: Service-specific features

With the packages successfully installed, the _Minion_ has the following directory structure:

[source, shell]
----
[root@localhost /opt/minion]# $ tree -L 1
.
├── bin
├── deploy
├── etc
├── lib
├── repositories
└── system
----

The Minion's startup configuration can be changed by editing the `/etc/sysconfig/minion` file.

It allows you to override the defaults used at startup, including:

* Location of the JDK
* Memory usage
* User to run as``` Fix missing yum install when installing the repo
```asciidoc
==== Step 2: Install the repository and Minion package

Connect with _SSH_ to your remote _RHEL_ system where you need a _Minion_ to be installed.

.Install the Yum repository
[source, shell]
----
yum -y install https://yum.opennms.org/repofiles/opennms-repo-stable-rhel7.noarch.rpm
rpm --import https://yum.opennms.org/OPENNMS-GPG-KEY
----

.Install the Minion package
[source, bash]
----
yum -y install opennms-minion
----

The following packages will be automatically installed:

* _opennms-minion_: The Minion meta package
* _opennms-minion-container_: The _Karaf_ OSGi container with _Minion_ branding and additional management extensions
* _opennms-minion-features-core_: Core utilities and services required by the _Minion_ features
* _opennms-minion-features-default_: Service-specific features

With the packages successfully installed, the _Minion_ has the following directory structure:

[source, shell]
----
[root@localhost /opt/minion]# $ tree -L 1
.
├── bin
├── deploy
├── etc
├── lib
├── repositories
└── system
----

The Minion's startup configuration can be changed by editing the `/etc/sysconfig/minion` file.

It allows you to override the defaults used at startup, including:

* Location of the JDK
* Memory usage
* User to run as
```
9b9211a3-ebaf-426c-a9e3-1ecc098e3c0a
{ "language": "AsciiDoc" }
```asciidoc = Beats Platform include::./overview.asciidoc[] include::./gettingstarted.asciidoc[] include::./configuration.asciidoc[] include::./https.asciidoc[] include::./newbeat.asciidoc[] ``` Use Beats Platform Reference as title
```asciidoc = Beats Platform Reference include::./overview.asciidoc[] include::./gettingstarted.asciidoc[] include::./configuration.asciidoc[] include::./https.asciidoc[] include::./newbeat.asciidoc[] ```
71482156-4056-45b7-ac4c-2a8a66bc62a5
{ "language": "AsciiDoc" }
```asciidoc [[getting-started.system-requirements]] == System Requirements Spring Boot {spring-boot-version} requires https://www.java.com[Java 8] and is compatible up to and including Java 17. {spring-framework-docs}/[Spring Framework {spring-framework-version}] or above is also required. Explicit build support is provided for the following build tools: |=== | Build Tool | Version | Maven | 3.5+ | Gradle | 6.8.x, 6.9.x, and 7.x |=== [[getting-started.system-requirements.servlet-containers]] === Servlet Containers Spring Boot supports the following embedded servlet containers: |=== | Name | Servlet Version | Tomcat 9.0 | 4.0 | Jetty 9.4 | 3.1 | Jetty 10.0 | 4.0 | Undertow 2.0 | 4.0 |=== You can also deploy Spring Boot applications to any Servlet 3.1+ compatible container. ``` Document support for Java 18
```asciidoc [[getting-started.system-requirements]] == System Requirements Spring Boot {spring-boot-version} requires https://www.java.com[Java 8] and is compatible up to and including Java 18. {spring-framework-docs}/[Spring Framework {spring-framework-version}] or above is also required. Explicit build support is provided for the following build tools: |=== | Build Tool | Version | Maven | 3.5+ | Gradle | 6.8.x, 6.9.x, and 7.x |=== [[getting-started.system-requirements.servlet-containers]] === Servlet Containers Spring Boot supports the following embedded servlet containers: |=== | Name | Servlet Version | Tomcat 9.0 | 4.0 | Jetty 9.4 | 3.1 | Jetty 10.0 | 4.0 | Undertow 2.0 | 4.0 |=== You can also deploy Spring Boot applications to any Servlet 3.1+ compatible container. ```
76ec1727-ef2d-4076-bdc0-3a2b0dedce58
{ "language": "AsciiDoc" }
```asciidoc [[installation]] == Installation === Dependency ```xml <dependency> <groupId>net.ttddyy</groupId> <artifactId>datasource-assert</artifactId> <version>[LATEST_VERSION]</version> </dependency> ``` {datasource-proxy}[datasource-proxy] is the only transitive dependency library. + {assertj}[AssertJ] and {hamcrest}[Hamcrest] are optional. If you need to use their assertions, you need to explicitly specify dependency in your project. ==== Snapshot Snapshot is available via {oss-snapshot-repository}[oss sonatype snapshot repository]. To download snapshot jars, enable sonatype snapshot repository: ```xml <repositories> <repository> <id>sonatype-snapshots-repo</id> <url>https://oss.sonatype.org/content/repositories/snapshots</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> ``` === Supported Java Versions `datasource-assert` works with java 1.7+. ``` Document snapshot download from JitPack
```asciidoc
[[installation]]
== Installation

=== Dependency

```xml
<dependency>
  <groupId>net.ttddyy</groupId>
  <artifactId>datasource-assert</artifactId>
  <version>[LATEST_VERSION]</version>
</dependency>
```

{datasource-proxy}[datasource-proxy] is the only transitive dependency library. +
{assertj}[AssertJ] and {hamcrest}[Hamcrest] are optional. If you need to use their assertions, you need to explicitly specify dependency in your project.

==== Snapshot (via Sonatype OSSRH)

Snapshot is available via {oss-snapshot-repository}[oss sonatype snapshot repository].

To download snapshot jars, enable sonatype snapshot repository:

```xml
<repositories>
  <repository>
    <id>sonatype-snapshots-repo</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
```

==== Snapshot (via JitPack)

```xml
<repositories>
  <repository>
    <id>jitpack.io</id>
    <url>https://jitpack.io</url>
  </repository>
</repositories>
```

```xml
<dependency>
  <groupId>com.github.ttddyy</groupId>
  <artifactId>datasource-assert</artifactId>
  <version>master-SNAPSHOT</version>
</dependency>
```

=== Supported Java Versions

`datasource-assert` works with java 1.7+.
```
f7d7abb2-ae15-406f-a9ca-24e5c7cc4b9c
{ "language": "AsciiDoc" }
```asciidoc [[new]] == What's New in Spring Security 5.5 Spring Security 5.5 provides a number of new features. Below are the highlights of the release. [[whats-new-servlet]] === Servlet * OAuth 2.0 Client ** Added support for https://github.com/spring-projects/spring-security/pull/9520[Jwt Client Authentication] `private_key_jwt` and `client_secret_jwt` ** Added https://github.com/spring-projects/spring-security/pull/9535[Jwt Bearer Authorization Grant] support ** Added https://github.com/spring-projects/spring-security/pull/8765[R2DBC implementation] of `ReactiveOAuth2AuthorizedClientService` * Configuration ** Introduced https://github.com/spring-projects/spring-security/issues/9205[DispatcherType request matcher] * Kotlin DSL ** Added https://github.com/spring-projects/spring-security/issues/9319[rememberMe support] ``` Add WebFlux section to What's New
```asciidoc [[new]] == What's New in Spring Security 5.5 Spring Security 5.5 provides a number of new features. Below are the highlights of the release. [[whats-new-servlet]] === Servlet * OAuth 2.0 Client ** Added support for https://github.com/spring-projects/spring-security/pull/9520[Jwt Client Authentication] `private_key_jwt` and `client_secret_jwt` ** Added https://github.com/spring-projects/spring-security/pull/9535[Jwt Bearer Authorization Grant] support ** Added https://github.com/spring-projects/spring-security/pull/8765[R2DBC implementation] of `ReactiveOAuth2AuthorizedClientService` * Configuration ** Introduced https://github.com/spring-projects/spring-security/issues/9205[DispatcherType request matcher] * Kotlin DSL ** Added https://github.com/spring-projects/spring-security/issues/9319[rememberMe support] [[whats-new-webflux]] === WebFlux ** Added https://github.com/spring-projects/spring-security/issues/8143[Kotlin coroutine support] for `EnableReactiveMethodSecurity` ```
f2977311-b13e-4690-87f5-87aac3bd37da
{ "language": "AsciiDoc" }
```asciidoc
= rhq-metrics

Project to implement a MaaS and CaaS

* MaaS: Metrics as a Service
* CaaS: Charts as a Service
``` Change Charting as a Service from CaaS to ChaaS. Pronounced Chaz. More unique than CaaS and sounds better.
```asciidoc = rhq-metrics Project to implement a MaaS and CaaS * MaaS: Metrics as a Service * ChaaS: Charts as a Service ```
bb58b50f-0729-4463-b6e8-b523323e910d
{ "language": "AsciiDoc" }
```asciidoc To activate JSON views, add the following dependency to the `dependencies` block of your `build.gradle` where `VERSION` is the version of the plugin you plan to use: [source,groovy] compile "org.grails.plugins:views-json:VERSION" To enable Gradle compilation of JSON views for production environment add the following to the `buildscript` `dependencies` block: [source,groovy] buildscript { ... dependencies { ... classpath "org.grails.plugins:views-gradle:VERSION" } } Then apply the `org.grails.plugins.views-json` Gradle plugin after any Grails core gradle plugins: [source,groovy] ... apply plugin: "org.grails.grails-web" apply plugin: "org.grails.plugins.views-json" This will add a `compileGsonViews` task to Gradle that is executed when producing a JAR or WAR file.``` Use project version number in docs
```asciidoc To activate JSON views, add the following dependency to the `dependencies` block of your `build.gradle`: [source,groovy,subs="attributes"] compile "org.grails.plugins:views-json:{version}" To enable Gradle compilation of JSON views for production environment add the following to the `buildscript` `dependencies` block: [source,groovy,subs="attributes"] buildscript { ... dependencies { ... classpath "org.grails.plugins:views-gradle:{version}" } } Then apply the `org.grails.plugins.views-json` Gradle plugin after any Grails core gradle plugins: [source,groovy] ... apply plugin: "org.grails.grails-web" apply plugin: "org.grails.plugins.views-json" This will add a `compileGsonViews` task to Gradle that is executed when producing a JAR or WAR file. ```
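The fix above hinges on the `subs="attributes"` element attribute: attribute references such as `{version}` are normally left untouched inside source listings, and this re-enables their substitution. A minimal sketch with a made-up attribute:

```asciidoc
:version: 1.2.3

[source,groovy,subs="attributes"]
compile "org.example:library:{version}"
```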
64f28c43-3899-440d-97cc-bc2f437b85b9
{ "language": "AsciiDoc" }
```asciidoc // Copyright 2017 the original author or authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. [[licenses]] = License Information [[sec:gradle_documentation]] == Gradle Documentation _Copyright © 2007-2018 Gradle, Inc._ Gradle build tool source code is open-source and licensed under the link:https://github.com/gradle/gradle/blob/master/LICENSE[Apache License 2.0]. Gradle user manual and DSL references are licensed under link:http://creativecommons.org/licenses/by-nc-sa/4.0/[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License]. [[licenses:build_scan_plugin]] == Gradle Build Scan Plugin Use of the link:https://scans.gradle.com/plugin/[build scan plugin] is subject to link:https://gradle.com/legal/terms-of-service/[Gradle's Terms of Service]. ``` Adjust year in copyright notice
```asciidoc // Copyright 2017 the original author or authors. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. [[licenses]] = License Information [[sec:gradle_documentation]] == Gradle Documentation _Copyright © 2007-2019 Gradle, Inc._ Gradle build tool source code is open-source and licensed under the link:https://github.com/gradle/gradle/blob/master/LICENSE[Apache License 2.0]. Gradle user manual and DSL references are licensed under link:http://creativecommons.org/licenses/by-nc-sa/4.0/[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License]. [[licenses:build_scan_plugin]] == Gradle Build Scan Plugin Use of the link:https://scans.gradle.com/plugin/[build scan plugin] is subject to link:https://gradle.com/legal/terms-of-service/[Gradle's Terms of Service]. ```
e057ca58-bd41-40f5-93ec-0ea8c6084aaa
{ "language": "AsciiDoc" }
```asciidoc = tick Malcolm Sparks <[email protected]>; Henry Widd; Johanna Antonelli <[email protected]> 0.4.10-alpha, 2019-03-27 :toc: left :toclevels: 4 :docinfo: shared :sectnums: true :sectnumlevels: 2 :xrefstyle: short :nofooter: :leveloffset: +1 include::intro.adoc[] include::setup.adoc[] include::api.adoc[] include::dates.adoc[] include::durations.adoc[] include::clocks.adoc[] include::intervals.adoc[] include::calendars.adoc[] include::schedules.adoc[] include::formatting.adoc[] include::cookbook/index.adoc[] include::bibliography.adoc[] ``` Increment docs version to align with current code tag
```asciidoc = tick Malcolm Sparks <[email protected]>; Henry Widd; Johanna Antonelli <[email protected]> 0.4.25-alpha, 2020-10-01 :toc: left :toclevels: 4 :docinfo: shared :sectnums: true :sectnumlevels: 2 :xrefstyle: short :nofooter: :leveloffset: +1 include::intro.adoc[] include::setup.adoc[] include::api.adoc[] include::dates.adoc[] include::durations.adoc[] include::clocks.adoc[] include::intervals.adoc[] include::calendars.adoc[] include::schedules.adoc[] include::formatting.adoc[] include::cookbook/index.adoc[] include::bibliography.adoc[] ```
3abc9eb2-505d-40d0-9c7a-159a77ece9a7
{ "language": "AsciiDoc" }
```asciidoc = SpringBoot WebApp Demo https://projects.spring.io/spring-boot[Spring Boot] looks like a nice way to get started. This is a trivial webapp created using SpringBoot. == HowTo mvn spring-boot:run Or, in Eclipse with Spring plugin (or "Spring Tool Suite"), right-click Application.java, then RunAs->Spring Boot Application then connect to http://localhost:8000/ Note: that is port 8000, _not_ the usual 8080 to avoid conflicts. Change this in application.properties if you don't like it. Fill in the form and click Submit. == War Deployment It seems that you can't have both the instant-deployment convenience of Spring Boot AND the security of a full WAR deployment in the same POM file. You will need to make several changes to deploy as a WAR file. See the section entitled "Traditional Deployment"--"Create a deployable war file" in the spring-boot reference manual (Section 73.1 in the current snapshot as of this writing). ``` Revert indent of link, formatting failure
```asciidoc = SpringBoot WebApp Demo https://projects.spring.io/spring-boot[Spring Boot] looks like a nice way to get started. This is a trivial webapp created using SpringBoot. == HowTo mvn spring-boot:run Or, in Eclipse with Spring plugin (or "Spring Tool Suite"), right-click Application.java, then RunAs->Spring Boot Application then connect to http://localhost:8000/ Note: that is port 8000, _not_ the usual 8080 to avoid conflicts. Change this in application.properties if you don't like it. Fill in the form and click Submit. == War Deployment It seems that you can't have both the instant-deployment convenience of Spring Boot AND the security of a full WAR deployment in the same POM file. You will need to make several changes to deploy as a WAR file. See the section entitled "Traditional Deployment"--"Create a deployable war file" in the spring-boot reference manual (Section 73.1 in the current snapshot as of this writing). ```
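The "formatting failure" reverted above is a common AsciiDoc pitfall: a paragraph indented by even one space becomes a literal block, where inline macros are not processed, so an indented link macro is printed verbatim. A minimal sketch of the difference, with a placeholder URL:

```asciidoc
https://example.org[This renders as a link.]

 https://example.org[This leading space makes the line a literal block.]
```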
cf1be2e8-028b-4bf7-9a35-3b5d97a94f2f
{ "language": "AsciiDoc" }
```asciidoc [[breaking-changes]] = Breaking changes [partintro] -- This section discusses the changes that you need to be aware of when migrating your application from one version of Elasticsearch to another. As a general rule: * Migration between minor versions -- e.g. `5.x` to `5.y` -- can be performed by <<rolling-upgrades,upgrading one node at a time>>. * Migration between consecutive major versions -- e.g. `2.x` to `5.x` -- requires a <<restart-upgrade,full cluster restart>>. * Migration between non-consecutive major versions -- e.g. `1.x` to `5.x` -- is not supported. See <<setup-upgrade>> for more info. -- include::migrate_5_0.asciidoc[] ``` Include 5.1 migration docs in migration docs
```asciidoc [[breaking-changes]] = Breaking changes [partintro] -- This section discusses the changes that you need to be aware of when migrating your application from one version of Elasticsearch to another. As a general rule: * Migration between minor versions -- e.g. `5.x` to `5.y` -- can be performed by <<rolling-upgrades,upgrading one node at a time>>. * Migration between consecutive major versions -- e.g. `2.x` to `5.x` -- requires a <<restart-upgrade,full cluster restart>>. * Migration between non-consecutive major versions -- e.g. `1.x` to `5.x` -- is not supported. See <<setup-upgrade>> for more info. -- include::migrate_5_1.asciidoc[] include::migrate_5_0.asciidoc[] ```
8a9df9f2-306f-4458-9ba2-babab73f372c
{ "language": "AsciiDoc" }
```asciidoc [[java-query-dsl-script-query]] ==== Script Query See {ref}/query-dsl-script-query.html[Script Query] [source,java] -------------------------------------------------- QueryBuilder qb = scriptQuery( new Script("doc['num1'].value > 1") <1> ); -------------------------------------------------- <1> inlined script If you have stored on each data node a script named `myscript.painless` with: [source,painless] -------------------------------------------------- doc['num1'].value > params.param1 -------------------------------------------------- You can use it then with: [source,java] -------------------------------------------------- QueryBuilder qb = scriptQuery( new Script( "myscript", <1> ScriptType.FILE, <2> "painless", <3> Collections.singletonMap("param1", 5)) <4> ); -------------------------------------------------- <1> Script name <2> Script type: either `ScriptType.FILE`, `ScriptType.INLINE` or `ScriptType.INDEXED` <3> Scripting engine <4> Parameters as a `Map` of `<String, Object>` ``` Update script query doc for 5.1
```asciidoc
[[java-query-dsl-script-query]]
==== Script Query

See {ref}/query-dsl-script-query.html[Script Query]

[source,java]
--------------------------------------------------
QueryBuilder qb = scriptQuery(
    new Script("doc['num1'].value > 1") <1>
);
--------------------------------------------------
<1> inlined script

If you have stored on each data node a script named `myscript.painless` with:

[source,painless]
--------------------------------------------------
doc['num1'].value > params.param1
--------------------------------------------------

You can use it then with:

[source,java]
--------------------------------------------------
QueryBuilder qb = scriptQuery(
    new Script(
        ScriptType.FILE,                       <1>
        "painless",                            <2>
        "myscript",                            <3>
        Collections.singletonMap("param1", 5)) <4>
);
--------------------------------------------------
<1> Script type: either `ScriptType.FILE`, `ScriptType.INLINE` or `ScriptType.INDEXED`
<2> Scripting engine
<3> Script name
<4> Parameters as a `Map` of `<String, Object>`
```
11890617-4e39-4916-b9cc-3b26855b9ec6
{
  "language": "AsciiDoc"
}
```asciidoc
HACKING
=======

HOMEPAGE
--------
The master source code repository, bug tracking, etc for _pvpn_ is https://www.github.com/halhen/pvpn. Please report issues here, or to [email protected].

CONTRIBUTING
------------
* To contribute, send a patch as a github pull request or using +git format-patch+ to [email protected].
* Keep your patches atomic and complete. Remember documentation.
* Add yourself to _AUTHORS_.
* Unless you have a good reason to, rebase your changes to the latest master before submitting.
* Spend an extra minute to write a proper commit message
** Describe the effect of the patch, rather than the contents.
** Use imperative form (e.g. +Change foo+ rather than +Changed foo+)
** Reference the issue (+#NNN+ or, if fixed, +fixes #NNN+) if applicable.
** Max 50 characters in the title.
** Max 72 characters wide lines.

Don't be discouraged or annoyed if I ask you to change your patch before I accept it. The reason is probably that I have not been specific enough about how I would like patches to be.

CHECKLISTS
----------

Create a new version
~~~~~~~~~~~~~~~~~~~~
* Update +VERSION+ in +Makefile+
* Describe changes +NEWS.asciidoc+
* Commit with version bump commit message
* +git tag <VERSION>+
```
Document more steps to do when bumbing version
```asciidoc
HACKING
=======

HOMEPAGE
--------
The master source code repository, bug tracking, etc for _pvpn_ is https://www.github.com/halhen/pvpn. Please report issues here, or to [email protected].

CONTRIBUTING
------------
* To contribute, send a patch as a github pull request or using +git format-patch+ to [email protected].
* Keep your patches atomic and complete. Remember documentation.
* Add yourself to _AUTHORS_.
* Unless you have a good reason to, rebase your changes to the latest master before submitting.
* Spend an extra minute to write a proper commit message
** Describe the effect of the patch, rather than the contents.
** Use imperative form (e.g. +Change foo+ rather than +Changed foo+)
** Reference the issue (+#NNN+ or, if fixed, +fixes #NNN+) if applicable.
** Max 50 characters in the title.
** Max 72 characters wide lines.

Don't be discouraged or annoyed if I ask you to change your patch before I accept it. The reason is probably that I have not been specific enough about how I would like patches to be.

CHECKLISTS
----------

Create a new version
~~~~~~~~~~~~~~~~~~~~
* Update +VERSION+ in +Makefile+
* Describe changes +NEWS.asciidoc+
* Commit with version bump commit message
* +git tag <VERSION>+
* +git push+
* +git push --tags+
* make and upload new PKGBUILD to AUR
```
19f36d79-5d60-482c-b363-390caaeaa7d4
{
  "language": "AsciiDoc"
}
```asciidoc
== Internationalization (i18n)
```
Add draft of i18n docs
```asciidoc
= Internationalization (i18n)

== Intro

Translation messages for Beavy core, as well as Beavy apps and modules, use the [ICU syntax](http://userguide.icu-project.org/formatparse/messages).

The translation messages themselves are managed with the [transifex](http://docs.transifex.com/introduction/) localization platform, using English as the default language. Please see BeavyHQ's transifex dashboard here: [www.transifex.com/beavy/beavy/](https://www.transifex.com/beavy/beavy/)

If you would like to help us translate Beavy, please let us know or click the "Help Translate Beavy" button on the link above.

== Python Translations

Backend translations in python are accomplished using the following functions (supported by [Flask-ICU](https://github.com/beavyHQ/flask-icu)):

* `format()`
* `format_date()`
* `format_time()`
* `format_datetime()`
* `format_number()`
* `format_decimal()`
* `format_currency()`
* `format_scientific()`
* `format_percent()`

== How to use it in Javascript

== Working with translations files

== Extracting & pushing to transifex

To extract messages

== Updating from transifex

== Shipping your own translations (for your app)
```
bffa8ecc-7794-4179-9498-a1568b2a0369
{
  "language": "AsciiDoc"
}
```asciidoc
## Developer Info

[discrete]
### Source
https://github.com/wildfly-extras/wildfly-camel

[discrete]
### Issues
https://github.com/wildfly-extras/wildfly-camel/issues

[discrete]
### Jenkins
https://ci.fabric8.io/job/wildfly-camel

[discrete]
### Downloads
https://github.com/wildfly-extras/wildfly-camel/releases

[discrete]
### Forums, Lists, IRC
#wildfly-camel channel on irc.freenode.net
```
Update link to CI build
```asciidoc
## Developer Info

[discrete]
### Source
https://github.com/wildfly-extras/wildfly-camel

[discrete]
### Issues
https://github.com/wildfly-extras/wildfly-camel/issues

[discrete]
### CI Build
https://github.com/wildfly-extras/wildfly-camel/actions

[discrete]
### Downloads
https://github.com/wildfly-extras/wildfly-camel/releases

[discrete]
### Forums, Lists, IRC
#wildfly-camel channel on irc.freenode.net
```