doc_id | metadata | input | output |
---|---|---|---|
2932e8f0-c669-4fd9-b45a-f85aef351db0 | {
"language": "AsciiDoc"
} | ```asciidoc
= How to Run a Mock Mission
1. Start MavProxy at the location of the first sample waypoint.
$ cd <path-to-ardupilot>/APMrover2
$ sim_vehicle.py -l 37.659984,-121.8874624,0,45
2. Start the simulation. Zoom and center the waypoints, if you like.
Mouse wheel or 2-finger scroll zooms, control-shift-drag moves the
map around.
$ roslaunch robo_magellan mock_mission.launch
3. In MavProxy, load the waypoint sample.
rosrun mavros mavwp load data/sample.qgc-wpl.txt
4. Start the mission by arming the throttle and going into AUTO mode.
$ rosservice call /mavros/cmd/arming "value: true"
success: True
result: 0
$ rosservice call /mavros/set_mode "{custom_mode: 'AUTO'}"
mode_sent: True
In rviz, you should see the rover follow the waypoints. When a cone is visible,
the heading to the cone is shown with a red arrow.
You can move the robot back to the starting point (approximately) using the command:
$ rosservice call /mavros/set_mode "{custom_mode: 'RTL'}"
Then you can restart the mock mission by setting the mode to AUTO again, in step 4.
```
Add information about failure to load waypoints | ```asciidoc
= How to Run a Mock Mission
1. Start MavProxy at the location of the first sample waypoint.
$ cd <path-to-ardupilot>/APMrover2
$ sim_vehicle.py -l 37.659984,-121.8874624,0,45
2. Start the simulation. Zoom and center the waypoints, if you like.
Mouse wheel or 2-finger scroll zooms, control-shift-drag moves the
map around.
$ roslaunch robo_magellan mock_mission.launch
3. In MavProxy, load the waypoint sample. You should then see the robot and the waypoints
in rviz. If you don't, try it again after a few seconds. (MavProxy can still be waiting
for SITL to start up.)
rosrun mavros mavwp load data/sample.qgc-wpl.txt
4. Start the mission by arming the throttle and going into AUTO mode.
$ rosservice call /mavros/cmd/arming "value: true"
success: True
result: 0
$ rosservice call /mavros/set_mode "{custom_mode: 'AUTO'}"
mode_sent: True
In rviz, you should see the rover follow the waypoints. When a cone is visible,
the heading to the cone is shown with a red arrow.
You can move the robot back to the starting point (approximately) using the command:
$ rosservice call /mavros/set_mode "{custom_mode: 'RTL'}"
Then you can restart the mock mission by setting the mode to AUTO again, in step 4.
``` |
ac456fa2-2a43-4e28-afb6-d7586e40cc90 | {
"language": "AsciiDoc"
} | ```asciidoc
[[analysis-truncate-tokenfilter]]
=== Truncate Token Filter
The `truncate` token filter can be used to truncate tokens into a
specific length. This can come in handy with keyword (single token)
based mapped fields that are used for sorting in order to reduce memory
usage.
It accepts a `length` parameter which control the number of characters
to truncate to, defaults to `10`.
```
Update truncate token filter to not mention the keyword tokenizer | ```asciidoc
[[analysis-truncate-tokenfilter]]
=== Truncate Token Filter
The `truncate` token filter can be used to truncate tokens into a
specific length.
It accepts a `length` parameter which control the number of characters
to truncate to, defaults to `10`.
``` |
5cf493fe-56fd-4b8c-8306-6dd4968c08b7 | {
"language": "AsciiDoc"
} | ```asciidoc
// Allow GitHub image rendering
:imagesdir: ../../../images
[[ga-opennms-operation-daemon-config-files-pollerd]]
==== Notifd
[options="header, autowidth"]
|===
| Internal Daemon Name | Reload Event
| _Notifd_ | `uei.opennms.org/internal/reloadDaemonConfig -p 'daemonName Notifd'`
|===
.Notifd configuration file overview
[options="header, autowidth"]
|===
| File | Restart Required | Reload Event | Description
| `notifd-configuration.xml` | no | yes | Describes auto-acknowledge prefix, e.g. prefix "RESOLVED: " for nodeUp/nodeDown events.
| `notificationCommands.xml` | no | no | Configuration for notification media, e.g. scripts, XMPP or HTTP Post, immediately applied.
| `notifications.xml` | no | no | Event notification definitions and changes are immediately applied.
| `destinationPaths.xml` | no | no | Contains paths for notification targets, e.g. JavaMail, XMPP or external scripts.
| `users.xml` | no | no | Contain pager and address information for notification destination paths.
| `groups.xml` | no | no | Groups can be used as target for notifications.
| `javamail-configuration.properties` | no | no | Configuration to send notification mails via specific mail servers.
|===
```
Fix copy & paste error | ```asciidoc
// Allow GitHub image rendering
:imagesdir: ../../../images
[[ga-opennms-operation-daemon-config-files-notifd]]
==== Notifd
[options="header, autowidth"]
|===
| Internal Daemon Name | Reload Event
| _Notifd_ | `uei.opennms.org/internal/reloadDaemonConfig -p 'daemonName Notifd'`
|===
.Notifd configuration file overview
[options="header, autowidth"]
|===
| File | Restart Required | Reload Event | Description
| `notifd-configuration.xml` | no | yes | Describes auto-acknowledge prefix, e.g. prefix "RESOLVED: " for nodeUp/nodeDown events.
| `notificationCommands.xml` | no | no | Configuration for notification media, e.g. scripts, XMPP or HTTP Post, immediately applied.
| `notifications.xml` | no | no | Event notification definitions and changes are immediately applied.
| `destinationPaths.xml` | no | no | Contains paths for notification targets, e.g. JavaMail, XMPP or external scripts.
| `users.xml` | no | no | Contain pager and address information for notification destination paths.
| `groups.xml` | no | no | Groups can be used as target for notifications.
| `javamail-configuration.properties` | no | no | Configuration to send notification mails via specific mail servers.
|===
``` |
74daee00-7ad8-4baa-8747-435bf087e5e2 | {
"language": "AsciiDoc"
} | ```asciidoc
[[snapshot-restore-apis]]
== Snapshot and restore APIs
You can use the following APIs to set up snapshot repositories, manage snapshot
backups, and restore snapshots to a running cluster.
For more information, see <<snapshot-restore>>.
[NOTE]
====
We are working on including more snapshot and restore APIs in this section. Some
content may not be included yet.
====
[discrete]
[[snapshot-restore-repo-apis]]
=== Snapshot repository management APIs
* <<clean-up-snapshot-repo-api,Clean up snapshot repository>>
* <<delete-snapshot-repo-api,Delete snapshot repository>>
* <<put-snapshot-repo-api,Put snapshot repository>>
include::clean-up-repo-api.asciidoc[]
include::delete-repo-api.asciidoc[]
include::get-repo-api.asciidoc[]
include::put-repo-api.asciidoc[]```
Fix snapshot repo API jump list | ```asciidoc
[[snapshot-restore-apis]]
== Snapshot and restore APIs
You can use the following APIs to set up snapshot repositories, manage snapshot
backups, and restore snapshots to a running cluster.
For more information, see <<snapshot-restore>>.
[NOTE]
====
We are working on including more snapshot and restore APIs in this section. Some
content may not be included yet.
====
[discrete]
[[snapshot-restore-repo-apis]]
=== Snapshot repository management APIs
* <<clean-up-snapshot-repo-api,Clean up snapshot repository>>
* <<delete-snapshot-repo-api,Delete snapshot repository>>
* <<get-snapshot-repo-api,Get snapshot repository>>
* <<put-snapshot-repo-api,Put snapshot repository>>
include::clean-up-repo-api.asciidoc[]
include::delete-repo-api.asciidoc[]
include::get-repo-api.asciidoc[]
include::put-repo-api.asciidoc[]``` |
c55fd88a-9791-4e89-9723-5cf4e92ccae4 | {
"language": "AsciiDoc"
} | ```asciidoc
[[elasticsearch-reference]]
= Reference
:version: 1.5.2
:branch: 1.5
:jdk: 1.8.0_25
:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current
include::getting-started.asciidoc[]
include::setup.asciidoc[]
include::migration/index.asciidoc[]
include::api-conventions.asciidoc[]
include::docs.asciidoc[]
include::search.asciidoc[]
include::indices.asciidoc[]
include::cat.asciidoc[]
include::cluster.asciidoc[]
include::query-dsl.asciidoc[]
include::mapping.asciidoc[]
include::analysis.asciidoc[]
include::modules.asciidoc[]
include::index-modules.asciidoc[]
include::testing.asciidoc[]
include::glossary.asciidoc[]
```
Rename reference docs to Elasticsearch Reference | ```asciidoc
[[elasticsearch-reference]]
= Elasticsearch Reference
:version: 1.5.2
:branch: 1.5
:jdk: 1.8.0_25
:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current
include::getting-started.asciidoc[]
include::setup.asciidoc[]
include::migration/index.asciidoc[]
include::api-conventions.asciidoc[]
include::docs.asciidoc[]
include::search.asciidoc[]
include::indices.asciidoc[]
include::cat.asciidoc[]
include::cluster.asciidoc[]
include::query-dsl.asciidoc[]
include::mapping.asciidoc[]
include::analysis.asciidoc[]
include::modules.asciidoc[]
include::index-modules.asciidoc[]
include::testing.asciidoc[]
include::glossary.asciidoc[]
``` |
8aaf7a2d-9444-4ec5-9c1a-e8253b954ac1 | {
"language": "AsciiDoc"
} | ```asciidoc
:author: Peter Niederwieser, The Spock Framework Team
:revnumber: X-replaced-by-gradle
:sourcedir: ../spock-specs/src/test/groovy/org/spockframework/docs
:sourcedir-spring: ../spock-spring/src/test/groovy/org/spockframework/spring/docs
:resourcedir-spring: ../spock-spring/src/test/resources/org/spockframework/spring/docs
:sourcedir-spring-boot: ../spock-spring/boot-test/src/test/groovy/org/spockframework/boot
```
Use `sectlinks` in docs to make linking to sections easier | ```asciidoc
:author: Peter Niederwieser, The Spock Framework Team
:revnumber: X-replaced-by-gradle
:sourcedir: ../spock-specs/src/test/groovy/org/spockframework/docs
:sourcedir-spring: ../spock-spring/src/test/groovy/org/spockframework/spring/docs
:resourcedir-spring: ../spock-spring/src/test/resources/org/spockframework/spring/docs
:sourcedir-spring-boot: ../spock-spring/boot-test/src/test/groovy/org/spockframework/boot
:sectlinks: true
``` |
f49a296d-1c02-4c33-8835-8c2a46210612 | {
"language": "AsciiDoc"
} | ```asciidoc
= Overview
Spring Security is a framework that provides authentication, authorization, and protection against common attacks.
With first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.
```
Add additional info to overview | ```asciidoc
= Spring Security
Spring Security is a framework that provides xref:features/authentication/index.adoc[authentication], xref:features/authorization/index.adoc[authorization], and xref:features/exploits/index.adoc[protection against common attacks].
With first class support for securing both xref:servlet/index.adoc[imperative] and xref:reactive/index.adoc[reactive] applications, it is the de-facto standard for securing Spring-based applications.
For a complete list of features, see the xref:features/index.adoc[Features] section of the reference.
== Getting Started
If you are ready to start securing an application see the Getting Started sections for xref:servlet/getting-started.adoc[servlet] and xref:reactive/getting-started.adoc[reactive]. These sections will walk you through creating your first Spring Security applications.
If you want to understand how Spring Security works, you can refer to the xref:servlet/architecture.adoc[Architecture] section.
// FIXME add link to reactive architecture
If you have any questions, there is a wonderful xref:community.adoc[community] that would love to help you!
``` |
91de8be6-7ce6-4f75-869b-e54fd3a03be6 | {
"language": "AsciiDoc"
} | ```asciidoc
= RHPDS instructions for OptaPlanner demos
:awestruct-description: How to demo OptaPlanner with RHPDS
:awestruct-layout: normalBase
:awestruct-priority: 0.1
:showtitle:
== Visit RHPDS
https://rhpds.redhat.com/[Visit RHPDS]
== Available demos on RHPDS
* link:optaweb-vehicle-routing.adoc[OptaWeb Vehicle Routing]
* link:optaweb-employee-rostering.adoc[OptaWeb Employee Rostering]
```
Fix links to RHPDS demo guides | ```asciidoc
= RHPDS instructions for OptaPlanner demos
:awestruct-description: How to demo OptaPlanner with RHPDS
:awestruct-layout: normalBase
:awestruct-priority: 0.1
:showtitle:
== Visit RHPDS
https://rhpds.redhat.com/[Visit RHPDS]
== Available demos on RHPDS
* link:optaweb-vehicle-routing.html[OptaWeb Vehicle Routing]
* link:optaweb-employee-rostering.html[OptaWeb Employee Rostering]
``` |
301fd2fd-2f4f-4fe0-a1c4-67e224317c81 | {
"language": "AsciiDoc"
} | ```asciidoc
//Module included in the following assemblies:
//
// * installing/installing_openstack/installing-openstack-installer.adoc
// * installing/installing_openstack/installing-openstack-installer-custom.adoc
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
[id="installation-osp-enabling-swift_{context}"]
= Enabling Swift on OpenStack
{product-title} on {rh-openstack-first} uses https://docs.openstack.org/security-guide/object-storage.html[OpenStack Object Storage (Swift)] to store and serve user configuration files.
Swift is operated by a user account with the `swiftoperator` role and `temp-url` support.
.Prerequisites
* A {rh-openstack} administrator account on the target environment
* On Ceph RGW, https://docs.ceph.com/docs/master/radosgw/config-ref/#swift-settings[the `account in url` option must be enabled]
.Procedure
To enable Swift on {rh-openstack}:
. As an administrator in the {rh-openstack} CLI, add the `swiftoperator` role to the account that will access Swift:
+
----
$ openstack role add --user <user> --project <project> swiftoperator
----
. As the account with the `swiftoperator` role, set a temporary URL property for the account:
+
----
$ openstack object store account set --property Temp-URL-Key=superkey
----
Your {rh-openstack} deployment can now use Swift to store and serve files.
```
Remove Temp-URL step from Swift procedure - BZ 1805176 | ```asciidoc
//Module included in the following assemblies:
//
// * installing/installing_openstack/installing-openstack-installer.adoc
// * installing/installing_openstack/installing-openstack-installer-custom.adoc
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
[id="installation-osp-enabling-swift_{context}"]
= Enabling Swift on OpenStack
{product-title} on {rh-openstack-first} uses https://docs.openstack.org/security-guide/object-storage.html[OpenStack Object Storage (Swift)] to store and serve user configuration files.
Swift is operated by a user account with the `swiftoperator` role.
.Prerequisites
* A {rh-openstack} administrator account on the target environment
* On Ceph RGW, https://docs.ceph.com/docs/master/radosgw/config-ref/#swift-settings[the `account in url` option must be enabled]
.Procedure
To enable Swift on {rh-openstack}:
. As an administrator in the {rh-openstack} CLI, add the `swiftoperator` role to the account that will access Swift:
+
----
$ openstack role add --user <user> --project <project> swiftoperator
----
Your {rh-openstack} deployment can now use Swift to store and serve files.
``` |
e24d275b-f721-4586-a3a5-85bffb9d9505 | {
"language": "AsciiDoc"
} | ```asciidoc
// Module included in the following assemblies:
//
// * authentication/identity_providers/configuring-github-identity-provider.adoc
[id="identity-provider-registering-github_{context}"]
= Registering a GitHub application
To use GitHub or GitHub Enterprise as an identity provider, you must register
an application to use.
.Procedure
. Register an application on GitHub:
** For GitHub, click https://github.com/settings/profile[Settings] ->
https://github.com/settings/developers[Developer settings] ->
https://github.com/settings/applications/new[Register a new OAuth application].
** For GitHub Enterprise, go to your GitHub Enterprise home page and then click
*Settings -> Developer settings -> Register a new application*.
. Enter an application name, for example `My OpenShift Install`.
. Enter a homepage URL, such as
`\https://oauth-openshift.apps.<cluster-name>.<cluster-domain>`.
. Optional: Enter an application description.
. Enter the authorization callback URL, where the end of the URL contains the
identity provider `name`:
+
----
https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>
----
+
For example:
+
----
https://oauth-openshift.apps.example-openshift-cluster.com/oauth2callback/github/
----
. Click *Register application*. GitHub provides a Client ID and a Client Secret.
You need these values to complete the identity provider configuration.
```
Add oauth apps section to path, in case users are reading the text and not clicking the links. Choosing the Developer settings optiion on GitHub brings users to the GitHub Apps section (settings/apps) and not the OAuth Apps section (/settings/developers) | ```asciidoc
// Module included in the following assemblies:
//
// * authentication/identity_providers/configuring-github-identity-provider.adoc
[id="identity-provider-registering-github_{context}"]
= Registering a GitHub application
To use GitHub or GitHub Enterprise as an identity provider, you must register
an application to use.
.Procedure
. Register an application on GitHub:
** For GitHub, click https://github.com/settings/profile[Settings] ->
https://github.com/settings/apps[Developer settings] ->
https://github.com/settings/developers[OAuth Apps] ->
https://github.com/settings/applications/new[Register a new OAuth application].
** For GitHub Enterprise, go to your GitHub Enterprise home page and then click
*Settings -> Developer settings -> Register a new application*.
. Enter an application name, for example `My OpenShift Install`.
. Enter a homepage URL, such as
`\https://oauth-openshift.apps.<cluster-name>.<cluster-domain>`.
. Optional: Enter an application description.
. Enter the authorization callback URL, where the end of the URL contains the
identity provider `name`:
+
----
https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>
----
+
For example:
+
----
https://oauth-openshift.apps.example-openshift-cluster.com/oauth2callback/github/
----
. Click *Register application*. GitHub provides a Client ID and a Client Secret.
You need these values to complete the identity provider configuration.
``` |
c323b008-f662-4a0b-bf37-bd499619a696 | {
"language": "AsciiDoc"
} | ```asciidoc
[[analysis-delimited-payload-tokenfilter]]
=== Delimited Payload Token Filter
Named `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.
Example: "the|1 quick|2 fox|3" is split per default int to tokens `fox`, `quick` and `the` with payloads `1`, `2` and `3` respectively.
Parameters:
`delimiter`::
Character used for splitting the tokens. Default is `|`.
`encoding`::
The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`.```
Fix minor issues in delimited payload token filter docs | ```asciidoc
[[analysis-delimited-payload-tokenfilter]]
=== Delimited Payload Token Filter
Named `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.
Example: "the|1 quick|2 fox|3" is split by default into tokens `the`, `quick`, and `fox` with payloads `1`, `2`, and `3` respectively.
Parameters:
`delimiter`::
Character used for splitting the tokens. Default is `|`.
`encoding`::
The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`.``` |
c73be7ee-f37b-451f-903f-7cff5a635182 | {
"language": "AsciiDoc"
} | ```asciidoc
= Running your database with data included
*Having to insert the data everytime you start a container can become a hassle. In this exercise we will put the data inside the Postgres image, so that it will be available on startup.*
The https://hub.docker.com/_/postgres/[Postgres] image provides a convenient way to execute scripts on startup of your database. All we have to do is add a .sh script to the docker-entrypoint-initdb.d-folder as per the documentation of the image.
== Steps to prepare the database
. Create the .sh script on your harddisk
. Create a Dockerfile that
.. Is build https://docs.docker.com/engine/reference/builder/#/from[from] postgres:9
.. https://docs.docker.com/engine/reference/builder/#/add[Adds] the .sh file to the docker-entrypoint-initdb.d folder in the container
. Build the image, run it and link it with the testnetapp container
.The script to add to the Dockerfile
----
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
CREATE DATABASE testnet;
\c testnet
CREATE TABLE kv (key varchar(100) PRIMARY KEY, value varchar(100));
INSERT INTO kv VALUES ('provider','postgres database');
EOSQL
----```
Create Dockerfile in empty folder | ```asciidoc
= Running your database with data included
*Having to insert the data everytime you start a container can become a hassle. In this exercise we will put the data inside the Postgres image, so that it will be available on startup.*
The https://hub.docker.com/_/postgres/[Postgres] image provides a convenient way to execute scripts on startup of your database. All we have to do is add a .sh script to the docker-entrypoint-initdb.d-folder as per the documentation of the image.
== Steps to prepare the database
. Create an empty folder
. Put the .sh script from below in that folder
. Create a Dockerfile in the folder that
.. Is build https://docs.docker.com/engine/reference/builder/#/from[from] postgres:9
.. https://docs.docker.com/engine/reference/builder/#/add[Adds] the .sh file to the docker-entrypoint-initdb.d folder in the container
. Build the image, run it and link it with the testnetapp container
.The script to add to the Dockerfile
----
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
CREATE DATABASE testnet;
\c testnet
CREATE TABLE kv (key varchar(100) PRIMARY KEY, value varchar(100));
INSERT INTO kv VALUES ('provider','postgres database');
EOSQL
----``` |
ff79c1e6-ca43-4874-9da1-33dc1fb96494 | {
"language": "AsciiDoc"
} | ```asciidoc
[[new]]
= What's New in Spring Security 5.7
Spring Security 5.7 provides a number of new features.
Below are the highlights of the release.
[[whats-new-servlet]]
== Servlet
* Web
** Introduced xref:servlet/authentication/persistence.adoc#requestattributesecuritycontextrepository[`RequestAttributeSecurityContextRepository`]
** Introduced xref:servlet/authentication/persistence.adoc#securitycontextholderfilter[`SecurityContextHolderFilter`] - Ability to require explicit saving of the `SecurityContext`
* OAuth 2.0 Client
** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]
** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerOAuth2AuthorizedClientProvider`
** Allow customizing claims on https://github.com/spring-projects/spring-security/issues/9855[JWT client assertions]
[[whats-new-webflux]]
== WebFlux
* OAuth 2.0 Client
** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]
** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerReactiveOAuth2AuthorizedClientProvider`
```
Update What's New for 5.7 | ```asciidoc
[[new]]
= What's New in Spring Security 5.7
Spring Security 5.7 provides a number of new features.
Below are the highlights of the release.
[[whats-new-servlet]]
== Servlet
* Web
** Introduced xref:servlet/authentication/persistence.adoc#requestattributesecuritycontextrepository[`RequestAttributeSecurityContextRepository`]
** Introduced xref:servlet/authentication/persistence.adoc#securitycontextholderfilter[`SecurityContextHolderFilter`] - Ability to require explicit saving of the `SecurityContext`
* OAuth 2.0 Client
** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]
** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerOAuth2AuthorizedClientProvider`
** Allow customizing claims on https://github.com/spring-projects/spring-security/issues/9855[JWT client assertions]
[[whats-new-webflux]]
== WebFlux
* Web
** Allow customizing https://github.com/spring-projects/spring-security/issues/10903[charset] in `ServerHttpBasicAuthenticationConverter`
* OAuth 2.0 Client
** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]
** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerReactiveOAuth2AuthorizedClientProvider`
``` |
dcb31b13-ae6f-40a8-a85d-c21a76b99940 | {
"language": "AsciiDoc"
} | ```asciidoc
[[new]]
= What's New in Spring Security 6.0
Spring Security 6.0 provides a number of new features.
Below are the highlights of the release.
== Breaking Changes
* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support.
Use the OpenSaml 4 Support instead.
* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.
Instead use data storage to encrypt values.
* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default
* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package
Reorganize imports
* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages
Reorganize imports
```
Update What's New for 6.0 | ```asciidoc
[[new]]
= What's New in Spring Security 6.0
Spring Security 6.0 provides a number of new features.
Below are the highlights of the release.
== Breaking Changes
* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support.
Use the OpenSaml 4 Support instead.
* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.
Instead use data storage to encrypt values.
* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default
* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package
Reorganize imports
* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages
Reorganize imports
* https://github.com/spring-projects/spring-security/issues/11026[gh-11026] - Use `RequestAttributeSecurityContextRepository` instead of `NullSecurityContextRepository`
``` |
bec52295-c276-455b-b35b-33dc124a10f7 | {
"language": "AsciiDoc"
} | ```asciidoc
. *Enable DataStax repository*
+
====
Create the `/etc/yum.repos.d/datastax.repo` file and edit it to contain the
following:
[source]
----
# DataStax (Apache Cassandra)
[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = http://rpm.datastax.com/community
enabled = 1
gpgcheck = 1
gpgkey = https://rpm.datastax.com/rpm/repo_key
----
====
```
Disable GPG check for DataStax repo on RHEL 7. | ```asciidoc
. *Enable DataStax repository*
+
====
Create the `/etc/yum.repos.d/datastax.repo` file and edit it to contain the
following:
[source]
----
# DataStax (Apache Cassandra)
[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = http://rpm.datastax.com/community
enabled = 1
gpgcheck = 0
gpgkey = https://rpm.datastax.com/rpm/repo_key
----
====
``` |
cfc1379c-9404-4868-9de9-9c0ccee66ba5 | {
"language": "AsciiDoc"
} | ```asciidoc
[[features]]
= Core Features
include::attributes.adoc[]
This section dives into the details of Spring Boot.
Here you can learn about the key features that you may want to use and customize.
If you have not already done so, you might want to read the "<<getting-started#getting-started>>" and "<<using#using>>" sections, so that you have a good grounding of the basics.
include::features/spring-application.adoc[]
include::features/external-config.adoc[]
include::features/profiles.adoc[]
include::features/logging.adoc[]
include::features/internationalization.adoc[]
include::features/json.adoc[]
include::features/task-execution-and-scheduling.adoc[]
include::features/testing.adoc[]
include::features/developing-auto-configuration.adoc[]
include::features/kotlin.adoc[]
include::features/whats-next.adoc[]
```
Fix section links to not include .html suffix | ```asciidoc
[[features]]
= Core Features
include::attributes.adoc[]
This section dives into the details of Spring Boot.
Here you can learn about the key features that you may want to use and customize.
If you have not already done so, you might want to read the "<<getting-started#getting-started, Getting Started>>" and "<<using#using, Developing with Spring Boot>>" sections, so that you have a good grounding of the basics.
include::features/spring-application.adoc[]
include::features/external-config.adoc[]
include::features/profiles.adoc[]
include::features/logging.adoc[]
include::features/internationalization.adoc[]
include::features/json.adoc[]
include::features/task-execution-and-scheduling.adoc[]
include::features/testing.adoc[]
include::features/developing-auto-configuration.adoc[]
include::features/kotlin.adoc[]
include::features/whats-next.adoc[]
``` |
2f1ac5e2-ba44-43fc-bda4-d6bf34a8dfb5 | {
"language": "AsciiDoc"
} | ```asciidoc
// Module included in the following assemblies:
//
// * networking/multiple_networks/remove-additional-network.adoc
[id="nw-multus-delete-network_{context}"]
= Removing an additional network attachment definition
As a cluster administrator, you can remove an additional network from your
{product-title} cluster. The additional network is not removed from any Pods it
is attached to.
.Prerequisites
* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.
* Log in as a user with `cluster-admin` privileges.
.Procedure
To remove an additional network from your cluster, complete the following steps:
. Edit the Cluster Network Operator (CNO) in your default text editor by running
the following command:
+
----
$ oc edit networks.operator.openshift.io cluster
----
. Modify the CR by removing the configuration from the `additionalNetworks`
collection for the network attachment definition you are removing.
+
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
additionalNetworks: [] <1>
----
<1> If you are removing the configuration mapping for the only additional
network attachment definition in the `additionalNetworks` collection, you must
specify an empty collection.
. Save your changes and quit the text editor to commit your changes.
. Delete the NetworkAttachmentDefinition CR for the additional network by running the following command. Replace `<name>` with the name of the additional network to remove.
+
----
$ oc delete network-attachment-definition <name>
----
. Optional: Confirm that the additional network CR was deleted by running the following command:
+
----
$ oc get network-attachment-definition --all-namespaces
----
```
Remove extra step for removing an additional network | ```asciidoc
// Module included in the following assemblies:
//
// * networking/multiple_networks/remove-additional-network.adoc
[id="nw-multus-delete-network_{context}"]
= Removing an additional network attachment definition
As a cluster administrator, you can remove an additional network from your
{product-title} cluster. The additional network is not removed from any Pods it
is attached to.
.Prerequisites
* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.
* Log in as a user with `cluster-admin` privileges.
.Procedure
To remove an additional network from your cluster, complete the following steps:
. Edit the Cluster Network Operator (CNO) in your default text editor by running
the following command:
+
----
$ oc edit networks.operator.openshift.io cluster
----
. Modify the CR by removing the configuration from the `additionalNetworks`
collection for the network attachment definition you are removing.
+
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
additionalNetworks: [] <1>
----
<1> If you are removing the configuration mapping for the only additional
network attachment definition in the `additionalNetworks` collection, you must
specify an empty collection.
. Save your changes and quit the text editor to commit your changes.
. Optional: Confirm that the additional network CR was deleted by running the following command:
+
----
$ oc get network-attachment-definition --all-namespaces
----
``` |
eaf4560b-8a5d-43f3-a345-629a10320908 | {
"language": "AsciiDoc"
} | ```asciidoc
// Module included in the following assemblies:
//
// * operators/olm-managing-custom-catalogs.adoc
[id="olm-creating-index-image_{context}"]
= Creating an index image
You can create an index image using the `opm` CLI.
.Prerequisites
* `opm` version 1.12.3+
* `podman` version 1.4.4+
* A bundle image built and pushed to a registry.
.Procedure
. Start a new index:
+
----
$ opm index add \
--bundles quay.io/<namespace>/test-operator:v0.1.0 \//<1>
--tag quay.io/<namespace>/test-catalog:latest <2>
----
<1> The bundle image to add to the index.
<2> The image tag that you want the index image to have.
. Push the index image to a registry:
+
----
$ podman push quay.io/<namespace>/test-catalog:latest
----
```
Add --binary-image to opm docs | ```asciidoc
// Module included in the following assemblies:
//
// * operators/olm-managing-custom-catalogs.adoc
[id="olm-creating-index-image_{context}"]
= Creating an index image
You can create an index image using the `opm` CLI.
.Prerequisites
* `opm` version 1.12.3+
* `podman` version 1.4.4+
* A bundle image built and pushed to a registry.
.Procedure
. Start a new index:
+
----
$ opm index add \
--bundles quay.io/<namespace>/test-operator:v0.1.0 \//<1>
--tag quay.io/<namespace>/test-catalog:latest \//<2>
[--binary-image <registry_base_image>] <3>
----
<1> Comma-separated list of bundle images to add to the index.
<2> The image tag that you want the index image to have.
<3> Optional: An alternative registry base image to use for serving the catalog.
. Push the index image to a registry:
+
----
$ podman push quay.io/<namespace>/test-catalog:latest
----
``` |
5c6036d0-3d0a-4939-8435-82aa7b5a03f7 | {
"language": "AsciiDoc"
} | ```asciidoc
= Topbeat reference
:libbeat: http://www.elastic.co/guide/en/beats/libbeat/1.0.0-rc1
:version: 1.0.0-rc1
include::./overview.asciidoc[]
include::./gettingstarted.asciidoc[]
include::./fields.asciidoc[]
// include::./configuration.asciidoc[]
include::./command-line.asciidoc[]
include::./windows.asciidoc[]
// include::./troubleshooting.asciidoc[]
```
Use master version in docs | ```asciidoc
= Topbeat reference
:libbeat: http://www.elastic.co/guide/en/beats/libbeat/master
:version: master
include::./overview.asciidoc[]
include::./gettingstarted.asciidoc[]
include::./fields.asciidoc[]
// include::./configuration.asciidoc[]
include::./command-line.asciidoc[]
include::./windows.asciidoc[]
// include::./troubleshooting.asciidoc[]
``` |
eeab51e2-b451-4488-8c8e-ed3463486774 | {
"language": "AsciiDoc"
} | ```asciidoc
[[new]]
= What's New in Spring Security 5.8
Spring Security 5.8 provides a number of new features.
Below are the highlights of the release.
* https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs
* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation
* https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection
* https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method
* https://github.com/spring-projects/spring-security/pull/11232[gh-11232] - `ClientRegistrations#rest` defines 30s connect and read timeouts
```
Update What's New for 5.8 | ```asciidoc
[[new]]
= What's New in Spring Security 5.8
Spring Security 5.8 provides a number of new features.
Below are the highlights of the release.
* https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs
* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation
* https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection
* https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method
* https://github.com/spring-projects/spring-security/pull/11232[gh-11232] - `ClientRegistrations#rest` defines 30s connect and read timeouts
* https://github.com/spring-projects/spring-security/pull/11464[gh-11464] - Remember Me supports SHA256 algorithm
``` |
a9af75bb-1384-4182-9f1e-fb824b3649fb | {
"language": "AsciiDoc"
} | ```asciidoc
= Event Data
=== Event Sourcing
TODO, link core event sourcing docs.
Apart from the recommended event store, Photon, Muon Java provides an in memory, simplified event store that only implements
the core ingest and stream functionality of a muon compatible event store. You can read more about it link:InMemEventStore.html[here]
This is most useful during building tests.
=== Event Based Testing
If you adopt the Event system (which requires an event store such as _Photon_), then you will design your system domains
as a set of _Events_, and your business processes as a set of causal relationships between them.
```
Update doc structure. ignore multicast on TC | ```asciidoc
== Event Based Systems
=== Event Sourcing
TODO, link core event sourcing docs.
Apart from the recommended event store, Photon, Muon Java provides an in memory, simplified event store that only implements
the core ingest and stream functionality of a muon compatible event store. You can read more about it link:InMemEventStore.html[here]
This is most useful during building tests.
=== Event Based Testing
If you adopt the Event system (which requires an event store such as _Photon_), then you will design your system domains
as a set of _Events_, and your business processes as a set of causal relationships between them.
``` |
34bd7334-cf8e-4615-b1b8-1dd0df0e0e3d | {
"language": "AsciiDoc"
} | ```asciidoc
include::_attributes.adoc[]
Spring Cloud Zookeeper provides http://zookeeper.apache.org/[Apache Zookeeper] integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with Zookeeper. The patterns provided include Service Discovery and Distributed Configuration.
## Features
* Service Discovery: instances can be registered with Zookeeper and clients can discover the instances using Spring-managed beans
* Supports Ribbon, the client side load-balancer via Spring Cloud Netflix
* Supports Zuul, a dynamic router and filter via Spring Cloud Netflix
* Distributed Configuration: using Zookeeper as a data store
## Quick Start
As long as Spring Cloud Zookeeper, http://curator.apache.org/[Apache Curator] and the Zookeeper Java Client are on the
classpath any Spring Boot application with `@EnableDiscoveryClient` will try to contact a Zookeeper
agent on `localhost:2181` (the default value of
`zookeeper.connectString`).
```java
@Configuration
@EnableAutoConfiguration
@EnableDiscoveryClient
@RestController
public class Application {
@RequestMapping("/")
public String home() {
return "Hello World";
}
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
```
A local Zookeeper server must be running. See the http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html[Zookeeper documentation] on how to run a Zookeeper server.
```
Fix Zookeeper link in documentation | ```asciidoc
include::_attributes.adoc[]
Spring Cloud Zookeeper provides http://zookeeper.apache.org/[Apache Zookeeper] integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with Zookeeper. The patterns provided include Service Discovery and Distributed Configuration.
## Features
* Service Discovery: instances can be registered with Zookeeper and clients can discover the instances using Spring-managed beans
* Supports Ribbon, the client side load-balancer via Spring Cloud Netflix
* Supports Zuul, a dynamic router and filter via Spring Cloud Netflix
* Distributed Configuration: using Zookeeper as a data store
## Quick Start
As long as Spring Cloud Zookeeper, http://curator.apache.org/[Apache Curator] and the Zookeeper Java Client are on the
classpath any Spring Boot application with `@EnableDiscoveryClient` will try to contact a Zookeeper
agent on `localhost:2181` (the default value of
`zookeeper.connectString`).
```java
@Configuration
@EnableAutoConfiguration
@EnableDiscoveryClient
@RestController
public class Application {
@RequestMapping("/")
public String home() {
return "Hello World";
}
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
```
A local Zookeeper server must be running. See the http://zookeeper.apache.org/[Zookeeper documentation] on how to run a Zookeeper server.
``` |
410b9cdf-a168-4890-a8a6-8f90947604e1 | {
"language": "AsciiDoc"
} | ```asciidoc
= Using the Kafka controller
:page-sidebar: apim_3_x_sidebar
:page-permalink: apim/3.x/apim_publishersme_using_kafka.html
:page-folder: apim/user-guide/publisher
:page-layout: apim3x
== Overview
This section describes the basic usage of the Kafka controller - producing and consuming messages.
=== Producing messages
Using the HTTP `POST` command to the example endpoint `https://apim-3-x-x-gateway.cloud.gravitee.io/kafka/messages`, you can send a message with the followig structure:
[source,json]
----
https://apim-3-13-x-gateway.cloud.gravitee.io/kafka/messages
{
"messages": [
{
"key": "key",
"value": {
"val1": "hello"
}
}
]
}
----
=== Consuming messages
Using the HTTP `GET` command to the example endpoint `https://apim-3-x-x-gateway.cloud.gravitee.io/kafka/messages`, you can receive any available messages.```
Fix link and snippet url | ```asciidoc
= Using the Kafka controller
:page-sidebar: apim_3_x_sidebar
:page-permalink: apim/3.x/apim_publisherguide_using_kafka.html
:page-folder: apim/user-guide/publisher
:page-layout: apim3x
== Overview
This section describes the basic usage of the Kafka controller - producing and consuming messages.
=== Producing messages
Using the HTTP `POST` command to the example endpoint `https://api.company.com/kafka/messages`, you can send a message with the followig structure:
[source,json]
----
https://api.company.com/kafka/messages
{
"messages": [
{
"key": "key",
"value": {
"val1": "hello"
}
}
]
}
----
=== Consuming messages
Using the HTTP `GET` command to the example endpoint `https://api.company.com/kafka/messages`, you can receive any available messages.``` |
f5b6908b-7be0-4f6f-97c5-0cb0cecd95c4 | {
"language": "AsciiDoc"
} | ```asciidoc
= EB4J
:doctype: article
:docinfo:
:toc:
:toclevels: 2
include::about.adoc[]
include::links.adoc[]
include::reports.adoc[]
```
Add link to top project page. | ```asciidoc
= EB4J
:doctype: article
:docinfo:
:toc:
:toclevels: 2
IMPORTANT: link:https://github.com/eb4j/eb4j[View on GitHub]
| link:https://eb4j.github.io/[Top project page]
include::about.adoc[]
include::links.adoc[]
include::reports.adoc[]
``` |
5aa8acc1-50f8-4940-8e7b-64f38a201bb1 | {
"language": "AsciiDoc"
} | ```asciidoc
[[vertx:setup]]
== *vertx:setup*
This goal adds the Vert.x Maven Plugin to your `pom.xml` file. The plugin is configured with a default configuration.
=== Example
[source,subs="attributes"]
----
mvn io.fabric8:vertx-maven-plugin:{version}:setup
----
The setup goal by default uses the plugin property _vertx-core-version_
from the plugin properties file vertx-maven-plugin.properties as the vert.x version of the project,
if you wish to override the vertx version, then you can run the same command as above with `-DvertxVersion=<your-vertx-version>`
e.g.
[source,subs="attributes-with-version"]
----
mvn io.fabric8:vertx-maven-plugin:{version}:setup -DvertxVersion=3.4.0-SNAPSHOT
----
This will configure the vert.x and its dependencies to `3.4.0-SNAPSHOT` i.e. Maven project property `vertx.version`
set to `3.4.0-SNAPSHOT`
```
Extend documentaiton about the setup goal | ```asciidoc
[[vertx:setup]]
== *vertx:setup*
This goal adds the Vert.x Maven Plugin to your `pom.xml` file. The plugin is configured with a default configuration.
=== Example
[source,subs="attributes"]
----
mvn io.fabric8:vertx-maven-plugin:{version}:setup
----
The setup goal by default uses the plugin property _vertx-core-version_
from the plugin properties file vertx-maven-plugin.properties as the vert.x version of the project,
if you wish to override the vertx version, then you can run the same command as above with `-DvertxVersion=<your-vertx-version>`
e.g.
[source,subs="attributes-with-version"]
----
mvn io.fabric8:vertx-maven-plugin:{version}:setup -DvertxVersion=3.4.0-SNAPSHOT
----
This will configure the vert.x and its dependencies to `3.4.0-SNAPSHOT` i.e. Maven project property `vertx.version`
set to `3.4.0-SNAPSHOT`
You can also generate a project if you don't have a `pom.xml` file:
[source,subs="attributes"]
----
mvn io.fabric8:vertx-maven-plugin:{version}:setup \
-DprojectGroupId=org.acme \
-DprojectArtifactId=acme-project \
-DprojectVersion=1.0-SNAPSHOT \ # default to 1.0-SNAPSHOT
-Dverticle=org.acme.Foo \
-Ddependencies=web,jmx,mongo
----
The `verticle` parameter creates a new verticle class file.
The `dependencies` parameters specifies the Vert.x dependencies you need.
``` |
bc2457e6-49f4-415d-a265-f3e43c6ebcce | {
"language": "AsciiDoc"
} | ```asciidoc
[[file-descriptors]]
=== File Descriptors
[NOTE]
This is only relevant for Linux and macOS and can be safely ignored if running
Elasticsearch on Windows. On Windows that JVM uses an
https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx[API]
limited only by available resources.
Elasticsearch uses a lot of file descriptors or file handles. Running out of
file descriptors can be disastrous and will most probably lead to data loss.
Make sure to increase the limit on the number of open files descriptors for
the user running Elasticsearch to 65,536 or higher.
For the `.zip` and `.tar.gz` packages, set <<ulimit,`ulimit -n 65536`>> as
root before starting Elasticsearch, or set `nofile` to `65536` in
<<limits.conf,`/etc/security/limits.conf`>>.
RPM and Debian packages already default the maximum number of file
descriptors to 65536 and do not require further configuration.
You can check the `max_file_descriptors` configured for each node
using the <<cluster-nodes-stats>> API, with:
[source,js]
--------------------------------------------------
GET _nodes/stats/process?filter_path=**.max_file_descriptors
--------------------------------------------------
// CONSOLE
```
Document JVM option MaxFDLimit for macOS () | ```asciidoc
[[file-descriptors]]
=== File Descriptors
[NOTE]
This is only relevant for Linux and macOS and can be safely ignored if running
Elasticsearch on Windows. On Windows that JVM uses an
https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx[API]
limited only by available resources.
Elasticsearch uses a lot of file descriptors or file handles. Running out of
file descriptors can be disastrous and will most probably lead to data loss.
Make sure to increase the limit on the number of open files descriptors for
the user running Elasticsearch to 65,536 or higher.
For the `.zip` and `.tar.gz` packages, set <<ulimit,`ulimit -n 65536`>> as
root before starting Elasticsearch, or set `nofile` to `65536` in
<<limits.conf,`/etc/security/limits.conf`>>.
On macOS, you must also pass the JVM option `-XX:-MaxFDLimit`
to Elasticsearch in order for it to make use of the higher file descriptor limit.
RPM and Debian packages already default the maximum number of file
descriptors to 65536 and do not require further configuration.
You can check the `max_file_descriptors` configured for each node
using the <<cluster-nodes-stats>> API, with:
[source,js]
--------------------------------------------------
GET _nodes/stats/process?filter_path=**.max_file_descriptors
--------------------------------------------------
// CONSOLE
``` |
e3bfa201-81e0-42bf-8545-d74b27ba058e | {
"language": "AsciiDoc"
} | ```asciidoc
[id="persistent-storage-csi-snapshots"]
= CSI volume snapshots
include::modules/common-attributes.adoc[]
:context: persistent-storage-csi-snapshots
toc::[]
This document describes how to use volume snapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in {product-title}. Familiarity with xref:../../storage/understanding-persistent-storage.adoc#persistent-volumes_understanding-persistent-storage[persistent volumes] is suggested.
:FeatureName: CSI volume snapshot
include::modules/technology-preview.adoc[leveloffset=+0]
include::modules/persistent-storage-csi-snapshots-overview.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-controller-sidecar.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-operator.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-provision.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-create.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-delete.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-restore.adoc[leveloffset=+1]
```
Remove TP note in CSI snapshots | ```asciidoc
[id="persistent-storage-csi-snapshots"]
= CSI volume snapshots
include::modules/common-attributes.adoc[]
:context: persistent-storage-csi-snapshots
toc::[]
This document describes how to use volume snapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in {product-title}. Familiarity with xref:../../storage/understanding-persistent-storage.adoc#persistent-volumes_understanding-persistent-storage[persistent volumes] is suggested.
include::modules/persistent-storage-csi-snapshots-overview.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-controller-sidecar.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-operator.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-provision.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-create.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-delete.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-snapshots-restore.adoc[leveloffset=+1]
``` |
4b9f90ef-9c0b-4cfe-b818-50585e0f8327 | {
"language": "AsciiDoc"
} | ```asciidoc
= Artificial Intelligence for Spring Boot
:awestruct-description: Learn how to use OptaPlanner (open source, java) for Artificial Intelligence planning optimization on Spring Boot.
:awestruct-layout: compatibilityBase
:awestruct-priority: 1.0
:awestruct-related_tag: spring
:showtitle:
OptaPlanner has a Spring Boot Starter to get up and running quickly.
Usage is similar to the link:quarkus.html[Quarkus] extension, but without the performance benefits.
video::U2N02ReT9CI[youtube]
== Guide
**https://github.com/ge0ffrey/getting-started-guides/blob/gs-constraint-solving-ai-optaplanner/README.adoc[Read the OptaPlanner on Spring Boot guide.]**
== Quick start
Run the quick start yourself:
. Download https://github.com/ge0ffrey/gs-constraint-solving-ai-optaplanner-backup/tree/master/complete[the source code].
. Run `./mvnw clean install`
. Open http://localhost:8080 in your browser
```
Fix wrong link (even if it's temporally) | ```asciidoc
= Artificial Intelligence for Spring Boot
:awestruct-description: Learn how to use OptaPlanner (open source, java) for Artificial Intelligence planning optimization on Spring Boot.
:awestruct-layout: compatibilityBase
:awestruct-priority: 1.0
:awestruct-related_tag: spring
:showtitle:
OptaPlanner has a Spring Boot Starter to get up and running quickly.
Usage is similar to the link:quarkus.html[Quarkus] extension, but without the performance benefits.
video::U2N02ReT9CI[youtube]
== Guide
**https://github.com/ge0ffrey/getting-started-guides/blob/gs-constraint-solving-ai-optaplanner/README.adoc[Read the OptaPlanner on Spring Boot guide.]**
== Quick start
Run the quick start yourself:
. Download https://github.com/ge0ffrey/getting-started-guides/tree/gs-constraint-solving-ai-optaplanner/complete[the source code].
. Run `./mvnw clean install`
. Open http://localhost:8080 in your browser
``` |
732733f7-f66b-4c5f-b8b7-50c36e0a742a | {
"language": "AsciiDoc"
} | ```asciidoc
= Contributing to Hawkular Metrics
Before contributing to Hawkular Metrics, it might be useful to read the
http://www.hawkular.org/docs/dev/development.html[How to develop on Hawkular] page on Hawkular website.
== Code style
Hawkular Metrics is a Hawkular component, so project level conventions apply here. That said, there are some
Metrics specific ones.
=== Logging
When working with the JBoss Logging API:
* Group all `INFO` level and above messages http://git.io/vnDW4[in one interface] per logical module for maintainability
* Instead of creating a logger instance in the message logger interface, use a http://git.io/vnDWr[helper class] to get
a logger instance with proper category
* Use existing message logger instance to log trace/debug messages instead of creating a separate logger instance
The logging `projectCode` is `HAWKMETRICS`.
Note that until we no longer need to support EAP 6.4,
http://lists.jboss.org/pipermail/hawkular-dev/2015-March/000378.html[we must not use primitive arguments in logging interfaces].
Logback is the logging backend for tests. It allows to http://git.io/vnDlr[set log level with a system property]
while still having a default value. No Maven filtering/replace dance involved.
```
Rephrase Metrics presentation a bit | ```asciidoc
= Contributing to Hawkular Metrics
Before contributing to Hawkular Metrics, it might be useful to read the
http://www.hawkular.org/docs/dev/development.html[How to develop on Hawkular] page on Hawkular website.
== Code style
Hawkular Metrics is a Hawkular subproject, so top level conventions apply here. That said, there are some
Metrics specific ones.
=== Logging
When working with the JBoss Logging API:
* Group all `INFO` level and above messages http://git.io/vnDW4[in one interface] per logical module for maintainability
* Instead of creating a logger instance in the message logger interface, use a http://git.io/vnDWr[helper class] to get
a logger instance with proper category
* Use existing message logger instance to log trace/debug messages instead of creating a separate logger instance
The logging `projectCode` is `HAWKMETRICS`.
Note that until we no longer need to support EAP 6.4,
http://lists.jboss.org/pipermail/hawkular-dev/2015-March/000378.html[we must not use primitive arguments in logging interfaces].
Logback is the logging backend for tests. It allows to http://git.io/vnDlr[set log level with a system property]
while still having a default value. No Maven filtering/replace dance involved.
``` |
b68ae270-775e-41ba-8b52-9a85afb7361d | {
"language": "AsciiDoc"
} | ```asciidoc
== License
The gem is available as open source under the terms of the
http://opensource.org/licenses/MIT[MIT License].
The gem includes "Iliad", a classical masterpiece by Homer, translated to
English by Samuel Butler. A work is in
https://wiki.creativecommons.org/wiki/Public_domain[public domain] in USA and
in almost whole world (if not whole world) as the translator has died over
100 years ago (not to mention the original author). It has been downloaded from
Project Gutenberg, more details about the work can be found
http://www.gutenberg.org/ebooks/2199[there].
```
Add metadata and badges to the Readme | ```asciidoc
Well Read Faker
===============
:homepage: https://github.com/skalee/well_read_faker
image:https://img.shields.io/gem/v/well_read_faker.svg[
Version, link="https://rubygems.org/gems/well_read_faker"]
image:https://img.shields.io/travis/skalee/well_read_faker/master.svg[
Build Status, link="https://travis-ci.org/skalee/well_read_faker/branches"]
image:https://img.shields.io/gemnasium/skalee/well_read_faker.svg[
Dependencies, link="https://gemnasium.com/skalee/well_read_faker"]
image:https://img.shields.io/codeclimate/github/skalee/well_read_faker.svg[
Code Climate, link="https://codeclimate.com/github/skalee/well_read_faker"]
image:http://img.shields.io/coveralls/skalee/well_read_faker.svg[
Test Coverage, link="https://coveralls.io/r/skalee/well_read_faker"]
:toc:
== License
The gem is available as open source under the terms of the
http://opensource.org/licenses/MIT[MIT License].
The gem includes "Iliad", a classical masterpiece by Homer, translated to
English by Samuel Butler. The work is in the
https://wiki.creativecommons.org/wiki/Public_domain[public domain] in the USA and
in almost the whole world (if not the whole world), as the translator died over
100 years ago (not to mention the original author). It has been downloaded from
Project Gutenberg; more details about the work can be found
http://www.gutenberg.org/ebooks/2199[there].
``` |
34fe22bd-3f3b-4837-bb5e-7abf6a6d7ded | {
"language": "AsciiDoc"
} | ```asciidoc
:title: scribble.github.io
== Scribble Website
image:https://travis-ci.org/scribble/scribble.github.io.svg?branch=pages["Build Status", link="https://travis-ci.org/scribble/scribble.github.io"]
=== Description
When pushing a commit into this branch, the site is automatically built and published to http://www.scribble.org
Most likely the content of the website you are looking for is link:src/main/jbake/content/[here].
=== Building the site on localhost
. `git clone https://github.com/scribble/scribble.github.io`
. `git checkout pages`
. Provided Maven is installed, run one of the following commands:
* `mvn jbake:generate` Simply runs jbake and generate the site into `target/website` dir.
* `mvn jbake:watch` Polls a folder and run jbake whenever changes happen.
* `mvn jbake:inline` Same as watch, but also launches an embedded winstone container that by default listens on http://localhost:8080. Additionally you may want to use `-Djbake.port=X` `-Djbake.listenAddress=Y`.
NOTE: `mvn install` will most likely fail on localhost, because it needs the OAuth token for GitHub.
```
Test update to check publication of website | ```asciidoc
:title: scribble.github.io
== Scribble Website
image:https://travis-ci.org/scribble/scribble.github.io.svg?branch=pages["Build Status", link="https://travis-ci.org/scribble/scribble.github.io"]
=== Description
When pushing a commit into this branch, the site is automatically built and published to http://www.scribble.org
Most likely the content of the website you are looking for is link:src/main/jbake/content/[here].
=== Building the site on localhost
. `git clone https://github.com/scribble/scribble.github.io`
. `git checkout pages`
. Provided Maven is installed, run one of the following commands:
* `mvn jbake:generate` Simply runs jbake and generate the site into `target/website` dir.
* `mvn jbake:watch` Polls a folder and run jbake whenever changes happen.
* `mvn jbake:inline` Same as watch, but also launches an embedded winstone container that by default listens on http://localhost:8080. Additionally you may want to use `-Djbake.port=X` `-Djbake.listenAddress=Y`.
NOTE: `mvn install` will most likely fail on localhost, because it needs the OAuth token for GitHub.
``` |
c93246f4-e3b5-4426-a658-716817bdb3ba | {
"language": "AsciiDoc"
} | ```asciidoc
= RecordTrac Documentation
{% macro link(underscore_name) %}
link:{{ underscore_name }}.html[{{titleize(underscore_name)}}]
{% endmacro %}
== Welcome
This is the home page for the documentation of the RecordTrac app.
NOTE: For help contact ...
== What This Is
* {{ link('overview-of-app') }}
* {{ link('why-it-was-built') }}
* {{ link('research-and-references') }}
* {{ link('principles') }}
== How Is This Built
Learn about the technical details of implementation so you can see what's involved.
== API Documentation
* {{ link('api-documentation') }}
== Developer Documentation
How to modify and develop the app's source code.
=== Source Code
* https://github.com/codeforamerica/public-records[Source Code (Github)]
* https://github.com/codeforamerica/public-records/issues[Issue Tracker (Github)]
=== Developer Docs
* {{ link('db-helpers') }}
```
Fix issue with bulleted lists. | ```asciidoc
= RecordTrac Documentation
{% macro link(underscore_name) -%}
link:{{ underscore_name }}.html[{{titleize(underscore_name)}}]
{% endmacro -%}
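{# The trailing "-" in "-%}" strips the whitespace/newline after the tag, so
   each "* {{ link(...) }}" call renders on a single line and AsciiDoc can
   recognize it as a bullet item. #}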
== Welcome
This is the home page for the documentation of the RecordTrac app.
NOTE: For help contact ...
== What This Is
* {{ link('overview-of-app') }}
* {{ link('why-it-was-built') }}
* {{ link('research-and-references') }}
* {{ link('principles') }}
== How Is This Built
Learn about the technical details of implementation so you can see what's involved.
== API Documentation
* {{ link('api-documentation') }}
== Developer Documentation
How to modify and develop the app's source code.
=== Source Code
* https://github.com/codeforamerica/public-records[Source Code (Github)]
* https://github.com/codeforamerica/public-records/issues[Issue Tracker (Github)]
=== Developer Docs
* {{ link('db-helpers') }}
``` |
60614417-2e78-43c7-ad60-320ba5bd28f8 | {
"language": "AsciiDoc"
} | ```asciidoc
= CrashBurnFree
CrashBurnFree is a free crash reporting tool for Java (and potentially other languages).
The name is a pun, as (a) it's _free_ to use and modify and (b) one over-arching goal of Software Checking Tools
is to _free_ you from crashing and burning...
It is composed of the following sub-projects:
. javaclient - the Java client code; no external dependencies
. javaserver - an implementation of a Java EE (REST JAXRS) server
```
Bring into synch with reality | ```asciidoc
= CrashBurnFree
CrashBurnFree is a free crash reporting tool for Java (and potentially other languages).
The name is a pun, as (a) it's _free_ to use and modify and (b) one over-arching goal of Software Checking Tools
is to _free_ you from crashing and burning...
It is composed of the following sub-projects:
. javaclient - the Java client code; no external dependencies
. javaserver - an implementation of a Java EE (REST JAXRS) server
Status: javaclient seems to "work", but javaserver is, ahem, not quite
finished. Or at least not tried. And, there needs to be a "common"
library project for code shared between them.
``` |
729ba8da-acea-4de2-a463-1659d5b9cb44 | {
"language": "AsciiDoc"
} | ```asciidoc
= Changelog
== Version 0.3.0
Date: unreleased
- First version split from the monolithic buddy package.
```
Update changelog with latest changes on kdf. | ```asciidoc
= Changelog
== Version 0.4.0
Date: unreleased
- Replace record usage in kdf ns with reify.
- Rename kdf protocol from KDFType to IKDF
- Rename kdf protocol method names to more consistent ones.
- Add support for nio ByteBuffer for kdf.
== Version 0.3.0
Date: 2015-01-18
- First version splitted from monolitic buddy package.``` |
f5b02703-15b0-4bac-9079-27b9fcc1863c | {
"language": "AsciiDoc"
} | ```asciidoc
[[sample-dashboards]]
== Sample dashboards
In order to make it as easy as possible to get application performance insights
from packet data, we provide a few sample Kibana dashboards. The
dashboards are maintained in this
https://github.com/elastic/beats-dashboards[GitHub repository].
Automatically load all the sample dashboards in Kibana by following the TODO[steps].
image:./images/packetbeat-statistics.png[Topbeat statistics]
These dashboards are provided as examples; we recommend that you
http://www.elastic.co/guide/en/kibana/current/dashboard.html[customize] them
to your needs.
```
Add back the old text | ```asciidoc
[[sample-dashboards]]
== Sample dashboards
In order to make it as easy as possible to get application performance insights
from packet data, we provide a few sample Kibana dashboards. The
dashboards are maintained in this
https://github.com/elastic/beats-dashboards[GitHub repository], which also
includes instructions for loading them.
Automatically load all the sample dashboards in Kibana by following the TODO[steps].
image:./images/packetbeat-statistics.png[Topbeat statistics]
These dashboards are provided as examples; we recommend that you
http://www.elastic.co/guide/en/kibana/current/dashboard.html[customize] them
to your needs.
``` |
64fe5746-5bf0-4e7a-a3f0-18358b587df0 | {
"language": "AsciiDoc"
} | ```asciidoc
[[elasticsearch-reference]]
= Elasticsearch Reference
:version: 3.0.0-beta1
:major-version: 3.x
:branch: 3.0
:jdk: 1.8.0_25
:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current
:plugins: https://www.elastic.co/guide/en/elasticsearch/plugins/master
:issue: https://github.com/elastic/elasticsearch/issues
:pull: https://github.com/elastic/elasticsearch/pull
include::getting-started.asciidoc[]
include::setup.asciidoc[]
include::migration/index.asciidoc[]
include::api-conventions.asciidoc[]
include::docs.asciidoc[]
include::search.asciidoc[]
include::aggregations.asciidoc[]
include::indices.asciidoc[]
include::cat.asciidoc[]
include::cluster.asciidoc[]
include::query-dsl.asciidoc[]
include::mapping.asciidoc[]
include::analysis.asciidoc[]
include::modules.asciidoc[]
include::index-modules.asciidoc[]
include::testing.asciidoc[]
include::glossary.asciidoc[]
//////////////////////////////////////////
include::release-notes.asciidoc[]
//////////////////////////////////////////
include::redirects.asciidoc[]
```
Fix :issue: and :pull: asciidoc replacements | ```asciidoc
[[elasticsearch-reference]]
= Elasticsearch Reference
:version: 3.0.0-beta1
:major-version: 3.x
:branch: 3.0
:jdk: 1.8.0_25
:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current
:plugins: https://www.elastic.co/guide/en/elasticsearch/plugins/master
:issue: https://github.com/elastic/elasticsearch/issues/
:pull: https://github.com/elastic/elasticsearch/pull/
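// With the trailing slash, these attributes can be used as {issue}12345[#12345]
// or {pull}12345[#12345], expanding to the full GitHub issue or PR URL.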
include::getting-started.asciidoc[]
include::setup.asciidoc[]
include::migration/index.asciidoc[]
include::api-conventions.asciidoc[]
include::docs.asciidoc[]
include::search.asciidoc[]
include::aggregations.asciidoc[]
include::indices.asciidoc[]
include::cat.asciidoc[]
include::cluster.asciidoc[]
include::query-dsl.asciidoc[]
include::mapping.asciidoc[]
include::analysis.asciidoc[]
include::modules.asciidoc[]
include::index-modules.asciidoc[]
include::testing.asciidoc[]
include::glossary.asciidoc[]
//////////////////////////////////////////
include::release-notes.asciidoc[]
//////////////////////////////////////////
include::redirects.asciidoc[]
``` |
360e43d7-b343-4aa0-bf39-4a13ac9d3d65 | {
"language": "AsciiDoc"
} | ```asciidoc
= Edge
Please see the link:app/README.adoc[app/README.adoc] for building and running the app.
```
Use xref syntax for document links | ```asciidoc
= Edge
Please see the <<app/README#>> for building and running the app.
``` |
b493fdca-a758-4d28-ab02-5fc2c7dcc236 | {
"language": "AsciiDoc"
} | ```asciidoc
= Minimal-J
Java - but small.
image::doc/frontends.png[]
Minimal-J applications are
* Responsive to use on every device
* Straight forward to specify and implement and therefore
* Easy to plan and manage
=== Idea
Business applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules for how an application should behave and how it should be implemented.
Minimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.
== Technical Features
* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.
* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.
* Small: The minimalj.jar is still < 1MB
* Very few dependencies
* Applications can run standalone (like SpringBoot)
== Documentation
* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.
* link:doc/topics.adoc[Tutorial and examples] Informations for developers.
* link:doc/release_notes.adoc[Release Notes]
== Hello World
How to implement Hello World in Minimal-J:
link:_includes/ex-video.adoc[0VHz7gv6TpA]
=== Contact
* Bruno Eberhard, mailto:[email protected][[email protected]]
```
Include Hello World Youtube video | ```asciidoc
= Minimal-J
Java - but small.
image::doc/frontends.png[]
Minimal-J applications are
* Responsive to use on every device
* Straight forward to specify and implement and therefore
* Easy to plan and manage
=== Idea
Business applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules for how an application should behave and how it should be implemented.
Minimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.
== Technical Features
* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.
* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.
* Small: The minimalj.jar is still < 1MB
* Very few dependencies
* Applications can run standalone (like SpringBoot)
== Documentation
* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.
* link:doc/topics.adoc[Tutorial and examples] Informations for developers.
* link:doc/release_notes.adoc[Release Notes]
== Hello World
How to implement Hello World in Minimal-J:
video::0VHz7gv6TpA[youtube]
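// The video macro (video::<id>[youtube]) embeds a YouTube player directly in
// HTML outputs, instead of linking to a separate include file.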
=== Contact
* Bruno Eberhard, mailto:[email protected][[email protected]]
```
b1ba871e-c022-4fdb-9e99-775f31d8a7bd | {
"language": "AsciiDoc"
} | ```asciidoc
[[beats-reference]]
= Beats Platform Reference
:ES-version: 1.7.2
:Kibana-version: 4.1.2
:Dashboards-version: 1.0.0-beta3
include::./overview.asciidoc[]
include::./gettingstarted.asciidoc[]
include::./configuration.asciidoc[]
include::./command-line.asciidoc[]
include::./https.asciidoc[]
include::./newbeat.asciidoc[]
```
Update elasticsearch to version 1.7.3 and beat dashboards to beta4 branch. | ```asciidoc
[[beats-reference]]
= Beats Platform Reference
:ES-version: 1.7.3
:Kibana-version: 4.1.2
:Dashboards-version: 1.0.0-beta4
include::./overview.asciidoc[]
include::./gettingstarted.asciidoc[]
include::./configuration.asciidoc[]
include::./command-line.asciidoc[]
include::./https.asciidoc[]
include::./newbeat.asciidoc[]
``` |
cbe32d8d-377e-446a-b911-9eab5d944b53 | {
"language": "AsciiDoc"
} | ```asciidoc
= To-Do List
- mock_cone_detector creates infinite area and overflows h
- (*DONE*) new waypoints shorter than old don't delete existing waypoints
- adjust waypoints for start position and cone position
- cone area goes down when very close to cone
- (*DONE*) parameterize throttle and steering PWM values
- touch sensor does not work
- (*DONE*) cone detection in bright light does not work
- GUIDED mode does not work
- Encode PWM values or range set to use in the waypoints file
- If waypoint encountered before cone is seen, rover goes into HOLD mode
with no recovery. Needs to be fixed.
== Possible To-Do
- (*DONE*) Change from using WP_SPEED to CRUISE_SPEED. (Seems to be used by Vicky,
while WP_SPEED is not.)
- Have a way of manually triggering parameter reload
== Notes
MAV_CMD_DO_SET_HOME appears to reset the map origin, as well as zero the
offset between the map origin and base_link (for /mavros/local_position/pose
and /mavros/local_position/odom).
```
Update to-do document based on progress from yesterday | ```asciidoc
= To-Do List
- mock_cone_detector creates infinite area and overflows h
- (*DONE*) new waypoints shorter than old don't delete existing waypoints
- adjust waypoints for start position and cone position
- cone area goes down when very close to cone
- (*DONE*) parameterize throttle and steering PWM values
- (*DONE*) touch sensor does not work
- (*DONE*) cone detection in bright light does not work
- GUIDED mode does not work
- (*DONE*) Encode PWM values or range set to use in the waypoints file
- If waypoint encountered before cone is seen, rover goes into HOLD mode
with no recovery. Needs to be fixed.
== Possible To-Do
- (*DONE*) Change from using WP_SPEED to CRUISE_SPEED. (Seems to be used by Vicky,
while WP_SPEED is not.)
- Have a way of manually triggering parameter reload
== Notes
MAV_CMD_DO_SET_HOME appears to reset the map origin, as well as zero the
offset between the map origin and base_link (for /mavros/local_position/pose
and /mavros/local_position/odom).
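A hedged sketch of exercising this from the shell via mavros — the service name
and field layout are assumptions based on the mavros `cmd` plugin
(`mavros_msgs/CommandHome`), not something recorded in these notes:

[source,bash]
----
# Re-home to the current GPS fix; lat/lon/alt are ignored when current_gps is true.
rosservice call /mavros/cmd/set_home "{current_gps: true, latitude: 0.0, longitude: 0.0, altitude: 0.0}"
----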
``` |
854c1386-ee90-4ea0-9431-f9e9ba2f9732 | {
"language": "AsciiDoc"
} | ```asciidoc
= ClojureBridge San Francisco
ClojureBridge San Francisco
2017-09-16
:jbake-type: event
:jbake-edition: 2017
:jbake-link: http://www.clojurebridge.org/events/2017-09-16-san-francisco
:jbake-location: San Francisco, CA
:jbake-start: 2017-09-15
:jbake-end: 2017-09-16
ClojureBridge is a free 1-day workshop aimed at increasing the participation of women, transgender and non-binary gender individuals in the Clojure community. The workshop is intended for those new to programming as well as individuals with some programming experience who would like to explore programming using Clojure, a modern functional programming language.
The workshop will introduce participants to fundamental programming concepts and approaches.
```
Update clojurebridge sf 2017 url | ```asciidoc
= ClojureBridge San Francisco
ClojureBridge San Francisco
2017-09-15
:jbake-type: event
:jbake-edition: 2017
:jbake-link: http://www.clojurebridge.org/events/2017-09-15-san-francisco
:jbake-location: San Francisco, CA
:jbake-start: 2017-09-15
:jbake-end: 2017-09-16
ClojureBridge is a free 1-day workshop aimed at increasing the participation of women, transgender and non-binary gender individuals in the Clojure community. The workshop is intended for those new to programming as well as individuals with some programming experience who would like to explore programming using Clojure, a modern functional programming language.
The workshop will introduce participants to fundamental programming concepts and approaches.
``` |
83b606ed-c9cc-4da2-9fff-95056031a644 | {
"language": "AsciiDoc"
} | ```asciidoc
TinkerPop3
==========
image:https://raw.githubusercontent.com/tinkerpop/tinkerpop3/master/docs/static/images/tinkerpop3-splash.png[TinkerPop3]
* Build Project: `mvn clean install`
* Build AsciiDocs: `mvn process-resources -Dasciidoc`
* Deploy AsciiDocs: `mvn deploy -Dasciidoc`
* Deploy JavaDocs: `mvn deploy -Djavadocs`
[source,bash]
----
$ bin/gremlin.sh
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
gremlin>
----
```
Update readme for deploying javadoc. | ```asciidoc
TinkerPop3
==========
image:https://raw.githubusercontent.com/tinkerpop/tinkerpop3/master/docs/static/images/tinkerpop3-splash.png[TinkerPop3]
* Build Project: `mvn clean install`
* Build AsciiDocs: `mvn process-resources -Dasciidoc`
* Deploy AsciiDocs: `mvn deploy -Dasciidoc`
* Deploy JavaDocs: `mvn deploy -Djavadoc-upload`
[source,bash]
----
$ bin/gremlin.sh
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
gremlin>
----
```
9ff21642-23f4-44e5-97d3-4552b23bcc10 | {
"language": "AsciiDoc"
} | ```asciidoc
[[indices-delete-index]]
== Delete Index
The delete index API allows to delete an existing index.
[source,js]
--------------------------------------------------
$ curl -XDELETE 'http://localhost:9200/twitter/'
--------------------------------------------------
The above example deletes an index called `twitter`. Specifying an index,
alias or wildcard expression is required.
The delete index API can also be applied to more than one index, or on
all indices (be careful!) by using `_all` or `*` as index.
To disable the ability to delete indices via wildcards or `_all`,
set the `action.destructive_requires_name` setting in the config to `true`.
This setting can also be changed via the cluster update settings API.
Document index deletion using csv separated indices | ```asciidoc
[[indices-delete-index]]
== Delete Index
The delete index API allows to delete an existing index.
[source,js]
--------------------------------------------------
$ curl -XDELETE 'http://localhost:9200/twitter/'
--------------------------------------------------
The above example deletes an index called `twitter`. Specifying an index,
alias or wildcard expression is required.
The delete index API can also be applied to more than one index, either by using a comma-separated list, or on all indices (be careful!) by using `_all` or `*` as index.
To disable the ability to delete indices via wildcards or `_all`,
set the `action.destructive_requires_name` setting in the config to `true`.
This setting can also be changed via the cluster update settings API.
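For example, a short sketch combining both forms (the index names and the
settings call are illustrative only):

[source,js]
--------------------------------------------------
$ curl -XDELETE 'http://localhost:9200/twitter,blog/'
$ curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
    "transient": { "action.destructive_requires_name": true }
}'
--------------------------------------------------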
``` |
3ab87138-ef0b-4ea6-aba5-7b76c0f75525 | {
"language": "AsciiDoc"
} | ```asciidoc
= flexy-pool
Author <[email protected]>
v1.0.0, 2014-02-25
:toc:
:imagesdir: images
:homepage: http://vladmihalcea.com/
== Introduction
The flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.
This is very handy since most connection pools offer a limited set of dynamic configuration strategies.
== Features
* extensive connection pool support
** http://docs.codehaus.org/display/BTM/Home[Bitronix Transaction Manager]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP2]
** http://www.mchange.com/projects/c3p0/[C3P0]
** http://jolbox.com/[BoneCP]
** http://brettwooldridge.github.io/HikariCP/[HikariCP]
* statistics support
** source connection acquiring time histogram
** total connection acquiring time histogram
** retries attempts histogram
** maximum CP size histogram
** connection request count histogram
** connection lease time histogram
== Documentation
https://github.com/vladmihalcea/flexy-pool/wiki/Flexy-Pool-User-Guide[Flexy Pool User Guide]
== 1.0 Release TODO list
* explain jmx metrics and add real-life case study
* Maven OSS release
```
Remove toc and image ref | ```asciidoc
= flexy-pool
Author <[email protected]>
v1.0.0, 2014-02-25
:homepage: http://vladmihalcea.com/
== Introduction
The flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.
This is very handy since most connection pools offer a limited set of dynamic configuration strategies.
== Features
* extensive connection pool support
** http://docs.codehaus.org/display/BTM/Home[Bitronix Transaction Manager]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP2]
** http://www.mchange.com/projects/c3p0/[C3P0]
** http://jolbox.com/[BoneCP]
** http://brettwooldridge.github.io/HikariCP/[HikariCP]
* statistics support
** source connection acquiring time histogram
** total connection acquiring time histogram
** retries attempts histogram
** maximum CP size histogram
** connection request count histogram
** connection lease time histogram
== Documentation
https://github.com/vladmihalcea/flexy-pool/wiki/Flexy-Pool-User-Guide[Flexy Pool User Guide]
== 1.0 Release TODO list
* explain jmx metrics and add real-life case study
* Maven OSS release
``` |
e2dffaf3-976d-410a-bf96-5e5d31676454 | {
"language": "AsciiDoc"
} | ```asciidoc
= Eclipse Microprofile Metrics
This specification aims at providing a unified way for Microprofile servers to
export Monitoring data ("Telemetry") to management agents and also a unified
Java API, that all (application) programmers can use to expose their telemetry
data.
For the specification see http:spec/metrics_spec.adoc[Metrics spec]
For discussions visit the https://groups.google.com/forum/#!forum/microprofile[Microprofile Google group].```
Use correct syntax for link to spec. | ```asciidoc
= Eclipse Microprofile Metrics
This specification aims at providing a unified way for Microprofile servers to
export Monitoring data ("Telemetry") to management agents and also a unified
Java API, that all (application) programmers can use to expose their telemetry
data.
For the specification see link:spec/metrics_spec.adoc[Metrics spec]
For discussions visit the https://groups.google.com/forum/#!forum/microprofile[Microprofile Google group].``` |
1ea35fd1-ed12-40c1-8cf2-e986024ea94f | {
"language": "AsciiDoc"
} | ```asciidoc
// Module included in the following assemblies:
//
// * serverless/serverless-support.adoc
[id="serverless-about-collecting-data_{context}"]
= About collecting {ServerlessProductName} data
You can use the `oc adm must-gather` CLI command to collect information about your cluster, including features and objects associated with {ServerlessProductName}.
To collect {ServerlessProductName} data with `must-gather`, you must specify the {ServerlessProductName} image.
.Procedure
* Enter the command:
+
[source,terminal]
----
$ oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8
----
```
Add image tag to must-gather images | ```asciidoc
// Module included in the following assemblies:
//
// * serverless/serverless-support.adoc
[id="serverless-about-collecting-data_{context}"]
= About collecting {ServerlessProductName} data
You can use the `oc adm must-gather` CLI command to collect information about your cluster, including features and objects associated with {ServerlessProductName}. To collect {ServerlessProductName} data with `must-gather`, you must specify the {ServerlessProductName} image and the image tag for your installed version of {ServerlessProductName}.
.Procedure
* Collect data by using the `oc adm must-gather` command:
+
[source,terminal]
----
$ oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:<image_version_tag>
----
+
.Example command
[source,terminal]
----
$ oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:1.14.0
----
``` |
b3dac0cc-ccfa-47f4-a45c-2fbc1f45fc35 | {
"language": "AsciiDoc"
} | ```asciidoc
////
Accessible at:
https://ecp-candle.github.io/Supervisor/home.html
////
////
This prevents ^M from appearing in the output:
////
:miscellaneous.newline: \n
= CANDLE Supervisor Home Page
This is the main home page for the CANDLE Supervisor effort, with links to workflows and other supporting information.
== Key deadlines
1. May 15: End-to-end demo of hyperparameter harness
== Swift installations
* http://swift-lang.github.io/swift-t/sites.html#_jlse_knl[JLSE KNL]
* http://swift-lang.github.io/swift-t/sites.html#_cori[Cori]
* http://swift-lang.github.io/swift-t/sites.html#cooley_candle
```
Document more Swift/Python installations for CANDLE | ```asciidoc
////
Accessible at:
https://ecp-candle.github.io/Supervisor/home.html
You can compile this locally with:
$ ./adoc.sh README.adoc
////
////
This prevents ^M from appearing in the output:
////
:miscellaneous.newline: \n
= CANDLE Supervisor Home Page
This is the main home page for the CANDLE Supervisor effort, with links to workflows and other supporting information.
== Key deadlines
1. May 15: End-to-end demo of hyperparameter harness
== Swift installations
* http://swift-lang.github.io/swift-t/sites.html#_cori[Cori]
+
This uses the system-installed Python with ML libs at: +
+/usr/common/software/python/2.7-anaconda/envs/deeplearning+
* http://swift-lang.github.io/swift-t/sites.html#cooley_candle[Cooley]
+
This uses the system-installed Python with ML libs at: +
+/soft/analytics/conda/env/Candle_ML+
* http://swift-lang.github.io/swift-t/sites.html#_jlse_knl[JLSE KNL]
+
This does not yet have Python.
* JLSE Prasanna +
This uses a VirtualEnv Python at +/home/pbalapra/.virtualenvs+
** +~wozniak/Public/sfw/icc/swift-t-pb/stc/bin+
``` |
ed91def1-b8e1-47b2-9b96-53a46c01c425 | {
"language": "AsciiDoc"
} | ```asciidoc
# Janitor image:https://travis-ci.org/techdev-solutions/janitor.svg?branch=master["Build Status",link="https://travis-ci.org/techdev-solutions/janitor"]
An application to perform cleanup work using the https://getpocket.com[Pocket API].
Review your existing items via the web interface and have Janitor archive old items once per day.
## API Documentation
The documentation for the Kotlin API bindings can be found https://techdev-solutions.github.io/janitor/pocket-api/[here].
## Tutorial (German)
Kotlin: Ein Tutorial für Einsteiger @ JAXenter::
* https://jaxenter.de/kotlin-tutorial-48156[Part 1]
* https://jaxenter.de/kotlin-ein-tutorial-fuer-einsteiger-teil-2-48587[Part 2]
## Screenshots
### Item Overview
image:images/items.png?raw=true[Item Overview]
### Error View
image:images/error.png?raw=true[Error View]```
Add link to 3rd part of JAXenter tutorial | ```asciidoc
# Janitor image:https://travis-ci.org/techdev-solutions/janitor.svg?branch=master["Build Status",link="https://travis-ci.org/techdev-solutions/janitor"]
An application to perform cleanup work using the https://getpocket.com[Pocket API].
Review your existing items via the web interface and have Janitor archive old items once per day.
## API Documentation
The documentation for the Kotlin API bindings can be found https://techdev-solutions.github.io/janitor/pocket-api/[here].
## Tutorial (German)
Kotlin: Ein Tutorial für Einsteiger @ JAXenter::
* https://jaxenter.de/kotlin-tutorial-48156[Part 1]
* https://jaxenter.de/kotlin-ein-tutorial-fuer-einsteiger-teil-2-48587[Part 2]
* https://jaxenter.de/kotlin-ein-tutorial-fuer-einsteiger-teil-3-48967[Part 3]
## Screenshots
### Item Overview
image:images/items.png?raw=true[Item Overiew]
### Error View
image:images/error.png?raw=true[Error View]``` |
6c70c507-41a6-4201-ba22-01736357233c | {
"language": "AsciiDoc"
} | ```asciidoc
James Glasbrenner's dotfiles
============================
James Glasbrenner <[email protected]>
July 27, 2017
My dotfiles repo.
License
-------
All content is licensed under the terms of link:LICENSE[The Unlicense License].
```
Add bootstrapping instructions for how to stow stow | ```asciidoc
James Glasbrenner's dotfiles
============================
James Glasbrenner <[email protected]>
July 27, 2017
My dotfiles repo.
Bootstrapping
-------------
Stow 2.3.0 is included in this repo for bootstrapping purposes.
To stow stow after cloning this repository to `$HOME/.dotfiles`, run
.bash
----------------------------------------------
PERL5LIB=$HOME/.dotfiles/stow/stow/lib/perl5 \
$HOME/.dotfiles/stow/stow/bin/stow \
-d $HOME/.dotfiles/stow \
-t /home/glasbren/.local \
stow
----------------------------------------------
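Afterwards, a quick sanity check — assuming the stow target's `bin` directory
(here `~/.local/bin`) is already on your `PATH`:

.bash
----------------------------------------------
stow --version
----------------------------------------------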
License
-------
All content is licensed under the terms of link:LICENSE[The Unlicense License].
``` |
be958303-7a06-4cb8-84b0-019a5706aac3 | {
"language": "AsciiDoc"
} | ```asciidoc
== asciidoctor-fopub-embed-svg-example
An example for embedding SVG images in an asciidoctor document.
=== Setup
----
bundle install
----
=== How to make the target pdf file
Running the command below makes the target pdf file named "class_diagram.pdf" out of "class_diagram.adoc"
----
bundle exec rake
----
=== Watch *.adoc files
Running the command below, rake watches "\*.adoc" files and builds "\*.adoc" files whenever they are modified.
----
bundle exec rake watch
----
```
Remove the backslash for the second asterisk | ```asciidoc
== asciidoctor-fopub-embed-svg-example
An example for embedding SVG images in an asciidoctor document.
=== Setup
----
bundle install
----
=== How to make the target pdf file
Running the command below makes the target pdf file named "class_diagram.pdf" out of "class_diagram.adoc"
----
bundle exec rake
----
=== Watch *.adoc files
Running the command below, rake watches "\*.adoc" files and builds "*.adoc" files whenever they are modified.
----
bundle exec rake watch
----
``` |
7861ba28-167b-4859-a67e-a3c3d5da2cbe | {
"language": "AsciiDoc"
} | ```asciidoc
= Cypher Improvement Proposals
This is the home for all Cypher Improvement Proposal (CIP) documents.
CIPs are documents that outline the syntax and semantics of Cypher.
== The CIP Lifecycle
CIPs normally pass through a number of phases before and after ending up in this repository.
=== Accepted
It is open to anyone to author and submit a CIP as a pull request to this repository.
The CIP should then be suggested for addition into the `accepted` directory.
It is then the task of the CLG to discuss, give feedback, and eventually (potentially) accept the CIP, at which point the pull request will be merged.
=== Testable
A CIP enters this phase when it has received a sufficient set of TCK scenarios that outline the use of its suggested features.
=== Implemented
A CIP enters this phase when the reference implementation supports its suggested features.
This is verified by passing all the TCK scenarios that had been outlined for the CIP.
```
Remove mention of reference implementation | ```asciidoc
= Cypher Improvement Proposals
This is the home for all Cypher Improvement Proposal (CIP) documents.
CIPs are documents that outline the syntax and semantics of Cypher.
== The CIP Lifecycle
CIPs normally pass through a number of phases before and after ending up in this repository.
=== Accepted
It is open to anyone to author and submit a CIP as a pull request to this repository.
The CIP should then be suggested for addition into the `accepted` directory.
It is then the task of the CLG to discuss, give feedback, and eventually (potentially) accept the CIP, at which point the pull request will be merged.
=== Testable
A CIP enters this phase when it has received a sufficient set of TCK scenarios that outline the use of its suggested features.
``` |
65c42cb9-7d9d-4696-bb1a-ecfa4f30fd02 | {
"language": "AsciiDoc"
} | ```asciidoc
[[misc]]
== Misc
* https://github.com/elasticsearch/puppet-elasticsearch[Puppet]:
Elasticsearch puppet module.
* http://github.com/elasticsearch/cookbook-elasticsearch[Chef]:
Chef cookbook for Elasticsearch
* https://github.com/medcl/salt-elasticsearch[SaltStack]:
SaltStack Module for Elasticsearch
* http://www.github.com/neogenix/daikon[daikon]:
Daikon Elasticsearch CLI
* https://github.com/Aconex/scrutineer[Scrutineer]:
A high performance consistency checker to compare what you've indexed
with your source of truth content (e.g. DB)
```
Add Wireshark protocol dissection support | ```asciidoc
[[misc]]
== Misc
* https://github.com/elasticsearch/puppet-elasticsearch[Puppet]:
Elasticsearch puppet module.
* http://github.com/elasticsearch/cookbook-elasticsearch[Chef]:
Chef cookbook for Elasticsearch
* https://github.com/medcl/salt-elasticsearch[SaltStack]:
SaltStack Module for Elasticsearch
* http://www.github.com/neogenix/daikon[daikon]:
Daikon Elasticsearch CLI
* https://github.com/Aconex/scrutineer[Scrutineer]:
A high performance consistency checker to compare what you've indexed
with your source of truth content (e.g. DB)
* https://www.wireshark.org/[Wireshark]:
Protocol dissection for Zen discovery, HTTP and the binary protocol
``` |
6330c32a-5efa-4af0-8fed-db7cfe182238 | {
"language": "AsciiDoc"
} | ```asciidoc
= Vert.x JDBC client
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-jdbc-client["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-jdbc-client/"]
This provides an asynchronous interface around a JDBC datasource so you can access data in any
database for which there is a JDBC driver.
Please see the link:src/main/asciidoc/index.adoc[in-source] documentation or the main documentation on the http://vertx.io/docs/#data_access[web-site] for a full description:
```
Remove link to in-source docs | ```asciidoc
= Vert.x JDBC client
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-jdbc-client["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-jdbc-client/"]
This provides an asynchronous interface around a JDBC datasource so you can access data in any
database for which there is a JDBC driver.
Please see the main documentation on the web-site for a full description:
* https://vertx.io/docs/vertx-jdbc-client/java/[Java documentation]
* https://vertx.io/docs/vertx-jdbc-client/js/[JavaScript documentation]
* https://vertx.io/docs/vertx-jdbc-client/kotlin/[Kotlin documentation]
* https://vertx.io/docs/vertx-jdbc-client/groovy/[Groovy documentation]
* https://vertx.io/docs/vertx-jdbc-client/ruby/[Ruby documentation]
``` |
d276a269-5bbb-4150-91e6-16e8b81a3f47 | {
"language": "AsciiDoc"
} | ```asciidoc
= Contracts Wizard
:page-notoc:
Not sure where to start? Use the interactive generator below to bootstrap your
contract and learn about the components offered in OpenZeppelin Contracts.
TIP: Place the resulting contract in your `contracts` directory in order to compile it with a tool like Hardhat or Truffle. Consider reading our guide on xref:learn::developing-smart-contracts.adoc[Developing Smart Contracts] for more guidance!
++++
<script async defer src="https://openzeppelin-contracts-wizard.netlify.app/build/embed.js"></script>
<oz-wizard style="display: block; min-height: 40rem;"></oz-wizard>
++++
```
Remove defer tag for Wizard embed script | ```asciidoc
= Contracts Wizard
:page-notoc:
Not sure where to start? Use the interactive generator below to bootstrap your
contract and learn about the components offered in OpenZeppelin Contracts.
TIP: Place the resulting contract in your `contracts` directory in order to compile it with a tool like Hardhat or Truffle. Consider reading our guide on xref:learn::developing-smart-contracts.adoc[Developing Smart Contracts] for more guidance!
++++
<script async src="https://wizard.openzeppelin.com/build/embed.js"></script>
<oz-wizard style="display: block; min-height: 40rem;"></oz-wizard>
++++
``` |
1ca1aa8b-b966-498e-a2b9-2cb9310a4335 | {
"language": "AsciiDoc"
} | ```asciidoc
= Basic Examples
The basic examples in this directory show how to set up a CMake project,
set compile flags, create executables and libraries, and install them.
The examples included are
- hello-cmake. A hello world example.
- hello-headers. A slightly more complicated hello world example, using a Hello class and separate source and include folders.
- shared-library. An example using a shared library.
- static-library. An example using a static library.
- installing. Shows how to create a 'make install' target to install the binaries and libraries
- build-type. An example showing how to set a default build type for your project.
- compile-flags. Shows how to set compile flags
```
Add links to folders and fix some typos | ```asciidoc
= Basic Examples
The basic examples in this directory show how to set up a CMake project,
set compile flags, create and link executables and libraries, and install them. A typical build flow is sketched after the list.
The examples included are
- link:A-hello-cmake[hello-cmake]. A hello world example.
- link:B-hello-headers[hello-headers]. A slightly more complicated hello world example, using separate source and include folders.
- link:C-static-library[static-library]. An example using a static library.
- link:D-shared-library[shared-library]. An example using a shared library.
- link:E-installing[installing]. Shows how to create a 'make install' target that will install binaries and libraries.
- link:F-build-type[build-type]. An example showing how to set a default build and optimization flags for your project.
- link:G-compile-flags[compile-flags]. Shows how to set additional compile flags.
- link:H-third-party-library[third-party-library]. Shows an example of how to link third party libraries.
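Each example follows the same out-of-source build pattern — a sketch only, run
from inside any one of the example folders:

[source,bash]
----
mkdir -p build && cd build  # keep generated files out of the source tree
cmake ..                    # configure using the CMakeLists.txt one level up
make                        # build the default targets
----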
``` |
da7237a6-17d5-4683-858a-489e8e961843 | {
"language": "AsciiDoc"
} | ```asciidoc
= Seed
[email protected]
:toc: left
:toclevels: 5
:stylesheet: styles/html.css
:sectlinks:
:sectnums:
:sectnumlevels: 5
:icons: font
:docinfo:
Seed was developed at the National Geospatial-Intelligence Agency (NGA) Research.
Seed is a general standard to aid in the discovery and consumption of a discrete unit of work contained within a Docker
image.
To comply with the Seed standard a Docker image must have a single label used for discovery. This can be applied
with the following Dockerfile snippet:
----
include::examples/complete/Dockerfile-snippet[]
----
Seed compliant images must be named in a specific fashion due to the lack of label search capability on Docker Hub
and Registry services. The suffix `-seed` must be used when naming images to enable discovery, prior to Hub or Registry
push. This requirement will be deprecated as label search support is standardized across Docker registry services.
See the latest version of the link:seed.html[full specification] for complete detail on manifest format.
Specification versions:
```
Test from contact information changes | ```asciidoc
= Seed
[email protected]
:toc: left
:toclevels: 5
:stylesheet: styles/html.css
:sectlinks:
:sectnums:
:sectnumlevels: 5
:icons: font
:docinfo:
Seed was developed at the National Geospatial-Intelligence Agency (NGA) Research.
Seed is a general standard to aid in the discovery and consumption of a discrete unit of work contained within a Docker
image.
To comply with the Seed standard a Docker image must have a single label used for discovery. This can be applied
with the following Dockerfile snippet:
----
include::examples/complete/Dockerfile-snippet[]
----
Seed compliant images must be named in a specific fashion due to the lack of label search capability on Docker Hub
and Registry services. The suffix `-seed` must be used when naming images to enable discovery, prior to Hub or Registry
push. This requirement will be deprecated as label search support is standardized across Docker registry services.
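As a rough sketch, the suffix makes locally pulled Seed images discoverable with
nothing more than standard Docker tooling (nothing beyond the `-seed` suffix
itself is mandated here):

[source,bash]
----
# List local images that follow the -seed naming convention.
docker images --format '{{.Repository}}:{{.Tag}}' | grep -- '-seed'
----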
See the latest version of the link:seed.html[full specification] for complete detail on manifest format.
Specification versions:
``` |
da826f0f-c214-4cc0-898a-9a209d1f46dd | {
"language": "AsciiDoc"
} | ```asciidoc
[id="install-sno-installing-sno"]
= Installing OpenShift on a single node
:context: install-sno-installing-sno-with-the-assisted-installer
include::modules/common-attributes.adoc[]
toc::[]
include::modules/install-sno-generating-the-discovery-iso-with-the-assisted-installer.adoc[leveloffset=+1]
include::modules/install-sno-generating-the-discovery-iso-manually.adoc[leveloffset=+1]
include::modules/install-sno-installing-with-usb-media.adoc[leveloffset=+1]
include::modules/install-sno-monitoring-the-installation-with-the-assisted-installer.adoc[leveloffset=+1]
include::modules/install-sno-monitoring-the-installation-manually.adoc[leveloffset=+1]
```
Remove SNO assisted installer from OKD | ```asciidoc
[id="install-sno-installing-sno"]
= Installing OpenShift on a single node
:context: install-sno-installing-sno-with-the-assisted-installer
include::modules/common-attributes.adoc[]
toc::[]
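// The ifndef::openshift-origin[] blocks below drop the Assisted Installer
// modules from the OKD (openshift-origin) build of this assembly.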
ifndef::openshift-origin[]
include::modules/install-sno-generating-the-discovery-iso-with-the-assisted-installer.adoc[leveloffset=+1]
endif::openshift-origin[]
include::modules/install-sno-generating-the-discovery-iso-manually.adoc[leveloffset=+1]
include::modules/install-sno-installing-with-usb-media.adoc[leveloffset=+1]
ifndef::openshift-origin[]
include::modules/install-sno-monitoring-the-installation-with-the-assisted-installer.adoc[leveloffset=+1]
endif::openshift-origin[]
include::modules/install-sno-monitoring-the-installation-manually.adoc[leveloffset=+1]
``` |
6aae01f2-5d7d-4fd5-ab63-41e13ca9d9c2 | {
"language": "AsciiDoc"
} | ```asciidoc
Create the stripe configuration:
[source, bash]
----
> mkdir -p /home/ubuntu/killbil/etc/
> vi /home/ubuntu/killbil/etc/stripe.yml
----
Fill the details with your own keys:
:stripe:
:api_secret_key: API_SECRET_KEY
:test: true
:database:
:adapter: jdbcmysql
:jndi: killbill/osgi/jdbc
:driver: com.mysql.jdbc.Driver
:connection_alive_sql: select 1
:pool: 25
```
Remove unused parameter in stripe config for aws doc | ```asciidoc
Create the stripe configuration:
[source, bash]
----
> mkdir -p /home/ubuntu/killbil/etc/
> vi /home/ubuntu/killbil/etc/stripe.yml
----
Fill the details with your own keys:
:stripe:
:api_secret_key: API_SECRET_KEY
:test: true
:database:
:adapter: jdbcmysql
:jndi: killbill/osgi/jdbc
:connection_alive_sql: select 1
:pool: 25
``` |
f50df5ff-820d-453a-be10-07f06481ad95 | {
"language": "AsciiDoc"
} | ```asciidoc
OpenFurther Reference Documentation
===================================
About
-----
The following documentation applies to *OpenFurther version {version}*
Conventions
~~~~~~~~~~~
NOTE: A note
IMPORTANT: An important point
TIP: A tip
WARNING: A warning
CAUTION: A point of caution
Introduction
------------
FURTHeR, The Federated Utah Research and Translational Health electronic Repository, has been developed as part of the Biomedical Informatics Core (BMIC) at the University of Utah’s Center for Clinical and Translational Science (CCTS). FURTHeR’s primary function is to federate health information from heterogeneous data sources to aid researchers in cohort selection and retrospective comparative effectiveness research. FURTHeR achieves real-time federation of health information from heterogeneous data sources using a logical federated query language, a query and result translator for translating queries and results from each heterogeneous data source, and standard terminology to unify data across sources.
```
Fix version to 1.4.0-SNAPSHOT for now | ```asciidoc
OpenFurther Reference Documentation
===================================
About
-----
The following documentation applies to *OpenFurther version 1.4.0-SNAPSHOT*
Conventions
~~~~~~~~~~~
NOTE: A note
IMPORTANT: An important point
TIP: A tip
WARNING: A warning
CAUTION: A point of caution
Introduction
------------
FURTHeR, The Federated Utah Research and Translational Health electronic Repository, has been developed as part of the Biomedical Informatics Core (BMIC) at the University of Utah’s Center for Clinical and Translational Science (CCTS). FURTHeR’s primary function is to federate health information from heterogeneous data sources to aid researchers in cohort selection and retrospective comparative effectiveness research. FURTHeR achieves real-time federation of health information from heterogeneous data sources using a logical federated query language, a query and result translator for translating queries and results from each heterogeneous data source, and standard terminology to unify data across sources.
``` |
fd3a376b-0c88-4601-b458-605968915190 | {
"language": "AsciiDoc"
} | ```asciidoc
= PHP Reflect
**PHP Reflect** is a library that
adds the ability to reverse-engineer classes, interfaces, functions, constants, namespaces, traits and more.
It is distributed both as source code (install via composer) and as a PHAR version
that bundles all dependencies in a single file.
== Install
You can either:
* download the phar version http://bartlett.laurent-laville.org/get/phpreflect-4.2.1.phar[4.2.1]
* install via https://packagist.org/packages/bartlett/php-reflect/[packagist] the current source dev-master
== Documentation
The documentation for PHP Reflect 4.2 is available
in http://php5.laurent-laville.org/reflect/manual/4.2/en/[English]
to read it online or download to read it later (multiple formats).
AsciiDoc source code are available on `docs` folder of the repository.
== Authors
* Laurent Laville
== License
This handler is licensed under the BSD-3-clauses License - see the `LICENSE` file for details
```
Add a note about minimum requirements to install this package | ```asciidoc
= PHP Reflect
**PHP Reflect** is a library that
adds the ability to reverse-engineer classes, interfaces, functions, constants, namespaces, traits and more.
It is distributed both as source code (install via composer) and as a PHAR version
that bundles all dependencies in a single file.
== Requirements
PHP 5.5.0 or above.
== Install
You can either:
* download the phar version http://bartlett.laurent-laville.org/get/phpreflect-4.2.1.phar[4.2.1]
* install via https://packagist.org/packages/bartlett/php-reflect/[packagist] the current source dev-master
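For the packagist route, a minimal sketch (assumes composer is on your `PATH`):

[source,bash]
----
composer require bartlett/php-reflect:dev-master
----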
== Documentation
The documentation for PHP Reflect 4.2 is available
in http://php5.laurent-laville.org/reflect/manual/4.2/en/[English]
to read it online or download to read it later (multiple formats).
AsciiDoc source code are available on `docs` folder of the repository.
== Authors
* Laurent Laville
== License
This handler is licensed under the BSD-3-clauses License - see the `LICENSE` file for details
``` |
ccd1db11-0af3-4da7-a34f-f3c803dca0d9 | {
"language": "AsciiDoc"
} | ```asciidoc
[[es-release-notes]]
= Release Notes
[partintro]
--
This section summarizes the changes in each release.
* <<release-notes-5.0.0-alpha5>>
* <<release-notes-5.0.0-alpha4>>
* <<release-notes-5.0.0-alpha3>>
* <<release-notes-5.0.0-alpha2>>
* <<release-notes-5.0.0-alpha1>>
* <<release-notes-5.0.0-alpha1-2x>>
--
include::release-notes/5.0.0-alpha5.asciidoc[]
include::release-notes/5.0.0-alpha4.asciidoc[]
include::release-notes/5.0.0-alpha3.asciidoc[]
include::release-notes/5.0.0-alpha2.asciidoc[]
include::release-notes/5.0.0-alpha1.asciidoc[]
include::release-notes/5.0.0-alpha1-2x.asciidoc[]
```
Remove links to release notes | ```asciidoc
[[es-release-notes]]
= Release Notes
[partintro]
--
This section will summarize the changes in released versions.
--
``` |
afd72a6c-ee36-4178-871e-d78ad8fc9237 | {
"language": "AsciiDoc"
} | ```asciidoc
= Implementation details
Franz Wilhelmstötter <[email protected]>
:Author Initials: FWi
:toc:
:icons:
:numbered:
:website: http://jenetics.io/
This site contains selected descriptions of implementation details.
include::CreateRandomSeed.adoc[]
include::OptimizeMinimum.adoc[]
```
Update main document of 'Implementation protocol'. | ```asciidoc
= Implementation protocols
Franz Wilhelmstötter <[email protected]>
:Author Initials: FWi
:toc:
:icons:
:numbered:
:website: http://jenetics.io/
While implementing the library, we made a lot of concrete implementation decisions. This document gives a more detailed description of some selected implementation details.
include::CreateRandomSeed.adoc[]
include::OptimizeMinimum.adoc[]
``` |
62aedb2c-b0f5-4e4c-8309-d5373de8699b | {
"language": "AsciiDoc"
} | ```asciidoc
= Apache Camel 3.x Upgrade Guide
This document is for helping you upgrade your Apache Camel application
from Camel 3.x to 3.y. For example if you are upgrading Camel 3.0 to 3.2, then you should follow the guides
from both 3.0 to 3.1 and 3.1 to 3.2.
== Upgrading Camel 3.5 to 3.6
=== Camel Karaf
The following features has been removed due they become not compatible with OSGi: `camel-atmosphere-websocket`.
```
Upgrade guide about api components updates | ```asciidoc
= Apache Camel 3.x Upgrade Guide
This document is for helping you upgrade your Apache Camel application
from Camel 3.x to 3.y. For example if you are upgrading Camel 3.0 to 3.2, then you should follow the guides
from both 3.0 to 3.1 and 3.1 to 3.2.
== Upgrading Camel 3.5 to 3.6
=== API components upgrade
The `camel-braintree`, `camel-twilio` and `camel-zendesk` components have been updated to newer versions and their API
signatures regenerated, which may change some of the existing signatures and also bring in new ones.
=== Camel Karaf
The following features has been removed due they become not compatible with OSGi: `camel-atmosphere-websocket`.
``` |
a1938e0e-87e0-40ba-8992-067a1b17e493 | {
"language": "AsciiDoc"
} | ```asciidoc
= Cypher Design Philosophy Guidelines
An informal manifesto was defined to ensure the consistent ongoing development of the Cypher language.
The aim was to incorporate as many underlying principles as were deemed useful and relevant, and to distill these to no more than 15 or so rules to render the philosophy more useful and digestible.
Suggestions for changing the language ought to be guided by the following rules:
. Follow these rules unless there’s a better way.
. Cypher is a high-level Graph Query Language.
. Design should be validated by user needs.
. Readability is better than terseness.
. Declarative is better than imperative.
. Composability is better than complexity.
. Explicit is better than implicit.
. Visual is better than textual.
. Familiar to users of SQL and inspired by Python.
. Prefer a single way to do things.
. Consistency is important. Learning one part of the language should help understanding other parts.
. Errors should never pass silently, unless explicitly silenced.
. Progress is more important than perfection, but consideration beats hastiness.
. Don’t leave holes in the specification that allow the implementation to leak through.
. Never be too proud to throw something away.
```
Use numbered list in source | ```asciidoc
= Cypher Design Philosophy Guidelines
An informal manifesto was defined to ensure the consistent ongoing development of the Cypher language.
The aim was to incorporate as many underlying principles as were deemed useful and relevant, and to distill these to no more than 15 or so rules to render the philosophy more useful and digestible.
Suggestions for changing the language ought to be guided by the following rules:
1. Follow these rules unless there’s a better way.
2. Cypher is a high-level Graph Query Language.
3. Design should be validated by user needs.
4. Readability is better than terseness.
5. Declarative is better than imperative.
6. Composability is better than complexity.
7. Explicit is better than implicit.
8. Visual is better than textual.
9. Familiar to users of SQL and inspired by Python.
10. Prefer a single way to do things.
11. Consistency is important. Learning one part of the language should help understanding other parts.
12. Errors should never pass silently, unless explicitly silenced.
13. Progress is more important than perfection, but consideration beats hastiness.
14. Don’t leave holes in the specification that allow the implementation to leak through.
15. Never be too proud to throw something away.
``` |
283f0052-308e-47f1-821a-33074d60509b | {
"language": "AsciiDoc"
} | ```asciidoc
= GQL
:revnumber: {releaseVersion}
:numbered:
:imagesDir: images/
:baseDir: ../../../../..
:stem:
:core: {baseDir}/gql-core
:coreMain: {core}/src/main/java
:testMain: {core}/src/test/groovy
:testResources: {core}/src/test/resources
:ratpack: {baseDir}/gql-ratpack
[quote]
GQL is a library created to make it easier to expose and consume
GraphQL services.
[sidebar]
.Apache
****
The *GQL* project is open sourced under the http://www.apache.org/licenses/LICENSE-2.0.html[Apache 2 License].
****
include::intro.adoc[]
include::getting.adoc[]
include::dsl.adoc[]
include::relay.adoc[]
include::references.adoc[]
```
Fix include ratpack chapter in GQL guide | ```asciidoc
= GQL
:revnumber: {releaseVersion}
:numbered:
:imagesDir: images/
:baseDir: ../../../../..
:baseDirFirstLevel: ../../../..
:stem:
:core: {baseDir}/gql-core
:coreMain: {core}/src/main/java
:testMain: {core}/src/test/groovy
:testResources: {core}/src/test/resources
:ratpack: {baseDir}/gql-ratpack
[quote]
GQL is a library created to make it easier to expose and consume
GraphQL services.
[sidebar]
.Apache
****
The *GQL* project is open sourced under the http://www.apache.org/licenses/LICENSE-2.0.html[Apache 2 License].
****
include::intro.adoc[]
include::getting.adoc[]
include::dsl.adoc[]
include::relay.adoc[]
include::ratpack.adoc[]
include::references.adoc[]
``` |
42e5e077-2758-414b-a84b-8b07b54820e4 | {
"language": "AsciiDoc"
} | ```asciidoc
= To-Do List
- mock_cone_detector creates infinite area and overflows h
- (*DONE*) new waypoints shorter than old don't delete existing waypoints
- adjust waypoints for start position and cone position
- cone area goes down when very close to cone
- (*DONE*) parameterize throttle and steering PWM values
- touch sensor does not work
- cone detection in bright light does not work
- GUIDED mode does not work
- Encode PWM values or range set to use in the waypoints file
- If waypoint encountered before cone is seen, rover goes into HOLD mode
with no recovery. Needs to be fixed.
== Possible To-Do
- (*DONE*) Change from using WP_SPEED to CRUISE_SPEED. (Seems to be used by Vicky,
while WP_SPEED is not.)
- Have a way of manually triggering parameter reload
== Notes
MAV_CMD_DO_SET_HOME appears to reset the map origin, as well as zero the
offset between the map origin and base_link (for /mavros/local_position/pose
and /mavros/local_position/odom).
```
Update to-do with new completions | ```asciidoc
= To-Do List
- mock_cone_detector creates infinite area and overflows h
- (*DONE*) new waypoints shorter than old don't delete existing waypoints
- adjust waypoints for start position and cone position
- cone area goes down when very close to cone
- (*DONE*) parameterize throttle and steering PWM values
- touch sensor does not work
- (*DONE*) cone detection in bright light does not work
- GUIDED mode does not work
- Encode PWM values or range set to use in the waypoints file
- If waypoint encountered before cone is seen, rover goes into HOLD mode
with no recovery. Needs to be fixed.
== Possible To-Do
- (*DONE*) Change from using WP_SPEED to CRUISE_SPEED. (Seems to be used by Vicky,
while WP_SPEED is not.)
- Have a way of manually triggering parameter reload
== Notes
MAV_CMD_DO_SET_HOME appears to reset the map origin, as well as zero the
offset between the map origin and base_link (for /mavros/local_position/pose
and /mavros/local_position/odom).
``` |
c6d01f32-f8df-4685-9882-5ea426ddc2bf | {
"language": "AsciiDoc"
} | ```asciidoc
= Opal Runtime for Node.js
== Usage
```javascript
var Opal = require('opal-runtime').Opal;
// Now let's have fun with Opal!
```
```
Use Markdown compatible syntax for better rendering on npmjs.com | ```asciidoc
# Opal Runtime for Node.js
## Usage
```javascript
var Opal = require('opal-runtime').Opal;
// Now let's have fun with Opal!
```
``` |
e8cbe040-b26f-4f69-8cae-d4dfe2a169e4 | {
"language": "AsciiDoc"
} | ```asciidoc
[[improve-processes]]
=== Practices to Improve Processes
[[fig-improve-processes]]
.Practices for "Improve Processes"
image::improve-practice-processes.png["Practices for Improve Processes", title="Practices to improve processes"]
For an overview of other improvement practices,
see <<improve-practices-overview>>.
```
Add mob programming and remote mob programming | ```asciidoc
[[improve-processes]]
=== Practices to Improve Processes
[[fig-improve-processes]]
.Practices for "Improve Processes"
image::improve-practice-processes.png["Practices for Improve Processes", title="Practices to improve processes"]
For an overview of other improvement practices,
see <<improve-practices-overview>>.
One way to improve the processes is to resort to https://mobprogramming.org[Mob Programming] for onsite teams or https://www.remotemobprogramming.org[Remote Mob Programming] for distributed teams.
``` |
b61ce34f-a571-4901-81eb-17aee27b515c | {
"language": "AsciiDoc"
} | ```asciidoc
//////////////////////////////////////////////////////////////////////////
//// This content is shared by all Elastic Beats. Make sure you keep the
//// descriptions here generic enough to work for all Beats that include
//// this file. When using cross references, make sure that the cross
//// references resolve correctly for any files that include this one.
//// Use the appropriate variables defined in the index.asciidoc file to
//// resolve Beat names: beatname_uc and beatname_lc.
//// Use the following include to pull this content into a doc file:
//// include::../../libbeat/docs/shared-logstash-config.asciidoc[]
//////////////////////////////////////////////////////////////////////////
If you want to use Logstash to perform additional processing on the data collected by
{beatname_uc}, you need to configure {beatname_uc} to use Logstash.
To do this, you edit the {beatname_uc} configuration file to disable the Elasticsearch
output and use the Logstash output instead:
[source,yaml]
------------------------------------------------------------------------------
output:
logstash:
hosts: ["127.0.0.1:5044"]
# configure logstash plugin to loadbalance events between
# configured logstash hosts
#loadbalance: false
------------------------------------------------------------------------------
In this configuration, `hosts` specifies the Logstash server and the port (`5044`)
where Logstash is configured to listen for incoming Beats connections.
To use this configuration, you must also
{libbeat}/getting-started.html#logstash-setup[set up Logstash] to receive events
from Beats.
```
Clarify how to enable and disable logstash output | ```asciidoc
//////////////////////////////////////////////////////////////////////////
//// This content is shared by all Elastic Beats. Make sure you keep the
//// descriptions here generic enough to work for all Beats that include
//// this file. When using cross references, make sure that the cross
//// references resolve correctly for any files that include this one.
//// Use the appropriate variables defined in the index.asciidoc file to
//// resolve Beat names: beatname_uc and beatname_lc.
//// Use the following include to pull this content into a doc file:
//// include::../../libbeat/docs/shared-logstash-config.asciidoc[]
//////////////////////////////////////////////////////////////////////////
If you want to use Logstash to perform additional processing on the data collected by
{beatname_uc}, you need to configure {beatname_uc} to use Logstash.
To do this, you edit the {beatname_uc} configuration file to disable the Elasticsearch
output by commenting it out and enable the Logstash output by uncommenting the
logstash section:
[source,yaml]
------------------------------------------------------------------------------
output:
logstash:
hosts: ["127.0.0.1:5044"]
# configure logstash plugin to loadbalance events between
# configured logstash hosts
#loadbalance: false
------------------------------------------------------------------------------
In this configuration, `hosts` specifies the Logstash server and the port (`5044`)
where Logstash is configured to listen for incoming Beats connections.
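For reference, the matching Logstash pipeline input (a minimal sketch, assuming the standard `logstash-input-beats` plugin is installed) listens on that same port:
[source]
------------------------------------------------------------------------------
input {
  beats {
    port => 5044
  }
}
------------------------------------------------------------------------------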
To use this configuration, you must also
{libbeat}/getting-started.html#logstash-setup[set up Logstash] to receive events
from Beats.
```
9de0c254-06e4-4e33-aa0f-ef1df06864cc | {
"language": "AsciiDoc"
} | ```asciidoc
[[jackson]]
== Jackson Support
Spring Security has added Jackson Support for persisting Spring Security related classes.
This can improve the performance of serializing Spring Security related classes when working with distributed sessions (i.e. session replication, Spring Session, etc).
To use it, register the `SecurityJackson2Modules.getModules(ClassLoader)` as https://wiki.fasterxml.com/JacksonFeatureModules[Jackson Modules].
[source,java]
----
ObjectMapper mapper = new ObjectMapper();
ClassLoader loader = getClass().getClassLoader();
List<Module> modules = SecurityJackson2Modules.getModules(loader);
mapper.registerModules(modules);
// ... use ObjectMapper as normally ...
SecurityContext context = new SecurityContextImpl();
// ...
String json = mapper.writeValueAsString(context);
----
```
Document Jackson serialization support for OAuth 2.0 Client | ```asciidoc
[[jackson]]
== Jackson Support
Spring Security provides Jackson support for persisting Spring Security related classes.
This can improve the performance of serializing Spring Security related classes when working with distributed sessions (i.e. session replication, Spring Session, etc).
To use it, register the `SecurityJackson2Modules.getModules(ClassLoader)` with `ObjectMapper` (https://github.com/FasterXML/jackson-databind[jackson-databind]):
[source,java]
----
ObjectMapper mapper = new ObjectMapper();
ClassLoader loader = getClass().getClassLoader();
List<Module> modules = SecurityJackson2Modules.getModules(loader);
mapper.registerModules(modules);
// ... use ObjectMapper as normally ...
SecurityContext context = new SecurityContextImpl();
// ...
String json = mapper.writeValueAsString(context);
----
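Deserialization works the same way; for example (a short sketch reading back the JSON produced above with the same `ObjectMapper`):
[source,java]
----
SecurityContext deserialized = mapper.readValue(json, SecurityContextImpl.class);
----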
[NOTE]
====
The following Spring Security modules provide Jackson support:
- spring-security-core (`CoreJackson2Module`)
- spring-security-web (`WebJackson2Module`, `WebServletJackson2Module`, `WebServerJackson2Module`)
- <<oauth2client, spring-security-oauth2-client>> (`OAuth2ClientJackson2Module`)
- spring-security-cas (`CasJackson2Module`)
====
``` |
6626278b-3e24-49e3-87bc-7c9e66b56534 | {
"language": "AsciiDoc"
} | ```asciidoc
= flexy-pool
Author <[email protected]>
v1.0.0, 2014-02-25
:homepage: http://vladmihalcea.com/
== Introduction
The flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.
This is very handy since most connection pools offer a limited set of dynamic configuration strategies.
== Features
* extensive connection pool support
** http://docs.codehaus.org/display/BTM/Home[Bitronix Transaction Manager]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP2]
** http://www.mchange.com/projects/c3p0/[C3P0]
** http://jolbox.com/[BoneCP]
** http://brettwooldridge.github.io/HikariCP/[HikariCP]
* statistics support
** source connection acquiring time histogram
** total connection acquiring time histogram
** retries attempts histogram
** maximum CP size histogram
** connection request count histogram
** connection lease time histogram
== Documentation
. https://github.com/vladmihalcea/flexy-pool/wiki/Installation-Guide[Installation Guide]
. https://github.com/vladmihalcea/flexy-pool/wiki/User-Guide[User Guide]
[IMPORTANT]
Flexy Pool has not yet been released to OSS Maven Repository.
```
Use Flexy Pool instead of flexy-pool | ```asciidoc
= Flexy Pool
Author <[email protected]>
v1.0.0, 2014-02-25
:homepage: http://vladmihalcea.com/
== Introduction
The Flexy Pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.
This is very handy since most connection pools offer a limited set of dynamic configuration strategies.
== Features
* extensive connection pool support
** http://docs.codehaus.org/display/BTM/Home[Bitronix Transaction Manager]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP2]
** http://www.mchange.com/projects/c3p0/[C3P0]
** http://jolbox.com/[BoneCP]
** http://brettwooldridge.github.io/HikariCP/[HikariCP]
* statistics support
** source connection acquiring time histogram
** total connection acquiring time histogram
** retries attempts histogram
** maximum CP size histogram
** connection request count histogram
** connection lease time histogram
== Documentation
. https://github.com/vladmihalcea/flexy-pool/wiki/Installation-Guide[Installation Guide]
. https://github.com/vladmihalcea/flexy-pool/wiki/User-Guide[User Guide]
[IMPORTANT]
Flexy Pool has not yet been released to OSS Maven Repository.
``` |
27f9ba37-16ff-48f1-b431-818649c4a076 | {
"language": "AsciiDoc"
} | ```asciidoc
= Bestikk Opal Compiler
A simple tool to compile Ruby code to JavaScript using Opal.
== Install
$ npm i --save-dev bestikk-opal-compiler
== Usage
```javascript
var opalCompiler = require('bestikk-opal-compiler');
var compiler = opalCompiler({
dynamicRequireLevel: 'ignore',
defaultPaths: ['special/directory']
});
compiler.compile('hello.rb', 'hello.js'); // <1>
compiler.compile('module/hello', 'module.js', ['extra/directory']); // <2>
```
<1> Compile Ruby script `hello.rb` to `hello.js`
<2> Compile Ruby module `module/hello` to `module.js` using an extra directory
```
Use Markdown compatible syntax and add npm badge | ```asciidoc
# Bestikk Opal Compiler
ifdef::env-github[]
image:http://img.shields.io/npm/v/bestikk-opal-compiler.svg[npm version, link=https://www.npmjs.org/package/bestikk-opal-compiler]
endif::[]
A simple tool to compile Ruby code to JavaScript using Opal.
## Install
$ npm i --save-dev bestikk-opal-compiler
## Usage
```javascript
var opalCompiler = require('bestikk-opal-compiler');
var compiler = opalCompiler({
dynamicRequireLevel: 'ignore',
defaultPaths: ['special/directory']
});
compiler.compile('hello.rb', 'hello.js'); // <1>
compiler.compile('module/hello', 'module.js', ['extra/directory']); // <2>
```
<1> Compile Ruby script `hello.rb` to `hello.js`
<2> Compile Ruby module `module/hello` to `module.js` using an extra directory
``` |
22e99e82-d811-43ed-b931-dc7b0682613a | {
"language": "AsciiDoc"
} | ```asciidoc
= Microservices libraries comparison
== Purpose
This project is the companion of this Blog article: https://cdelmas.github.io/2015/11/01/A-comparison-of-Microservices-Frameworks.html.
== Build
To build it, just run `gradle shadowJar` in the command line.
== Run
Then you can run each server using `java -jar <server>.jar`.
```
Add a more complete documentation. | ```asciidoc
= Microservices libraries comparison
== Purpose
This project is the companion of this Blog article: https://cdelmas.github.io/2015/11/01/A-comparison-of-Microservices-Frameworks.html.
== Setup
You will need a keystore. You can follow a tutorial http://restlet.com/technical-resources/restlet-framework/guide/2.3/core/security/https[here] to set up a keystore with a self-signed certificate.
One of the client libraries requires you to import Facebook's certificate into the truststore. To do that, you can follow these http://stackoverflow.com/questions/373295/digital-certificate-how-to-import-cer-file-in-to-truststore-file-using[instructions].
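For illustration (the alias and certificate file below are placeholders), the import boils down to a single `keytool` call:
[source,bash]
----
keytool -importcert -alias facebook -file facebook.cer -keystore /path/to/trustStore.jks
----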
== Build
To build the servers, just run `gradlew shadowJar` in the command line. By the way, you need a JDK 8, but I guess you're up-to-date :)
== Run
Then you can run each server using `java -Djavax.net.ssl.trustStore=/path/to/trustStore.jks -Djavax.net.ssl.trustStorePassword=thepass -Djavax.net.ssl.keyStorePassword=thePass -Djavax.net.ssl.keyStore=/path/to/keyStore.jks -jar <server>.jar`. Don't forget to replace `<server>` with the correct jar name.
The Dropwizard server needs a special command line: `java ... -jar dropwizard.jar server /path/to/dropwizard-server.yml`.
== Notes
There are no unit tests, and that is fully intentional.
== The missing guys
In a future version, I will add Restx, Payara and Swarm to the comparison. It is a long exercise though, but stay in touch ;)
``` |
c64b3b65-7411-4d91-877a-b7583c1afa18 | {
"language": "AsciiDoc"
} | ```asciidoc
[[new]]
= What's New in Spring Security 5.8
Spring Security 5.8 provides a number of new features.
Below are the highlights of the release.
* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation
```
Update What's New for 5.8 | ```asciidoc
[[new]]
= What's New in Spring Security 5.8
Spring Security 5.8 provides a number of new features.
Below are the highlights of the release.
* https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs
* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation
* https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection
* https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method
``` |
340f2b0d-fc07-4ab2-92cb-5d0dde736c53 | {
"language": "AsciiDoc"
} | ```asciidoc
[[java-admin-indices-put-mapping]]
==== Put Mapping
You can add mappings at index creation time:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-tests}/IndicesDocumentationIT.java[index-with-mapping]
--------------------------------------------------
<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`
<2> Add a `_doc` type with a field called `message` that has the datatype `text`.
There are several variants of the above `addMapping` method, some taking an
`XContentBuilder` or a `Map` with the mapping definition as arguments. Make sure
to check the javadocs to pick the simplest one for your use case.
The PUT mapping API also allows for updating the mapping after index
creation. In this case you can provide the mapping as a String similar
to the REST API syntax:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-tests}/IndicesDocumentationIT.java[putMapping-request-source]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Adds a new field `name` to the mapping
<3> The type can be also provided within the source
:base-dir!:
```
Add a change that was accidentally left out of the types documentation commit. | ```asciidoc
[[java-admin-indices-put-mapping]]
==== Put Mapping
You can add mappings at index creation time:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-tests}/IndicesDocumentationIT.java[index-with-mapping]
--------------------------------------------------
<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`
<2> Add a `_doc` type with a field called `message` that has the datatype `text`.
There are several variants of the above `addMapping` method, some taking an
`XContentBuilder` or a `Map` with the mapping definition as arguments. Make sure
to check the javadocs to pick the simplest one for your use case.
The PUT mapping API also allows for updating the mapping after index
creation. In this case you can provide the mapping as a String similar
to the REST API syntax:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-tests}/IndicesDocumentationIT.java[putMapping-request-source]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Adds a new field `name` to the mapping
<3> The type can be also provided within the source
``` |
4a73721e-7ed0-414a-acc2-16155e8068e3 | {
"language": "AsciiDoc"
} | ```asciidoc
[[getting-started.system-requirements]]
== System Requirements
Spring Boot {spring-boot-version} requires https://www.java.com[Java 8] and is compatible up to and including Java 18.
{spring-framework-docs}/[Spring Framework {spring-framework-version}] or above is also required.
Explicit build support is provided for the following build tools:
|===
| Build Tool | Version
| Maven
| 3.5+
| Gradle
| 6.8.x, 6.9.x, and 7.x
|===
[[getting-started.system-requirements.servlet-containers]]
=== Servlet Containers
Spring Boot supports the following embedded servlet containers:
|===
| Name | Servlet Version
| Tomcat 9.0
| 4.0
| Jetty 9.4
| 3.1
| Jetty 10.0
| 4.0
| Undertow 2.0
| 4.0
|===
You can also deploy Spring Boot applications to any servlet 3.1+ compatible container.
```
Document support for Java 19 | ```asciidoc
[[getting-started.system-requirements]]
== System Requirements
Spring Boot {spring-boot-version} requires https://www.java.com[Java 8] and is compatible up to and including Java 19.
{spring-framework-docs}/[Spring Framework {spring-framework-version}] or above is also required.
Explicit build support is provided for the following build tools:
|===
| Build Tool | Version
| Maven
| 3.5+
| Gradle
| 6.8.x, 6.9.x, and 7.x
|===
[[getting-started.system-requirements.servlet-containers]]
=== Servlet Containers
Spring Boot supports the following embedded servlet containers:
|===
| Name | Servlet Version
| Tomcat 9.0
| 4.0
| Jetty 9.4
| 3.1
| Jetty 10.0
| 4.0
| Undertow 2.0
| 4.0
|===
You can also deploy Spring Boot applications to any servlet 3.1+ compatible container.
``` |
2a6628a3-7047-4da4-ad35-7524ca7387f9 | {
"language": "AsciiDoc"
} | ```asciidoc
= Mongo Client
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-mongo-client["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-mongo-client/"]
An asynchronous client for interacting with a MongoDB database
Please see the in-source asciidoc documentation or the main documentation on the web-site for a full description
of the Mongo service:
* Web-site docs
* link:vertx-mongo-client/src/main/asciidoc/java/index.adoc[Java in-source docs]
* link:vertx-mongo-client/src/main/asciidoc/js/index.adoc[JavaScript in-source docs]
* link:vertx-mongo-client/src/main/asciidoc/groovy/index.adoc[Groovy in-source docs]
```
Update readme with how to run docker for tests | ```asciidoc
= Mongo Client
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-mongo-client["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-mongo-client/"]
An asynchronous client for interacting with a MongoDB database
Please see the in-source asciidoc documentation or the main documentation on the web-site for a full description
of the Mongo service:
* Web-site docs
* link:vertx-mongo-client/src/main/asciidoc/java/index.adoc[Java in-source docs]
* link:vertx-mongo-client/src/main/asciidoc/js/index.adoc[JavaScript in-source docs]
* link:vertx-mongo-client/src/main/asciidoc/groovy/index.adoc[Groovy in-source docs]
```
docker run --rm --name vertx-mongo -p 27017:27017 mongo
```
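With the container listening on the default port, the test suite can then be run as usual, for example:
```
mvn test
```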
``` |
e69964a7-aca9-45b4-a3f2-e46931643635 | {
"language": "AsciiDoc"
} | ```asciidoc
[id="welcome-index"]
= {product-title} {product-version} REST APIs
include::modules/common-attributes.adoc[]
:context: rest-apis
toc::[]
include::modules/apis.adoc[leveloffset=+1]
```
Add proper ID to REST API assembly | ```asciidoc
[id="rest-api"]
= {product-title} {product-version} REST APIs
include::modules/common-attributes.adoc[]
:context: rest-apis
toc::[]
include::modules/apis.adoc[leveloffset=+1]
``` |
890972e0-a01d-4feb-acf6-5d8cc7042178 | {
"language": "AsciiDoc"
} | ```asciidoc
// Copyright 2015 Cloudera, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
[[quickstart]]
= Kudu Quickstart
:author: Kudu Team
:imagesdir: ./images
:icons: font
:toc: left
:toclevels: 3
:doctype: book
:backend: html5
:sectlinks:
:experimental:
This quickstart shows how to set up and run a VirtualBox virtual machine pre-installed with Kudu,
Kudu_Impala, and CDH. This is the quickest way to take Kudu for a spin.
[[quickstart_vm]]
== Get The Kudu Quickstart VM
Follow these link:https://github.com.cloudera/kudu-examples/[instructions] to download and set
up the Kudu Quickstart VM.
The VM contains Kudu, Impala, and other Hadoop ecosystem projects, for your convenience.
== Load Some Data
TBD
== Retrieve Some Data
TBD
== Next Steps
- link:installation.html[Installing Kudu]
- link:configuration.html[Configuring Kudu]
```
Fix link to kudu-examples in the docs | ```asciidoc
// Copyright 2015 Cloudera, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
[[quickstart]]
= Kudu Quickstart
:author: Kudu Team
:imagesdir: ./images
:icons: font
:toc: left
:toclevels: 3
:doctype: book
:backend: html5
:sectlinks:
:experimental:
This quickstart shows how to set up and run a VirtualBox virtual machine pre-installed with Kudu,
Kudu_Impala, and CDH. This is the quickest way to take Kudu for a spin.
[[quickstart_vm]]
== Get The Kudu Quickstart VM
Follow these link:https://github.com/cloudera/kudu-examples/[instructions] to download and set
up the Kudu Quickstart VM.
The VM contains Kudu, Impala, and other Hadoop ecosystem projects, for your convenience.
== Load Some Data
TBD
== Retrieve Some Data
TBD
== Next Steps
- link:installation.html[Installing Kudu]
- link:configuration.html[Configuring Kudu]
``` |
2bf7977a-4374-42d3-974c-4f28c0c2828b | {
"language": "AsciiDoc"
} | ```asciidoc
[id="configuring-sriov-device"]
= Configuring an SR-IOV network device
include::modules/common-attributes.adoc[]
:context: configuring-sriov-device
toc::[]
You can configure a Single Root I/O Virtualization (SR-IOV) device in your cluster.
include::modules/nw-sriov-device-discovery.adoc[leveloffset=+1]
include::modules/nw-sriov-configuring-device.adoc[leveloffset=+1]
.Next steps
* xref:../../networking/hardware_networks/configuring-sriov-net-attach.adoc#configuring-sriov-net-attach[Configuring a SR-IOV network attachment]
```
Add missing module for SR-IOV | ```asciidoc
[id="configuring-sriov-device"]
= Configuring an SR-IOV network device
include::modules/common-attributes.adoc[]
:context: configuring-sriov-device
toc::[]
You can configure a Single Root I/O Virtualization (SR-IOV) device in your cluster.
include::modules/nw-sriov-device-discovery.adoc[leveloffset=+1]
include::modules/nw-sriov-configuring-device.adoc[leveloffset=+1]
include::modules/nw-sriov-nic-partitioning.adoc[leveloffset=+1]
.Next steps
* xref:../../networking/hardware_networks/configuring-sriov-net-attach.adoc#configuring-sriov-net-attach[Configuring a SR-IOV network attachment]
``` |
655ab04a-caef-4e8e-b23c-dc2bf55e1a12 | {
"language": "AsciiDoc"
} | ```asciidoc
[id="olm-understanding-operatorhub"]
= Understanding OperatorHub
include::modules/common-attributes.adoc[]
:context: olm-understanding-operatorhub
toc::[]
include::modules/olm-operatorhub-overview.adoc[leveloffset=+1]
include::modules/olm-operatorhub-architecture.adoc[leveloffset=+1]
[id="olm-understanding-operatorhub-resources"]
== Additional resources
* xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-catalogsource_olm-understanding-olm[Catalog source]
* xref:../../operators/operator_sdk/osdk-about.adoc#osdk-about[About the Operator SDK]
* xref:../../operators/operator_sdk/osdk-generating-csvs.adoc#osdk-generating-csvs[Generating a ClusterServiceVersion (CSV)]
* xref:../../operators/understanding/olm/olm-workflow.adoc#olm-upgrades_olm-workflow[Operator installation and upgrade workflow in OLM]
* link:https://connect.redhat.com[Red Hat Partner Connect]
* link:https://marketplace.redhat.com[Red Hat Marketplace]
```
Fix link title to osdk-generating-csvs | ```asciidoc
[id="olm-understanding-operatorhub"]
= Understanding OperatorHub
include::modules/common-attributes.adoc[]
:context: olm-understanding-operatorhub
toc::[]
include::modules/olm-operatorhub-overview.adoc[leveloffset=+1]
include::modules/olm-operatorhub-architecture.adoc[leveloffset=+1]
[id="olm-understanding-operatorhub-resources"]
== Additional resources
* xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-catalogsource_olm-understanding-olm[Catalog source]
* xref:../../operators/operator_sdk/osdk-about.adoc#osdk-about[About the Operator SDK]
* xref:../../operators/operator_sdk/osdk-generating-csvs.adoc#osdk-generating-csvs[Defining cluster service versions (CSVs)]
* xref:../../operators/understanding/olm/olm-workflow.adoc#olm-upgrades_olm-workflow[Operator installation and upgrade workflow in OLM]
* link:https://connect.redhat.com[Red Hat Partner Connect]
* link:https://marketplace.redhat.com[Red Hat Marketplace]
``` |
64f3b49f-068a-45b2-9050-a049c81307cc | {
"language": "AsciiDoc"
} | ```asciidoc
[[getting-started.system-requirements]]
== System Requirements
Spring Boot {spring-boot-version} requires https://www.java.com[Java 8] and is compatible up to and including Java 16.
{spring-framework-docs}/[Spring Framework {spring-framework-version}] or above is also required.
Explicit build support is provided for the following build tools:
|===
| Build Tool | Version
| Maven
| 3.5+
| Gradle
| 6.8.x, 6.9.x, and 7.x
|===
[[getting-started.system-requirements.servlet-containers]]
=== Servlet Containers
Spring Boot supports the following embedded servlet containers:
|===
| Name | Servlet Version
| Tomcat 9.0
| 4.0
| Jetty 9.4
| 3.1
| Jetty 10.0
| 4.0
| Undertow 2.0
| 4.0
|===
You can also deploy Spring Boot applications to any Servlet 3.1+ compatible container.
```
Document support for Java 17 | ```asciidoc
[[getting-started.system-requirements]]
== System Requirements
Spring Boot {spring-boot-version} requires https://www.java.com[Java 8] and is compatible up to and including Java 17.
{spring-framework-docs}/[Spring Framework {spring-framework-version}] or above is also required.
Explicit build support is provided for the following build tools:
|===
| Build Tool | Version
| Maven
| 3.5+
| Gradle
| 6.8.x, 6.9.x, and 7.x
|===
[[getting-started.system-requirements.servlet-containers]]
=== Servlet Containers
Spring Boot supports the following embedded servlet containers:
|===
| Name | Servlet Version
| Tomcat 9.0
| 4.0
| Jetty 9.4
| 3.1
| Jetty 10.0
| 4.0
| Undertow 2.0
| 4.0
|===
You can also deploy Spring Boot applications to any Servlet 3.1+ compatible container.
``` |
d883995c-475d-4a63-a070-a283c123c823 | {
"language": "AsciiDoc"
} | ```asciidoc
= MidoNet Host Registration
. *Launch MidoNet CLI*
+
====
[source]
----
$ midonet-cli
midonet>
----
====
. *Create tunnel zone*
+
MidoNet supports the Virtual Extensible LAN (VXLAN) and Generic Routing
Encapsulation (GRE) protocols to communicate to other hosts within a tunnel
zone.
+
To use the VXLAN protocol, create the tunnel zone with type 'vxlan':
+
====
[source]
----
midonet> tunnel-zone create name tz type vxlan
tzone0
----
====
+
To use the GRE protocol, create the tunnel zone with type 'gre':
+
====
[source]
----
midonet> tunnel-zone create name tz type gre
tzone0
----
====
. *Add hosts to tunnel zone*
+
====
[literal,subs="quotes"]
----
midonet> list tunnel-zone
tzone tzone0 name tz type vxlan
midonet> list host
host host0 name *_network_* alive true
host host1 name *_compute1_* alive true
midonet> tunnel-zone tzone0 add member host host0 address *_ip_address_of_host0_*
zone tzone0 host host0 address *_ip_address_of_host0_*
midonet> tunnel-zone tzone0 add member host host1 address *_ip_address_of_host1_*
zone tzone0 host host1 address *_ip_address_of_host1_*
----
====
```
Add a note to allow traffic for GRE/VXLAN | ```asciidoc
= MidoNet Host Registration
. *Launch MidoNet CLI*
+
====
[source]
----
$ midonet-cli
midonet>
----
====
. *Create tunnel zone*
+
MidoNet supports the Virtual Extensible LAN (VXLAN) and Generic Routing
Encapsulation (GRE) protocols to communicate to other hosts within a tunnel
zone.
+
To use the VXLAN protocol, create the tunnel zone with type 'vxlan':
+
====
[source]
----
midonet> tunnel-zone create name tz type vxlan
tzone0
----
====
+
To use the GRE protocol, create the tunnel zone with type 'gre':
+
====
[source]
----
midonet> tunnel-zone create name tz type gre
tzone0
----
====
[IMPORTANT]
Make sure to allow GRE/VXLAN traffic for all hosts that belong to the tunnel
zone. For VXLAN, MidoNet uses UDP port 6677 by default.
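For example (assuming an iptables-based firewall on the hosts; adapt this to your environment), the default VXLAN port can be opened with:
====
[source]
----
iptables -A INPUT -p udp --dport 6677 -j ACCEPT
----
====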
. *Add hosts to tunnel zone*
+
====
[literal,subs="quotes"]
----
midonet> list tunnel-zone
tzone tzone0 name tz type vxlan
midonet> list host
host host0 name *_network_* alive true
host host1 name *_compute1_* alive true
midonet> tunnel-zone tzone0 add member host host0 address *_ip_address_of_host0_*
zone tzone0 host host0 address *_ip_address_of_host0_*
midonet> tunnel-zone tzone0 add member host host1 address *_ip_address_of_host1_*
zone tzone0 host host1 address *_ip_address_of_host1_*
----
====
``` |
3a7107d2-2c89-47c8-9c6b-8c6fe4952f66 | {
"language": "AsciiDoc"
} | ```asciidoc
// Module included in the following assemblies:
//
// serverless/installing-openshift-serverless.adoc
[id="deleting-knative-serving-crds_{context}"]
= Deleting Knative Serving CRDs from the Operator
After uninstalling the {ServerlessOperatorName}, the Operator CRDs and API services remain on the cluster. Use this procedure to completely uninstall the remaining components.
.Prerequisite
* You have uninstalled Knative Serving and removed the {ServerlessOperatorName} using the previous procedure.
.Procedure
. Run the following command to delete the remaining Knative Serving CRDs:
+
----
$ oc delete crd knativeservings.serving.knative.dev
----
. Delete the Knative Serving API by using command:
+
----
$ oc delete apiservice v1alpha1.serving.knative.dev
----
```
Remove redundant step of removing api services | ```asciidoc
// Module included in the following assemblies:
//
// serverless/installing-openshift-serverless.adoc
[id="deleting-knative-serving-crds_{context}"]
= Deleting Knative Serving CRDs from the Operator
After uninstalling the {ServerlessOperatorName}, the Operator CRDs and API services remain on the cluster. Use this procedure to completely uninstall the remaining components.
.Prerequisite
* You have uninstalled Knative Serving and removed the {ServerlessOperatorName} using the previous procedure.
.Procedure
. Run the following command to delete the remaining Knative Serving CRDs:
+
----
$ oc delete crd knativeservings.serving.knative.dev
----
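+
To confirm the deletion, you can query for the CRD again; a `NotFound` error is the expected result:
+
----
$ oc get crd knativeservings.serving.knative.dev
----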
``` |
96478f61-0fe6-4538-9389-1249a790e431 | {
"language": "AsciiDoc"
} | ```asciidoc
[[release-notes-5.0.0-m6]]
=== 5.0.0-M6
*Date of Release:* ❓
*Scope:* Sixth milestone release of JUnit 5 with a focus on Java 9 compatibility, scenario tests, and additional extension APIs for JUnit Jupiter.
WARNING: This is a milestone release and contains breaking changes. Please refer to the
<<running-tests-ide-intellij-idea,instructions>> above to use this version in a version of
IntelliJ IDEA that bundles an older milestone release.
For a complete list of all _closed_ issues and pull requests for this release, consult the
link:{junit5-repo}+/milestone/11?closed=1+[5.0 M6] milestone page in the JUnit repository
on GitHub.
[[release-notes-5.0.0-m6-junit-platform]]
==== JUnit Platform
===== Bug Fixes
* ❓
===== Deprecations and Breaking Changes
* ❓
===== New Features and Improvements
* ❓
[[release-notes-5.0.0-m6-junit-jupiter]]
==== JUnit Jupiter
===== Bug Fixes
* ❓
===== Deprecations and Breaking Changes
* ❓
===== New Features and Improvements
* ❓
[[release-notes-5.0.0-m6-junit-vintage]]
==== JUnit Vintage
===== Bug Fixes
* ❓
===== Deprecations and Breaking Changes
* ❓
===== New Features and Improvements
* ❓
```
Add Assertions.fail return type change to M6 release notes | ```asciidoc
[[release-notes-5.0.0-m6]]
=== 5.0.0-M6
*Date of Release:* ❓
*Scope:* Sixth milestone release of JUnit 5 with a focus on Java 9 compatibility, scenario tests, and additional extension APIs for JUnit Jupiter.
WARNING: This is a milestone release and contains breaking changes. Please refer to the
<<running-tests-ide-intellij-idea,instructions>> above to use this version in a version of
IntelliJ IDEA that bundles an older milestone release.
For a complete list of all _closed_ issues and pull requests for this release, consult the
link:{junit5-repo}+/milestone/11?closed=1+[5.0 M6] milestone page in the JUnit repository
on GitHub.
[[release-notes-5.0.0-m6-junit-platform]]
==== JUnit Platform
===== Bug Fixes
* ❓
===== Deprecations and Breaking Changes
* ❓
===== New Features and Improvements
* ❓
[[release-notes-5.0.0-m6-junit-jupiter]]
==== JUnit Jupiter
===== Bug Fixes
* ❓
===== Deprecations and Breaking Changes
* ❓
===== New Features and Improvements
* All `fail` methods in `Assertions` can now be used as an expression.
[[release-notes-5.0.0-m6-junit-vintage]]
==== JUnit Vintage
===== Bug Fixes
* ❓
===== Deprecations and Breaking Changes
* ❓
===== New Features and Improvements
* ❓
``` |
cd1d3af1-761e-4bdd-8538-0f10a9d3c34f | {
"language": "AsciiDoc"
} | ```asciidoc
= re:Clojure
London Clojurians
2019-12-02
:jbake-type: event
:jbake-edition: 2019
:jbake-link: https://reclojure.org
:jbake-location: London, United Kingdom
:jbake-start: 2019-12-02
:jbake-end: 2019-12-02
We have the pleasure to invite you to https://reclojure.org[re:Clojure] - a brand new and free community driven conference, taking place in London on Dec 2nd, 2019 in an incredible venue. Many of the original ClojureX speakers will be there. Special thanks to the Clojure community and the sponsors for managing to put this together at such a short notice. We look forward to seeing you there!
Please reserve your seat at https://reclojure.org and join #reclojure on Clojurians for more info.
```
Update re:Clojure conference information - remove typo | ```asciidoc
= re:Clojure
London Clojurians
2019-12-02
:jbake-type: event
:jbake-edition: 2019
:jbake-link: https://reclojure.org
:jbake-location: London, United Kingdom
:jbake-start: 2019-12-02
:jbake-end: 2019-12-02
We have the pleasure to invite you to https://reclojure.org[re:Clojure] - a brand new and free community driven conference, taking place in London on Dec 2nd, 2019 in an incredible venue. Many of the original ClojureX speakers will be there. Special thanks to the Clojure community and the sponsors for managing to put this together at such short notice. We look forward to seeing you there!
Please reserve your seat at https://reclojure.org and join #reclojure on Clojurians for more info.
``` |
041fa46b-ada5-46ab-b39b-60913a2f0a60 | {
"language": "AsciiDoc"
} | ```asciidoc
= griffon-quartz-plugin
:linkattrs:
:project-name: griffon-quartz-plugin
image:http://img.shields.io/travis/griffon-plugins/{project-name}/master.svg["Build Status", link="https://travis-ci.org/griffon-plugins/{project-name}"]
image:http://img.shields.io/coveralls/griffon-plugins/{project-name}/master.svg["Coverage Status", link="https://coveralls.io/r/griffon-plugins/{project-name}"]
image:http://img.shields.io/badge/license-ASF2-blue.svg["Apache License 2", link="http://www.apache.org/licenses/LICENSE-2.0.txt"]
image:https://api.bintray.com/packages/griffon/griffon-plugins/{project-name}/images/download.svg[link="https://bintray.com/griffon/griffon-plugins/{project-name}/_latestVersion"]
---
This plugin allows your Griffon application to schedule jobs to be executed using a specified interval or
cron expression. The underlying system uses the http://www.quartz-scheduler.org[Quartz Scheduler, window="_blank"].
Refer to the link:http://griffon-plugins.github.io/{project-name}/[plugin guide, window="_blank"] for
further information on configuration and usage.
```
Add Patreon badge to readme | ```asciidoc
= griffon-quartz-plugin
:linkattrs:
:project-name: griffon-quartz-plugin
image:http://img.shields.io/travis/griffon-plugins/{project-name}/master.svg["Build Status", link="https://travis-ci.org/griffon-plugins/{project-name}"]
image:http://img.shields.io/coveralls/griffon-plugins/{project-name}/master.svg["Coverage Status", link="https://coveralls.io/r/griffon-plugins/{project-name}"]
image:http://img.shields.io/badge/license-ASF2-blue.svg["Apache License 2", link="http://www.apache.org/licenses/LICENSE-2.0.txt"]
image:https://api.bintray.com/packages/griffon/griffon-plugins/{project-name}/images/download.svg[link="https://bintray.com/griffon/griffon-plugins/{project-name}/_latestVersion"]
---
image:https://img.shields.io/gitter/room/griffon/griffon.js.svg[link="https://gitter.im/griffon/griffon"]
image:https://img.shields.io/badge/donations-Patreon-orange.svg[link="https://www.patreon.com/user?u=6609318"]
---
This plugin allows your Griffon application to schedule jobs to be executed using a specified interval or
cron expression. The underlying system uses the http://www.quartz-scheduler.org[Quartz Scheduler, window="_blank"].
Refer to the link:http://griffon-plugins.github.io/{project-name}/[plugin guide, window="_blank"] for
further information on configuration and usage.
``` |
eff474f3-f39d-47f5-b098-52cd28292a1e | {
"language": "AsciiDoc"
} | ```asciidoc
= Neo4j OGM - An Object Graph Mapping Library for Neo4j v{version}
ifdef::backend-html5[(C) {copyright}]
ifndef::backend-pdf[]
License: link:{common-license-page-uri}[Creative Commons 4.0]
endif::[]
ifdef::backend-pdf[]
(C) {copyright}
License: <<license, Creative Commons 4.0>>
endif::[]
[abstract]
--
This is the Neo4j object-graph mapping (OGM) manual, authored by the Neo4j team.
--
The three parts of the manual are:
* <<introduction>> -- Introducing graph database concepts, Neo4j and object-graph mapping.
* <<tutorial>> -- Follow along as you get started using Neo4j OGM.
* <<reference>> -- Reference documentation for Neo4j OGM.
But before starting, let's see the most important new features.
include::introduction/index.adoc[leveloffset=+1]
include::tutorial/index.adoc[leveloffset=+1]
include::reference/index.adoc[leveloffset=+1]
:numbered!:
[[appendix:migration]]
include::appendix/migration-2.1-to-3.0.adoc[leveloffset=+1]
[[design-considerations]]
include::appendix/design-considerations.adoc[leveloffset=+1]
[[faq]]
include::appendix/faq.adoc[leveloffset=+1]
ifdef::backend-pdf[]
include::{license-dir}/license.adoc[leveloffset=1]
endif::[]
```
Fix content ids in docs | ```asciidoc
= Neo4j OGM - An Object Graph Mapping Library for Neo4j v{version}
ifdef::backend-html5[(C) {copyright}]
ifndef::backend-pdf[]
License: link:{common-license-page-uri}[Creative Commons 4.0]
endif::[]
ifdef::backend-pdf[]
(C) {copyright}
License: <<license, Creative Commons 4.0>>
endif::[]
[abstract]
--
This is the Neo4j object-graph mapping (OGM) manual, authored by the Neo4j team.
--
The three parts of the manual are:
* <<introduction>> -- Introducing graph database concepts, Neo4j and object-graph mapping.
* <<tutorial>> -- Follow along as you get started using Neo4j OGM.
* <<reference>> -- Reference documentation for Neo4j OGM.
But before starting, let's see the most important new features.
include::introduction/index.adoc[leveloffset=+1]
include::tutorial/index.adoc[leveloffset=+1]
include::reference/index.adoc[leveloffset=+1]
:numbered!:
include::appendix/migration-2.1-to-3.0.adoc[leveloffset=+1]
include::appendix/design-considerations.adoc[leveloffset=+1]
include::appendix/faq.adoc[leveloffset=+1]
ifdef::backend-pdf[]
include::{license-dir}/license.adoc[leveloffset=1]
endif::[]
``` |
bf922034-c892-4ae9-be63-4f06ecaabcb5 | {
"language": "AsciiDoc"
} | ```asciidoc
:current: 6.4
:register: https://register.elastic.co
:elasticdocs: https://www.elastic.co/guide/en/elasticsearch/reference/{current}
:licenseexpiration: {stackdocs}/license-expiration.html
[WARNING]
--
After the trial license period expires, the trial platinum features
{licenseexpiration}[**operate in a degraded mode**].
You should update your license as soon as possible. You are essentially flying blind
when running with an expired license. The license can be updated at any point before
or on expiration, using the {elasticdocs}/update-license.html[Update License API]
or Kibana UI, if available in the version deployed.
With Elasticsearch 6.3+, you can revert to a free perpetual basic license
included with deployment by using the {elasticdocs}/start-basic.html[Start Basic API].
With Elasticsearch 6.2 and prior, you can {register}[register for free basic license] and apply
it to the cluster.
--
```
Update documentation versions and links | ```asciidoc
:current: 6.5
:register: https://register.elastic.co
:elasticdocs: https://www.elastic.co/guide/en/elasticsearch/reference/{current}
:licenseexpiration: {stackdocs}/license-expiration.html
[WARNING]
--
After the trial license period expires, the trial platinum features
{licenseexpiration}[**operate in a degraded mode**].
You should update your license as soon as possible. You are essentially flying blind
when running with an expired license. The license can be updated at any point before
or on expiration, using the {elasticdocs}/update-license.html[Update License API]
or Kibana UI, if available in the version deployed.
With Elasticsearch 6.3+, you can revert to a free perpetual basic license
included with deployment by using the {elasticdocs}/start-basic.html[Start Basic API].
With Elasticsearch 6.2 and prior, you can {register}[register for free basic license] and apply
it to the cluster.
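For example, on 6.3+ the basic license can be activated directly against the cluster (the `acknowledge` parameter skips the interactive confirmation):
[source,sh]
----
POST /_xpack/license/start_basic?acknowledge=true
----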
--
``` |
51f3831b-cc7d-4577-bd46-e5fa23ea59ea | {
"language": "AsciiDoc"
} | ```asciidoc
spray-kamon-metrics: release notes
==================================
// tag::release-notes[]
=== 0.1.3-SNAPSHOT
* Dependency updates:
** Akka 2.4.8
** Kamon 0.6.2
** Spray 1.3.3
=== 0.1.2 (22 Sep 2015)
* `TracingHttpService`
** No longer change trace name
** When receiving a request context, no longer create a duplicate measurement
** Added a `sealRoute` method with the same signature and behaviour as the one
from `HttpService`
* Update dependencies
** Upgrade to Akka 2.3.14
** Upgrade to Kamon 0.5.1
** Now explicitly depend on Typesafe Config 1.3.0
=== 0.1.1 (17 Aug 2015)
* Update dependency on Kamon to 0.5.0
=== 0.1.0 (10 Aug 2015)
* Initial release
// end::release-notes[]
```
Update release notes for release | ```asciidoc
spray-kamon-metrics: release notes
==================================
// tag::release-notes[]
=== 0.1.3 (18 Aug 2016)
* Dependency updates:
** Akka 2.4.8
** Kamon 0.6.2
** Spray 1.3.3
=== 0.1.2 (22 Sep 2015)
* `TracingHttpService`
** No longer change trace name
** When receiving a request context, no longer create a duplicate measurement
** Added a `sealRoute` method with the same signature and behaviour as the one
from `HttpService`
* Update dependencies
** Upgrade to Akka 2.3.14
** Upgrade to Kamon 0.5.1
** Now explicitly depend on Typesafe Config 1.3.0
=== 0.1.1 (17 Aug 2015)
* Update dependency on Kamon to 0.5.0
=== 0.1.0 (10 Aug 2015)
* Initial release
// end::release-notes[]
``` |
87f9c87c-aee5-46cd-a2e1-601a71bc9e63 | {
"language": "AsciiDoc"
} | ```asciidoc
// Allow GitHub image rendering
:imagesdir: ../../images
[[gi-install-oracle-java-debian]]
=== Setup on _Debian-based_ systems
This section describes how to install _Oracle Java 8_ on a _Debian-based_ system like _Debian 8_ or _Ubuntu 14.04 LTS_.
.Install add-apt-repository package
[source, bash]
----
apt-get install -y python-software-properties
----
.Add Java repository from webud8 maintainer
[source, bash]
----
add-apt-repository ppa:webupd8team/java
apt-get update
----
.Install Oracle Java 8 installer
[source, bash]
----
apt-get install -y oracle-java8-installer
----
```
Fix Oracle Java install guide | ```asciidoc
// Allow GitHub image rendering
:imagesdir: ../../images
[[gi-install-oracle-java-debian]]
=== Setup on _Debian-based_ systems
This section describes how to install _Oracle Java 8_ on a _Debian-based_ system like _Debian 8_ or _Ubuntu 14.04 LTS_.
.Add Java repository from webupd8 maintainer
[source, bash]
----
su -
echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee /etc/apt/sources.list.d/webupd8team-java.list
echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee -a /etc/apt/sources.list.d/webupd8team-java.list
----
.Add repository key server and update repository
[source, bash]
----
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
apt-get update
----
.Install Oracle Java 8 installer
[source, bash]
----
apt-get install -y oracle-java8-installer
----
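Once the installer finishes, a quick check confirms the setup:
.Verify the installed Java version
[source, bash]
----
java -version
----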
``` |
2eee6c1e-8371-423f-a7c4-55705c7b2cee | {
"language": "AsciiDoc"
} | ```asciidoc
* link:intro[intro]
* link:install[install]
* link:faq[faq]
```
Add link to config section | ```asciidoc
* link:intro[intro]
* link:install[install]
* link:configuration[configuration]
* link:faq[faq]
``` |
a0fa975c-1e67-4347-82ad-9d1cdd0b8349 | {
"language": "AsciiDoc"
} | ```asciidoc
# Vert.x OAuth2 Auth
Please see the documentation for more information.
## Running test
There is a Keycloak Docker image available, that you can load with the provided configuration `src/test/fixtures/vertx-test-realm.json`
```
docker run -d --rm --name keycloak -p 8888:8080 -e KEYCLOAK_USER=user -e KEYCLOAK_PASSWORD=password -e DB_VENDOR=H2 jboss/keycloak
```
But you can simply run `mvn test`, since the Keycloak responses are mocked with a Web Server started in the test setup.
### IntelliJ IDE
In IntelliJ IDE, you have to uncheck `argLine` in Preferences -> Build,Execution,Deployment -> Build Tools -> Maven -> Running Tests ...
to avoid `IntelliJ Error when running unit test: Could not find or load main class ${surefireArgLine}`
- https://github.com/vert-x3/vertx-ext-parent/issues/7
- https://stackoverflow.com/questions/24115142/intellij-error-when-running-unit-test-could-not-find-or-load-main-class-suref
```
Document how to run the Integration Tests | ```asciidoc
# Vert.x OAuth2 Auth
Please see the documentation for more information.
## Running test
Standard unit tests are run against a mock of some providers (`Google`, `Keycloak`). To run
against a real provider (`Keycloak`), the `IT` profile must be enabled, which you can do by:
```
mvn -PIT ...
```
Or by having the environment variable `TRAVIS` set to `true`. When running the integration
tests you must have a local keycloak installed with the configuration file `src/test/fixtures/vertx-test-realm.json`.
And the container can be run locally as:
```
# build the image if not present
docker build -t vertx-test-keycloak src/test/fixtures
# run once there is an image
docker run -d -p 8888:8080 vertx-test-keycloak
```
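To check that Keycloak is accepting connections (assuming the image serves the usual Keycloak `/auth` root context):
```
curl -s http://localhost:8888/auth/
```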
### IntelliJ IDE
In IntelliJ IDE, you have to uncheck `argLine` in Preferences -> Build,Execution,Deployment -> Build Tools -> Maven -> Running Tests ...
to avoid `IntelliJ Error when running unit test: Could not find or load main class ${surefireArgLine}`
- https://github.com/vert-x3/vertx-ext-parent/issues/7
- https://stackoverflow.com/questions/24115142/intellij-error-when-running-unit-test-could-not-find-or-load-main-class-suref
``` |
c1f92524-1e03-4f69-98a6-091fa2efa1d4 | {
"language": "AsciiDoc"
} | ```asciidoc
= Turbine
image:https://travis-ci.org/concourse/turbine.svg?branch=master["Build Status", link="https://travis-ci.org/concourse/turbine"]
image:https://coveralls.io/repos/concourse/turbine/badge.png["Coverage Status", link="https://coveralls.io/r/concourse/turbine"]
* http://docs.room101agent.apiary.io/[API]
```
Add description and annotate api as stale | ```asciidoc
= Turbine
image:https://travis-ci.org/concourse/turbine.svg?branch=master["Build Status", link="https://travis-ci.org/concourse/turbine"]
image:https://coveralls.io/repos/concourse/turbine/badge.png["Coverage Status", link="https://coveralls.io/r/concourse/turbine"]
Provides a stateless REST API to execute builds on a https://github.com/cloudfoundry-incubator/garden[Warden] server.
* http://docs.room101agent.apiary.io/[API] (currently stale)
``` |
c78888d0-5786-4082-91c7-bca63a7f42d9 | {
"language": "AsciiDoc"
} | ```asciidoc
// Module included in the following assemblies:
//
// * networking/cluster-network-operator.adoc
[id="nw-cno-logs_{context}"]
= Viewing Cluster Network Operator logs
You can view Cluster Network Operator logs by using the `oc logs` command.
.Procedure
* Run the following command to view the logs of the Cluster Network Operator:
----
$ oc logs --namespace=openshift-network-operator deployment/network-operator
----
```
Add missing plus before command | ```asciidoc
// Module included in the following assemblies:
//
// * networking/cluster-network-operator.adoc
[id="nw-cno-logs_{context}"]
= Viewing Cluster Network Operator logs
You can view Cluster Network Operator logs by using the `oc logs` command.
.Procedure
* Run the following command to view the logs of the Cluster Network Operator:
+
----
$ oc logs --namespace=openshift-network-operator deployment/network-operator
----
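+
To follow the log stream while you reproduce an issue, add the standard `-f` flag:
+
----
$ oc logs -f --namespace=openshift-network-operator deployment/network-operator
----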
``` |
f9b0aa0e-7cab-4d25-8e1b-39f49c3a33f8 | {
"language": "AsciiDoc"
} | ```asciidoc
== Prerequisites
=== Installing Protocol Buffers Compiler
==== For Windows
Install the binary from https://github.com/google/protobuf/releases/download/v2.4.1/protoc-2.4.1-win32.zip and then set the `/path/to/protoc/parent` on your PATH variable.
==== For Macs
```
brew tap homebrew/versions
brew install protobuf241
brew link --force --overwrite protobuf241
```
_For both Windows and Mac: To verify that installation was successful, `protoc --version` should display `2.4.1`_
```
Update how to get protoc on Mac | ```asciidoc
== Prerequisites
=== Installing Protocol Buffers Compiler
==== For Windows
Install the binary from https://github.com/google/protobuf/releases/download/v2.4.1/protoc-2.4.1-win32.zip and then set the `/path/to/protoc/parent` on your PATH variable.
==== For Macs
```
brew install https://raw.githubusercontent.com/Homebrew/homebrew-versions/7f5eb0d/protobuf241.rb
brew link --force --overwrite protobuf241
```
_For both Windows and Mac: To verify that installation was successful, `protoc --version` should display `2.4.1`_
``` |
47d2e51d-2127-4bc4-b7c0-dc3ccfd1bfd9 | {
"language": "AsciiDoc"
} | ```asciidoc
= Overview
{product-author}
{product-version}
:data-uri:
:icons:
:experimental:
With the OpenShift command line interface (CLI), you can
link:../dev_guide/new_app.html[create applications] and manage OpenShift
link:../dev_guide/projects.html[projects] from a terminal. The CLI is ideal in
situations where you are:
- Working directly with project source code.
- Scripting OpenShift operations.
- Restricted by bandwidth resources and cannot use the
link:../architecture/infrastructure_components/web_console.html[web console].
The OpenShift CLI is available using the `oc` command:
----
$ oc <command>
----
ifdef::openshift-enterprise[]
You can download and unpack the CLI with an active OpenShift Enterprise
subscription from the
https://access.redhat.com/downloads/content/290[Red
Hat Customer Portal].
endif::[]
ifdef::openshift-origin[]
You can download and unpack the CLI from the
https://github.com/openshift/origin/releases[Releases page] of the OpenShift
Origin source repository on GitHub.
endif::[]
[NOTE]
====
The CLI command examples presented through OpenShift documentation use
`oc` command syntax. If the `oc` binary is not available on your workstation,
you can alternatively substitute `openshift cli` in the examples if you
have the `openshift` binary.
====
```
Document Git as a prerequisite of the CLI client | ```asciidoc
= Overview
{product-author}
{product-version}
:data-uri:
:icons:
:experimental:
With the OpenShift command line interface (CLI), you can
link:../dev_guide/new_app.html[create applications] and manage OpenShift
link:../dev_guide/projects.html[projects] from a terminal. The CLI is ideal in
situations where you are:
- Working directly with project source code.
- Scripting OpenShift operations.
- Restricted by bandwidth resources and cannot use the
link:../architecture/infrastructure_components/web_console.html[web console].
The OpenShift CLI is available using the `oc` command:
----
$ oc <command>
----
ifdef::openshift-enterprise[]
You can download and unpack the CLI with an active OpenShift Enterprise
subscription from the
https://access.redhat.com/downloads/content/290[Red
Hat Customer Portal].
endif::[]
ifdef::openshift-origin[]
You can download and unpack the CLI from the
https://github.com/openshift/origin/releases[Releases page] of the OpenShift
Origin source repository on GitHub.
endif::[]
[NOTE]
====
The CLI command examples presented through OpenShift documentation use
`oc` command syntax. If the `oc` binary is not available on your workstation,
you can alternatively substitute `openshift cli` in the examples if you
have the `openshift` binary.
====
[NOTE]
====
Certain operations require Git to be locally installed on a client. For example, the command to create an application using a remote Git repository:
`$ oc new-app \https://gitrepository/app`
====
``` |
8f06d43a-96b6-4101-acff-1b1d2de768c8 | {
"language": "AsciiDoc"
} | ```asciidoc
= Griffon 2.0.0.RC2 Released
Andres Almiray
2014-08-11
:jbake-type: post
:jbake-status: published
:category: news
:idprefix:
== Griffon 2.0.0.RC2 Released
The Griffon team is happy to announce the release of Griffon 2.0.0.RC2!
This is the first release candidate of Griffon 2.0.0. If all goes according to plan the next release
will be 2.0.0 final in a few weeks time.
The following list summarizes the changes brought by this release:
* Groovy support upgraded to Groovy 2.3.6.
* Pivot support upgraded to Pivot 2.0.4.
* Lots of updates applied to Lazybones application templates.
* Properly sort auto-loaded modules in tests (test modules are placed last).
* More content added to the link:../guide/2.0.0.RC2/index.html[Griffon Guide].
We look forward to your feedback. Please report any problems you find to the Griffon User list,
or better yet, file a bug at http://github.com/griffon/griffon/issues
Remember you can also contact the team on Twitter: http://twitter.com/theaviary[@theaviary].
Many thanks to all who contributed to this release!
The Griffon Team
```
Fix news entry for 2.0.0.RC2 | ```asciidoc
= Griffon 2.0.0.RC2 Released
Andres Almiray
2014-08-11
:jbake-type: post
:jbake-status: published
:category: news
:idprefix:
== Griffon 2.0.0.RC2 Released
The Griffon team is happy to announce the release of Griffon 2.0.0.RC2!
This is the second release candidate of Griffon 2.0.0. If all goes according to plan the next release
will be 2.0.0 final in a few weeks time.
The following list summarizes the changes brought by this release:
* Groovy support upgraded to Groovy 2.3.6.
* Pivot support upgraded to Pivot 2.0.4.
* Lots of updates applied to Lazybones application templates.
* Properly sort auto-loaded modules in tests (test modules are placed last).
* More content added to the link:../guide/2.0.0.RC2/index.html[Griffon Guide].
We look forward to your feedback. Please report any problems you find to the Griffon User list,
or better yet, file a bug at http://github.com/griffon/griffon/issues
Remember you can also contact the team on Twitter: http://twitter.com/theaviary[@theaviary].
Many thanks to all who contributed to this release!
The Griffon Team
``` |
7648ca09-481b-4255-a0a5-7f836567f0c3 | {
"language": "AsciiDoc"
} | ```asciidoc
= secret nemesis
[quote]
Submission for a group symbolic regression assignment in my AI class.
== Status
Function generation is done, though the function itself is rarely pretty.
== Getting started
Don't bother trying to run this on a Windows PC unless you know what you're
doing (if you don't understand init.sh, just set up a virtual machine).
[source,bash]
----
./init.sh
source venv/bin/activate
# OPTIONAL: change the parameters in symreg.py
python symreg.py
----
== Authors
In alphabetic order:
- Ciaran Downey (@ciarand)
- Eric Rosenzweig (@erosenzweig)
- Joe Greene (@joegreene)
- Michelle Beck (@michelephantz)
- Rhapsody Budiono (@rhapswt)
== License
ISC, see the LICENSE file.
```
Clarify quote, make it even less egocentric | ```asciidoc
= secret nemesis
[quote]
Submission for a group symbolic regression assignment in CSUF's Summer 2015 AI
class.
== Status
Function generation is done, though the function itself is rarely pretty.
== Getting started
Don't bother trying to run this on a Windows PC unless you know what you're
doing (if you don't understand init.sh, just set up a virtual machine).
[source,bash]
----
./init.sh
source venv/bin/activate
# OPTIONAL: change the parameters in symreg.py
python symreg.py
----
== Authors
In alphabetic order:
- Ciaran Downey (@ciarand)
- Eric Rosenzweig (@erosenzweig)
- Joe Greene (@joegreene)
- Michelle Beck (@michelephantz)
- Rhapsody Budiono (@rhapswt)
== License
ISC, see the LICENSE file.
``` |
a165338c-11c4-4e39-8dd6-66cd1f7407e7 | {
"language": "AsciiDoc"
} | ```asciidoc
= Hawkular UI Services
[.lead]
*Hawkular UI Services* is the home of Angular UI services, such as ngResources and other utility services that can be used across Hawkular UI Modules (plugins). These UI plugins comprise the Hawkular console for https://github.com/hawkular/hawkular[Hawkular]. This console is based on https://github.com/hawtio/hawtio/blob/master/docs/Overview2dotX.md[Hawt.io 2], which is a JavaScript-based client-side framework. The various directories underneath the console directory are Hawt.io 2 plugins, and these plugins eventually compose a Hawkular console. This modular approach to creating Hawt.io console plugins allows us to create individual plugins that comprise a console or can easily be plugged into other Hawt.io-based consoles. The plugin system makes it easy to add/remove functionality (even dynamically).
To build the service:
```shell
npm install
bower install
gulp
```
The service can be tested with the server. To run the actual testsuite you need to have hawkular-metrics server running
on localhost on port 8080. It must have a clean database (= no tenants, no metrics, ...). Once the server is up and
running execute:
```shell
gulp rest
```
to start the testsuite.
```
Fix the readme about the testsuite execution | ```asciidoc
= Hawkular UI Services
[.lead]
*Hawkular UI Services* is the home of Angular UI services, such as ngResources and other utility services that can be used across Hawkular UI Modules (plugins). These UI plugins comprise the Hawkular console for https://github.com/hawkular/hawkular[Hawkular]. This console is based on https://github.com/hawtio/hawtio/blob/master/docs/Overview2dotX.md[Hawt.io 2], which is a JavaScript-based client-side framework. The various directories underneath the console directory are Hawt.io 2 plugins, and these plugins eventually compose a Hawkular console. This modular approach to creating Hawt.io console plugins allows us to create individual plugins that comprise a console or can easily be plugged into other Hawt.io-based consoles. The plugin system makes it easy to add/remove functionality (even dynamically).
To build the service:
```shell
npm install
bower install
gulp
```
The service can be tested with the server. To run the actual testsuite you need to have relevant (that means hawkular-metrics,
hawkular-inventory, ...) server running on localhost on port 8080. It must have a clean database (= no tenants, no metrics, ...).
Once the server is up and running, execute the relevant rest sub-target:
```shell
gulp rest:inventory
gulp rest:metric
```
to start the testsuite.
``` |