Hyper-V generation 2 virtual machines – Part 10
The final part (at least for now, I’m hoping a guest author will write part 11) of this series on generation 2 virtual machines in Hyper-V re-visits conversion of a generation 1 virtual machine to a generation 2 virtual machine. Part 8 walked through the manual process, assuming it is possible. However, there is an easier way with a PowerShell script I wrote and recently released, called Convert-VMGeneration.ps1, which is available for download. The script is intended to make life as simple as possible, reducing, as far as reasonably possible, the manual post-migration fix-ups that may be required.
Convert-VMGeneration.ps1 is self-documenting – after downloading it to a local drive, running get-help .\Convert-VMGeneration.ps1 –Full will give you everything you need to know about how to use it. I will assume that you have read part 8 of this series before use, so that you understand the three phases of conversion – capture, apply and clone.
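As a rough illustration of a run (a sketch, not taken from the script’s documentation; the -VMName parameter name below is an assumption, while -Path, -WIM and -KeepWIM are parameters discussed in the tips that follow):

# Read the built-in documentation first
get-help .\Convert-VMGeneration.ps1 -Full

# Hypothetical conversion of a generation 1 VM, keeping the intermediate WIM
# so the capture phase does not need to be repeated if a later phase fails.
.\Convert-VMGeneration.ps1 -VMName "MyGen1VM" `
                           -Path "D:\VMs\MyGen2VM" `
                           -WIM "D:\Temp\MyGen1VM.wim" `
                           -KeepWIM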
A few tips worth mentioning:
- Run reagentc /disable in the guest operating system prior to shutting down and converting (if reagentc /info indicates it is enabled). After conversion, in the generation 2 virtual machine, run reagentc /enable. This makes life a lot easier in terms of the Windows Recovery Environment and (lack of) manual intervention.
- I strongly recommend you use the –Path parameter. Alternatively, after a VM has been created, move the VM and its related storage to the right path using Hyper-V Manager. Note that if you subsequently perform a storage move, data disks which were in use by the generation 1 virtual machine will no longer be valid from its perspective.
- For the first few times of use, you might want to specify the name of the temporary .WIM file using the –WIM parameter, combined with –KeepWIM. That’ll save some time if a failure occurs during the apply or clone phases as you’ll not need to redo the potentially time-consuming capture phase. Using the -WIM parameter also makes it easier if you are running short of disk space and want to use an alternate location.
- Be aware that you may not be able to start both the generation 1 and generation 2 virtual machines simultaneously (for example if they share a data VHDX). In reality, you really don’t want both machines on the network at the same time anyway. Be cautious of other implications too such as domain joined machine account passwords changing and so on.
- Perform a move of the VM and all its data to another path after migration and you’ve verified the functionality of the converted generation 2 virtual machine. This will make it much cleaner to delete the old generation 1 virtual machine and clean up any old files which may be lingering.
- The conversion performs a highly destructive step which completely wipes a disk. If everything goes to plan, it’s the blank VHDX used as the boot disk for the generation 2 virtual machine. However, things could go wrong. For that reason, I strongly recommend you heed the warning (which cannot be suppressed) and make appropriate verifications. Should data loss occur for reasons such as a coding error, no liability is assumed. If in doubt, export the generation 1 virtual machine and import it onto a scratch box, then do the conversion. That way, you are assured you will not lose anything important IF there is a bug.
- Similar to HVRemote which some of you may have used, this script is not supported or endorsed by Microsoft Corporation.
- Don’t close the PowerShell window while a conversion is in progress. Due to a bug in Windows, you may leave your system in a state whereby a drive letter is ‘leaked’ – it will be visible in Windows File Explorer, but unusable until the system is restarted. To cancel the script while it is in progress, use Control-C and wait for it to clean up instead.
Here’s an example of the conversion of a Windows Server 2012 virtual machine. The original virtual machine is highly available. The resulting generation 2 virtual machine will need to be made highly available after migration has completed.
Here’s an example where there are some (potential) issues to resolve. One is that there is an additional data partition on the boot disk for the source VM. Windows Recovery Environment was left enabled during the migration and will not operate correctly in the generation 2 virtual machine. Lastly, the original source boot disk was a differencing disk. Appropriate fix-ups may be required to mirror the configuration on the generation 2 virtual machine.
There are some VM configurations which aren’t handled, and the script will block the conversion:
- The VM is running. The reason is simply that you cannot reasonably expect an operating system to continue running correctly if the firmware is fundamentally changed from under it.
- Checkpoints (snapshots) are in use by the generation 1 virtual machine. The reasons partly relate to the above statement – an online checkpoint is equivalent to a running VM. While offline checkpoints could be converted in theory, there are further problems:
- The mount-diskimage cmdlet that the script uses under the covers does not support .AVHDX files, which are used by checkpoints. Further, the time to do the conversion would be prohibitive – each checkpoint would have to be done individually.
- Rebuilding the checkpoint tree with parent/child would be pretty much impossible – at best an individual offline checkpoint could be converted to a ‘standalone’ generation 2 virtual machine.
- Hyper-V replica is enabled. The reason is simply that I did not have the time to validate this scenario. In theory it should work. Obviously replica would need to be re-enabled after the conversion. The workaround is to disable replica prior to conversion, or use the “-IgnoreReplicaCheck” parameter.
- When a guest cluster is configured in the generation 1 virtual machine using a shared VHDX. A highly available/clustered VM from the parent partition perspective works fine though. I am somewhat doubtful a guest cluster would come across cleanly, and have done no validation. The script hard blocks if any VHDX in the generation 1 virtual machine is shared.
I hope you find Convert-VMGeneration.ps1 useful. I’ll try to fix any bugs you report as time permits! A note for PowerShell aficionados reading the code – I make no apologies for my lack of PowerShell skills. This was the first time I’ve undertaken writing anything of reasonable complexity in PS. I’m sure there are many ways to make the code more powershellesque in efficiency.
So with that, I’ve reached the end of the planned parts of this series on generation 2 virtual machines in Hyper-V in Windows 8.1 and Windows Server 2012 R2. I hope you enjoyed them and found them and the conversion utility useful. As always, comments, questions and feedback are welcome.
Cheers,
John.
Install Node.js applications
NOTE: To avoid a conflict between two or more Node.js applications attempting to use the same port, modify the application settings as needed to use a different port for each application.. | https://docs.bitnami.com/azure/apps/openproject/configuration/install-apps/ | 2020-02-17T00:36:51 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.bitnami.com |
Introduction to Azure managed disks.
The available types of disks are ultra disks, premium solid-state drives (SSD), standard SSDs, and standard hard disk drives (HDD). For information about each individual disk type, see Select a disk type for IaaS VMs.
Benefits of managed disks
Let's go over some of the benefits you gain by using managed disks.
Highly durable and available
Managed disks are designed for 99.999% availability. Managed disks achieve this by providing you with three replicas of your data, allowing for high durability. If one or even two replicas experience issues, the remaining replicas help ensure persistence of your data and high tolerance against failures. This architecture has helped Azure consistently deliver enterprise-grade durability for infrastructure as a service (IaaS) disks, with an industry-leading ZERO% annualized failure rate.
Simple and scalable VM deployment.
Integration with availability sets
Managed disks are integrated with availability sets to ensure that the disks of VMs in an availability set are sufficiently isolated from each other to avoid a single point of failure. Disks are automatically placed in different storage scale units (stamps). If a stamp fails due to hardware or software failure, only the VM instances with disks on those stamps fail. For example, let's say you have an application running on five VMs, and the VMs are in an Availability Set. The disks for those VMs won't all be stored in the same stamp, so if one stamp goes down, the other instances of the application continue to run.
Granular access control
You can use Azure role-based access control (RBAC) to assign specific permissions for a managed disk to one or more users. Managed disks expose a variety of operations, including read, write (create/update), delete, and retrieving a shared access signature (SAS) URI for the disk. You can grant access to only the operations a person needs to perform.
Azure Server-side Encryption provides encryption-at-rest and safeguards your data to meet your organizational security and compliance commitments. Server-side encryption is enabled by default for all managed disks, snapshots, and images in all the regions where managed disks are available. You can either allow Azure to manage your keys for you, these are platform-managed keys, or you can manage the keys yourself, these are customer-managed keys. Visit the Managed Disks FAQ page for more details.
Disk roles
There are three main disk roles in Azure: the data disk, the OS disk, and the temporary disk. These roles map to disks that are attached to your virtual machine.
Data disk
A data disk is a managed disk that's attached to a virtual machine to store application data, or other data you need to keep. Data disks are registered as SCSI drives and are labeled with a letter that you choose. Each data disk has a maximum capacity of 32,767 GiB.
Images
Managed disks also support creating a managed custom image. You can create an image from your custom VHD in a storage account or directly from a generalized (sysprepped) VM. This process captures a single image. This image contains all managed disks associated with a VM, including both the OS and data disks. This managed custom image enables creating hundreds of VMs using your custom image without the need to copy or manage any storage accounts.
For information on creating images, see the following articles:
- How to capture a managed image of a generalized VM in Azure
- How to generalize and capture a Linux virtual machine using the Azure CLI
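As a minimal Azure CLI sketch (resource and VM names are placeholders; the portal or PowerShell can be used instead), capturing a managed image from a generalized VM looks something like this:

# Deallocate the VM and mark it as generalized (after running sysprep inside the guest)
az vm deallocate --resource-group myResourceGroup --name myVM
az vm generalize --resource-group myResourceGroup --name myVM

# Capture a managed image that includes the OS and data disks
az image create --resource-group myResourceGroup --name myImage --source myVM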
Images versus snapshots
It's important to understand the difference between images and snapshots. With managed disks, you can take an image of a generalized VM that has been deallocated. This image includes all of the disks attached to the VM. You can use this image to create a VM, and it includes all of the disks.
A snapshot is a copy of a disk at the point in time the snapshot is taken. It applies only to one disk. If you have a VM that has one disk (the OS disk), you can take a snapshot or an image of it and create a VM from either the snapshot or the image.
A snapshot doesn't have awareness of any disk except the one it contains. This makes it problematic to use in scenarios that require the coordination of multiple disks, such as striping. Snapshots would need to be able to coordinate with each other, and this is currently not supported.
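For instance, a sketch with placeholder names (Azure CLI assumed): creating a snapshot of a single managed disk, and then a new managed disk from that snapshot, looks like this:

# Snapshot an existing managed disk
az snapshot create --resource-group myResourceGroup --name mySnapshot --source myOSDisk

# Create a new managed disk from the snapshot
az disk create --resource-group myResourceGroup --name myRestoredDisk --source mySnapshot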
'Disk roles in action'], dtype=object)
array(['../../includes/media/virtual-machines-managed-disks-overview/real-time-disk-allocation.png',
'Three level provisioning system showing bandwidth and IOPS allocation'],
dtype=object)
array(['../../includes/media/virtual-machines-managed-disks-overview/example-vm-allocation.png',
'Standard_DS1v1 example allocation'], dtype=object) ] | docs.microsoft.com |
Analytics Advisor Cyber Kill Chain
The Analytics Advisor dashboards are designed to help you understand what content you might want to deploy inside of Splunk based on the content you already have and the data that’s present in your environment. The Kill Chain Overview dashboard even includes a custom visualization designed to show what content is tied to different parts of the Kill Chain.
- Like the Analytics Advisor Content Overview dashboard, the Kill Chain Overview dashboard takes into account the data and active content in your environment to help you choose new and better content. See that dashboard for a full tour of the three steps in this dashboard.
- Each number in these dashboards represents a piece of content. In order to guide you through the dashboard, follow the headlines 1, 2 and 3 to find the content. You can also go directly to the full details for each piece of content by clicking the green button under heading 3.
- Any content labelled Active means that you have content (detections, correlations etc.) enabled in your environment.
- Any content labelled Available means that you have content that can be enabled with data already in Splunk.
- Any content labelled Needs data means that the data to support the content is missing in Splunk.
- The Kill Chain tab shows the coverage in your environment against the Kill Chain steps. You can adjust what numbers are displayed in the visualisation to show Active/Available content.
- The Chart View tab shows, at a high level, how your environment stacks up against the content available and the Cyber Kill Chain.
6.4. Information System Management¶
This chapter contains SIMP security concepts that are related to the management security controls in NIST 800-53.
6.4.2. SIMP Self Risk Assessment¶
6.4.3. Vulnerability Scanning¶
The SIMP development and security team performs regular vulnerability scanning of the product using commercial and open source tools. Results and mitigations for findings from those tools can be provided upon request. [CA-2 : SECURITY ASSESSMENTS, RA-5 : VULNERABILITY SCANNING] | https://simp.readthedocs.io/en/6.3.3/security_conop/System_Management.html | 2020-02-17T00:16:11 | CC-MAIN-2020-10 | 1581875141460.64 | [] | simp.readthedocs.io |
What is Azure Active Directory B2C?
Azure Active Directory B2C (Azure AD B2C) is a customer identity access management (CIAM) solution capable of supporting millions of users and billions of authentications per day. It takes care of the scaling and safety of the authentication platform, monitoring and automatically handling threats like denial-of-service, password spray, or brute force attacks. The following sections of this overview walk you through a demo application that uses Azure AD B2C. You're also welcome to move on directly to a more in-depth technical overview of Azure AD B2C.
Example: WoodGrove Groceries
WoodGrove Groceries is a live web application created by Microsoft to demonstrate several Azure AD B2C features. The next few sections review some of the authentication options provided by Azure AD B2C to the WoodGrove website.
Business overview
WoodGrove is an online grocery store that sells groceries to both individual consumers and business customers. Their business customers buy groceries on behalf of their company, or businesses that they manage.
WoodGrove Groceries offers several sign-in options based on the relationship their customers have with the store:
- Individual customers can sign up or sign in with individual accounts, such as with a social identity provider or an email address and password.
- Business customers can sign up or sign in with their enterprise credentials.
- Partners and suppliers are individuals who supply the grocery store with products to sell. Partner identity is provided by Azure Active Directory B2B.
Authenticate individual customers
When a customer selects Sign in with your personal account, they're redirected to a customized sign-in page hosted by Azure AD B2C. You can see in the following image that we've customized the user interface (UI) to look and feel just like the WoodGrove Groceries website. WoodGrove's customers should be unaware that the authentication experience is hosted and secured by Azure AD B2C.
WoodGrove allows their customers to sign up and sign in by using their Google, Facebook, or Microsoft accounts as their identity provider. Or, they can sign up by using their email address and a password to create what's called a local account.
When a customer selects Sign up with your personal account and then Sign up now, they're presented with a custom sign-up page.
After entering an email address and selecting Send verification code, Azure AD B2C sends them the code. Once they enter their code, select Verify code, and then enter the other information on the form, they must also agree to the terms of service.
Clicking the Create button causes Azure AD B2C to redirect the user back to the WoodGrove Groceries website. When it redirects, Azure AD B2C passes an OpenID Connect authentication token to the WoodGrove web application. The user is now signed-in and ready to go, their display name shown in the top-right corner to indicate they're signed in.
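Under the hood this is a standard OpenID Connect flow. As a sketch (the tenant, policy name, client ID, and redirect URI below are made-up placeholders rather than values from the WoodGrove demo), the redirect that sends the user to Azure AD B2C looks something like this:

https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/oauth2/v2.0/authorize?
    p=B2C_1_signup_signin
    &client_id=00000000-0000-0000-0000-000000000000
    &redirect_uri=https%3A%2F%2Fwww.example.com%2Fauth-callback
    &response_type=id_token
    &response_mode=form_post
    &scope=openid
    &nonce=defaultNonce

Azure AD B2C runs the policy (user flow) named in the p parameter and, on completion, returns an id_token to the redirect_uri, which is how the application learns who signed in.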
Authenticate business customers
When a customer selects one of the options under Business customers, the WoodGrove Groceries website invokes a different Azure AD B2C policy than it does for individual customers.
This policy presents the user with an option to use their corporate credentials for sign-up and sign-in. In the WoodGrove example, users are prompted to sign in with any Office 365 or Azure AD account. This policy uses a multi-tenant Azure AD application and the
/common Azure AD endpoint to federate Azure AD B2C with any Office 365 customer in the world.
Authenticate partners
The Sign in with your supplier account link uses Azure Active Directory B2B's collaboration functionality. Azure AD B2B is a family of features in Azure Active Directory to manage partner identities. Those identities can be federated from Azure Active Directory for access into Azure AD B2C-protected applications.
Learn more about Azure AD B2B in What is guest user access in Azure Active Directory B2B?.
Next steps
Now that you have an idea of what Azure AD B2C is and some of the scenarios it can help with, dig a little deeper into its features and technical aspects.
'Infographic of Azure AD B2C identity providers and downstream applications'],
dtype=object)
array(['media/overview/sign-in-small.png',
'Customized sign-up and sign-in pages and background image'],
dtype=object)
array(['media/overview/scenario-singlesignon.png',
'Diagram of third-party identities federating to Azure AD B2C'],
dtype=object)
array(['media/overview/scenario-remoteprofile.png',
'A logical diagram of Azure AD B2C communicating with an external user store'],
dtype=object)
array(['media/overview/scenario-progressive.png',
'A visual depiction of progressive profiling'], dtype=object)
array(['media/overview/scenario-idproofing.png',
'A diagram showing the user flow for third-party identity proofing'],
dtype=object)
array(['media/overview/woodgrove-overview.png',
'Individual (B2C), business (B2C), and partner (B2B) sign-in pages'],
dtype=object)
array(['media/overview/sign-in.png',
'Custom WoodGrove sign-in page hosted by Azure AD B2C'],
dtype=object)
array(['media/overview/sign-up.png',
'Custom WoodGrove sign-up page hosted by Azure AD B2C'],
dtype=object)
array(['media/overview/signed-in-individual.png',
'WoodGrove Groceries website header showing user is signed in'],
dtype=object) ] | docs.microsoft.com |
GeoModel Version 0.1 Specification¶
The first release version of GeoModel will be a minimum viable product (MVP) containing features that replace the functionality of the existing implementation along with a few new requirements.
Terminology¶
Locality¶
The locality of a user is a geographical region from which most of that user’s online activity originates.
Primary Interface¶
GeoModel v0.1 is an alert built into MozDef that:
- Processes authentication-related events.
- Updates user locality information.
- Emits alerts when some specific conditions are met.
Data Stores¶
GeoModel interacts with MozDef to both query for events as well as store new alerts.
GeoModel also maintains its own user locality information. Version 0.1 will store this information in the same ElasticSearch instance that MozDef uses, under a configured index.
Functional Components¶
GeoModel v0.1 can be thought of as consisting of two core “components” that are each responsible for a distinct set of responsibilities. These two components interact in a pipeline.
Because GeoModel v0.1 is implemented as an
Alert in MozDef,
it is essentially a distinct Python program run by MozDef’s
AlertTask
scheduler.
Analysis Engine¶
The first component handles the analysis of events pertaining to authenticated actions made by users. These events are retrieved from MozDef and analyzed to determine locality of users which is then persisted in a data store.
This component has the following responsibilities:
- Run configured queries to retrieve events describing authenticated actions taken by users from MozDef.
- Load locality state from ElasticSearch.
- Remove outdated locality information.
- Update locality state with information from retrieved events.
Alert Emitter¶
The second component handles the creation of alerts and communicating of those alerts to MozDef.
This component has the following responsibilities:
- Inspect localities produced by the Analysis Engine to produce alerts.
- Store alerts in MozDef’s ElasticSearch instance.
The Alert Emitter will, given a set of localities for a user, produce an alert if and only if both of the following hold (a sketch of this check appears after the list):
- User activity is found to originate from a location outside of all previously known localities.
- It would not be possible for the user to have travelled to a new locality from the one they were last active in.
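A minimal sketch of that two-part check (function and variable names here are illustrative, not the module’s actual API; the 1000 km/h cut-off is an assumed stand-in for the fastest plausible travel speed):

import math

def distance_km(origin, destination):
    # Haversine distance between two (latitude, longitude) pairs, in kilometres.
    lat1, lon1 = math.radians(origin[0]), math.radians(origin[1])
    lat2, lon2 = math.radians(destination[0]), math.radians(destination[1])
    d_lat, d_lon = lat2 - lat1, lon2 - lon1
    a = math.sin(d_lat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(d_lon / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def should_alert(last_locality, new_location, hours_between, radius_km, max_speed_kmh=1000.0):
    km = distance_km(
        (last_locality["latitude"], last_locality["longitude"]),
        (new_location["latitude"], new_location["longitude"]),
    )

    # Condition 1: the new activity originates outside the known locality's radius.
    outside_known_locality = km > radius_km

    # Condition 2: the user could not plausibly have travelled that far in the elapsed time.
    travel_impossible = hours_between <= 0 or (km / hours_between) > max_speed_kmh

    return outside_known_locality and travel_impossible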
Data Models¶
The following models describe what data is required to implement the features that each component is responsible for. They are described using a JSON-based format where keys indicate the names of values and values are strings containing those values’ types, which are represented using TypeScript notation. We use this notation because configuration data as well as data stored in ElasticSearch are represented as JSON and JSON-like objects respectively.
General Configuration¶
The top-level configuration for GeoModel version 0.1 must contain the following.
{ "localities": { "es_index": string, "valid_duration_days": number, "radius_kilometres": number }, "events": { "search_window": object, "lucene_query": string, }, "whitelist": { "users": Array<string>, "cidrs": Array<string> } }
Using the information above, GeoModel can determine:
- What index to store locality documents in.
- What index to read events from.
- What index to write alerts to.
- What queries to run in order to retrieve a complete set of events.
- When a user locality is considered outdated and should be removed.
- The radius that localities should have.
- Whitelisting rules to apply.
In the above, note that
events.queries describes an array of objects. Each of
these objects are expected to contain a query for ElasticSearch using
Lucene syntax. The
username field is expected to be a string describing the path into
the result dictionary your query will return that will produce the username of
the user taking an authenticated action.
The
search_window object can contain any of the keywords passed to Python’s
timedelta
constructor.
So for example the following:
{ "events": [ { "search_window": { "minutes": 30 }, "lucene_query": "tags:auth0", "username_path": "details.username" } ] }
would query ElasticSearch for all events tagged
auth0 and try to extract
the
username from
result["details"]["username"] where
result is one of
the results produced by executing the query.
The
alerts.whitelist portion of the configuration specifies a couple of
parameters for whitelisting activity:
- From any of a list of users (based on
events.queries.username).
- From any IPs within the range of any of a list of CIDRs.
For example, the following whitelist configurations would instruct GeoModel
not to produce alerts for actions taken by “testuser” or for any users
originating from an IP in either the ranges
1.2.3.0/8 and
192.168.0.0/16.
{ "alerts": { "whitelist": { "users": ["testuser"], "cidrs": ["1.2.3.0/8", "192.168.0.0/16"]: } } }
Note however that GeoModel will still retain locality information for whitelisted users and users originating from whitelisted IPs.
User Locality State¶
GeoModel version 0.1 uses one ElasticSearch Type (similar to a table in a relational database) to represent locality information. Under this type, one document exists per user describing that user’s locality information.
{ "type_": "locality", "username": string, "localities": Array<{ "sourceipaddress": string, "city": string, "country": string, "lastaction": date, "latitude": number, "longitude": number, "radius": number }> }
Using the information above, GeoModel can determine:
- All of the localities of a user.
- Whether a locality is older than some amount of time.
- How far outside of any localities a given location is.
Alerts¶
Alerts emitted to the configured index are intended to cohere to MozDef’s preferred naming scheme.
{ "username": string, "hops": [ { "origin": { "ip": string, "city": string, "country": string, "latitude": number, "longitude": number, "geopoint": GeoPoint } "destination": { "ip": string, "city": string, "country": string, "latitude": number, "longitude": number, "geopoint": GeoPoint } } ] }
Note in the above that the
origin.geopoint field uses ElasticSearch’s
GeoPoint
type.
User Stories¶
User stories here make references to the following categories of users:
- An operator is anyone responsible for deploying or maintaining a deployment of MozDef that includes GeoModel.
- An investigator is anyone responsible for viewing and taking action based on alerts emitted by GeoModel.
Potential Compromises Detected¶
As an investigator, I expect that if a user is found to have performed some authenticated action in one location and then, some short amount of time later, in another that an alert will be emitted by GeoModel.
Realistic Travel Excluded¶
As an investigator, I expect that if someone starts working somewhere, gets on a plane, and continues working after arriving at their destination, then an alert will not be emitted by GeoModel.
Diversity of Indicators¶
As an operator, I expect that GeoModel will fetch events pertaining to authenticated actions from new sources (Duo, Auth0, etc.) after I deploy MozDef with GeoModel configured with queries targeting those sources. | https://mozdef.readthedocs.io/en/update_format_docs/geomodel/specifications/v0_1.html | 2020-02-17T01:46:47 | CC-MAIN-2020-10 | 1581875141460.64 | [] | mozdef.readthedocs.io |
See 'aws help' for descriptions of global parameters.
Synopsis:

failover-db-cluster
[--db-cluster-identifier <value>]
[--target-db-instance-identifier <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Options:

--target-db-instance-identifier (string)
The name of the instance to promote to the primary instance. You must specify the instance identifier for a Read Replica in the DB cluster. For example, mydbcluster-replica1.

Output:

DBCluster -> (structure)
Contains the details of an Amazon Neptune DB cluster.
This data type is used as a response element in the DescribeDBClusters action.
AllocatedStorage -> (integer)
AllocatedStorage always returns 1, because Neptune DB cluster storage size is not fixed, but instead automatically adjusts as needed.
AvailabilityZones -> (list)
Provides the list of EC2 Availability Zones that instances in the DB cluster can be created in.
(string)
BackupRetentionPeriod -> (integer)
Specifies the number of days for which automatic DB snapshots are retained.
CharacterSetName -> (string)
(Not supported by Neptune)
PercentProgress -> (string)
Specifies the progress of the operation as a percentage.
EarliestRestorableTime -> (timestamp)
Specifies the earliest time to which a database can be restored with point-in-time restore.
ReaderEndpoint -> (string)
The reader endpoint for the DB cluster. The reader endpoint can load-balance connections across the Read Replicas that are available in a DB cluster. As clients request new connections to the reader endpoint, Neptune distributes the connection requests among the Read Replicas in the DB cluster. This functionality can help balance your read workload across multiple Read Replicas in your DB cluster.
If a failover occurs, and the Read Replica that you are connected to is promoted to be the primary instance, your connection is dropped. To continue sending your read workload to other Read Replicas in the cluster, you can then reconnect to the reader endpoint.
MultiAZ -> (boolean)
Specifies whether the DB cluster has instances in multiple Availability Zones.
Engine -> (string)
Provides the name of the database engine to be used for this DB cluster.
DBClusterMembers -> (list)
Provides the list of instances that make up the DB cluster, including the promotion tier value that specifies the order in which a Read Replica is promoted to the primary instance after a failure of the existing primary instance.
AssociatedRoles -> (list)
Provides a list of the AWS Identity and Access Management (IAM) roles that are associated with the DB cluster. IAM roles that are associated with a DB cluster grant permission for the DB cluster to access other AWS services on your behalf.
Status -> (string)
Describes the state of association between the IAM role and the DB cluster. The Status property returns one of the following values:
- ACTIVE - the IAM role ARN is associated with the DB cluster and can be used to access other AWS services on your behalf.
- PENDING - the IAM role ARN is being associated with the DB cluster.
- INVALID - the IAM role ARN is associated with the DB cluster, but the DB cluster is unable to assume the IAM role in order to access other AWS services on your behalf.
IAMDatabaseAuthenticationEnabled -> (boolean)
True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled, and otherwise false.
CloneGroupId -> (string)
Identifies the clone group to which the DB cluster is associated.
ClusterCreateTime -> (timestamp)
Specifies the time when the DB cluster was created, in Universal Coordinated Time (UTC).
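For example (the cluster and instance identifiers are placeholders), a failover that promotes a specific Read Replica can be requested with:

aws neptune failover-db-cluster \
    --db-cluster-identifier mydbcluster \
    --target-db-instance-identifier mydbcluster-replica1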
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::DynamoDB::Types::DeleteReplicaAction
- Defined in:
- (unknown)
Overview
Note:
When passing DeleteReplicaAction as input to an Aws::Client method, you can use a vanilla Hash:
{
  region_name: "RegionName", # required
}
Represents a replica to be removed.
Returned by:
Instance Attribute Summary collapse
- #region_name ⇒ String
The Region of the replica to be removed.
Instance Attribute Details
#region_name ⇒ String
The Region of the replica to be removed. | https://docs.aws.amazon.com/sdkforruby/api/Aws/DynamoDB/Types/DeleteReplicaAction.html | 2020-02-17T00:46:35 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.aws.amazon.com |
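As a usage sketch (the table name and Regions are placeholders), a DeleteReplicaAction hash is typically passed inside the replica_updates of an update_global_table call:

require 'aws-sdk'  # v2 umbrella gem; with v3 of the SDK, require 'aws-sdk-dynamodb' instead

dynamodb = Aws::DynamoDB::Client.new(region: 'us-east-1')

# Remove the us-west-2 replica from an existing global table.
dynamodb.update_global_table(
  global_table_name: 'MyGlobalTable',
  replica_updates: [
    { delete: { region_name: 'us-west-2' } }
  ]
)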
Microsoft BitLocker Administration and Monitoring Beta Now Available
TaskUpdateOptions.Timeout Property
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Gets or sets the maximum time that the server can spend processing the request, in seconds. The default is 30 seconds.
[Newtonsoft.Json.JsonProperty(PropertyName="")] public Nullable<int> Timeout { get; set; }
member this.Timeout : Nullable<int> with get, set
Public Property Timeout As Nullable(Of Integer)
Property Value
- System.Nullable<System.Int32>
Implements
- Attributes
- Newtonsoft.Json.JsonPropertyAttribute | https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.batch.protocol.models.taskupdateoptions.timeout?view=azure-dotnet | 2022-09-25T08:13:24 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.azure.cn |
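A short C# usage sketch (the surrounding protocol-layer task update call is implied, not shown): the options object simply carries the per-request server timeout.

// Allow the Batch service up to 60 seconds to process this request
// instead of the 30-second default.
var options = new Microsoft.Azure.Batch.Protocol.Models.TaskUpdateOptions
{
    Timeout = 60
};
// The options object is then passed to the corresponding protocol-layer
// task update operation together with the job ID, task ID, and constraints.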
Community:Information
Welcome to our community information page. This is the location to discuss all kinds of editor related information.
Wanted Articles
You may also add links to Topics you would like to see in our documentation:
Surround a word with double brackets:
[[Wanted Topic]] and that word will be highlighted in red.
Guidelines for new Articles
For now we only have few guidelines we used when creating new articles. Feel free to extend or change this list.
- Don't be afraid
- We happily accept your contributions.
- Article Title
- Please make sure article titles are singular nouns wherever possible.
- Categories
- Try to categorize your article via [[Category:Name]]
- Code Examples
- Please use the [[Category:Code Examples]]
- Syntax Highlighting
- Please surround your code with: <pre class="brush:{xml|xquery|java}">…</pre> and the code is automatically highlighted.
What does the email status “Processed” mean?
An email gets a “Processed” status when the Netcore Email API server receives the email request with valid parameters. The status is automatically changed to ‘sent’ if the delivery of the email to the recipient’s mailing server is successful.
Detect Entities
Use the DetectEntities, BatchDetectEntities, and StartEntitiesDetectionJob operations to detect entities in a document. An entity is a textual reference to the unique name of a real-world object such as people, places, and commercial items, and to precise references to measures such as dates and quantities.
For example, in the text "John moved to 1313 Mockingbird Lane in 2012," "John" might be recognized as a PERSON, "1313 Mockingbird Lane" might be recognized as a LOCATION, and "2012" might be recognized as a DATE.
Each entity also has a score that indicates the level of confidence that Amazon Comprehend has that it correctly detected the entity type. You can filter out the entities with lower scores to reduce the risk of using incorrect detections.
The entity types are: COMMERCIAL_ITEM, DATE, EVENT, LOCATION, ORGANIZATION, OTHER, PERSON, QUANTITY, and TITLE.
You can use any of the following operations to detect entities in a document or set of documents.
The operations return a list of Entity objects, one for each entity in the document. The BatchDetectEntities operation returns a list of Entity objects, one list for each document in the batch. The StartEntitiesDetectionJob operation starts an asynchronous job that produces a file containing a list of Entity objects for each document in the job.
The following example is the response from the DetectEntities operation.
{ "Entities": [ { "Text": "today", "Score": 0.97, "Type": "DATE", "BeginOffset": 14, "EndOffset": 19 }, { "Text": "Seattle", "Score": 0.95, "Type": "LOCATION", "BeginOffset": 23, "EndOffset": 30 } ], "LanguageCode": "en" } | https://docs.aws.amazon.com/comprehend/latest/dg/how-entities.html | 2019-10-14T03:30:54 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.aws.amazon.com |
To add dhtmlxForm into an application, you need to take the following simple steps:
<!DOCTYPE html>
<html>
<head>
    <title>How to Start with dhtmlxForm</title>
    <script type="text/javascript" src="../../codebase/suite.js"></script>
    <link rel="stylesheet" href="../../codebase/suite.css">
</head>
<body>
    <div id="form_container"></div>
    <script>
        // creating dhtmlxForm in the "form_container" div
        var form = new dhx.Form("form_container");
    </script>
</body>
</html>
Related sample: Initialization - DHTMLX Form
Create an HTML file and place full paths to JS and CSS files of the dhtmlxSuite library into the header of the file. The files are:
<script type="text/javascript" src="../../codebase/suite.js"></script> <link rel="stylesheet" href="../../codebase/suite.css">
Add a container for the Form and give it an id, e.g. "form_container":
<div id="form_container"></div>
Now you need to specify the list of Form controls. For example, you can create a form with two text fields for entering a name and an email, a checkbox for the user to give consent to data processing and a button to send a form to a server.
Thus, the structure of your form will look like this:
To add controls inside a form, you should put them into a layout, either a vertical one (the rows attribute), or a horizontal one (the cols attribute). In the example below controls are arranged vertically, one under another:
var form_data = {
    rows: [
        {
            id: "name",
            type: "input",
            label: "Name",
            icon: "dxi-magnify",
            placeholder: "John Doe"
        },
        {
            id: "email",
            type: "input",
            label: "Email",
            placeholder: "[email protected]"
        },
        {
            type: "checkbox",
            label: "I agree",
            name: "agree",
            labelInline: true,
            id: "agree",
            value: "checkboxvalue"
        },
        {
            type: "button",
            value: "Send",
            size: "medium",
            view: "flat",
            color: "primary"
        }
    ]
};
Initialize Form with the
dhx.Form object constructor. The constructor takes two parameters: the HTML container for the Form and the configuration object of the Form:
var form = new dhx.Form("form_container", form_data);
Related sample: Initialization - DHTMLX Form
Installation¶
Stable release¶
To install Alternative Cinder Scheduler Classes, run this command in your terminal:
$ pip install alt_cinder_sch
This is the preferred method to install Alternative Cinder Scheduler Classes, as it will always install the most recent stable release.
If you don’t have pip installed, this Python installation guide can guide you through the process.
From sources¶
The easiest way of installing the package from source is using pip:
$ pip install git+https://github.com/akrog/alt_cinder_sch.git
Alternatively, the sources for Alternative Cinder Scheduler Classes can be downloaded from the Github repo.
You can either clone the public repository:
$ git clone git://github.com/akrog/alt_cinder_sch
Or download the tarball:

$ curl -OL https://github.com/akrog/alt_cinder_sch/tarball/master
Once you have a copy of the source, you can install it with:
$ python setup.py install | https://alt-cinder-sch.readthedocs.io/en/latest/installation.html | 2019-10-14T03:10:48 | CC-MAIN-2019-43 | 1570986649035.4 | [] | alt-cinder-sch.readthedocs.io |
Support policies for Azure Kubernetes Service
This article provides details about technical support policies and limitations for Azure Kubernetes Service (AKS). The article also details worker node management, managed control plane components, third-party open-source components, and security or patch management.
Service updates and releases
- For release information, see AKS release notes.
- For information on features in preview, see AKS preview features and related projects.
Managed features in AKS
Base infrastructure as a service (IaaS) cloud components, such as compute or networking components, give users access to low-level controls and customization options. By contrast, AKS provides a turnkey Kubernetes deployment that gives customers the common set of configurations and capabilities they need. AKS customers have limited customization, deployment, and other options. These customers don't need to worry about or manage Kubernetes clusters directly.
With AKS, the customer gets a fully managed control plane. The control plane contains all of the components and services the customer needs to operate and provide Kubernetes clusters to end users. All Kubernetes components are maintained and operated by Microsoft.
Microsoft manages and monitors the following components through the control plane:
- Kubelet or Kubernetes API servers
- Etcd or a compatible key-value store, providing Quality of Service (QoS), scalability, and runtime
- DNS services (for example, kube-dns or CoreDNS)
- Kubernetes proxy or networking
AKS isn't a completely managed cluster solution. Some components, such as worker nodes, have shared responsibility, where users must help maintain the AKS cluster. User input is required, for example, to apply a worker node operating system (OS) security patch.
The services are managed in the sense that Microsoft and the AKS team deploys, operates, and is responsible for service availability and functionality. Customers can't alter these managed components. Microsoft limits customization to ensure a consistent and scalable user experience. For a fully customizable solution, see AKS Engine.
Note
AKS worker nodes appear in the Azure portal as regular Azure IaaS resources. But these virtual machines are deployed into a custom Azure resource group (prefixed with MC\*). It's possible to change AKS worker nodes. For example, you can use Secure Shell (SSH) to change AKS worker nodes the way you change normal virtual machines (you can't, however, change the base OS image, and changes might not persist through an update or reboot), and you can attach other Azure resources to AKS worker nodes. But when you make out-of-band management and customization changes, the AKS cluster can become unsupportable. Avoid changing worker nodes unless Microsoft Support directs you to make changes.
Shared responsibility
When a cluster is created, the customer defines the Kubernetes worker nodes that AKS creates. Customer workloads are executed on these nodes. Customers own and can view or modify the worker nodes.
Because customer cluster nodes execute private code and store sensitive data, Microsoft Support can access them in only a limited way. Microsoft Support can't sign in to, execute commands in, or view logs for these nodes without express customer permission or assistance.
Because worker nodes are sensitive, Microsoft takes great care to limit their background management. In many cases, your workload will continue to run even if the Kubernetes master nodes, etcd, and other Microsoft-managed components fail. Carelessly modified worker nodes can cause losses of data and workloads and can render the cluster unsupportable.
AKS support coverage
Microsoft provides technical support for the following:
- Connectivity to all Kubernetes components that the Kubernetes service provides and supports, such as the API server.
- Management, uptime, QoS, and operations of Kubernetes control plane services (Kubernetes master nodes, API server, etcd, and kube-dns, for example).
- Etcd. Support includes automated, transparent backups of all etcd data every 30 minutes for disaster planning and cluster state restoration. These backups aren't directly available to customers or users. They ensure data reliability and consistency.
- Any integration points in the Azure cloud provider driver for Kubernetes. These include integrations into other Azure services such as load balancers, persistent volumes, or networking (Kubernetes and Azure CNI).
- Questions or issues about customization of control plane components such as the Kubernetes API server, etcd, and kube-dns.
- Issues about networking, such as Azure CNI, kubenet, or other network access and functionality issues. Issues could include DNS resolution, packet loss, routing, and so on. Microsoft supports various networking scenarios:
- Kubenet (basic) and advanced networking (Azure CNI) within the cluster and associated components
- Connectivity to other Azure services and applications
- Ingress controllers and ingress or load balancer configurations
- Network performance and latency
Microsoft doesn't provide technical support for the following:
- Questions about how to use Kubernetes. For example, Microsoft Support doesn't provide advice on how to create custom ingress controllers, use application workloads, or apply third-party or open-source software packages or tools.
Note
Microsoft Support can advise on AKS cluster functionality, customization, and tuning (for example, Kubernetes operations issues and procedures).
- Third-party open-source projects that aren't provided as part of the Kubernetes control plane or deployed with AKS clusters. These projects might include Istio, Helm, Envoy, or others.
Note
Microsoft can provide best-effort support for third-party open-source projects such as Helm and Kured. Where the third-party open-source tool integrates with the Kubernetes Azure cloud provider or other AKS-specific bugs, Microsoft supports examples and applications from Microsoft documentation.
- Third-party closed-source software. This software can include security scanning tools and networking devices or software.
- Issues about multicloud or multivendor build-outs. For example, Microsoft doesn't support issues related to running a federated multipublic cloud vendor solution.
- Network customizations other than those listed in the AKS documentation.
Note
Microsoft does support issues and bugs related to network security groups (NSGs). For example, Microsoft Support can answer questions about an NSG failure to update or an unexpected NSG or load balancer behavior.
AKS support coverage for worker nodes
Microsoft responsibilities for AKS worker nodes
Microsoft and customers share responsibility for Kubernetes worker nodes where:
- The base OS image has required additions (such as monitoring and networking agents).
- The worker nodes receive OS patches automatically.
- Issues with the Kubernetes control plane components that run on the worker nodes are automatically remediated. Components include the following:
- Kube-proxy
- Networking tunnels that provide communication paths to the Kubernetes master components
- Kubelet
- Docker or Moby daemon
Note
On a worker node, if a control plane component is not operational, the AKS team might need to reboot individual components or the entire worker node. These reboot operations are automated and provide auto-remediation for common issues. These reboots occur only at the node level and not the cluster level, unless there is emergency maintenance or an outage.
Customer responsibilities for AKS worker nodes
Microsoft doesn't automatically reboot worker nodes to apply OS-level patches. Although OS patches are delivered to the worker nodes, the customer is responsible for rebooting the worker nodes to apply the changes. Shared libraries, daemons such as the SSH daemon (sshd), and other components at the level of the system or OS are automatically patched.
Customers are responsible for executing Kubernetes upgrades. They can execute upgrades through the Azure control panel or the Azure CLI. This applies for updates that contain security or functionality improvements to Kubernetes.
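As a sketch with placeholder names (Azure CLI assumed), checking for and applying an upgrade looks like this:

# List the Kubernetes versions available to this cluster
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the control plane and nodes to a specific version
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <new-version>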
Note
Because AKS is a managed service, its end goals include removing responsibility for patches, updates, and log collection to make the service management more complete and hands-off. As the service's capacity for end-to-end management increases, future releases might omit some functions (for example, node rebooting and automatic patching).
Security issues and patching
If a security flaw is found in one or more components of AKS, the AKS team will patch all affected clusters to mitigate the issue. Alternatively, the team will give users upgrade guidance.
For worker nodes that a security flaw affects, if a zero-downtime patch is available, the AKS team will apply that patch and notify users of the change.
When a security patch requires worker node reboots, Microsoft will notify customers of this requirement. The customer is responsible for rebooting or updating to get the cluster patch. If users don't apply the patches according to AKS guidance, their cluster will continue to be vulnerable to the security issue.
Node maintenance and access
Worker nodes are a shared responsibility and are owned by customers. Because of this, customers have the ability to sign in to their worker nodes and make potentially harmful changes such as kernel updates and installing or removing packages.
If customers make destructive changes or cause control plane components to go offline or become nonfunctional, AKS will detect this failure and automatically restore the worker node to the previous working state.
Although customers can sign in to and change worker nodes, doing this is discouraged because changes can make a cluster unsupportable.
Network ports, access, and NSGs
As a managed service, AKS has specific networking and connectivity requirements. These requirements are less flexible than requirements for normal IaaS components. In AKS, operations like customizing NSG rules, blocking a specific port (for example, using firewall rules that block outbound port 443), and whitelisting URLs can make your cluster unsupportable.
Note
Currently, AKS doesn't allow you to completely lock down egress traffic from your cluster. To control the list of URLs and ports your cluster can use for outbound traffic see limit egress traffic.
Unsupported alpha and beta Kubernetes features
AKS supports only stable features within the upstream Kubernetes project. Unless otherwise documented, AKS doesn't support alpha and beta features that are available in the upstream Kubernetes project.
In two scenarios, alpha or beta features might be rolled out before they're generally available:
- Customers have met with the AKS product, support, or engineering teams and have been asked to try these new features.
- These features have been enabled by a feature flag. Customers must explicitly opt in to use these features.
Preview features or feature flags
For features and functionality that require extended testing and user feedback, Microsoft releases new preview features or features behind a feature flag. Consider these features as prerelease or beta features.
Preview features or feature-flag features aren't meant for production. Ongoing changes in APIs and behavior, bug fixes, and other changes can result in unstable clusters and downtime.
Features in public preview fall under 'best effort' support, as these features are in preview and not meant for production, and are supported by the AKS technical support teams during business hours only. For additional information, please see:
Note
Preview features take effect at the Azure subscription level. Don't install preview features on a production subscription. On a production subscription, preview features can change default API behavior and affect regular operations.
Upstream bugs and issues
Given the speed of development in the upstream Kubernetes project, bugs invariably arise. Some of these bugs can't be patched or worked around within the AKS system. Instead, bug fixes require larger patches to upstream projects (such as Kubernetes, node or worker operating systems, and kernels). For components that Microsoft owns (such as the Azure cloud provider), AKS and Azure personnel are committed to fixing issues upstream in the community.
When a technical support issue is root-caused by one or more upstream bugs, AKS support and engineering teams will:
- Identify and link the upstream bugs with any supporting details to help explain why this issue affects your cluster or workload. Customers receive links to the required repositories so they can watch the issues and see when a new release will provide fixes.
- Provide potential workarounds or mitigations. If the issue can be mitigated, a known issue will be filed in the AKS repository. The known-issue filing explains:
- The issue, including links to upstream bugs.
- The workaround and details about an upgrade or another persistence of the solution.
- Rough timelines for the issue's inclusion, based on the upstream release cadence.
Importing from Content Hub and URLdispatcher¶
In the previous guide we have seen a little bit about how Content Hub works. In this guide we will see how
URLdispatcher works and how to handle imported data from the Content Hub.
Handle data from the Content Hub¶
OpenStore app from open-store.io
One of the easiest ways of testing an app is to send a test click to yourself on Telegram and open that click file with the OpenStore through the Content Hub:
If we tap on the OpenStore app, it will be opened and it will ask if we want to install the click file. Let’s take a look into the Main.qml code of the app to see how it is done:
Connections {
    target: ContentHub
    onImportRequested: {
        var filePath = String(transfer.items[0].url).replace('file://', '')
        print("Should import file", filePath)
        var fileName = filePath.split("/").pop();
        var popup = PopupUtils.open(installQuestion, root, {fileName: fileName});
        popup.accepted.connect(function() {
            contentHubInstallInProgress = true;
            PlatformIntegration.clickInstaller.installPackage(filePath)
        })
    }
}
Do you see that Connections element that targets the ContentHub? When it receives the signal onImportRequested, it will take the url of the data sent from the Content Hub (
transfer.items[0].url is the url of the first data sent) and open a
PopUp element to let the user install the click.
What about the URLdispatcher?¶
The URL dispatcher is a piece of software, similar to the Content Hub, that makes our life easier trying to choose the correct app for a certain protocol. For example: We probably want to open the web browser when tapping on an http protocol. If we tap on a map link it is handy to open it with uNav or to open a twitter link in the Twitter app! How does that work?
The
URLdispatcher selects which app (according to their
manifest.json) will open a certain link.
Let’s see how our navigation app looks inside. uNav’s manifest shows special hooks for the
URLdispatcher in its manifest.json code:
1 [ 2 { 3 "protocol": "http", 4 "domain-suffix": "map.unav.me" 5 }, 6 { 7 "protocol": "http", 8 "domain-suffix": "unav-go.github.io" 9 }, 10 { 11 "protocol": "geo" 12 }, 13 { 14 "protocol": "http", 15 "domain-suffix": "" 16 }, 17 { 18 "protocol": "http", 19 "domain-suffix": "" 20 }, 21 { 22 "protocol": "https", 23 "domain-suffix": "maps.google.com" 24 } 25 ]
This means that a link that looks like http://map.unav.me/… will be opened in uNav. And that's defined in lines 2 and 3, where it looks for protocol http followed by map.unav.me.
Also, a URI formatted geo:xxx,xxx should open in uNav, as it’s defined in line 11.
And how do we manage the received URL?¶
After the
URLdispatcher sends the link to the correspondent app, we need to handle that URL or URI in the targeted app. Let’s see how to do that:
In the main qml file, we need to add some code to know what to do with the dispatched URL. First add an Arguments element that holds the URL, as is done, for example, in the Linphone app. Also, we add connection to the URI Handler with a Connection element with
UriHandler as a target.
Arguments {
    id: args
    Argument {
        name: 'url'
        help: i18n.tr('Incoming Call from URL')
        required: false
        valueNames: ['URL']
    }
}

Connections {
    target: UriHandler
    onOpened: {
        console.log('Open from UriHandler')
        if (uris.length > 0) {
            console.log('Incoming call from UriHandler ' + uris[0]);
            showIncomingCall(uris[0]);
        }
    }
}
This code will manage a URI in the form
linphone://sip:[email protected] when the app is opened. But what do we need to do to handle this link when the app is closed?
We need to add a little bit extra code that will cover two cases: 1) We receive one URL 2) We receive more than one URL
Component.onCompleted: {
    // Check if we opened the app because we have an incoming call
    if (args.values.url && args.values.url.match(/^linphone/)) {
        console.log("Incoming Call on Closed App")
        showIncomingCall(args.values.url);
    } else if (Qt.application.arguments && Qt.application.arguments.length > 0) {
        for (var i = 0; i < Qt.application.arguments.length; i++) {
            if (Qt.application.arguments[i].match(/^linphone/)) {
                showIncomingCall(Qt.application.arguments[i]);
            }
        }
    }

    // Start timer for Registering Status
    checkStatus.start()
}
What happens if more than one app has the same URL type defined?¶
A very good question. What happens if we tap on a Twitter link? How is such a URL handled by the
URLdispatcher as protocol
http or the protocol?
What happens if two apps have the same defined protocol?
Now it’s time to do some tests and share the results in the next guide. | https://docs.ubports.com/en/latest/appdev/guides/importing-CH-urldispatcher.html | 2019-10-14T04:10:33 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['../../_images/01ichu.png', '../../_images/01ichu.png'],
dtype=object)
array(['../../_images/02ichu.png', '../../_images/02ichu.png'],
dtype=object)
array(['../../_images/03ichu.png', '../../_images/03ichu.png'],
dtype=object)
array(['../../_images/05ichu.png', '../../_images/05ichu.png'],
dtype=object) ] | docs.ubports.com |
Manage Microsoft Teams settings for your organization.
To learn more, see Admin settings for apps in Teams. or domain. files. For more information, see Guest access in Microsoft Teams.
Teams settings
In Teams settings, you can set up features for teams including notifications and feeds, email integration, cloud storage options, and devices.
Notifications and feeds
Here you can control whether suggested feeds appear in users' activity feed in Teams. To learn more about the activity feed, see Explore the Activity feed in Teams. | https://docs.microsoft.com/en-us/MicrosoftTeams/enable-features-office-365?redirectSourcePath=%252fit-it%252farticle%252fImpostazioni-amministratore-per-Microsoft-Teams-3966A3F5-7E0F-4EA9-A402-41888F455BA2 | 2019-10-14T04:11:05 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.microsoft.com |
IWinMLRuntimeFactory::CreateRuntime.
Creates a WinML runtime.
Syntax
HRESULT CreateRuntime( WINML_RUNTIME_TYPE RuntimeType, IWinMLRuntime **ppRuntime );
Parameters
RuntimeType
A WINML_RUNTIME_TYPE that decribes the type of WinML runtime.
ppRuntime
A pointer to the created IWinMLRuntime.
Return Value
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code. | https://docs.microsoft.com/en-us/windows/win32/api/winml/nf-winml-iwinmlruntimefactory-createruntime | 2019-10-14T03:54:12 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.microsoft.com |
Contents Now Platform Capabilities Previous Topic Next Topic Copy File activity Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Copy File activity The Copy File activity copies a file from an SFTP server (source host) to another SFTP server (target host). Input variables Table 1. Copy File input variables Variable Description sourceHost Name or IP address of the server containing the files you want to transfer. sourcePort Port number to use to communicate with the source server. The default port number is 22. sourceFilePath Full path to the file to copy from the source host. targetHost Name or IP address of the server to which you want to move the files. targetPort Port number to use to communicate with the target server. The default port number is 22. targetFilePath Full path to the copied file on the target host. tempFileSuffix Temporary suffix to use when moving a file. If this field contains a value, the activity deletes the duplicate target file, if it exists, and then copies the source file to a temporary file using targetFilePath + tempFileSuffix as the name. Upon completion, the activity renames the file to the actual target file name. If this field is blank, the activity copies the source file directly to the target file and overwrites it, if it already exists. sourceCredentialTag Specific credential alias this activity must use to run SSH commands on the source host. targetCredentialTag Specific credential tag this activity must use to run SSH commands on the target host. Output variables Table 2. Copy File output variables Variable Description errorMessages The executionResult.errorMessages from the Activity designer parsing sources. If this variable is not null, the operation has failed. result Text message advising that the command was executed successfully. Conditions Table 3. Copy File conditions Condition Description Success The activity succeeded in copying the file. Failure The activity failed to copy the file. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/madrid-servicenow-platform/page/administer/orchestration-activities/reference/r_CopyFileActivity.html | 2019-10-14T04:01:24 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.servicenow.com |
If you choose not to use the CloudBees Flow built-in (default) database, use this page to configure your alternate CloudBees Flow-supported alternate database (such as MySQL, SQL Server, or Oracle) to communicate with CloudBees Flow.
Fill in the fields as follows:
Click Save and Restart Server after entering information in all fields. You may need to consult with your Database Administrator if you lack all of the information required on this web page. | https://docs.cloudbees.com/docs/cloudbees-cd/9.2/automation-platform/help-editdatabaseconfiguration | 2022-01-16T22:58:26 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.cloudbees.com |
nodetool garbagecollect
Removes deleted data from one or more tables.
Removes deleted data from one or more tables.
Note: The nodetool garbagecollect command is not the same as the Perform GC option in OpsCenter.
Synopsis
nodetool [connection_options] garbagecollect [-g ROW|CELL] [-j job_threads] [--] .
- -g, --granularity ROW|CELL
ROW (default) removes deleted partitions and rows.
CELL also removes overwritten or deleted cells.
- -j, --jobs num_jobs
- num_jobs - Number of SSTables affected simultaneously. Default: 2.
- 0 - Use all available compaction threads.
- keyspace_name
- The keyspace name.
- table_name
- One or more table names, separated by a space.
Examples
To remove deleted data from all tables and keyspaces at the default granularity
nodetool garbagecollect
To remove deleted data from all tables and keyspaces, including overwritten or deleted cells
nodetool garbagecollect -g CELL | https://docs.datastax.com/en/dse/6.0/dse-dev/datastax_enterprise/tools/nodetool/toolsGarbageCollect.html | 2022-01-16T21:39:10 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.datastax.com |
so-test¶
so-test will run
so-tcpreplay to replay some pcap samples to your sniffing interface.
Warning
You will need to have Internet access in order to download the pcap samples. Also, if you have a distributed deployment, make sure you run
so-tcpreplay on the manager first to download the necessary Docker image.
so-test Replay functionality not enabled; attempting to enable now (may require Internet access)... Pulling so-tcpreplay image ========================================================================= Starting tcpreplay... This could take a while if another Salt job is running. Run this command with --force to stop all Salt jobs before proceeding. ========================================================================= local: ---------- ID: so-tcpreplay Function: docker_container.running Result: True Comment: Created container 'so-tcpreplay' Started: 15:55:48.390107 Duration: 1460.452 ms Changes: ---------- container_id: ---------- added: f035103cd8bf43134b56d4b19d77a0ae9e7c09fcb54ef6da67cf89bef5fc4019 state: ---------- new: running old: None Summary for local ------------ Succeeded: 1 (changed=1) Failed: 0 ------------ Total states run: 1 Total run time: 1.460 s Replaying PCAP(s) at 10 Mbps on interface bond0... Actual: 111557 packets (12981286 bytes) sent in 10.38 seconds Rated: 1249997.6 Bps, 9.99 Mbps, 10742.07 pps Flows: 4102 flows, 394.99 fps, 2074477 flow packets, 45106 non-flow Statistics for network device: bond0 Successful packets: 55304 Failed packets: 444 Truncated packets: 0 Retried packets (ENOBUFS): 0 Retried packets (EAGAIN): 0 Replay completed. Warnings shown above are typically expected. | https://docs.securityonion.net/en/2.3/so-test.html | 2022-01-16T21:53:40 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.securityonion.net |
Programmable Internetworking & Communication Operating System Docs ... Click Spaces -> Space Directory to see docs for all releases ...
This document describes the hardware components and characteristics of the switch. For more detail information, refer to Hardware Compatibility List.
- Hardware Use Precautions
- Switch Machine Outline and System Characteristics
- Dell
- EdgeCore/Accton
- Delta/Agema | https://docs.pica8.com/display/PicOS421sp/Hardware+Description | 2022-01-16T22:50:20 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.pica8.com |
Understanding Artifact Builds
============================= | https://firefox-source-docs.mozilla.org/_sources/contributing/build/artifact_builds.rst.txt | 2022-01-16T22:37:33 | CC-MAIN-2022-05 | 1642320300244.42 | [] | firefox-source-docs.mozilla.org |
Avatar API
Introduced in GitLab 11.0.
Get a single avatar URL
Get a single avatar URL for a user with the given email address.
If:
- No user with the given public email address is found, results from external avatar services are returned.
- Public visibility is restricted, response is
403 Forbiddenwhen unauthenticated.
This endpoint can be accessed without authentication.
GET /[email protected]
Parameters:
Example request:
curl ""
Example response:
{ "avatar_url": "" } | https://docs.gitlab.com/13.12/ee/api/avatar.html | 2022-01-16T22:07:54 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.gitlab.com |
1 Introduction
The Data Hub Catalog is a catalog of OData services exposing datasets that you can use in your apps. This means that new apps can be built by using these shared datasets from your organization to provide access to the data they connect to. In Mendix Studio Pro, these exposed datasets are added as external entities through the Data Hub pane. The integrated Data Hub Catalog search functionality in Studio Pro is available to find suitable datasets to use in your apps.
This document provides general information and guidelines on consumed datasets in apps. For details on using shared datasets in Studio Pro, see External Entities in the Studio Pro Guide.
For details on the security of the data that the shared datasets connect to, and for defining access to the datasets for specified user roles, see Data Accessibility and Security.
2 Using Registered Assets in your App
Shared data which is represented by the exposed datasets registered in the Data Hub Catalog can be added to your app in Studio Pro through the Data Hub pane. These datasets are introduced into the domain model as external entities.
You can use the Catalog to find registered data sources and use the Copy Data Source URI button obtain the OData service URI which can be used in other enterprise applications.
The following sections summarize important points to consider when using OData services and registered datasets in your apps in Studio Pro.
2.1 Services
The published OData service document (the API) is included in the module definition (in Studio Pro) and contains the metadata for linking to the data for the datasets exposed in the service.
When a new version of the OData service for an external entity is registered in the Data Hub Catalog, the consumed OData service will have to be updated in the consuming app to make use of the new features that the new version brings. For more details on updating a consumed service see the Updating or Switching a Consumed OData Service section of Consumed OData Service.
This is not compulsory, and users can continue to use an older version of a service unless the new version was deployed to the same service endpoint as the previous version. In Studio Pro, new versions of a service are indicated and users can choose to Update the service, or Switch to another version of the service deployed to another endpoint.
It is good practice for publishers of a service to serve a notice of deprecation on a service version that will be replaced with a new service that may contain breaking changes which would cause the consuming app to fail. In this case the updated service should be deployed to a new service endpoint and Studio Pro users will get the option to Switch to the new version.
2.2 Consumed (External) Entities
When you use an external entity from a published OData service through the Data Hub pane in Studio Pro, you are consuming the dataset from the service (which is published from the app deployed in a specific environment). The OData endpoint for the dataset is used in the consuming app.
It is not possible to change the structural values of attributes or associations between two external entities.
When security is enabled for your app, you can define access rules for external entities just as you would for persistable and non-persistable entities. You can define access rules based on user roles (for more details, see Security and Controlling Access to Information).
You can associate external entities with local entities (both persistable and non-persistable. However, the external entity cannot be the owner of an association, which means that the association has to be from a local entity to the external entity in the domain model, and the value for the association owner must be set to Default.
Mendix entities that are specializations in the originating app will be published and consumed as discrete entities that include the inherited attributes and associations. When the generalized entity is also exposed in the same service as the specialized entities, the inheritance relationship will not be present in the metadata contract or when both are consumed.
Associations that are inherited from a generalization will be exposed and shown when the specialization is consumed. However the same association of the generalized entity is not supported for the specialization in the same domain model The same association cannot be exposed and consumed for two different external entities in the same domain model.
2.3 Datasets
Data for external entities is not in the consuming app’s database but in the database of the app that publishes the OData service.
The data set that is associated with the consumed entity is maintained in the publishing app.
Access to the data is through the published REST OData service, with “reading” and “querying” of the data by the consuming app.
3 Operations on External Entities in Consuming Apps
The following operations are affected when using external entities in a consuming app:
- Aggregations – you can count a list of external entities, but you cannot show other aggregations such as sum, average, minimum, and maximum; this is because OData version 3.0 does not support these operations; the only exception is that you can use the aggregate list microflow activity, which for all aggregations except Count will retrieve everything and perform the aggregation in memory
- XPath – you can use XPath to filter external entities; all XPath constructs are supported, except the following:
- Three conversions from date/time:
day-of-year-from-dateTime,
weekday-from-dateTime, and
week-from-dateTime
- Aggregations:
avg(),
max(),
min(), and
sum()
- Using an association between a local and an external entity
- Comparing attributes to other attributes (you can only compare an attribute to a literal value or a variable)
- Exist expressions (filtering on whether an associated object exists)
- Filtering in the middle of a path (such as
[Module.Car_Person/Module.Car[Brand='BMW']/Module.Car_Plate/Module.Plate/Number='123'], where
[Brand='BMW']appears in the middle of the path)
- Expressions with
reverse()(as mentioned in Querying Over Self-References)
- OQL – you cannot define OQL queries on external entities (for example, in datasets)
4 Registered Datasets in OData Services from Non-Mendix Systems
For registered OData datasets from non-Mendix apps, the restrictions described below apply.
4.1 Keys
All datasets must have a key. The key can have one or more properties with the following conditions:
- The properties cannot be nullable (so they must have
isNullable="false"specified)
- Only the following types are allowed:
Byte,
SByte,
Int16,
Int32,
Int64,
Boolean,
Decimal,
Single,
Double, and
String
- If the property type is
String, a
MaxLengthmust be specified
The key attributes are not available as attributes of the external entity.
4.2 Supported Metadata Features
When importing metadata in a consumed OData service in Studio Pro, all unsupported constructs are ignored. The following constructs are supported:
- Only entities that are published in the service feed can be imported. Entities that only appear in the metadata file and not in the service feed cannot be imported as external entities.
Attribute types have to be primitive (not complex, collections, or enumerations). The types of the attributes in your app are based on the types of the attributes in the OData metadata:
The following conditions apply:
- When the OData endpoints contain operations, these are not imported in the consumed OData service; you can use a Call REST service activity to call these operations
- In Mendix, Booleans cannot be null; when the service returns a null, the value is false
- Attributes marked as
FC_KeepInContent=falseare not supported
- Decimal values outside the range of a Mendix decimal are currently not supported; when the service returns a value outside of the range, there is an error
4.3 FileDocuments
External entities with binary attributes are not imported as FileDocuments. That means that their use is limited. | https://docs.mendix.com/data-hub/data-hub-catalog/consume | 2022-01-16T22:25:54 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.mendix.com |
uselect – Wait for events¶
This module provides functions to efficiently wait for events on multiple streams.
Note
This module is not available on the BOOST Move hub.
- poll() uselect.Poll ¶
Creates an instance of the Poll class.
- class Poll¶
- register(obj: IO) None ¶
- register(obj: IO, eventmask: int) None
Register stream
objfor polling.
eventmaskis logical OR of:
Note that flags like
POLLHUPand
POLLERRare not valid as input eventmask (these are unsolicited events which will be returned from
poll()regardless of whether they are asked for). This semantics is per POSIX.
eventmaskdefaults to
POLLIN | POLLOUT.
It is OK to call this function multiple times for the same
obj. Successive calls will update
obj’s eventmask to the value of
eventmask(i.e. will behave as
modify()).
- modify(obj: IO, eventmask: int) None ¶
Modify the
eventmaskfor
obj. If
objis not registered,
OSErroris raised with error of
ENOENT.
- poll() List[Tuple[IO, int]] ¶
- poll(timeout: int) List[Tuple[IO, int]]
Wait for at least one of the registered objects to become ready or have an exceptional condition, with optional timeout in milliseconds (if
timeoutarg
POLL*constants described above. Note that flags
POLLHUPand.
- ipoll() Iterator[Tuple[IO, int]] ¶
- ipoll(timeout: int) Iterator[Tuple[IO, int]]
- ipoll(timeout: int, flags: int) Iterator[Tuple[IO, int]]
Like
poll(), but instead returns an iterator which yields a callee-owned tuple. This function provides an efficient, allocation-free way to poll on streams.
If
flagsis 1, one-shot behavior for events is employed: streams for which events happened will have their event masks automatically reset (equivalent to
poll.modify(obj, 0)), so new events for such a stream won’t be processed until new mask is set with
modify(). This behavior is useful for asynchronous I/O schedulers. | https://docs.pybricks.com/en/v3.1.0/micropython/uselect.html | 2022-01-16T21:47:36 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.pybricks.com |
Date: Sun, 16 Jan 2022 14:53:26 -0800 (PST) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_262417_1924151004.1642373606916" ------=_Part_262417_1924151004.1642373606916 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
OAuth 2.0 ha=
s three main phases. They are; requesting an Authorizatio=
n Grant, exchanging the Authorization Grant for an Access=
Token and accessing the resources using this Access Token. OpenID Connect is anothe=
r identity layer on top of OAuth 2.0. OAuth applications can get authentica=
tion event information over the IDToken and can get the extra claims of the=
authenticated user from the OpenID Connect UserInfo endpoint.
= span>
To enable OAuth support for your c=
lient application, you must first register your application. Follow the ins=
tructions below to add a new application.
Let's get started to configure the service provider you created!
Fill in the form that appears. For the Allowed Grant Ty= pes you can disable the ones you do not require or wish to bl= ock.
Note: The grant type highlighted below is a cus=
tom grant type. This will only appear on the UI if you have configured the JWT grant type. The value specified in the
<Gr=
antTypeName> property of the
identity.xml file when =
creating the custom grant type is the value that will appear on the UI. For=
more information on writing a custom grant type, see Writing a Custom OAuth 2.0 Gran=
t Type.
When fillin= g out the New Application form, the following de= tails should be taken into consideration.
Edit: Click to edit the OAuth/OpenID Connect C= onfigurations
Revoke: Click to revoke (deactivate) the OAuth= application. This action revokes all tokens issued for this application. I= n order to activate the application, you have to regenerate the consumer se= cret.
Regenerate Secret: Click to regenerate the sec= ret key of the OAuth application.
Delete: Click to delete the OAuth/OpenID = Connect Configurations
Tip: The OAuth client key and client secret are stored = in plain text. To encrypt the client secret, access token and refresh token= , do the following:
Open the
identity.xml file found in the
<IS_HOME&g=
t;/repository/conf/identity directory and change the
<Token=
PersistenceProcessor> property as follows:
<Toke= nPersistenceProcessor>org.wso2.carbon.identity.oauth.tokenprocessor.Encr= yptionDecryptionPersistenceProcessor</TokenPersistenceProcessor>=20
After updating the configuration, make sure to restart the server = for the changes to be applied on WSO2 IS.<= /p>
See = Configuring OpenID Connect Single Logout to configure single logout or = session management with OpenID Connect.
See Delegated Access= Control for more information on working with OAuth2/OpenIDConnect. See= the following topics for samples of configuring delegated access control:<= /p> | https://docs.wso2.com/exportword?pageId=80728313 | 2022-01-16T22:53:26 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.wso2.com |
KCO Recurring reduces the effort for customers who want to start a subscription of any kind online. The functionality will enable you as a merchant to sell subscriptions and other recurring purchases through Klarna Checkout. It is as easy to set the subscription up as completing any purchase with Klarna Checkout. The function is based on a unique token that is created with the first purchase. This token which represents the customer and their purchase are then used to initiate an additional purchase using Klarna Payments.
This feature needs to be enabled in your Klarna account in order to work. To set it up, please reach out to Klarna merchant support
In the create order API call, add the field recurring and the optional recurring_description fields. See example:
Accept: application/json Authorization: Basic a2xhcm5hOnVuaWNvcm5z Content-Type: application/json { "recurring": true, "recurring_description": "12 month subscription" // optional field }
If recurring is 'true', no financing payment methods will be available in the checkout.
After the consumer has completed a purchase, the Checkout order will have a
recurring_token property see API that contains a token which the merchant will use to create the following recurring orders. This token must be stored for later use and it is up to the merchant to store and create orders with the token. The token can only be used by the merchant ID that created it
To be able to use the stored token you need to use the Klarna Payments API.
Read up about the Klarna Customer token lifecycle, then learn how to place order from Klarna customer token, as well as how to read Klarna customer token.
More documentation about reading tokens and checking their status can be found here.
In case the payment method registered on the token cannot be captured (
403), and the response is
PAYMENT_METHOD_FAILED, we recommend trying again in different times. If that does not work, we suggest sending a notice for the customers that they need to create a new subscription or add funds to their payment method if they wish to continue.
For
DIRECT_DEBIT payment method that cannot be captured, Klarna will fallback to invoice and reply as if it’s a successful capture. | https://docs.klarna.com/klarna-checkout/popular-use-cases/recurring/ | 2022-01-16T22:37:05 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.klarna.com |
Studies show that about 75% of shoppers who abandon their carts usually plan to come back. By adding an Abandoned Cart email you can send a friendly reminder to shoppers, encouraging them to buy today and pay for it later with Klarna. Make sure to include a call to action encouraging customers to 'Shop Now'.
Give an incentive to shoppers who return to their shopping carts. Offer a discount code, free shipping, or next day delivery in order to boost conversion. | https://docs.klarna.com/marketing/au/email/abandoned-cart-e-mail/ | 2022-01-16T21:34:36 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.klarna.com |
Troubleshooting Windows Admin Center
Applies to: Windows Admin Center, Windows Admin Center Preview, Azure Stack HCI, version v20H2
Important
This guide will help you diagnose and resolve issues that are preventing you from using Windows Admin Center. If you are having an issue with a specific tool, please check to see if you are experiencing a known have had a bug which caused Windows Admin Center to fail. Please use a current supported version of Windows.
If you're getting WinRM error messages while managing servers in Windows Admin Center
WinRM doesn't allow credential delegation by default. To allow delegation, the computer needs to have Credential Security Support Provider (CredSSP) enabled temporarily.
If you're receiving WinRM error messages, try using the verification steps in the Manual troubleshooting section of Troubleshoot CredSSP to resolve them.=
Azure features don't work properly in Edge
Edge has known issues related to security zones that affect Azure login in Windows Admin Center. If you are having trouble using Azure features when using Edge, try adding, and the URL of your gateway as trusted sites and to allowed sites for Edge pop-up blocker settings on your client side browser.
To do this:
- Go to the Security tab
- Under the Trusted Sites option, click on the sites button and add the URLs in the dialog box that opens. You'll need to add your gateway URL as well as.
- Go to the Pop-up Blocker settings in Microsoft Edge via edge://settings/content/popups?search=pop-up
- You'll need to add your gateway URL as well as the Allow list.
Having an issue with an Azure-related feature?
Please send us an email at wacFeedbackAzure?
Collecting HAR files
A HTTP Archive Format (HAR) file is a log of a web browser's interaction with a site. This information is crucial for troubleshooting and debugging. To collect a HAR file in Microsoft Edge or Google Chrome, please follow the steps below:
Press F12 to open Developer Tools window, and then click the Network tab.
Select the Clear icon to clean up network log.
Click to select the Preserve Log check box.
Reproduce the issue.
After reproducing the issue, click on Export HAR.
Specify where to save the log and click Save.
Providing feedback on issues
Go to Event Viewer > Application and Services > Microsoft-ServerManagementExperience and look for any errors or warnings.
File a bug on GitHub) | https://docs.microsoft.com/uk-ua/windows-server/manage/windows-admin-center/support/troubleshooting | 2022-01-16T22:54:17 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.microsoft.com |
AWS Managed Microsoft AD
AWS Managed Microsoft AD is the cloud based active directory service offered by Amazon Web Services. You can configure AWS Managed Microsoft AD to send LDAP data to InsightIDR for tracking and alerting purposes.
Before you begin
- Review and ensure that you meet the prerequisites for creating a Managed Microsoft AD AWS Directory:
- When you configure AWS Managed Microsoft AD, make note of the following information, as you’ll need to reference it when enabling the event source in InsightIDR:
- Domain Name
- DNS address
- Admin password
Configure AWS Managed Microsoft AD
Task 1: Create AWS Managed Microsoft AD Service
- In the AWS console, search for “Directory Service”, select AWS Managed Microsoft AD as your directory type, and click Next.
- Provide the domain name that will be used for the domain, and enter a password.
- In Directory Details, make note of the DNS addresses that will be used by InsightIDR to poll the LDAP data. You will need this information when setting up your event source in InsightIDR.
Task 2: Configure DHCP Option Set
Configure the DHCP options set and assign it to the VPC in use. This allows any instances in that VPC to point to the specified domain and DNS servers to resolve their domain names.
- Open the Amazon VPC console and in the navigation pane, click Create DHCP Options Set.
- Name your DHCP options set, and enter the Domain Name and Domain name servers.
- Choose Create DHCP options set, select the newly added DHCP options set, and click Save.
Task 3: Deploy an instance to manage users and groups
After you set up a domain service, you can create a new instance to manage Users and Groups in AWS Managed Microsoft AD.
For instructions, see.
Task 4: (Optional) Run a test LDAP query from the new instance
Once you’ve completed the setup, we recommend that you test the connection using a tool approved by your organization. In this section, we’ll walk you through our test case.
For the purposes of our example, we created a couple of EC2 instances, naming the first R7AWS-ADMGMT (used for managing AD users & groups), and the second instance named R7AWS-VM1. Both are joined to the newly created domain and as an additional step, we tested the LDAP connection from R7AWS-VM1.
We tested the connection using Idp.exe (which you can download HERE), and the following results show a successful connection, and that the instance is polling the AD user account information.
Set up an LDAP event-source in InsightIDR
When you complete this step, be sure to use the credentials provided to you when you created the AWS Managed Microsoft AD Directory Service.
To set up an event source:
- From the left menu, select Data Collection. The Data Collection page appears.
- Click the Setup Event Source dropdown and choose Add Event Source.
- Under User Attribution, select LDAP. The Add Event Source panel appears.
- Choose your collector and select Microsoft Active Directory LDAP.
- Choose the timezone that matches the location of your event source logs.
- In the Server field, enter the DNS address you noted in step 3 of Create AWS Managed Directory Service.
- In the Refresh Rate field, enter the refresh rate in hours.
- In the User Domain field, enter the AD Domain.
- In the Credentials field, enter the domain credentials that you created.
- In the Password field, enter the password to access the LDAP server.
- (Optional) In the Base DN field, enter the value for your Base Distinguished Name.
- (Optional) Enter the name of the group that has admin privileges.
- Click Save.
Verify the configuration
Once you’ve added your event source, you should verify that InsightIDR is successfully pulling LDAP data.
To verify the configuration:
- In InsightIDR, navigate to Data Collection and select the Event-Sources tab.
- Under Product Type, choose LDAP and click View raw log to confirm that LDAP queries are successfully running.
A successful LDAP poll:
'{"physicalDeliveryOfficeName":"Home","whenCreated":"20191205012438.0Z","manager":"CN=bclinton,OU=Users,OU=r7aws,DC=r7aws,DC=local","sAMAccountName":"fflinstone","givenName":"Fred","distinguishedName":"CN=Fred Flinstone,OU=Users,OU=r7aws,DC=r7aws,DC=local","title":"Rock Miner","objectGUID":"MT/MCXDbkkObo7iaJQKmtQ==","sn":"Flintstone","department":"Mining Division","userAccountControl":"66048","userPrincipalName":"[email protected]","pwdLastSet":"132199826781552103"}
{"physicalDeliveryOfficeName":"Del Rio","whenCreated":"20191205021039.0Z","manager":"CN=bclinton,OU=Users,OU=r7aws,DC=r7aws,DC=local","sAMAccountName":"cwhite","givenName":"Chuck","distinguishedName":"CN=Chuck White,OU=Users,OU=r7aws,DC=r7aws,DC=local","title":"Fuller Brush Salesman","objectGUID":"iCHgbaS6KU2ri9MwpQWItg==","sn":"White","department":"Sales Division","userAccountControl":"66048","userPrincipalName":"[email protected]","pwdLastSet":"132199854395196939"}' | https://docs.rapid7.com/insightidr/aws-managed-microsoft-ad/ | 2022-01-16T22:28:56 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.rapid7.com |
unreal.PaperSprite¶
- class unreal.PaperSprite(outer=None, name='None')¶
Bases:
unreal.Object
Sprite Asset
Stores the data necessary to render a single 2D sprite (from a region of a texture) Can also contain collision shapes for the sprite. see: UPaperSpriteComponent
C++ Source:
Plugin: Paper2D
Module: Paper2D
File: PaperSprite.h
Editor Properties: (see get_editor_property/set_editor_property)
additional_source_textures(Array(Texture)): [Read-Write] Additional source textures for other slots
alternate_material(MaterialInterface): [Read-Write] The alternate material to use on a sprite instance if not overridden (this is only used for Diced render geometry, and will be the opaque material in that case, slot 1)
atlas_group(PaperSpriteAtlas): [Read-Write] Spritesheet group that this sprite belongs to
body_setup(BodySetup): [Read-Write] Baked physics data.
collision_geometry(SpriteGeometryCollection): [Read-Write] Custom collision geometry polygons (in texture space)
collision_thickness(float): [Read-Write] The extrusion thickness of collision geometry when using a 3D collision domain
custom_pivot_point(Vector2D): [Read-Write] Custom pivot point (relative to the sprite rectangle)
default_material(MaterialInterface): [Read-Write] The material to use on a sprite instance if not overridden (this is the default material when only one is being used, and is the translucent/masked material for Diced render geometry, slot 0)
origin_in_source_image_before_trimming(Vector2D): [Read-Write] Origin within SourceImage, prior to atlasing
pivot_mode(SpritePivotMode): [Read-Write] Pivot mode
pixels_per_unreal_unit(float): [Read-Write] The scaling factor between pixels and Unreal units (cm) (e.g., 0.64 would make a 64 pixel wide sprite take up 100 cm)
render_geometry(SpriteGeometryCollection): [Read-Write] Custom render geometry polygons (in texture space)
rotated_in_source_image(bool): [Read-Write] This texture is rotated in the atlas
snap_pivot_to_pixel_grid(bool): [Read-Write] Should the pivot be snapped to a pixel boundary?
sockets(Array(PaperSpriteSocket)): [Read-Write] List of sockets on this sprite
source_dimension(Vector2D): [Read-Write] Dimensions within SourceTexture (in pixels)
source_image_dimension_before_trimming(Vector2D): [Read-Write] Dimensions of SourceImage
source_texture(Texture2D): [Read-Write] The source texture that the sprite comes from
source_texture_dimension(Vector2D): [Read-Write] Dimension of the texture when this sprite was created Used when the sprite is resized at some point
source_uv(Vector2D): [Read-Write] Position within SourceTexture (in pixels)
sprite_collision_domain(SpriteCollisionMode): [Read-Write] Collision domain (no collision, 2D, or 3D)
trimmed_in_source_image(bool): [Read-Write] This texture is trimmed, consider the values above
- property alternate_material¶
[Read-Only] The alternate material to use on a sprite instance if not overridden (this is only used for Diced render geometry, and will be the opaque material in that case, slot 1)
- Type
- | https://docs.unrealengine.com/4.27/en-US/PythonAPI/class/PaperSprite.html | 2022-01-16T23:20:27 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.unrealengine.com |
From Varibill Documentation
This template is used when invoices or credit notes are sent via email from Varibill (Refer to "Invoices" or "Credit Notes" screens). You are able to manage the subject and body of the email using Varibill, but your email signatures are set up and managed on your servers. (Speak to your system administrator if you wish to change this)
The email address used to send your invoices or credit notes can be managed using the "Tenant Details" screen.
Insert one / more of the curly brackets {} in your subject / body to reference the information it represents.
Subject
- {ContactNameAndSurname} = Registered Name (business clients) / Contact First Name & Surname (Individual clients) as captured under the client’s information (Refer to "Manage Clients").
- {InvoiceDate} = Invoice date.
- {ClientAccountCode} = Client Account Code.
- {ContactPersonName} = Finance contact person’s name (business clients) / Contact person name (Individual clients) as captured on the "Clients" screen.
- {ContactPersonSurname} = Finance contact person’s surname (business clients) / Contact person surname (Individual clients) as captured on the "Clients" screen.
- {InvoiceDate} = Invoice date.
- {ClientAccountCode} = Client Account Code. | https://docs.varibill.com/Email_Template | 2022-01-16T21:10:50 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.varibill.com |
PyCC Optimizer Documentation¶
Each PyCC optimizer applies some transformation to your source code. All optimizers are disabled by default in both the pycc-transform and pycc-compile scripts. Below is a list of optimizations that can be enabled, a description of what transformation it applies, and the command line flag needed to enable it.
Constant In-lining¶
Flag: –constants
As demonstrated on the main page, this option replaces the use of read-only, constant values with their literal values. This affects variables that are assigned only once within the given scope and are assigned to a number, string, or name value. Name values are any other symbols including True, False, and None.
This transformation does not apply to constants that are assigned to complex types such as lists, tuples, function calls, or generators.
In addition, simple arithmetic operations performed on constant values are automatically calculated and the constant value inserted back. | https://pycc.readthedocs.io/en/latest/optimizers.html | 2022-01-16T22:18:58 | CC-MAIN-2022-05 | 1642320300244.42 | [] | pycc.readthedocs.io |
#include <QmitkDataStorageFilterProxyModel.h>
Definition at line 26 of file QmitkDataStorageFilterProxyModel.h.
Definition at line 54 of file QmitkDataStorageFilterProxyModel.h.
If the predicate pred returns true, the node will be hidden in the data manager view
Check if predicate is present in the list of filtering predicates.
Remove a predicate from the list of filters. Returns true if pred was found and removed.
Definition at line 55 of file QmitkDataStorageFilterProxyModel.h. | https://docs.mitk.org/nightly/classQmitkDataStorageFilterProxyModel.html | 2022-01-16T22:02:50 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.mitk.org |
Syslog¶
If you want to send syslog from other devices to the manager, you’ll need to run so-allow on the manager and then choose the
syslog option to allow the port through the firewall. If sending syslog to a sensor, please see the Examples in the Firewall section.
If you need to add custom parsing for those syslog logs, we recommend using Elasticsearch ingest parsing. | https://docs.securityonion.net/en/2.3/syslog.html | 2022-01-16T21:33:17 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.securityonion.net |
IDXGISwapChain interface (dxgi.h)
An IDXGISwapChain interface implements one or more surfaces for storing rendered data before presenting it to an output.
Inheritance
The IDXGISwapChain interface inherits from IDXGIDeviceSubObject. IDXGISwapChain also has these types of members:
Methods
The IDXGISwapChain interface has these methods.
Remarks
You can create a swap chain by calling IDXGIFactory2::CreateSwapChainForHwnd, IDXGIFactory2::CreateSwapChainForCoreWindow, or IDXGIFactory2::CreateSwapChainForComposition. You can also create a swap chain when you call D3D11CreateDeviceAndSwapChain; however, you can then only access the sub-set of swap-chain functionality that the IDXGISwapChain interface provides. | https://docs.microsoft.com/en-us/windows/win32/api/dxgi/nn-dxgi-idxgiswapchain | 2022-01-16T22:44:50 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.microsoft.com |
N3048EP-ON, N3048ET-ON, N1148T-ON and N3132PX switches use the OverlayFS file system which doesn’t allow the user files on the switch to be saved from the RAM to the flash automatically. This results in the loss of user files when the switch is powered off. The user can add the file path (the file that you want to be saved even after power off) to the backup list, in /mnt/open/picos/backup_files file, and then run the save_config command. With this technique, the user files on the backup list are backed up to the flash ensuring that the user files are not lost when the power is turned off.
PICOS has built-in default backup files, as listed below.
root@Xorplus$ cat /etc/picos/backup_files.lst /etc/passwd /etc/shadow /etc/group /etc/gshadow /etc/resolv.conf /etc/picos/picos_start.conf /etc/picos/switch-public.key /etc/picos/pica.lic /pica/config/pica_startup.boot /pica/config/pica.conf.01 /pica/config/pica.conf.02 /pica/config/pica.conf.03 /pica/config/pica.conf.04 /pica/config/pica.conf.05 /ovs/ovs-vswitchd.conf.db /ovs/function.conf.db /ovs/config/meters /ovs/config/groups /ovs/config/flows /ovs/var/lib/openvswitch/pki/ /var/log/report_diag.log /var/log/report_diag.log.1 /var/log/report_diag.log.2 /var/log/report_diag.log.3 /var/log/report_diag.log.4 /var/log/report_diag.log.5 /cftmp/upgrade.log /cftmp/upgrade2.log
Warning:
If the user operation makes change to the above files, you need to manually run the configuration saving command to save these files from the RAM to the flash. For details, please refer to Configuration Saving Guide and copy running-config startup-config.
If you want to save a user files that are not in the above default backup file list, you can follow the backup operation steps described below.
Step1 Add the file path to the backup list (the file that you want to be saved even after power off), in /mnt/open/picos/backup_files file.
For example, if you want to backup /home/admin/a.txt, then add /home/admin/a.txt to the backup file list:
root@Xorplus$ cat /etc/picos/user_backup_files.lst /home/admin/a.txt
Step2 Under Linux bash, issue the following command manually to begin file backup.
root@Xorplus$ save_config
After the above two steps, the user file /home/admin/a.txt will be backed up to the flash. This file will not be lost when the power is turned off. | https://docs.pica8.com/display/PicOS421sp/File+Backup | 2022-01-16T22:08:43 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.pica8.com |
Salt¶
From:
Salt is a new approach to infrastructure management built on a dynamic communication bus. Salt can be used for data-driven orchestration, remote execution for any infrastructure, configuration management for any app stack, and much more.
Note
Salt is a core component of Security Onion 2 as it manages all processes on all nodes. In a distributed deployment, the manager node controls all other nodes via salt. These non-manager nodes are referred to as salt minions.
Firewall Requirements¶
4505/tcpand
4506/tcp:
Checking Status¶
You can use salt’s
test.ping to verify that all your nodes are up:
sudo salt \* test.ping
Remote Execution¶
Similarly, you can use salt’s
cmd.run to execute a command on all your nodes at once. For example, to check disk space on all nodes:
sudo salt \* cmd.run 'df'
Configuration¶
Many of the options that are configurable in Security Onion 2 are done via pillar assignments in either the global or minion pillar files. Pillars are a Saltstack concept, formatted typically in YAML, that can be used to parameterize states via templating. Saltstack states are used to ensure the state of objects on a minion. In many of the use cases below, we are providing the ability to modify a configuration file by editing either the global or minion pillar file.
Global pillar file: This is the pillar file that can be used to make global pillar assignments to the nodes. It is located at
/opt/so/saltstack/local/pillar/global.sls.
Minion pillar file: This is the minion specific pillar file that contains pillar definitions for that node. Any definitions made here will override anything defined in other pillar files, including global. This is located at
/opt/so/saltstack/local/pillar/minions/<minionid>.sls.
Default pillar file: This is the pillar file located under
/opt/so/saltstack/default/pillar/. Files here should not be modified as changes would be lost during a code update.
Local pillar file: This is the pillar file under
/opt/so/saltstack/local/pillar/. These are the files that will need to be changed in order to customize nodes.
Warning
Salt sls files are in YAML format. When editing these files, please be very careful to respect YAML syntax, especially whitespace. For more information, please see.
Here are some of the items that can be customized with pillar settings:
Salt Minion Startup Options¶
Currently, the salt-minion service startup is delayed by 30 seconds. This was implemented to avoid some issues that we have seen regarding Salt states that used the ip_interfaces grain to grab the management interface IP.
If you need to increase this delay, it can be done using the
salt:minion:service_start_delay pillar. This can be done in the minion pillar file if you want the delay for just that minion, or it can be done in the
global.sls file if it should be applied to all minions.
salt: minion: service_start_delay: 60 # in seconds.
More Information¶
See also
For more information about Salt, please see. | https://docs.securityonion.net/en/2.3/salt.html | 2022-01-16T21:16:50 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.securityonion.net |
Kubernetes events 🔗
Description 🔗
The Splunk Distribution of OpenTelemetry Collector provides the
kubernetes-events monitor type by using the Splunk Observability Cloud Smart Agent Receiver.
This monitor type listens for Kubernetes events by calling the K8s API running on manager nodes, and sends Kubernetes events into Splunk Observability Cloud as Infrastructure Monitoring events.
Upon startup, the Kubernetes events monitor type sends all of the events that K8s has that are still persisted and then send any new events as they come in. The various agents perform leader election amongst themselves to decide which instance will send events, unless the
alwaysClusterReporter config option is set to
true.
When
alwaysClusterReporter is set to
true, every node, with the configuration, will emit the same metrics. There is no additional querying of the manager node. When enabled each agent on every node of the cluster fetches events from the k8s API. Which can bring down k8s api = manager nodes.
Note 🔗
Larger clusters might encounter instability when setting this configuration across a large number of nodes. Enable with caution.:
receivers: smartagent/kubernetes-events: type: kubernetes-events ... # Additional config
To use this monitor type, configure which events to send. You can see the types of events happening in your cluster with the following command:
kubectl get events -o yaml --all-namespaces
From the output, you can select which events to send by the Reason (Started, Created, Scheduled) and Kind (Pod, ReplicaSet, Deployment…) combinations. These events need to be specified individually with a single reason and involveObjectKind for each event rule you want to allow and are placed in the whitelistedEvents configuration option as a list of events you want to send.
Note Event names will match the reason name
Example YAML configuration:
receivers: smartagent/kubernetes-events: type: kubernetes-events whitelistedEvents: - reason: Created involvedObjectKind: Pod - reason: SuccessfulCreate involvedObjectKind: ReplicaSet
To complete this monitor type activation, you must also include it in a metrics pipeline. To do this, add the monitor type to the service > pipelines > metrics > receivers section of your configuration file. For example:
service: pipelines: metrics: receivers: [smartagent/kubernetes. | https://docs.splunk.com/observability/gdi/kubernetes-events/kubernetes-events.html | 2022-01-16T23:15:50 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.splunk.com |
Appearance >> Customize >> Template Options >> Homepage Settings – Section
If you want to set it up like our demo first you have to setup static home page.
Your can follow following step to setup static home page.
- Please go to Dashboard >> Pages >> add new page and set title name home and publish it.
- Then, please go to Appearance >> Customize >> Template Options >> Homepage Settings and check ( if unchecked ) A static page then from Homepage dropdown section please select your home page and save it.
- After that Homepage template options checked Widgetize layout.(Default value is widgetize)
- Then, everything you need to do is save and check all widgets from Appearance >>Widgets and drag and drop those widgets into sidebars.
Structure of magazine Layout. | https://docs.themecentury.com/homepage-settings-hamroclass/ | 2022-01-16T22:06:48 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.themecentury.com |
When a vulnerability is published, organizations typically analyze the potential for impact in their environment. Dependency-Track can help identify all affected projects across the organization. If the vulnerability is published to a datasource Dependency-Track supports (i.e. NVD, NPM, OSS Index, VulnDB, etc), then simply looking up the vulnerability in the platform is all that’s required.
Dependency-Track contains a full mirror for each of the vulnerability datasources it supports. Virtually all public information about the vulnerability including the description, affected versions, CWE, and severity, are captured, as well as the affected projects. The list of affected projects is dynamically generated based on data in Dependency-Track at the time of inquiry.
Alternatively, if the component name and version are known, then performing a search on that component will reveal a list of vulnerabilities, as well as a list of all projects that have a dependency on the component.
| https://docs.dependencytrack.org/usage/impact-analysis/ | 2019-02-16T03:59:55 | CC-MAIN-2019-09 | 1550247479838.37 | [array(['/images/screenshots/vulnerability.png', 'incident response'],
dtype=object)
array(['/images/screenshots/vulnerable-component.png',
'incident response'], dtype=object) ] | docs.dependencytrack.org |
, or communicate with school staff – all from a single experience in Office 365 for Education.
- Check your environment’s readiness for Teams.
- Deploy School Data Sync to make it easier for teachers, teachers, uducators on their experience with Office 365 and Teams. Use a channel in Teams when your school has fewer than 2500 turn off Microsoft Teams licenses
Teams is a cloud-based service. Once an educator or student has a valid license, they can run the desktop, web, and mobile Teams clients. They can install these clients themselves -- the IT admin doesn't need to deploy these clients.
You can manage individual user licenses for Microsoft Teams by using the Office 365 Admin Center or by using PowerShell. See Office 365 licensing for Teams for information about both methods. This is valuable to understand if you are interested in piloting Teams before broad enablement.
Note
Before you turn on Teams for your school, make sure you have the proper controls in place. A piloting program helps significantly to ensure the proper users are able to use and give feedback on what types of controls enhance the usage and management of the product.
Below are some sample pilot user groups and the teams that could be of interest.
Note
For help and support with Teams licenses or other issues, submit an inquiry at.
Configure Teams for your school.
Configure tenant-wide settings
General
The General section allows you to configure the following settings for your entire institution across all the license types you may have.
- Show organizational chart in personal profile.
- Use Skype for Business for recipients who don’t have Teams.
You can enable email integration with channels, as well as create a restricted senders list. To turn on or turn off email integration, move the toggle to Off or On, then select Save.
The Allow senders list helps you to control who can email teams within your organization.
Apps
You can enable external apps, new external apps, and sideloading for apps in Teams. To disable or enable the setting, move the toggle to Off or On, then choose Save.
Enabling external apps provides a drop-down list for you to select the applications you'd like in your institution, which allows teachers to combine their favorite apps to work within the Teams platform. If you select Enable new external apps by default, new apps will automatically show up in your list as they become available.
The ability to sideload an app is only available to non-guest team members. This is useful if you have any programming courses or are testing any custom learning management systems that can integrate with Teams. To learn more, see The Microsoft Teams developer platform.
By enabling Assignments, educators can provide assignments and iterative feedback to students.
Once external apps in Teams is enabled, educators Teams client, users can add tabs for Word documents, PowerPoint, Forms, OneNote Class Notebooks, Assignments, and more. Over time, more tabs will be added, both from Microsoft and from partners.
Custom cloud storage
You can enable various forms of cloud storage within Teams. Currently, Box, Dropbox, Google Drive, and ShareFile are supported. To disable or enable the setting, toggle the switch to Off or On, and then select Save. When these settings are enabled, any member of a team can add a new provider in which documents can be stored to or retrieved from.
Teams uses SharePoint as the default file storage provider. For more information, see How SharePoint Online and OneDrive for Business interact with Teams. To learn how your quota is calculated and how best to manage it, see Manage site collection storage limits. If you don’t want to manage each site’s collection, you'll also learn how to let the service automatically handle that for you.
Settings by user/license type.
You should only have Education listed in your license types. The dropdown includes Education-Faculty and staff, Education-Student, and Guest. The system can only differentiate users based on the licenses you've assigned them. If you only have one license type, the settings here can be treated as tenant-wide settings.
To enable a license type to have access to Teams, move the toggle to On, then select Save. This is just a temporary switch; eventually, you'll need to manage user access to Teams through user licenses as you would for all other Office workloads.
Note
For help and support with tenant-wide settings or other issues, submit an inquiry at.
Configure Teams by user or license type
Important
The new Microsoft Teams admin center is here!.
As part of the migration to the new Microsoft Teams admin center, how to configure Teams by license type is accomplished via policy. The policy types are accessible via the Admin Center navigation. See below for an example list of Messaging policies.
With policies, features can now be turned on/off tenant admins. If you want to manage these in the future, create new custom policies and assign the custom policies to users.
A custom policy can be assigned to any user. To do this, click + New policy, set the features, and click Save. This custom policy can be assigned to a user through the Users tab or via Admin Center, create a new policy and assign on the Users tab.
Note
Until a custom policy is assigned to a user, the user will be using the Global policy setting. This means that if Chat is enabled in the Global policy and disabled in the custom Student policy, until the custom policy is assigned, the Student will be able to chat. In this case, it may be easier to set Chat as disabled globally and use custom policies to enable Chat for Faculty users.
Appendix
How to create and assign a messaging policy
- See Get clients for Microsoft Teams (Windows and Mac).
Note
Safari isn't currently supported. Check the Teams Roadmap for news about new features in Teams. Users who try to open Teams on Safari will be directed to download the Teams desktop client.
Note
As long as an operating system can run a supported browser, Teams is supported. For example, running Firefox on the Linux operating system is an option for using Teams.
Resources, feedback, and support
Teams resources for Education admins
Feedback
We'd love to hear your thoughts. Choose the type you'd like to provide:
Our feedback system is built on GitHub Issues. Read more on our blog. | https://docs.microsoft.com/en-us/microsoftteams/teams-quick-start-edu?redirectSourcePath=%252fen-gb%252farticle%252fmicrosoft-teams-getting-started-guide-for-it-admins-e7b992dc-de27-4303-8973-7a1ca8ad7cfb | 2019-02-16T03:12:18 | CC-MAIN-2019-09 | 1550247479838.37 | [array(['media/quick-start-enable-teams-microsoft365-deployment-team.png',
'Screenshot of a sample Microsoft 365 Deployment team.'],
dtype=object)
array(['media/quick-start-example-scenarios.png',
'Screenshot of Teams user groups.'], dtype=object)
array(['media/enable_microsoft_teams_features_in_your_office_365_organization_image1.png',
'Screenshot of the settings in the General section in the Office 365 admin center.'],
dtype=object)
array(['media/qs-edu-email-integration.png',
'Screenshot of the settings in the Email integration section in the Office 365 admin center.'],
dtype=object)
array(['media/qs-edu-apps2.png',
'Screenshot of the settings in the Apps section in the Office 365 admin center.'],
dtype=object)
array(['media/add_a_tab_to_microsoft_teams_image.png',
'Screenshot of how to add tabs for apps to Teams.'], dtype=object)
array(['media/enable_microsoft_teams_features_in_your_office_365_organization_image7.png',
'Screenshot of the settings in the Custom cloud storage section in the Office 365 admin center.'],
dtype=object)
array(['media/qs-edu-settings-by-user-license-type-showing-dropdown.png',
'Screenshot of the settings in the Teams and channels section in the Office 365 admin center.'],
dtype=object)
array(['media/qs-edu-settings-by-user-license-type-showing-toggle-on.png',
'Screenshot of the settings for Microsoft Teams license picker section in the Office 365 admin center.'],
dtype=object)
array(['media/teams-messaging-policies-edu.png',
'Screenshot of the Messaging policies page in the Microsoft Teams admin center.'],
dtype=object)
array(['media/teams-assigned-policies-edu.PNG',
'Screenshot of the Assigned policies section of the Microsoft Teams admin center.'],
dtype=object) ] | docs.microsoft.com |
This chapter provides a high-level introduction to Spring Integration's core concepts and components, along with some programming tips to help you make the most of Spring Integration. Components capable of producing and consuming messages should be managed within a layer that is logically above the application's service layer, interacting with those services through interfaces in much the same way that a web tier would.
Section 6.1.2, “Message Channel Implementations” has a detailed discussion of the variety of channel implementations available in Spring Integration.
A message router is responsible for deciding what channel or channels (if any) should receive the message next. Typically, the decision is based upon the message's content or the metadata available in the message headers.
Endpoint bean names follow conventions such as:
someService (the id)
someComponent.someMethod.serviceActivator
someService
someAdapter (the id)
someAdapter
someAdapter
someAdapter.source (as long as you use the convention of appending .source to the @Bean name)
(See Section 10.4 and Section B.1.1, “Annotation-driven Configuration with the …”.)
Example 5.1. pom.xml (the original listing was garbled; a representative sketch follows below)
Specifically, you can add a property for ${spring.boot.version} or use an explicit version.
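A minimal sketch of what such a Maven fragment typically looks like, assuming (based on the ${spring.boot.version} property mentioned above) that the imported BOM is spring-boot-dependencies; this is an illustration, not the original listing:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>${spring.boot.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>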
This section documents some of the ways to get the most from Spring Integration (see also Section 5.3, “Main Components”, earlier in this chapter).
Their implementations (contracts) are:
org.springframework.messaging.Message: See Chapter 7, Message;
org.springframework.messaging.MessageChannel: See Section 6.1, “Message Channels”;
org.springframework.integration.endpoint.AbstractEndpoint: See Section 6.2 and Chapter 39.
As discussed in Section 5.6, . | https://docs.spring.io/spring-integration/docs/5.1.3.BUILD-SNAPSHOT/reference/html/overview.html | 2019-02-16T03:18:14 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.spring.io |
Contributing to this project¶
Checklist¶
- All potential contributors must read the Contributor Code of Conduct and follow it
- Fork the repository on GitHub or GitLab
- Create a new branch, e.g.,
git checkout -b bug/12345
- Fix the bug and add tests (if applicable [1], see How To Add Tests)
- Run the tests (see How To Run The Tests below)
- Add documentation (as necessary) for your change
- Build the documentation to check for errors and formatting (see How To Build The Documentation below)
- Add yourself to the
AUTHORS.rst(unless you’re already there)
- Commit it. Follow these rules in your commit (if it closes an issue)
- See Example Commit Message below
- Push it to your fork
- Create a request for us to merge your contribution
After this last step, it is possible that we may leave feedback in the form of review comments. When addressing these comments, you can follow two strategies:
- Amend/rebase your changes into an existing commit
- Create a new commit with a different message [2] describing the changes in that commit and push it to your branch
This project is not opinionated about which approach you should prefer. We only ask that you are aware of the following:
- Neither GitHub nor GitLab notifies us that you have pushed new changes. A friendly ping is encouraged
- graffatcolmingov@gmail
How To Add Tests¶
We use pytest to run tests and to simplify how we write tests. If you’re
fixing a bug in an existing module, please find tests for that module or feature and
add to them. Most tests live in the
tests directory. If you’re adding a
new feature in a new submodule, please create a new module of test code. For
example, if you’re adding a submodule named
foo then you would create
tests/test_foo.py which will contain the tests for the
foo submodule.
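For example, a hypothetical new submodule named foo might get a test module like the sketch below (the frobnicate function is a placeholder invented for illustration, not part of the toolbelt):
# tests/test_foo.py
import pytest

from requests_toolbelt import foo  # hypothetical submodule used for illustration


def test_frobnicate_returns_expected_value():
    # Placeholder assertion against the imaginary API
    assert foo.frobnicate("payload") == "expected value"


def test_frobnicate_rejects_none():
    with pytest.raises(ValueError):
        foo.frobnicate(None)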
How To Run The Tests¶
Run the tests in this project using tox. Before you run the tests, ensure you have installed tox, either using your system package manager (e.g., apt, yum, etc.) or your preferred Python installer (e.g., pip).
Then run the tests on at least Python 2.7 and Python 3.x, e.g.,
$ tox -e py27,py34
Finally run one, or both, of the flake8 style enforcers, e.g.,
$ tox -e py27-flake8
# or
$ tox -e py34-flake8
It is preferable if you run both to catch syntax errors that might occur in Python 2 or Python 3 (based on how familiar you are with the common subset of language from both).
Tox will manage virtual environments and dependencies for you so it will be the only dependency you need to install to contribute to this project.
How To Build The Documentation¶
To build the docs, you need to ensure tox is installed and then you may run
$ tox -e docs
This will build the documentation into
docs/_build/html. If you then run
$ python2.7 -m SimpleHTTPServer
# or
$ python3.4 -m http.server
from that directory, you can view the docs locally at http://localhost:8000/ (the default port used by both commands).
Example Commit Message¶
Allow users to use the frob when uploading data

When uploading data with FooBar, users may need to use the frob method to ensure that pieces of data are not munged.

Closes #1234567
This chapter provides a tutorial introduction to MySQL by showing how to use the mysql client program to create and use a simple database. mysql (sometimes referred to as the “terminal monitor” or just “monitor”) is an interactive program that enables you to connect to a MySQL server, run queries, and view the results. | http://doc.docs.sk/mysql-refman-5.5/tutorial.html | 2019-02-16T03:01:46 | CC-MAIN-2019-09 | 1550247479838.37 | [] | doc.docs.sk
Provision a virtual machine with Puppet
The procedure for ordering a virtual machine with Puppet is the same as for ordering a standard virtual machine through the service catalog. Provisioning a virtual machine with Puppet does not add the virtual machine information as a virtual machine CI. Instead, the instance records the new CI as a Linux server.
Completing Provisioning Tasks
A cloud operator must complete the provisioning task generated by the catalog request to provision the virtual machine. To provision a virtual machine with Puppet, ensure that: Guest customization is set to Yes. Node definition is set to a node definition.
Authenticating New Puppet Virtual Machines
When provisioning a new virtual machine with Puppet, the Puppet Master can authenticate and configure the virtual machine. Provisioning a new virtual machine with Puppet bypasses the standard Puppet approval process, only cloud provisioning approvals are required. After the virtual machine has been provisioned, the Puppet agent included on the virtual machine attempts to retrieve configuration information from the Puppet Master. If the Puppet agent and Puppet Master successfully communicate, the Puppet agent configures the virtual machine without further user interaction according to the node definition assigned to the virtual machine. | https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/product/configuration-automation/concept/c_ProvisionVMWithPuppet.html | 2019-02-16T03:53:36 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.servicenow.com
Cron expressions encode a repeating interval of time. The format
supported by the pump machinery of the Sesam node is explained in more
detail below. It mostly follows the explanation here: but note that the special character
W and
# is not supported by our implementation.
A cron-expression is a string of 5 or 6 fields separated by space character. The string is parsed from left to right and denotes in sequence: | https://docs.sesam.io/cron-expressions.html | 2019-02-16T03:22:24 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.sesam.io |
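For illustration of the cron format described above (assuming the conventional crontab field order of minute, hour, day of month, month, day of week for the five-field form), two example expressions:
*/15 * * * *     fires every 15 minutes
0 6 * * 1-5      fires at 06:00 on weekdays (Monday through Friday)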
4. Manager Configuration fails with SQL Authentication Error¶
When going through the installation process of the RaMP DCIM Manager Configuration Tool
If you came across the following warning message:
And you clicked [Yes] to proceed, then clicked [Next] on the screen below:
Then ran into the following error:
The fix consists of confirming that the following four conditions are met (a quick way to check the SQL Server side is sketched after the list):
- An existing SQL Server instance exists on the server
- Confirm that the SQL Server collation is SQL_Latin1_General_CP1_CI_AS
- Confirm that SQL Authentication is turned on
- Confirm that the username and password (Local & SQL Server) has read/write permissions, and enter it accordingly within the following screen:
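As a quick check of the collation and authentication-mode conditions before re-running the Manager Configuration Tool, a query along these lines (standard SQL Server metadata functions, run in SQL Server Management Studio against the selected instance) can help:
-- Server collation: should report SQL_Latin1_General_CP1_CI_AS
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

-- Authentication mode: 0 = mixed mode (SQL Server and Windows), 1 = Windows-only
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;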
When the installation has been completed successfully, you should see the following screen: | https://docs.tuangru.com/faq/questions/Q4.html | 2019-02-16T04:06:04 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.tuangru.com |
You use a single workflow in vRealize Orchestrator to inject your custom logic into the IaaS workflow stubs and assign your customized life cycles to machine blueprints.
You must design your custom vRealize Orchestrator workflows to accept string inputs. If your custom workflow expects a complex data type, create a wrapper workflow that looks up this complex value and translates it to a string. For an example wrapping workflow, see the sample Workflow template, provided in . | https://docs.vmware.com/en/vRealize-Automation/7.4/com.vmware.vra.prepare.use.doc/GUID-3F8968B1-EC5B-4F2A-975B-06CF2CF96F7D.html | 2019-02-16T02:55:23 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.vmware.com |
Make Commands¶
The easiest way to get started is to use the built in
make commands. Your project contains a Makefile that allows you to setup your development environment with a single command. This command will create your project’s virtual environment, install all pip dependencies, create the development database, run migrations and load initial data to database, install front-end dependencies and finally start the development server for you.
To do this run
make develop_env
You can access your site through the local development server once it is running, and the Admin back-end is available from the admin URL. The default Admin username is admin and the default Admin password is admin123.
Make command line¶
Create the virtualenv for the project
make virtualenv
Install the requirements to the virtualenv
make requirements
Create a PostgreSQL database for the project. It will have the same name as the project
make db
Run the migrations
make migrate
Populate the site with initial page structure
make initial_data
Copy the media(images and documents) to project root
make copy_media
Install all front-end dependencies with bower
make bower
Start the standard Django dev server
make runserver
Start Server with livereload functionality
make livereload
Run your unit tests
make test
Run your functional tests
make func_test
Install Node modules:
make node_modules
Minify Images used in site
make compress_images
Generate a static site from the project
make static_site | https://wagtail-cookiecutter-foundation.readthedocs.io/en/latest/references/using_make.html | 2018-12-10T06:16:47 | CC-MAIN-2018-51 | 1544376823318.33 | [] | wagtail-cookiecutter-foundation.readthedocs.io |
Folie is a command-line utility to talk to a µC via a (local or remote) serial port
GitHub repository:
Folie is work-in-progress. For a first introduction see these articles:
The first version of Folie is part of the Embello project, and can be found on GitHub, see the README.
Folie v2 is a complete rewrite, with the goal of better supporting all platforms and simplifying uploads, with built-in flash images for a few µC boards.
There are a number of ready-made executables on the Releases page. There are no dependencies, and they'll also run on older versions of each OS. The Folie repository is open source; see the "unlicense". To contribute fixes and improvements, you're welcome to fork the repository and submit a pull request.
BitWizard Raspduino
Please use raspduino ID for board option in “platformio.ini” (Project Configuration File):
[env:raspduino]
platform = atmelavr
board = raspduino
You can override default BitWizard Raspduino settings per build environment using
board_*** option, where
*** is a JSON object path from
board manifest raspduino.json. For example,
board_build.mcu,
board_build.f_cpu, etc.
[env:raspduino]
platform = atmelavr
board = raspduino

; change microcontroller
board_build.mcu = atmega328p

; change MCU frequency
board_build.f_cpu = 16000000L
Debugging¶
PIO Unified Debugger currently does not support BitWizard Raspduino board. | http://docs.platformio.org/en/latest/boards/atmelavr/raspduino.html | 2018-12-10T06:30:56 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.platformio.org |
On the Password page, you can reset the user password.
Reset User Password.
To reset the password, click “Configuration”, then “Profiles”, then click “Password”. The Password page will be displayed.
Click on the “Update” button.
Enter the old password in the “Old Password” box, enter the new password in the “New Password” box, then enter the new password again in “Confirm New Password” to confirm.
Click “Save” button to save the information. | http://docs.smacc.com/password/ | 2018-12-10T06:45:18 | CC-MAIN-2018-51 | 1544376823318.33 | [array(['http://docs.smacc.com/wp-content/uploads/2017/01/password-1.png',
None], dtype=object)
array(['http://docs.smacc.com/wp-content/uploads/2017/01/password-2-1.png',
None], dtype=object) ] | docs.smacc.com |
Inherited deployments
If you are a system administrator who has inherited the responsibility for a Splunk software deployment, use this manual to gain an understanding of your deployment's network characteristics, data sources, user population, and knowledge objects. This information will help orient you to the essential aspects of the Splunk platform running in your environment. It includes specific suggestions for how to discover what is running, how well it is running, who is using it, and where to go for more detailed information.
For a high-level introduction to Splunk Enterprise software, see the Splunk Enterprise Overview manual.
To learn about the basics of searching and reporting with Splunk software, use the Search Tutorial.
If Splunk software is new to you, there are resources available to help you:
The Splunk Professional Services team is also available to perform a technical assessment of your Splunk environment to ensure that your deployment and internal processes follow best practices.
This documentation applies to the following versions of Splunk® Enterprise:! | http://docs.splunk.com/Documentation/Splunk/7.0.1/InheritedDeployment/Introduction | 2018-12-10T06:46:39 | CC-MAIN-2018-51 | 1544376823318.33 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Use this API to control the behavior of the RhoMobile Log API as well as access it. This API gives access to the Logging functionality. There are five functions to add messages to the log with different severity (from lowest to highest): trace, info, warning, error and fatal. Each of those functions gets two parameters: message and category. Category is a user-defined group that helps with searching and filtering.
Accessing Log File: sendLogFile: sends the whole log to the server; showLog: brings up a popup with the log; readLogFile: returns the full log file; cleanLogFile: removes all logged messages.
Filtering:
Using level property: It limits minimal severity of messages that will be added to log. For example: setting log level to 2 (warning) will filter out messages generated by trace and info.
Categories: user defined groups that are used to select messages from different modules for ease of use. There are two main filters: includeCategories and excludeCategories. They are both active at the same time. includeCategories allows to select groups/categories that should be in the log (setting this property to empty will turn disable logging). excludeCategories is used for filtering out some of categories.
excludeFilter, this filter is used to remove all sensitive information like passwords, security tokens from log.
Log destinations (any combinations of them):
This API is part of the
coreapi extension that is included automatically.
extensions: ["coreapi"]
Be sure to review the JavaScript API Usage guide for important information about using this API in JavaScript
Be sure to review the Ruby API Usage guide for important information about using this API in Ruby
Clean log file, all logged messages will be removed.
Synchronous Return:
Method Access:
Rho.Log.cleanLogFile()
Rho::Log.cleanLogFile()
Log message at the Error level.
Parameters
Log message.
Log category.
Synchronous Return:
Method Access:
Rho.Log.error(STRING message, STRING category)
Rho::Log.error(STRING message, STRING category)
Log message at the FatalError level. Application will be terminated (on all platforms except iOS).
Parameters
Log message.
Log category.
Synchronous Return:
Method Access:
Rho.Log.fatalError(STRING message, STRING category)
Rho::Log.fatalError(STRING message, STRING category)
Log message at the Info level.
Parameters
Log message.
Log category.
Synchronous Return:
Method Access:
Rho.Log.info(STRING message, STRING category)
Rho::Log.info(STRING message, STRING category)
Read log file. Returns string from the log file containing specified number of symbols.
Parameters
Maximum size of the resulting string in symbols.
Synchronous Return:
Method Access:
Rho.Log.readLogFile(INTEGER limit)
Rho::Log.readLogFile(INTEGER limit)
Send log file to destinationURI property. Please note that this procedure is blocking and will stop any logging while log file is being send.
Parameters
Synchronous Return:
Method Access:
Rho.Log.sendLogFile(CallBackHandler callback)
Rho::Log.sendLogFile(CallBackHandler callback)
Display Log view window.
Synchronous Return:
Method Access:
Rho.Log.showLog()
Rho::Log.showLog()
Log message at the Trace level. By default trace messages are not shown in log (if level equals to 1).
Parameters
Log message.
Log category.
Synchronous Return:
Method Access:
Rho.Log.trace(STRING message, STRING category)
Rho::Log.trace(STRING message, STRING category)
Log message at the Warning level.
Parameters
Log message.
Log category.
Synchronous Return:
Method Access:
Rho.Log.warning(STRING message, STRING category)
Rho::Log.warning(STRING message, STRING category)
List of log destinations that are being used. Destination could be set to empty (disable all logging), Logging to several destinations could be set by setting destination to comma separated list in any order (for example “stdio,file”). By default logging to console can be enabled from rhoconfig.txt (LogToOutput = 1). After Rhodes initialization logging to file is enabled automatically.
Possible Values (STRING):
Log is written to a local file on the device (typically rholog.txt)
Log is written to the standard output (ex: Android ADB)
Log is written to a remote logger.
Property Access:
myObject.destination
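For example, to log to both the console and the local file at the same time (a one-line sketch using Rho.Log in place of the myObject placeholder above):
Rho.Log.destination = "stdio,file";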
Log server URI where log will be posted by using Rho::Log.sendLogFile or from the log view. Log server source code is open and available at, so you can deploy your own logserver. URI format: ‘[/path][?log_name=appName]’. Default value is set in rhoconfig.txt (logserver)
Property Access:
myObject.destinationURI
Comma-separated list of excluded log categories. Set to ‘’ (empty) to allow all messages to be logged. Set to concrete value to filter out log from those categories. Default value is ‘’ (empty), it is set in rhoconfig.txt (ExcludeLogCategories)
Property Access:
myObject.excludeCategories
Define exclude parameters log filter(for security reasons) – parameter names separated by comma. It works when user tries to put in log string containing json / urls. Default value is “” (empty). For example, if user set excludeFilter=“password”, then tries to put in log this string: “{"user”:“alex”,“password”:“abcdef”,“sessionid”:123456}“, "abcdef” will not appear in log.
Property Access:
myObject.excludeFilter
Path to the log file including file name. The path is relative to the platform specific application root or start if from ‘/’ if you wish to store elsewhere (‘/mnt/sdcard/myapp.log’). Default file path is “rholog.txt”
Default: rholog.txt
Property Access:
myObject.filePath
Maximum log file size in bytes, set 0 to unlimited size; when limit is reached, log wraps to beginning of file. Default value is 50000, it is set in rhoconfig.txt (MaxLogFileSize)
Default: 50000
Property Access:
myObject.fileSize
Comma-separated list of included log categories. Set to ‘*’ (asterisk) to log all categories. Set to ‘’ (empty) to filter out all messages. Default value is ‘*’ (asterisk), it is set in rhoconfig.txt (LogCategories).
Default: *
Property Access:
myObject.includeCategories
The current logging level. Minimal severity level of messages that will appear in log. When level is set to 0 any messages will be logged. When level is set to 4 only fatal error messages will be logged. Default value is defined in rhoconfig.txt (MinSeverity)
Possible Values (STRING):
Everything will be logged. Also see settings for controlling log size.
Information level logs and above will be shown.
Warnings and above will only be shown.
Error level log messages and above will be shown.
Fatal level log messages and above will be shown.
Property Access:
myObject.level
Enables the logging of memory usage in the system; specifies the time interval in milliseconds at which memory logs will be generated periodically. Setting it to 0 will disable logging memory information.
Default: 0
Property Access:
myObject.memoryPeriod
Turn on remote network traces regardless of log level set (e.g. Network, asyncHttp). Traces contain information about connection process, sent and received headers and data. Please note that this parameter will not take an effect in case of local server app (and / or shared runtime). Default value can be overridden by the setting in rhoconfig.txt (net_trace). To get local server trace, use
Rho.Log.LEVEL_TRACE in JavaScript and
Rho::Log::LEVEL_TRACE in Ruby.
Default: false
Property Access:
myObject.netTrace
Skip http package body from log(for security reasons). Please note that this parameter will not take an effect in case of remote server app (and / or shared runtime), no log will appear in this case.
Default: false
Property Access:
myObject.skipPost
Show the contents of the log file in a window with controls to refresh, clear and send. Useful for debugging and when asking users to report error messages.
Rho.Log.showLog();
Rho::Log.showLog
Retrieve the contents of the log file as a string.
// Read at most 16384 symbols
logFileContent = Rho.Log.readLogFile(16384);
# Read at most 16384 symbols
logFileContent = Rho::Log.readLogFile 16384
Clear the contents of the log file. In this example, logFileContentBefore will contain the log up until that point, while logFileContentAfter will be empty.
// Read log file
logFileContentBefore = Rho.Log.readLogFile(16384);
// Clear log file
Rho.Log.cleanLogFile();
// Read log file again - this time it will be empty
logFileContentAfter = Rho.Log.readLogFile(16384);
# Read log file
logFileContentBefore = Rho::Log.readLogFile 16384
# Clear log file
Rho::Log.cleanLogFile
# Read log file again - this time it will be empty
logFileContentAfter = Rho::Log.readLogFile 16384
Categories help you organize your logging messages and find related statements using tools like grep or a text editor’s “search” function.
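A small JavaScript sketch of how categories combine with the includeCategories filter described earlier (the category names here are arbitrary examples):
// Tag messages with categories of your choice
Rho.Log.info("Synchronization started", "SyncEngine");
Rho.Log.warning("Request timed out, retrying", "Network");

// Keep only these categories in the log; everything else is filtered out
Rho.Log.includeCategories = "SyncEngine,Network";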
You can ask the system to automatically log memory usage information on a set interval. This can be used to debug potential memory leaks in operations where a high number of objects are touched in memory.
Rho.Log.memoryPeriod = 1000;
// Perform memory-intensive operations here. Examining the log will tell us if we have a memory leak
// Once our task finishes, disable automatic memory logging
Rho.Log.memoryPeriod = 0;
# Request that memory usage be logged automatically by the system every second.
Rho::Log.memoryPeriod = 1000
# Perform memory-intensive operations here. Examining the log will tell us if we have a memory leak
# Once our task finishes, disable automatic memory logging
Rho::Log.memoryPeriod = 0 | http://docs.tau-technologies.com/en/6.0/api/Log | 2018-12-10T06:14:29 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.tau-technologies.com
Cloud Cruiser consists of these primary components:
You can deploy each of these components on separate servers or host them together, depending on the nature of the deployment and performance considerations. For most production environments, Cloud Cruiser recommends that you deploy the database on one server, the collectors and application server on another, and the analytics server on a third.
The following diagram shows the detailed architecture of Cloud Cruiser: | https://docs.consumption.support.hpe.com/CC3/01Getting_Started/01Introduction/04Cloud_Cruiser_architecture | 2018-12-10T06:41:23 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.consumption.support.hpe.com |
When your trial license expires or when your usage of Cloud Cruiser exceeds your customer license, you see the following warning message when you log in to the Cloud Cruiser Portal. The message continues to appear with each login to inform you that you are out of compliance.
If this happens, please contact Cloud Cruiser to purchase additional license capacity. In this case Cloud Cruiser personnel will configure your system to reflect the increased capacity, removing the warning.
NOTE: All Cloud Cruiser functionality remains enabled when you are out of compliance. However, batch jobs with a Charge step that previously ran with a result of Completed Successfully will now have the result Completed with Warnings. | https://docs.consumption.support.hpe.com/CC3/05Administering/Licensing/License_compliance_notifications | 2018-12-10T06:36:44 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.consumption.support.hpe.com |
What does switching Office 365 plans do to my service and billing?
When you switch plans automatically by using the Switch plans button, your services and billing are affected.
Access to services
Admins won't be able to use the Admin center while the plan is being switched. This can take up to an hour.
Users will experience no interruption of service. They will continue to have the existing service until the switch to the new subscription is complete.
Note
The length of time it takes to actually credit your payment account depends on the payment method that was used for the subscription.
If you prepaid for your subscription, you might want to wait to switch until closer to your prepaid subscription's expiration date.
_MINIDUMP_TYPE Enumeration
Identifies the type of information that will be written to the minidump file by the MiniDumpWriteDump function.
Important
typedef enum _MINIDUMP_TYPE {
  MiniDumpNormal,
  MiniDumpWithDataSegs,
  MiniDumpWithFullMemory,
  MiniDumpWithHandleData,
  MiniDumpFilterMemory,
  MiniDumpScanMemory,
  MiniDumpWithUnloadedModules,
  MiniDumpWithIndirectlyReferencedMemory,
  MiniDumpFilterModulePaths,
  MiniDumpWithProcessThreadData,
  MiniDumpWithPrivateReadWriteMemory,
  MiniDumpWithoutOptionalData,
  MiniDumpWithFullMemoryInfo,
  MiniDumpWithThreadInfo,
  MiniDumpWithCodeSegs,
  MiniDumpWithoutAuxiliaryState,
  MiniDumpWithFullAuxiliaryState,
  MiniDumpWithPrivateWriteCopyMemory,
  MiniDumpIgnoreInaccessibleMemory,
  MiniDumpWithTokenInformation,
  MiniDumpWithModuleHeaders,
  MiniDumpFilterTriage,
  MiniDumpWithAvxXStateContext,
  MiniDumpWithIptTrace,
  MiniDumpValidTypeFlags
} MINIDUMP_TYPE; | https://docs.microsoft.com/en-us/windows/desktop/api/minidumpapiset/ne-minidumpapiset-_minidump_type | 2018-12-10T07:30:32 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.microsoft.com
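For context, these flags are combined with a bitwise OR and passed as the DumpType argument of MiniDumpWriteDump. The following is a minimal sketch only (error handling mostly omitted; link against dbghelp.lib):
#include <windows.h>
#include <dbghelp.h>

void WriteSmallDump(void)
{
    HANDLE hFile = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return;

    /* Combine several MINIDUMP_TYPE flags into a single DumpType value */
    MINIDUMP_TYPE dumpType = (MINIDUMP_TYPE)(MiniDumpWithDataSegs |
                                             MiniDumpWithHandleData |
                                             MiniDumpWithThreadInfo);

    MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), hFile,
                      dumpType, NULL, NULL, NULL);
    CloseHandle(hFile);
}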
WooCommerce
- WooCommerce Displays The Wrong Prices
- Fix issues with NGINX configuration and WooCommerce cookies
- Pages Aren’t Cached When WooCommerce Geolocation Enabled
- Update products on stock after new Woocommerce orders
- Making WP Rocket work with WooCommerce recently viewed products widget
- Using WP Rocket with YITH WooCommerce Wishlist | https://docs.wp-rocket.me/category/902-woocommerce | 2018-12-10T07:00:20 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.wp-rocket.me |
Adafruit Pro Trinket 5V/16MHz (USB)
Please use protrinket5 ID for board option in “platformio.ini” (Project Configuration File):
[env:protrinket5]
platform = atmelavr
board = protrinket5
You can override default Adafruit Pro Trinket 5V/16MHz (USB) settings per build environment using
board_*** option, where
*** is a JSON object path from
board manifest protrinket5.json. For example,
board_build.mcu,
board_build.f_cpu, etc.
[env:protrinket5]
platform = atmelavr
board = protrinket5

; change microcontroller
board_build.mcu = atmega328p

; change MCU frequency
board_build.f_cpu = 16000000L
Debugging¶
PIO Unified Debugger currently does not support Adafruit Pro Trinket 5V/16MHz (USB) board. | http://docs.platformio.org/en/latest/boards/atmelavr/protrinket5.html | 2018-12-10T06:49:05 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.platformio.org |
Neurodesign documentation¶
Neurodesign: design optimisation¶
- class
src.neurodesign.
design(order, ITI, experiment, onsets=None)[source]
This class represents an experimental design for an fMRI experiment.
check_hardprob()[source]
Function to check whether frequencies of stimuli are exactly the prespecified frequencies.
check_maxrep(maxrep)[source]
Function to check whether design does not exceed maximum repeats within design.
crossover(other, seed=1234)[source]
Function to crossover design with other design and create offspring.
- class
src.neurodesign.
experiment(TR, P, C, rho, stim_duration, n_stimuli, ITImodel=None, ITImin=None, ITImax=None, ITImean=None, restnum=0, restdur=0, t_pre=0, t_post=0, n_trials=None, duration=None, resolution=0.1, FeMax=1, FdMax=1, FcMax=1, FfMax=1, maxrep=None, hardprob=False, confoundorder=3)[source]
This class represents an fMRI experiment.
CreateLmComp()[source]
This function generates components for the linear model: hrf, whitening matrix, autocorrelation matrix and CX
CreateTsComp()[source]
This function computes the number of scans and timpoints (in seconds and resolution units)
- class
src.neurodesign.
optimisation(experiment, weights, preruncycles, cycles, seed=None, I=4, G=20, R=[0.4, 0.4, 0.2], q=0.01, Aoptimality=True, folder=None, outdes=3, convergence=1000, optimisation='GA')[source]
This class represents the population of experimental designs for fMRI.
check_develop(design, weights=None)[source]
Function to check and develop a design to the population. Function will check design against strict options and develop the design if valid.
Generate: generating stimulus order and ITI’s¶
src.generate.
iti(ntrials, model, min=None, mean=None, max=None, lam=None, resolution=0.1, seed=1234)[source]
Function will generate an order of stimuli.
Msequence: generating msequences¶
- class
src.msequence.
Msequence[source]
A class for an order of experimental trials.
GenMseq(mLen, stimtypeno, seed)[source]
Function to generate a random msequence given the length of the desired sequence and the number of different values.
Mseq(baseVal, powerVal, shift=None, whichSeq=None, userTaps=None)[source]
Function to generate a specific msequence given the base and power values. | https://neurodesign.readthedocs.io/en/latest/genalg.html | 2018-12-10T06:17:55 | CC-MAIN-2018-51 | 1544376823318.33 | [] | neurodesign.readthedocs.io |
RT 4.2.15 Documentation
RT::Config
- NAME
- SYNOPSYS
- DESCRIPTION
- METHODS
- LoadConfig
NAME
RT::Config - RT's config
SYNOPSYS
# get config object
use RT::Config;
my $config = RT::Config->new;
$config->LoadConfigs;

# get or set option
my $rt_web_path = $config->Get('WebPath');
$config->Set(EmailOutputEncoding => 'latin1');

# get config object from RT package
use RT;
RT->LoadConfig;
my $config = RT->Config;
DESCRIPTION
RT::Config class provide access to RT's and RT extensions' config files.
RT uses two files for site configuring:
First file is RT_Config.pm - core config file. This file is shipped with RT distribution and contains default values for all available options. You should never edit this file.
Second file is RT_SiteConfig.pm - site config file. You can use it to customize your RT instance. In this file you can override any option listed in core config file.
RT extensions could also provide their config files. Extensions should use <NAME>_Config.pm and <NAME>_SiteConfig.pm names for config files, where <NAME> is extension name.
NOTE: All options from RT's config and extensions' configs are saved in one place and thus extension could override RT's options, but it is not recommended.
%META
Hash of Config options that may be user overridable or may require more logic than should live in RT_*Config.pm
Keyed by config name, there are several properties that can be set for each config option:
Section - What header this option should be grouped under on the user Preferences page
Overridable - Can users change this option
SortOrder - Within a Section, how should the options be sorted for display to the user
Widget - Mason component path to widget that should be used to display this config option
WidgetArguments - An argument hash passed to the Widget
Description - Friendly description to show the user
Values - Arrayref of options (for select Widget)
ValuesLabel - Hashref, key is the Value from the Values list, value is a user friendly description of the value
Callback - subref that receives no arguments. It returns a hashref of items that are added to the rest of the WidgetArguments
PostSet - subref passed the RT::Config object and the current and previous setting of the config option. This is called well before much of RT's subsystems are initialized, so what you can do here is pretty limited. It's mostly useful for effecting the value of other config options early.
PostLoadCheck - subref passed the RT::Config object and the current setting of the config option. Can make further checks (such as seeing if a library is installed) and then change the setting of this or other options in the Config using the RT::Config option.
Obfuscate - subref passed the RT::Config object, current setting of the config option and a user object, can return obfuscated value. It's called in RT->Config->GetObfuscated()
METHODS
new
Object constructor returns new object. Takes no arguments.
LoadConfigs
Load all configs. First of all load RT's config then load extensions' config files in alphabetical order. Takes no arguments.
LoadConfig
Takes param hash with
File field. First, the site configuration file is loaded, in order to establish overall site settings like hostname and name of RT instance. Then, the core configuration file is loaded to set fallback values for all settings; it bases some values on settings from the site configuration file.
Note that core config file don't change options if site config has set them so to add value to some option instead of overriding you have to copy original value from core config file.
Configs
Returns list of config files found in local etc, plugins' etc and main etc directories.
LoadedConfigs
Returns a list of hashrefs, one for each config file loaded. The keys of the hashes are:
- as
Name this config file was loaded as (relative filename usually).
- filename
The full path and filename.
- extension
The "extension" part of the filename. For example, the file RTIR_Config.pm will have an extension value of RTIR.
- site
True if the file is considered a site-level override. For example, site will be false for RT_Config.pm and true for RT_SiteConfig.pm.
Get
Takes name of the option as argument and returns its current value.
In the case of a user-overridable option, first checks the user's preferences before looking for site-wide configuration.
Returns values from RT_SiteConfig, RT_Config and then the %META hash of configuration variables's "Default" for this config variable, in that order.
Returns different things in scalar and array contexts. For scalar options it's not that important, however for arrays and hash it's. In scalar context returns references to arrays and hashes.
Use scalar, Perl's op to force context, especially when you use
(..., Argument => RT->Config->Get('ArrayOpt'), ...) as Perl's '=>' op doesn't change context of the right hand argument to scalar. Instead use
(..., Argument => scalar RT->Config->Get('ArrayOpt'), ...).
It's also important for options that have no default value (no default in etc/RT_Config.pm). If you don't force scalar context then you'll get an empty list and all your named args will be messed up. For example
(arg1 => 1, arg2 => RT->Config->Get('OptionDoesNotExist'), arg3 => 3) will result in
(arg1 => 1, arg2 => 'arg3', 3) which is most probably unexpected, or
(arg1 => 1, arg2 => RT->Config->Get('ArrayOption'), arg3 => 3) will result in
(arg1 => 1, arg2 => 'element of option', 'another_one' => ..., 'arg3', 3).
GetObfuscated
the same as Get, except it returns Obfuscated value via Obfuscate sub
Set
Set option's value to new value. Takes name of the option and new value. Returns old value.
The new value should be scalar, array or hash depending on type of the option. If the option is not defined in meta or the default RT config then it is of scalar type. | https://docs.bestpractical.com/rt/4.2.15/RT/Config.html | 2018-12-10T06:12:00 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.bestpractical.com |
To create a rate plan, perform the following procedure:
You can now select specific resources and define alternate rate information for one or more of the available resources. Any resource that is found in the default rate plan can be cloned to an alternate rate plan and configured. For more information, see Setting alternative rates with rate plans.
To rename a rate plan, select it in the list and click Rename. You cannot rename the rate plan named default.
To delete a rate plan, select it in the list and click Delete. | https://docs.consumption.support.hpe.com/CC3/05Administering/Managing_rate_plans/Creating_rate_plans | 2018-12-10T05:57:18 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.consumption.support.hpe.com |
DrawerController class
Provides interactive behavior for Drawer widgets.
Rarely used directly. Drawer controllers are typically created automatically by Scaffold widgets.
The draw controller provides the ability to open and close a drawer, either via an animation or via user interaction. When closed, the drawer collapses to a translucent gesture detector that can be used to listen for edge swipes.
See also:
- Drawer, a container with the default width of a drawer.
- Scaffold.drawer, the Scaffold slot for showing a drawer.
- Inheritance
- Object
- Diagnosticable
- DiagnosticableTree
- Widget
- StatefulWidget
- DrawerController
Constructors
- DrawerController({GlobalKey<
State<key, @required Widget child, @required DrawerAlignment alignment, DrawerCallback drawerCallback }) StatefulWidget>>
- Creates a controller for a Drawer. [...]const
Properties
- alignment → DrawerAlignment
- The alignment of the Drawer. [...]final
- child → Widget
- The widget below this widget in the tree. [...]final
- drawerCallback → DrawerCallback
- Optional callback that is called when a Drawer is opened or closed DrawerController | https://docs.flutter.io/flutter/material/DrawerController-class.html | 2018-12-10T06:09:40 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.flutter.io |
Run automated test suite
After creating an automated test suite, run it on a non-production instance.
Before you begin: Role required: admin, atf_test_admin or atf_test_designer. You must have created the test suite you want to run. The test execution property must be enabled. You must have an admin or atf_test_admin role to do so. Note: The test execution property is disabled by default to prevent running tests on a production system. Run tests only on development, test, and other sub-production instances.
About this task: This procedure outlines how to start a test suite manually. You can also schedule test suites to run at a later time. For more information, see Working with scheduled test suites.
Procedure: Navigate to Automated Test Framework > Test Suites. If necessary to view the Test Suites list, click Test Suites. Click the row containing the test suite you want to run. The system displays the Test Suite form. Click Run Test Suite. Note: If the test execution property is not enabled, the Run Suite button does not display. In this case, see the annotation at the top of the form, and click the link to enable running tests. If the tests associated with the test suite include a form step (any step involving a UI), or other kinds of UI test steps, the Pick a Browser dialog appears before executing the tests. Use it to choose among any currently-running test clients, or start a new runner. For more information, review Browser recommendations for all tests and suites. If the | https://docs.servicenow.com/bundle/kingston-application-development/page/administer/auto-test-framework/task/atf-run-suite.html | 2018-12-10T06:51:17 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.servicenow.com
Apps, Clients and Tools
There is a wide range of tools you can use to interact with Travis CI:
- Websites: Full Web Clients, Dashboards, Tools
- Mobile Applications: Android, iOS, Windows Phone
- Desktop: Mac OS X, Linux, Cross Platform
- Command Line Tools: Full Clients, Build Monitoring, Generators
- Plugins: Google Chrome, Opera, Editors, Other
- Libraries: Ruby, JavaScript, PHP, Python, Elixir, R, Go
And if you don’t find anything that fits your needs, you can also interact with our API directly.
Note however that Travis CI can not take any responsibility of for third-party tools you might use.
Websites #
Full Web Clients #
Travis CI Web Client #
Our official web interface, written in Ember.js
Dashboards #
TravisLight #
Online build monitoring tool
By William Durand
TravisWall #
Online build monitoring tool for public/private repos
By Eric Geloen
Team Dashboard #
Visualize your team’s metrics all in one place
By Frederik Dietz
CI Status #
Travis CI dashboard
By Piwik.
node-build-monitor #
Simple and extensible Build Monitor written in Node.js
By Marcell Spies
CI Dashboard #
Travis CI builds dashboard
By Ahmed El-Sayed
Tools #
Travis Web Encrypter #
Encrypt Secure Variables
By Konstantin Haase
Mobile Applications #
Android #
Siren of Shame (Android) #
Gamification for your builds
By Automated Architecture
iOS #
Jarvis #
iPad client for Travis CI, supports private projects
By NinjaConcept GmbH
Project Monitor #
iPhone app that monitors public and private builds
By Dimitri Roche
Siren of Shame (iOS) #
Gamification for your builds
By Automated Architecture
Windows Phone #
Travis7 #
A Windows Phone client for Travis CI
By Tim Felgentreff
Desktop #
If you are looking for desktop notifications, our command line client supports them.
Mac OS X #
CCMenu #
OS X status bar app
By ThoughtWorks Inc.
Linux #
BuildNotify #
Linux alternative to CCMenu
By Anay Nayak
Cross Platform #
Build Checker App #
Check CI-server build statuses
By Will Mendes.
CatLight #
Shows build status in tray / menu bar
By catlight.io
Command Line Tools #
Full Clients #
Travis CLI #
Feature complete command line client
PSTravis #
Command line client for PowerShell
Build Monitoring #
Bickle #
Display build status in your terminal
By Jiri Pospisil
Travis Surveillance #
Monitor a project in your terminal
By Dylan Egan
Travis Build Watcher #
Trigger a script on build changes
By Andrew Sutherland
Status Gravatar #
Sets Gravatar profile image depending on build status
By Gleb Bahmutov
Chroma Feedback #
Turn your Razer keyboard, mouse or headphone into a extreme feedback device
By Henry Ruhs
Generators #
travis-encrypt #
Encrypt environment variables
By Patrick Williams
travis-tools #
Easy secure data encryption
By Michael van der Weg
Travisify (Ruby) #
Creates .travis.yml with tagging and env variables
By James Smith
Travisify (Node.js) #
Add Travis CI hooks to your GitHub project
By James Halliday
Plugins #
Google Chrome #
My Travis #
Monitor your projects builds within Chrome
By Leonardo Quixadá
github+travis #
Display build status next to project name on GitHub
By Tomas Carnecky
GitHub Status #
Display build status next to project name on GitHub
By excellenteasy
Opera #
GitHub+Travis #
Display build status next to project name on GitHub
By smasty
Editors #
Atom Plugin #
Travis CI integration for Atom
By Tom Bell
Brackets Plugin #
Travis CI integration for Brackets
By Cas du Plessis
Emacs Package #
Travis CI integration for Emacs
By Skye Shaw
Vim Plugin #
Travis CI integration for Vim
By Keith Smiley
Other #
git-travis #
Git subcommand to display build status
By Dav Glass
gh-travis #
NodeGH plugin for integrating Travis CI
By Eduardo Antonio Lundgren Melo and Zeno Rocha Bueno Netto
Travis CI 🡒 Discord Webhook #
Serverless solution for sending build status from Travis CI to Discord as webhooks.
By Sankarsan Kampa
Libraries #
Ruby #
- travis.rb (official)
- TravisMiner by Shane McIntosh
- hoe-travis by Eric Hodel
- Knapsack by Artur Trzop
JavaScript #
- travis-ci by Patrick Williams
- node-travis-ci by Maciej Małecki
- travis-api-wrapper by Christopher Maujean
- travis.js by Konstantin Haase
- ee-travis by Michael van der Weg
- Favis CI by Jaune Sarmiento
PHP #
- php-travis-client by Leszek Prabucki | https://docs.travis-ci.com/user/apps/ | 2018-12-10T06:34:20 | CC-MAIN-2018-51 | 1544376823318.33 | [array(['/images/apps/travis-web.jpg', 'travis-web'], dtype=object)
array(['/images/apps/travis-light.jpg', 'travis-light'], dtype=object)
array(['/images/apps/travis-wall.jpg', 'travis-wall'], dtype=object)
array(['/images/apps/team-dashboard.jpg', 'travis-light'], dtype=object)
array(['/images/apps/ci-status.png', 'ci-status'], dtype=object)
array(['/images/apps/node-build-monitor.jpg', 'node-build-monitor'],
dtype=object)
array(['/images/apps/ci-dashboard.jpg', 'ci-dashboard'], dtype=object)
array(['/images/apps/travis-encrypt.jpg', 'travis-encrypt'], dtype=object)
array(['/images/apps/siren-android.jpg', 'Siren of Shame'], dtype=object)
array(['/images/apps/jarvis.jpg', 'jarvis'], dtype=object)
array(['/images/apps/project-monitor.jpg', 'Project Monitor'],
dtype=object)
array(['/images/apps/siren-ios.jpg', 'Siren of Shame'], dtype=object)
array(['/images/apps/travis7.jpg', 'travis7'], dtype=object)
array(['/images/apps/ccmenu.jpg', 'CCMenu'], dtype=object)
array(['/images/apps/screensaver-ninja.gif',
'Travis CI in Screensaver Ninja with Custom CSS'], dtype=object)
array(['/images/apps/buildnotify.jpg', 'BuildNotify'], dtype=object)
array(['/images/apps/build-checker-app.png', 'BuildCheckerApp'],
dtype=object)
array(['/images/apps/catlight-tray.png', 'CatLight Build Status'],
dtype=object)
array(['/images/apps/cli.jpg', 'cli'], dtype=object)
array(['/images/apps/bickle.jpg', 'bickle'], dtype=object)
array(['/images/apps/travis-surveillance.jpg', 'travis-surveillance'],
dtype=object)
array(['/images/apps/travis-build-watcher.jpg', 'travis-build-watcher'],
dtype=object)
array(['/images/apps/status-gravatar.jpg', 'status-gravatar'],
dtype=object)
array(['/images/apps/chroma-feedback.jpg', 'chroma feedback'],
dtype=object)
array(['/images/apps/node-travis-encrypt.jpg', 'travis-encrypt'],
dtype=object)
array(['/images/apps/travis-tools.jpg', 'travis-tools'], dtype=object)
array(['/images/apps/travisify-ruby.jpg', 'travisify-ruby'], dtype=object)
array(['/images/apps/travisify-node.jpg', 'travisify-node'], dtype=object)
array(['/images/apps/chrome-my-travis.jpg', 'chrome-my-travis'],
dtype=object)
array(['/images/apps/chrome-github-travis.jpg', 'chrome-github-travis'],
dtype=object)
array(['/images/apps/chrome-github-status.jpg', 'chrome-github-status'],
dtype=object)
array(['/images/apps/chrome-github-travis.jpg', 'chrome-github-travis'],
dtype=object)
array(['/images/apps/atom.jpg', 'atom'], dtype=object)
array(['/images/apps/brackets.jpg', 'brackets'], dtype=object)
array(['/images/apps/emacs.jpg', 'emacs'], dtype=object)
array(['/images/apps/vim.jpg', 'vim'], dtype=object)
array(['/images/apps/git.png', 'git-travis'], dtype=object)
array(['/images/apps/nodegh.jpg', 'NodeGH'], dtype=object)
array(['https://github.com/DiscordHooks.png', 'TravisCI Discord Webhook'],
dtype=object) ] | docs.travis-ci.com |
Export and import¶
Zulip has high quality export and import tools that can be used to move data from one Zulip server to another, do backups or compliance work, or migrate from your own servers to the hosted Zulip Cloud service.
When using these tools, it’s important to ensure that the Zulip server you’re exporting from and the one you’re exporting to are running the same version of Zulip, since we do change and extend the format from time to time.
Backups¶
If you want to move hardware for a self-hosted Zulip installation, we recommend Zulip’s database-level backup and restoration process. Zulip’s backup process is structurally very unlikely to ever develop bugs, and will restore your Zulip server to the exact state it was left in. The big thing it can’t do is support a migration to a server hosting a different set of organizations than the original one (because doing so generally requires renumbering all the users/messages/etc.).
Zulip’s export/import tools (documented on this page) have full support for such a renumbering process. While these tools are carefully designed and tested to make various classes of bugs impossible or unlikely, the extra complexity required for renumbering makes them structurally more risky than the direct postgres backup process.
Export your Zulip data¶
For best results, you’ll want to shut down access to the organization
you are exporting with
manage.py deactivate_realm before exporting,
so that nobody can send new messages (etc.) while you’re exporting
data. We include that in the instructions below.
Log in to a shell on your Zulip server as the
zulip user. Run the
following commands:
cd /home/zulip/deployments/current
./manage.py deactivate_realm -r ''  # Deactivates the organization
./manage.py export -r ''  # Exports the data
(The
-r option lets you specify the organization to export;
'' is
the default organization hosted at the Zulip server’s root domain.)
This will generate a tarred archive with a name like
/tmp/zulip-export-zcmpxfm6.tar.gz. The archive contains several
JSON files (containing the Zulip organization’s data) as well as an
archive of all the organization’s uploaded files.
Import into a new Zulip server¶
The Zulip server you’re importing into needs to be running the same
version of Zulip as the server you exported from, so that the same
formats are consistent. For exports from zulipchat.com, usually this
means you need to upgrade your Zulip server to the latest
master
branch, using the upgrade-zulip-from-git process.
First install a new Zulip server, skipping “Step 3: Create a Zulip organization, and log in” (you’ll create your Zulip organization via the data import tool instead).
Log in to a shell on your Zulip server as the
zulip user. Run the
following commands, replacing the filename with the path to your data
export tarball:
cd /tmp
tar -xf /path/to/export/file/zulip-export-zcmpxfm6.tar.gz
cd /home/zulip/deployments/current
./manage.py import '' /tmp/zulip-export-zcmpxfm6
./manage.py reactivate_realm -r ''  # Reactivates the organization
This could take several minutes to run, depending on how much data you’re importing.
Import options
The commands above create an imported organization on the root domain
(
EXTERNAL_HOST) of the Zulip installation. You can also import into a
custom subdomain, e.g. if you already have an existing organization on the
root domain. Replace the last two lines above with the following, after replacing
<subdomain> with the desired subdomain.
./manage.py import <subdomain> /tmp/zulip-export-zcmpxfm6
./manage.py reactivate_realm -r <subdomain>  # Reactivates the organization
Logging in¶
Once the import completes, all your users will have accounts in your new Zulip organization, but those accounts won't have passwords yet (since, for security reasons, passwords are not exported). | https://zulip.readthedocs.io/en/stable/production/export-and-import.html | 2018-12-10T06:44:54 | CC-MAIN-2018-51 | 1544376823318.33 | [] | zulip.readthedocs.io
libOSTree
New! See the docs online at Read The Docs (OSTree)
This project is now known as "libOSTree", renamed from "OSTree"; the focus is on the shared library. However, in most of the rest of the documentation, we will use the term "OSTree", since it's slightly shorter, and changing all documentation at once is impractical. We expect to transition to the new name over time.
libOSTree is a library and suite of command line tools that combines a "git-like" model for committing and downloading bootable filesystem trees, along with a layer for deploying them and managing the bootloader configuration.
The core OSTree model is like git in that it checksums individual files and has a content-addressed-object store. It's unlike git in that it "checks out" the files via hardlinks, and they should thus be immutable. Therefore, another way to think of OSTree is that it's just a more polished version of Linux VServer hardlinks.
Features:
- Atomic upgrades and rollback for the system
- Replicating content incrementally over HTTP via GPG signatures and "pinned TLS" support
- Support for parallel installing more than just 2 bootable roots
- Binary history on the server side (and client)
- Introspectable shared library API for build and deployment systems
This last point is important - you should think of the OSTree command line as effectively a "demo" for the shared library. The intent is that package managers, system upgrade tools, container build tools and the like use OSTree as a "deduplicating hardlink store".
Projects using OSTree
rpm-ostree is a tool that uses OSTree as a shared library, and supports committing RPMs into an OSTree repository, and deploying them on the client. This is appropriate for "fixed purpose" systems. There is in progress work for more sophisticated hybrid models, deeply integrating the RPM packaging with OSTree.
Project Atomic uses rpm-ostree to provide a minimal host for Docker formatted Linux containers. Replicating a base immutable OS, then using Docker for applications meshes together two different tools with different tradeoffs.
flatpak uses OSTree for desktop application containers.
GNOME Continuous is a custom build system designed for OSTree, using OpenEmbedded in concert with a custom build system to do continuous delivery from hundreds of git repositories.
Building
Releases are available as GPG signed git tags, and most recent versions support extended validation using git-evtag.
However, in order to build from a git clone, you must update the submodules. If you're packaging OSTree and want a tarball, I recommend using a "recursive git archive" script. There are several available online; this code in OSTree is an example.
Once you have a git clone or recursive archive, building is the same as almost every autotools project:
env NOCONFIGURE=1 ./autogen.sh ./configure --prefix=... make make install DESTDIR=/path/to/dest
More documentation
New! See the docs online at Read The Docs (OSTree)
Some more information is available on the old wiki page:
Contributing
See Contributing. | http://ostree.readthedocs.io/en/latest/ | 2017-02-19T14:12:28 | CC-MAIN-2017-09 | 1487501169776.21 | [] | ostree.readthedocs.io |
Reporting on the infrastructure
This topic provides instructions for running reports on infrastructure items.
To run standard reports on infrastructure items
- Click the Infrastructure tab in the Primary Navigation Bar.
-.
- To display the View Object page of any of the objects found, click the relevant entry.
See Infrastructure reports for further information and details of the infrastructure-related reports.
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/discovery/102/reporting-on-the-infrastructure-590874162.html | 2020-01-17T17:30:54 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.bmc.com |
How do I verify my application for production?
In order to verify your application we need the Application Client ID and the possible
redirect_uri values you will be using with the request authorization flow.
Having a whitelist of values that helps us guard against phishing attacks by controlling where we return authorization codes to for your application.
This Stack Exchange article is a good explanation of why this is necessary: What is the purpose of OAuth 2.0 redirect_uri checking?
We do support wildcards in the host name, but only for subdomains of domains under your control, eg:
https://*.yourapp.com/auth/cronofy/callback
Once your application is in production then this also will start billing. The free tier only applies whilst your apps are in development mode. So if you haven’t entered your billing details yet, you’ll need to do that as well before you go live:
When you’re ready to go, email [email protected] your Application Client ID and
redirect_uri whitelist and we’ll switch you to production mode. | https://docs.cronofy.com/developers/faqs/verify-application/ | 2020-01-17T15:29:37 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.cronofy.com |
Azure Functions HTTP triggers and bindings
This article explains how to work with HTTP triggers and output bindings in Azure Functions.
An HTTP trigger can be customized to respond to webhooks..
Tip
If you plan to use the HTTP or WebHook bindings, plan to avoid port exhaustion that can be caused by improper instantiation of
HttpClient. For more information, see How to manage connections in Azure Functions.
The code in this article defaults to the syntax which uses .NET Core, used in Functions version 2.x and higher. For information on the 1.x syntax, see the 1.x functions templates.
Packages - Functions 1.x
The HTTP bindings are provided in the Microsoft.Azure.WebJobs.Extensions.Http NuGet package, version 1.x. Source code for the package is in the azure-webjobs-sdk-extensions GitHub repository.
Support for this binding is automatically provided in all development environments. You don't have to manually install the package or register the extension.
Packages - Functions 2.x and higher
The HTTP bindings are provided in the Microsoft.Azure.WebJobs.Extensions.Http NuGet package, version 3.x. Source code for the package is in the azure-webjobs-sdk-extensions GitHub repository.
Support for this binding is automatically provided in all development environments. You don't have to manually install the package or register the extension.
Trigger
The HTTP trigger lets you invoke a function with an HTTP request. You can use an HTTP trigger to build serverless APIs and respond to webhooks.
By default, an HTTP trigger returns HTTP 200 OK with an empty body in Functions 1.x, or HTTP 204 No Content with an empty body in Functions 2.x and higher. To modify the response, configure an HTTP output binding.
Trigger - example")]"); }
Trigger - attributes
In C# class libraries and Java, the
HttpTrigger attribute is available to configure the function.
You can set the authorization level and allowable HTTP methods in attribute constructor parameters, webhook type, and a route template. For more information about these settings, see Trigger - configuration.
This example demonstrates how to use the HttpTrigger attribute.
[FunctionName("HttpTriggerCSharp")] public static Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req) { ... }
For a complete example, see the trigger example.
Trigger - configuration
The following table explains the binding configuration properties that you set in the function.json file and the
HttpTrigger attribute.
Trigger - usage. As an example, the following function.json file defines a
route property for an HTTP trigger:
{ "bindings": [ { "type": "httpTrigger", "name": "req", "direction": "in", "methods": [ "get" ], "route": "products/{category:alpha}/{id:int?}" }, { "type": "http", "name": "res", "direction": "out" } ] }
Using this configuration, the function is now addressable with the following route instead of the original route.
http://<APP_NAME>.azurewebsites.net/api/products/electronics/357
This allows the function code to support two parameters in the address, category and id.
You can use any Web API Route Constraint with your parameters. The following C# function code makes use of both parameters.
using System.Net; using Microsoft.AspNetCore.Mvc; using Microsoft.Extensions.Primitives; public static IActionResult Run(HttpRequest req, string category, int? id, ILogger log) { var message = String.Format($"Category: {category}, ID: {id}"); return (ActionResult)new OkObjectResult(message); }
By default, all function routes are prefixed with api. You can also customize or remove the prefix using the
http.routePrefix property in your host.json file. The following example removes the api route prefix by using an empty string for the prefix in the host.json file.
{ . The ClaimsPrincipal; }
Authorization keys
Functions lets you use keys to make it harder to access your HTTP function endpoints during development. A standard HTTP trigger may require such an API key be present in the request.
Important
While keys may help obfuscate your HTTP endpoints during development, they are not intended as a way to secure an HTTP trigger in production. To learn more, see Secure an HTTP endpoint in production.
Note
In the Functions 1.x runtime, webhook providers may use keys to authorize requests in a variety of ways, depending on what the provider supports. This is covered in Webhooks and keys. The Functions runtime in version 2.x and higher does not include built-in support for webhook providers.
There are two types of keys:
- Host keys: These keys are shared by all functions within the function app. When used as an API key, these allow access to any function within the function app.
- Function keys: These keys apply only to the specific functions under which they are defined. When used as an API key, these only allow access to that function.
Each key is named for reference, and there is a default key (named "default") at the function and host level. Function keys take precedence over host keys. When two keys are defined with the same name, the function key is always used.
Each function app also has a special master key. This key is a host key named
_master, which provides administrative access to the runtime APIs. This key cannot be revoked. When you set an authorization level of
admin, requests must use the master key; any other key results in authorization failure.
Caution
Due to the elevated permissions in your function app granted by the master key, you should not share this key with third parties or distribute it in native client applications. Use caution when choosing the admin authorization level.
Obtaining keys
Keys are stored as part of your function app in Azure and are encrypted at rest. To view your keys, create new ones, or roll keys to new values, navigate to one of your HTTP-triggered functions in the Azure portal and select Manage.
You may obtain function keys programmatically by using Key management APIs.:
Turn on App Service Authentication / Authorization for your function app. The App Service platform lets you use Azure Active Directory (AAD) and several third-party identity providers to authenticate clients. You can use this to an Azure App Service Environment (ASE). ASE provides a dedicated hosting environment in which to run your functions. ASE lets you configure a single front-end gateway that you can use to authenticate all incoming requests. For more information, see Configuring a Web Application Firewall (WAF) for App Service Environment.
When using one of these function app-level security methods, you should set the HTTP-triggered function authorization level to
anonymous..
Trigger -.
Output
Use the HTTP output binding to respond to the HTTP request sender. This binding requires an HTTP trigger and allows you to customize the response associated with the trigger's request. If an HTTP output binding is not provided, an HTTP trigger returns HTTP 200 OK with an empty body in Functions 1.x, or HTTP 204 No Content with an empty body in Functions 2.x and higher.
Output - configuration
The following table explains the binding configuration properties that you set in the function.json file. For C# class libraries, there are no attribute properties that correspond to these function.json properties.
Output - usage
To send an HTTP response, use the language-standard response patterns. In C# or C# script, make the function return type
IActionResult or
Task<IActionResult>. In C#, a return value attribute isn't required.
For example responses, see the trigger example.
host.json settings
This section describes the global configuration settings available for this binding in versions 2.x and higher. The example host.json file below contains only the version 2.x+ settings for this binding. For more information about global configuration settings in versions 2.x and beyond, see host.json reference for Azure Functions.
Note
For a reference of host.json in Functions 1.x, see host.json reference for Azure Functions 1.x.
{ "extensions": { "http": { "routePrefix": "api", "maxOutstandingRequests": 200, "maxConcurrentRequests": 100, "dynamicThrottlesEnabled": true, "hsts": { "isEnabled": true, "maxAge": "10" }, "customHeaders": { "X-Content-Type-Options": "nosniff" } } } }
Next steps
Learn more about Azure functions triggers and bindings
Feedback | https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook?WT.mc_id=devto-dotnet-cephilli | 2020-01-17T16:30:19 | CC-MAIN-2020-05 | 1579250589861.0 | [array(['media/functions-bindings-http-webhook/manage-function-keys.png',
'Manage function keys in the portal.'], dtype=object)
array(['media/functions-bindings-http-webhook/github-add-webhook.png',
None], dtype=object) ] | docs.microsoft.com |
General. For more detailed information see Cluster License Keys.
Viewing the maximum number of allowed shards
The maximum number of allowed shards, which is determined by the Cluster Key, appears in the Max number of shards field.
Viewing the cluster name
The cluster name appears in the Cluster name field. This gives a common name that your team or Redis Labs support can refer to. It is especially helpful if you have multiple clusters.
Setting your time zone
You can set your time zone in the Timezone field. This is recommended in order to make sure that the date, time fields, and log entries are shown in your preferred time zone.
Configuring email server settings
To enable receiving alerts by email, fill in the details for your email server in the email server settings section and select the requested connection security method: TLS/SSL, STARTTLS, or None. Upon completing to fill-in all details, it is advisable to verify the specified settings by clicking Test Mail. | https://docs.redislabs.com/latest/rs/administering/cluster-operations/settings/general/ | 2020-01-17T17:17:57 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.redislabs.com |
To start, make sure you have the most up to date version of the Scout for Pet Owners App (1.7.0+).
Apple iOS
Google Android
Add a Pet Profile Photo
Log into the app using your email address and password.
Use the menu in the upper left hand corner and navigate to the pets section of the app.
Select a pet from the list or add a new pet.
Tap the "+" button at the top of the pet profile.
Choose a method for uploading your pet profile. You can take a photo using the camera or upload an existing photo from your phone's photo gallery.
Select the photo you would like to use as a profile picture. Zoom and or move the image to fit within the guidelines.
Note: Your pet sitter will have access to the full frame photo.
Select "choose" to set your pet's profile photo.
Remove a Pet Profile Photo
To remove a profile image, tap the existing profile image and select "Remove Profile Picture" | https://docs.scoutforpets.com/en/articles/2468803-add-a-pet-profile-photo | 2020-01-17T17:20:44 | CC-MAIN-2020-05 | 1579250589861.0 | [array(['https://downloads.intercomcdn.com/i/o/83986544/b6b365872effbe83ff4700dc/Menu.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/83985218/cad0419d04ccd854f07c84b3/Pets.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/83984922/cb8951f8e0b9e74c19ff6ad5/Button.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/83985873/a9905be657b690601a66e542/Upload+Options.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/83986307/3c3d406a14dda088cbfa8e27/Adjust+the+image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/83986421/1a6e77052bd0195efbc58daa/Final+Profile+Pic.png',
None], dtype=object) ] | docs.scoutforpets.com |
All content with label 2lcache+article+cache+cachestore+grid+guide+infinispan+listener+loader+notification+userguide.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, partitioning, query, deadlock, intro, pojo_cache, archetype, lock_striping, jbossas,
nexus, schema, s3, amazon, jcache, test, api, xsd, ehcache, maven, documentation, roadmap, youtube, write_behind, 缓存, ec2, hibernate, aws, interface, custom_interceptor, clustering, setup, mongodb, eviction, gridfs, out_of_memory, concurrency, fine_grained, jboss_cache, import, index, events, batch, configuration, hash_function, buddy_replication, pojo, write_through, cloud, mvcc, tutorial, presentation, jbosscache3x, xml, read_committed, distribution, jira, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, br, websocket, transaction, async, interactive, xaresource, searchable, demo, scala, installation, client, jpa, filesystem, tx, user_guide, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, snapshot, repeatable_read, webdav, hotrod, docs, consistent_hash, batching, store, whitepaper, jta, faq, spring, as5, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - 2lcache, - article, - cache, - cachestore, - grid, - guide, - infinispan, - listener, - loader, - notification, - userguide )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/2lcache+article+cache+cachestore+grid+guide+infinispan+listener+loader+notification+userguide | 2020-01-17T16:47:40 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.jboss.org |
All content with label api+cloud+data_grid+hibernate_search+hot_rod+infinispan+jboss_cache+listener+migration+nexus+partitioning+release+searchable.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, query, deadlock, intro, contributor_project, archetype, jbossas, lock_striping, guide, schema,
cache, amazon, s3, memcached, grid, test, jcache, xsd, ehcache, maven, documentation, wcm, youtube, userguide, write_behind, 缓存, ec2, streaming, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, large_object,, xaresource, build, gatein, demo, scala, cache_server, installation, client, non-blocking, filesystem, jpa, tx, user_guide, gui_demo, eventing, student_project, client_server, testng, infinispan_user_guide, standalone, snapshot, hotrod, webdav, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, jsr-107, lucene, jgroups, locking, rest
more »
( - api, - cloud, - data_grid, - hibernate_search, - hot_rod, - infinispan, - jboss_cache, - listener, - migration, - nexus, - partitioning, - release, - searchable )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/api+cloud+data_grid+hibernate_search+hot_rod+infinispan+jboss_cache+listener+migration+nexus+partitioning+release+searchable | 2020-01-17T17:21:54 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.jboss.org |
Pocket Symphony (Enabling SSRS Feature in non-default Site Collections)
After installing the Reporting Services Add-in for Integration with SharePoint, you may find that the Reporting Services features are not visible in your SharePoint web sites. Why does this happen? Because when the add-in is installed, it is only activated for the sites in the default Site Collection.
Steps to resolve:
- Bowse to the 'Home' or root URL of the site collection.
- Click on Site Actions -->Site Settings --> Site Collection Features.
- Choose the "Reporting Services Integration" feature, under the list of features for Site "Collection Features".
- Press the "Activate" button.
Note: You will need to be owner of the site collection in order to view the "Site Collection features" link. | https://docs.microsoft.com/en-us/archive/blogs/bwelcker/pocket-symphony-enabling-ssrs-feature-in-non-default-site-collections | 2020-01-17T17:34:31 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.microsoft.com |
IoT Hub Documentation
Learn how to use IoT Hub to connect, monitor, and control billions of Internet of Things assets. Tutorials, API references, videos and other documentation help you deploy reliable and bi-directional communication between IoT devices and a solution back-end.
5-Minute Quickstarts
Step-by-Step Tutorials
- Configure automated message routing in IoT Hub
- Configure your devices from a back-end service
- Implement a device firmware update process
- Use a simulated device to test connectivity with your IoT hub | https://docs.microsoft.com/en-us/azure/iot-hub/?WT.mc_id=ondotnet-c9-cxa | 2020-01-17T16:34:13 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.microsoft.com |
Test-SPContent
Database
Syntax
Test-SPContentDatabase [-Identity] <SPContentDatabasePipeBind> [-AssignmentCollection <SPAssignmentCollection>] [-DatabaseCredentials <PSCredential>] [-ExtendedCheck] [-ServerInstance <SPDatabaseServiceInstancePipeBind>] [-ShowLocation] [-ShowRowCounts] [<CommonParameters>]
Test-SPContentDatabase -Name <String> -WebApplication <SPWebApplicationPipeBind> [-AssignmentCollection <SPAssignmentCollection>] [-DatabaseCredentials <PSCredential>] [-ExtendedCheck] [-ServerInstance <SPDatabaseServiceInstancePipeBind>] [-ShowLocation] [-ShowRowCounts] [<CommonParameters>]
Description
This cmdlet contains more than one parameter set. You may only use parameters from one parameter set, and you may not combine parameters from different parameter sets. For more information about how to use parameter sets, see Cmdlet Parameter Sets ()...
For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation at SharePoint Server Cmdlets.
Examples
----------------------------EXAMPLE 1-----------------------
Test-SPContentDatabase -name WSS_Content_DB -webapplication
This example tests the WSS_Content_DB content database against the sitename Web application and returns a list of issues.
----------------------------EXAMPLE 2-----------------------
$DB = Get-SPContentDatabase -site Test-SPContentDatabase $DB -showrowcounts. PSCredential object that contains the user name and password to be used for database SQL Server Authentication.
The type must be a valid PSCredential object.
Checks for inconsistent authentication modes during database-attach upgrade process.
The selected mode, claims or classic, must be the same in both versions.
Specifies an existing connected SharePoint content database to one of the two parameter sets in the form of a GUID or database name if it is unique.
Specifies the existing content database to test.
The type must be a valid name of a SharePoint content database; for example, SPContentDB1.
Specifies the instance of the database service to use to test the specified content database.
The type must be a valid GUID, such as 12345678-90ab-cdef-1234-567890bcdefgh; a valid name of a SQL Server instance (for example, DBSvrInstance1); or an instance of a valid SPDatabaseServiceInstance object.
Specifies the locations where missing templates and features are being used within the database. Typically, reported locations are scoped within the site collections that are within the specified content database.
The use of the parameter significantly increases the time to complete the test procedure.
Returns database statistics which are row counts for tables in the content database.
Specifies the SharePoint Web application to use to test the content database.
The type must be a valid GUID, in the form 12345678-90ab-cdef-1234-567890bcdefgh; or a valid name of SharePoint Web application (for example, MyOfficeApp1); or an instance of a valid SPWebApplication object.
Feedback | https://docs.microsoft.com/en-us/powershell/module/sharepoint-server/test-spcontentdatabase?view=sharepoint-ps | 2020-01-17T16:46:42 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.microsoft.com |
Adding and Editing Logos
You may want to add some branding to a project, which can be done by inserting a logo. You can also attach a web address to a logo, sending respondents who click it to a website or your choice.
To Add a Logo
Send Respondents to a Website on Click
-.
To Change a Logo
- Click the logo you currently have inserted
- Press ‘Change’
- Choose another image from the Library
To Remove a Logo
- Click the logo you currently have inserted
- Press ‘Remove’
Note: This will remove the logo from your project, but it will still be present in your Image Library. | https://docs.shout.com/article/68-adding-and-editing-logos | 2020-01-17T17:24:49 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.shout.com |
.
- Ensure that JAVA_HOME is set to the installation directory of the supported Java JDK. postgres. | https://gpdb.docs.pivotal.io/5130/admin_guide/kerberos-lin-client.html | 2020-01-17T17:49:04 | CC-MAIN-2020-05 | 1579250589861.0 | [] | gpdb.docs.pivotal.io |
Yandex Alice integration¶
Any model specified by a DeepPavlov config can be launched as a skill for Yandex.Alice. You can do it using command line interface or using python.
Command line interface¶
To interact with Alice you will require your own HTTPS certificate. To generate a new one – run:
openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 -subj "/CN=MY_DOMAIN_OR_IP" -keyout my.key -out my.crt
To run a model specified by the
<config_path> config file as an Alice
skill, run:
python -m deeppavlov alice <config_path> --https --key my.key --cert my.crt [-d] [-p <port>]
-d: download model specific data before starting the service.
The command will print the used host and port. Default web service properties
(host, port, model endpoint, GET request arguments, paths to ssl cert and key,
https mode) can be modified via changing
deeppavlov/utils/settings/server_config.json file.
--https,
--key,
--cert,
-p arguments override default values from
server_config.json.
Advanced API configuration is described in
REST API section.
Now set up and test your dialog (). Detailed documentation of the platform could be found on. Advanced API configuration is described in REST API section.
Python¶
To run a model specified by a DeepPavlov config
<config_path> as an Alice
skill using python, you have to run following code:
from deeppavlov.utils.alice import start_alice_server start_alice_server(<config_path>, host=<host>, port=<port>, endpoint=<endpoint>, https=True, ssl_key='my.key', ssl_cert='my.crt')
All arguments except
<model_config_path> are optional. Optional arguments override
corresponding values from
deeppavlov/utils/settings/server_config.json. | http://docs.deeppavlov.ai/en/latest/integrations/yandex_alice.html | 2020-01-17T16:51:30 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.deeppavlov.ai |
This guide describes how to install
Axional Studio
1 Installation Prerequisites and Requirements
To setup a server a minimum software should be installed in the guest
Linux computer
- A Linux operating system, with at least 2 GB of RAM.
- The last release of Java 11.
- The internet connection must be enabled, due the install files are downloaded from an external repository.
1.1 Java
A Java 11 virtual machine is required to run the application.
Run the following command in a terminal to check the current Java version:
$ java -version
If Java is already installed, then you can see a result message Studio the user axs
We recommend to install it, in the /home/axs/studio directory.
As root user create the user axs:
$ su - root $ useradd axs $ passwd axs
After the user axs is created, login with this user and create the studio subdirectory:
$ su - axs $ mkdir studio $ cd studio
1.3 Axional installer
Working as axs user, on directory /home/axs/studio download the installation tool using the common curl and tar commands:
$ su - axs
$ curl -k -s "" | tar -x -z --strip-components=1 --exclude ".hg*" --exclude ".project"
2 Install the application
Use
axional installer to setup the remote software repository and the credentials.
Type this command to install the product while in the newly created /home/axs/studio folder:
$ ./install.sh install
* Looking for curl... * Looking for unzip... * Check Java ... = Found java executable in PATH = Java version 9.0.4 * Looking for gradle * Trying to download Gradle distribution ()
First time you execute the installer, it will ask for some parameters to configure the installation framework. You'll be asked for a user and password for accesing deister nexus repository. Ask for it to deister support team.
2.1 Connection details
Installer will prompt for connection parameters to the
Nexus repository such as the URL, user and password
================================================================ CONFIGURATION ================================================================ Nexus Base URL []: Nexus User [deister-software-dist]: UserProvided Nexus User Password []: PassProvided
2.2 Choose product
Next the the installer will prompt for
product. We will enter studio.
The script will prompt for weather we want to download DB exports (dictionaries) along with the studio.....: deister-software-dist NEXUSPASS.....: ********** NEXUSPRODUCT..: studio PRODUCTNAME...: axional.studio.core PRODUCTVERS...: 0.0.+ Are You Sure? [y/n]
This questions are asked only first time and stored in file .install.rc. Following use of install.sh shell, will use parameters answered for accesing deister nexus repositories.
3 Network configuration
It is required that the localhost name resolves to an address in order for the server to know it's IP address. For example, in Linux systems, it can be necessary to create an entry in the "hosts" file (/etc/hosts):
... 10.0.0.1 my_host ... | https://docs.deistercloud.com/content/Axional%20development%20products.15/Axional%20Studio.4/Installation.4/Install%20(boot).7.xml?embedded=true | 2020-01-17T16:07:44 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.deistercloud.com |
FAQs
Here are some frequently asked questions about Redis Enterprise Software.
Features and Terminology
Redis Labs has enhanced open source Redis with a technology layer that encapsulates open source Redis, while fully supporting all its commands, data structures and modules. It adds exceptional flexibility, stable high performance and unmatched resilience, as well as multiple deployment choices (public and private clouds, on-premises, hybrid, RAM-Flash combination), topology (active-active, active-passive, active-replica) and support for very large dataset sizes. This enhanced and exponentially more powerful database platform is Redis Enterprise.
Learn more about Redis Enterprise.
Yes we are. Not only are we are the home of Redis, but most of Redis’ core engineers also work for Redis Labs! We contribute extensively to the open source Redis project. As a rule, we adhere to the open source’s specifications and make every effort to update our service with its latest versions.
That said, the following Redis features are not applicable in the context of our service:
- Shared databases aren’t supported in our service given their potential negative impact on performance. We recommend using dedicated databases instead (read this post for more information). Therefore, the following commands are blocked and show an error when used:
- Data persistence and backups are managed from the service’s web interface, so the following commands are blocked:
- Since replication is managed automatically by the service and since it could present a security risk, the following commands are blocked:
- Redis Labs clustering technology is different than the open source Redis Cluster and supports clustering in a seamless manner that works with all standard Redis clients. As a result, all Cluster related commands are blocked and show an error when used.
- Redis Labs clustering technology allows multiple active proxies. As a result, the CLIENT ID command cannot guarantee incremental IDs between clients who connect to different nodes under multi proxy policies.
- Commands that aren’t relevant for a hosted Redis service are blocked:
- Additionally, only a subset of Redis’ configuration settings (via CONFIG GET/SET) is applicable to Redis Cloud. Attempts to get or set a configuration parameter that isn’t included in the following list show an error when used:
- hash-max-ziplist-entries
- hash-max-ziplist-value
- list-max-ziplist-entries
- list-max-ziplist-value
- notify-keyspace-events
- set-max-intset-entries
- slowlog-log-slower-than (value must be larger than 1000)
- slowlog-max-len (value must be between 128 and 1024)
- zset-max-ziplist-entries
- zset-max-ziplist-value
- Lastly, unlike Redis’ 512MB limit, the maximum size of key names in our service is 64KB (key values, however, can have sizes up to 512MB).
Redis Enterprise Software offers a comprehensive suite of high-availability provisions, including in-memory replication, persistent storage, and backups.
A shard is any type of provisioned Redis instance, such as a master copy, slave copy, database shard that is part of a clustered database, etc.
Redis Enterprise works with all existing standard clients; it does not require you to use any special clients.
You can use, experience and administer the full capabilities of Redis Enterprise Software (RS), but you may not deploy it in a production environment. In addition, the trial version allows a maximum of four shards and is limited to thirty (30) days of use after initial installation on the first server in the cluster. After the thirty day trial, the cluster shifts to read-only status. The free version does not provide the same support options as the paid version. Finally, no SLA is provided with the trial version. To continue operation of the cluster with full capabilities, you must purchase a subscription cluster key from Redis Labs.
Redis Enterprise Software (RS) works with any standard Redis client. Use your existing Redis client and code, as they work directly against a RS cluster. You point your existing standard Redis client and code connection string at the RS cluster, then scale on the RS cluster as you need.
Technical Capabilities
The number of databases is unlimited. The limiting factor is the available memory in the cluster, and the number of shards in the subscription.
Note that the impact of the specific database configuration on the number of shards it consumes. For example:
- Enabling database replication, without enabling database clustering, creates two shards: a master shard and a slave shard.
- Enabling database clustering creates as many database shards as you configure.
- Enabling both database replication and database clustering creates double the number of database shards you configure.
As explained in the open source Redis FAQ, under "What happens if Redis runs out of memory?":
...[you] can use the "maxmemory" option in the config file to put a limit to the memory Redis can use. If this limit is reached Redis starts to reply with an error to write commands (but continues to accept read-only commands), or you can configure it to evict keys when the max memory limit is reached in the case you are using Redis for caching.
You can set the maxmemory value of each Redis Enterprise Software database in the management UI using the Memory limit property, as well as configure an eviction policy by setting it to any of the standard Redis behaviors, without interrupting database operations. | https://docs.redislabs.com/latest/rs/faqs/ | 2020-01-17T17:24:09 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.redislabs.com |
Imageio standard images¶
Imageio provides a number of standard images. These include classic 2D images, as well as animated and volumetric images. To the best of our knowledge, all the listed images are in public domain.
The image names can be loaded by using a special URI,
e.g.
imread('imageio:astronaut.png').
The images are automatically downloaded (and cached in your appdata
directory).
- chelsea.bsdf: The chelsea.png in a BSDF file(for testing)
- newtonscradle.gif: Animated GIF of a newton’s cradle
- cockatoo.mp4: Video file of a cockatoo
- stent.npz: Volumetric image showing a stented abdominal aorta
- astronaut.png: Image of the astronaut Eileen Collins
- camera.png: Classic grayscale image of a photographer
- checkerboard.png: Black and white image of a chekerboard
- chelsea.png: Image of Stefan’s cat
- clock.png: Photo of a clock with motion blur (Stefan van der Walt)
- coffee.png: Image of a cup of coffee (Rachel Michetti)
- coins.png: Image showing greek coins from Pompeii
- horse.png: Image showing the silhouette of a horse (Andreas Preuss)
- hubble_deep_field.png: Photograph taken by Hubble telescope (NASA)
- immunohistochemistry.png: Immunohistochemical (IHC) staining
- moon.png: Image showing a portion of the surface of the moon
- page.png: A scanned page of text
- text.png: A photograph of handdrawn text
- wikkie.png: Image of Almar’s cat
- chelsea.zip: The chelsea.png in a zipfile (for testing) | https://imageio.readthedocs.io/en/latest/standardimages.html | 2020-01-17T17:26:07 | CC-MAIN-2020-05 | 1579250589861.0 | [] | imageio.readthedocs.io |
household set comes with 15 different household surfaces. Each of the 11 wall surfaces come in 10 different color variations, while the 4 carpet surfaces come in another 10 different colors. Options for wall paint types include matte, gloss and semi gloss.
Every surface comes with 6 different base textures, including a 32bit EXR displacement map for accurate height information without “stair stepping” artifacts. The other textures for each include Diffuse,. | http://docs.daz3d.com/doku.php/public/read_me/index/62405/start | 2020-01-17T17:23:25 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.daz3d.com |
When It Is Appropriate to Use and When Not?
MEISSA is not the appropriate choice in many cases, see when it is not a good idea to use it.
When you go to buy bread from the bakery, you don’t go with a shuttle. Same can be applied here. I believe MEISSA is not the appropriate choice in many cases. I will mention a few. If you have only unit tests and they are executed for a few minutes without parallel test execution, then you can use the native tests runners. You don’t need to bother with more machines or changing the tool. Another possibility is, if you are happy with your current solution, and you are satisfied with its speed, stability and usability, you should stick to it. You have built experience using and maintaining it. I am against the usage of new technologies only because they are viral or someone else told me to do so. Maybe you are from this 15% of people that have built something custom and many people/teams in your organisation use it. It may be quite time-consuming to change all your CI builds and train all of your colleagues. | http://docs.meissarunner.com/when-it-is-appropriate-to-use-and-when-not/ | 2020-01-17T15:27:52 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.meissarunner.com |
The CISS Themes can be configured in the my.reddoxx.com portal.
Here you define the appearance (layout) of your CISS portal page.
If you wish to have different layouts for separate domains, you need to create multiple themes and then assign a domain to your prepared themes.
The CISS themes need to be assigned to the local internet domains.
The following steps are required to access the CISS Theme Configuration:
Following steps are required to edit a CISS Theme:
The theme can now be used at your CISS Management in the lokal internet domain.
Additionally you need to select CISS in your Filter Profiles to activate the CISS Challenges. | https://appliance.docs.reddoxx.com/en/quick-guides/ciss-theme-configuration | 2020-01-17T16:32:57 | CC-MAIN-2020-05 | 1579250589861.0 | [] | appliance.docs.reddoxx.com |
Setting up S7-1200/1500¶
Requirements
- Siemens Edition or Ultimate Edition
- S7-1200/1500 PLC
- TIA Portal
Sample Project
Sorting by Height with S7-1200/1500
This tutorial gives you step-by-step instructions on how to use a Siemens S7-1200/1500 PLC to control FACTORY I/O. Although the following instructions refer to an S7-1200 model, the same steps apply to the 1500.
Setting up communication between PC and PLC¶
Connect the PLC to the network.
Create a new project in TIA Portal.
Select Configure a device.
Click on Add new device. From the controllers tree expand SIMATIC S7-1200 > CPU > Unspecified CPU 1200, select the CPU under it and click on Add.
You are now on TIA Portal's Project View. Click on detect to automatically detect the PLC from a list of available devices on the network.
Choose PN/IE as the type of PG/PC interface and on PG/PC interface select the network adapter that you are using to connect to the PLC.
When scanning completes, select the PLC from the list of compatible devices. Next, click on Detect.
If you are adding a PLC that is not in the same subnet as your computer, you will be prompted about assigning a new IP address to the network interface; click on Yes.
Later you may have to change the PLC's IP address to one in the same subnet as your computer, otherwise FACTORY I/O might not be able to connect to it.
The detected PLC is now on the Device view. Some of its properties need to be tweaked to allow communication with FACTORY I/O. Double Left-click on the controller to open the Properties panel.
Start by assigning the PLC an IP address. On the General tab of the Properties page expand PROFINET interface and select Ethernet addresses.
If you were prompted about assigning a new IP address to the network interface, you should now assign an IP address to the PLC that is in the same subnet as your computer.
PLC physical inputs use by default the first memory addresses of %I. For FACTORY I/O to be able to write sensors values to %I you must offset the input addresses, we recommend an offset of 10. Click on I/O addresses and change the Input addresses > Start address to 10.
FACTORY I/O should not use input addresses that are assigned to physical inputs
Otherwise, the values written by FACTORY I/O will be overwritten as the state of physical inputs is copied to I memory.
Finally, some Protection & Security settings are required to be able to establish connection to the PLC.
Click on Protection and enable Permit access with PUT/GET communication from remote partner under Connection mechanisms.
Set the Access level as HMI access or higher (Read access or Full access).
For PLCs with firmware lower than V4.0, the correct protection settings are:
Right-click on the device and select Download to device > Hardware configuration. Next, Start the CPU.
Connecting FACTORY I/O to the PLC¶
In FACTORY I/O click on FILE > Driver Configuration to open the Driver Window.
Select Siemens S7-1200/1500 on the driver drop-down list.
Open the driver Configuration Panel by clicking on CONFIGURATION.
Make sure S7-1200 is selected on the Model drop-down list and. | https://docs.factoryio.com/tutorials/siemens/setting-up-s7-1200-1500/ | 2020-01-17T16:57:56 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.factoryio.com |
Table of Contents
Konqueror offers some features to enhance your browsing experience. One such feature is Web Shortcuts.
You may already have noticed that KDE is very Internet friendly. For example, you can click on the menu item or type the keyboard shortcut assigned to that command (Alt+F2 or Alt+Space, unless you have changed it) and type in a URI. [1]
Web shortcuts, on the other hand, let you come up with new pseudo
URL schemes, or shortcuts, that basically let you
parameterize commonly used
URIs. For example, if you like the Google search
engine, you can configure KDE so that a pseudo URL
scheme like gg will trigger a search on
Google. This way, typing
gg: will search for
my
query
my
query on Google.
Note
One can see why we call these pseudo URL
schemes. They are used like a URL scheme, but the
input is not properly URL encoded, so one will type
google:kde apps and not
google:kde+apps.
You can use web shortcuts wherever you would normally use URIs. Shortcuts for several search engines should already be configured on your system, but you can add new keywords, and change or delete existing ones in this module.
The descriptive names of defined web shortcuts are shown in a list box. As with other lists in KDE, you can click on a column heading to toggle the sort order between ascending and descending, and you can resize the columns.
At the bottom of the list the option Enable Web shortcuts has to be checked to enable this feature. Use the buttons on the right to create, modify or delete shortcuts.
If Use preferred shortcuts only is checked, only web shortcuts marked as preferred in the third column of the list are used in places where only a few select shortcuts can be shown at one time.
Below the list you find two additional options:
- Default Web shortcuts
Select the search engine to use for input boxes that provide automatic lookup services when you type in normal words and phrases instead of a URL. To disable this feature select None from the list.
- Keyword delimiter
Choose the delimiter that separates the keyword from the phrase or word to be searched.
If you double-click on a specific entry in the list of defined search providers or click the button, the details for that entry are shown in a popup dialog. In addition to the descriptive name for the item, you can also see the URI which is used, as well as the associated shortcuts which you can type anywhere in KDE where URIs are expected. A given search provider can have multiple shortcuts, each separated by a comma.
The text boxes are used not only for displaying information about an item in the list of web shortcuts, but also for modifying or adding new items.
You can change the contents of either the Shortcut URL or the Shortcuts text box. Click to save your changes or to exit the dialog with no changes.
If you examine the contents of the Shortcuts
URL text box, you will find that most, if not all of the
entries have a
\{@} in them. This sequence of four
characters acts as a parameter, which is to say that they are replaced
by whatever you happen to type after the colon character that is
between a shortcut and its parameter. To add this query placeholder
to a shortcuts url, click on the button at the right of the text box.
Let's consider some examples to clarify how to use web shortcuts.
Suppose that the URI is\{@}, and
gg is a shortcut to this
URI. Then, typing
gg: is
equivalent to
alpha.
You could type anything after the
alpha
: character;
whatever you have typed simply replaces the
\{@}
characters, after being converted to the appropriate character set for
the search provider and then properly
URL-encoded. Only the
\{@} part of
the search URI is touched, the rest of it is
supposed to be properly URL-encoded already and is
left as is.
You can also have shortcuts without parameters. Suppose the
URI was
file:/home/me/mydocs/calligra/words and the
shortcut was mywords. Then, typing
mywords: is the same as typing the complete
URI. Note that there is nothing after the colon
when typing the shortcut, but the colon is still required in order for
the shortcut to be recognized as such.
By now, you will have understood that even though these shortcuts are called web shortcuts, they really are shortcuts to parameterized URIs, which can point not only to web sites like search engines but also to anything else that can be pointed to by a URI. Web shortcuts are a very powerful feature of navigation in KDE. | https://docs.kde.org/trunk5/en/frameworks/kcontrol5/webshortcuts/index.html | 2020-01-17T15:45:37 | CC-MAIN-2020-05 | 1579250589861.0 | [array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)] | docs.kde.org |
Description guidelines for SEO
Each page should have a single, unique description that accurately reflects the contents of the page.
Explanation
Most search engines display the content of the
<meta name="description"> tag in search results; it is often the most visible and most valuable opportunity to persuade potential customers to click through to your page.
The
<meta name="description"> tag is one of the most important tags that the site developer controls that can influence the relevance and ranking of a site in search-engine results. A precise snippet of descriptive text can improve the click-through volume to the site.
Guidelines
The
<meta name="description"> tag should be defined in the
<head> tag section of the page, between the
<title> tag and the
<meta name="keywords"> tag.
Review the
<meta name="description"> tag of each page in the site to make sure that it accurately describes the page that contains it.
Check each
<meta name="description"> tag for the following:
Each page should have a
<meta name="description">tag.
For more information, see WEB1027 - The description for the page is missing.
Each
<meta name="description">tag should be a child of the
<head>tag.
For more information, see WEB1022 - The <meta name="description"> tag is not inside the <head> tag section.
A
<meta name="description">tag should occur only once in a page.
For more information, see WEB1026 - The <meta name="description"> tag should be declared only once in a page.
Each
<meta name="description">tag in a site should be unique.
For more information, see WEB1028 - The <meta name="description"> tag contents are not unique within the site.
A
<meta name="description">tag should not be empty and should not exceed 150 characters.
For more information, see WEB1024 - The <meta name="description"> tag contents are too short and WEB1023 - The <meta name="description"> tag contents are too long.
The
<title>and
<meta name="description">tags should have different content.
For more information, see WEB1045 - The title and description for the page are identical.
See also
Concepts
Page and site guidelines for SEO
Title guidelines for SEO
Heading guidelines for SEO
Keywords guidelines for SEO
Image guidelines for SEO
Hyperlink guidelines for SEO
Send feedback about this topic to Microsoft. © 2011 Microsoft Corporation. All rights reserved. | https://docs.microsoft.com/en-us/previous-versions/visualstudio/design-tools/expression-studio-4/ff723968%28v%3Dexpression.40%29 | 2020-01-17T17:38:14 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.microsoft.com |
Built on Red Hat Enterprise Linux and Kubernetes, OpenShift Container Platform provides a more secure and scalable multi-tenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.
Red Hat OpenShift Container Platform 3.11 (RHBA-2018:2652) is now available. This release is based on OKD 3.11, and it uses Kubernetes 1.11. New features, changes, bug fixes, and known issues that pertain to OpenShift Container Platform 3.11 are included in this topic.
OpenShift Container Platform 3.11 is supported on Red Hat Enterprise Linux 7.4 and later with the latest packages from Extras, including CRI-O 1.11 and Docker 1.13. It is also supported on Atomic Host 7.5 and later.
OpenShift Container Platform 3.11 is supported on Red Hat Enterprise Linux 7 nodes running in Federal Information Processing Standards (FIPS) mode.
For initial installations, see the Installing Clusters documentation.
To upgrade to this release from a previous version, see the Upgrading Clusters documentation.
OpenShift Container Platform 3.11 is the last release in the 3.x stream. Large changes to the underlying architecture and installation process are coming in version 4.0, and many features will be deprecated.
Because of the extent of the changes in OpenShift Container Platform 4.0, the product documentation will also undergo significant changes, including the deprecation of large amounts of content. New content will be released based on the architectural changes and updated use cases.
This release adds improvements related to the following components and concepts.
The OLM aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster:
Includes a catalog of curated Operators, with the ability to load other Operators into the cluster
Handles rolling updates of all Operators to new versions
Supports role-based access control (RBAC) for certain teams to use certain Operators
See Installing the Operator Framework for more information.
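As a rough illustration of how OLM is driven declaratively, the following sketch shows a Subscription object that asks OLM to install an Operator from a catalog channel and keep it updated. The Operator name, catalog source, channel, and namespaces shown here are placeholders and may differ from the catalogs shipped with your cluster.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: operators
spec:
  channel: alpha
  name: example-operator
  source: example-catalog
  sourceNamespace: operators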
The Operator SDK is a development tool to jump-start building an Operator with generated code and a CLI to aid in building, testing, and publishing your Operator.
See Getting started with the Operator SDK in OKD documentation for more information and walkthroughs.
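A minimal sketch of the SDK workflow is shown below. Exact command names and flags vary between SDK releases, so treat the project name, API version, kind, and image tag as illustrative placeholders rather than required values.

$ operator-sdk new memcached-operator --api-version=cache.example.com/v1alpha1 --kind=Memcached
$ cd memcached-operator
$ operator-sdk build quay.io/example/memcached-operator:v0.0.1
$ operator-sdk up local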
Brokers mediate service requests in the Service Catalog. The goal is for you to initiate the request and for the system to fulfill the request in an automated fashion.
The Automation Broker manages applications defined in Ansible Playbook Bundles (APB). OpenShift Container Platform 3.11 includes support for discovering and running APB sources published to Ansible Galaxy from the OpenShift Container Platform Automation Broker.
See OpenShift Automation Broker for more information.
The Red Hat Container Catalog is moving from
registry.access.redhat.com to
registry.redhat.io.
registry.redhat.io requires authentication for access to
images and hosted content on OpenShift Container Platform.
OpenShift Container Platform 3.11 adds support for authenticated
registries. The broker uses
cluster-wide as the default setting for registry
authentication credentials. You can define
oreg_auth_user and
oreg_auth_password in the inventory file to configure the credentials.
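For example, the relevant inventory entries might look like the following sketch; the registry service account credentials are placeholders that you replace with your own.

[OSEv3:vars]
oreg_url=registry.redhat.io/openshift3/ose-${component}:${version}
oreg_auth_user=<registry_service_account_username>
oreg_auth_password=<registry_service_account_password_or_token>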
The Service Catalog added support for namespaced brokers in addition to the
previous cluster scoped behavior. This means you can register the broker with
the service catalog as either a cluster-scoped
ClusterServiceBroker or a
namespace-scoped
ServiceBroker kind. Depending on the broker’s scope, its
services and plans are available to the entire cluster or scoped to a specific
namespace. When installing the broker, you can set the
kind argument as
ServiceBroker (namespace-specific) or
ClusterServiceBroker (cluster-wide).
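As a rough sketch, registering a namespace-scoped broker might look like the following; the broker name, namespace, and URL are placeholders, and a cluster-scoped registration uses the ClusterServiceBroker kind with the same spec but no namespace.

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBroker
metadata:
  name: example-broker
  namespace: example-project
spec:
  url: http://example-broker.example-project.svc:8080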
In OpenShift Container Platform 3.11,
openshift_certificate_expiry_warning_days, which
indicates the amount of time the auto-generated certificates must be valid for
an upgrade to proceed, is added.
Additionally,
openshift_certificate_expiry_fail_on_warn is added, which
determines whether the upgrade fails if the auto-generated certificates are not
valid for the period specified by the
openshift_certificate_expiry_warning_days parameter.
See Configuring Your Inventory File for more information.
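For example, the two variables can be set in the inventory file as follows; the values shown are illustrative and not defaults.

[OSEv3:vars]
openshift_certificate_expiry_warning_days=30
openshift_certificate_expiry_fail_on_warn=False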
openshift-ansible now requires Ansible 2.6 for both installation of
OpenShift Container Platform 3.11 and upgrading from version 3.10.
The minimum version of Ansible required for OpenShift Container Platform 3.11 to run
playbooks is now 2.6.x. On both master and node, use
subscription-manager to
enable the repositories that are necessary to install OpenShift Container Platform
using Ansible 2.6. For example:
$ subscription-manager repos --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.11-rpms" \
    --enable="rhel-7-server-ansible-2.6-rpms"
Ansible 2.7 is not yet supported.
Registry auth credentials are now required for OpenShift Container Platform so that images and metadata can be pulled from an authenticated registry, registry.redhat.io.
Registry auth credentials are required prior to installing and upgrading when:
openshift_deployment_type == 'openshift-enterprise'
oreg_url == 'registry.redhat.io' or undefined
To configure authentication,
oreg_auth_user and
oreg_auth_password must be defined in the inventory file.
Pods can also be allowed to reference images from other secure registries.
See Importing Images from Private Registries for more information.
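For instance, a pull secret for an authenticated registry can be created and linked to a service account roughly as follows; the secret name and credentials shown are placeholders.

$ oc create secret docker-registry example-registry-secret \
    --docker-server=registry.redhat.io \
    --docker-username=<user_name> \
    --docker-password=<password> \
    --docker-email=<email>
$ oc secrets link default example-registry-secret --for=pull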
Ansible configuration is now updated to ensure OpenShift Container Platform installations are logged by default.
The Ansible configuration parameter
log_path is now defined. Users must be in
the /usr/share/ansible/openshift-ansible directory prior to running any
playbooks.
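The setting lives in the ansible.cfg file shipped in that directory; a sketch of the relevant entry is shown below, with the log file location given only as an example of where installation output might be written.

[defaults]
log_path = ~/openshift-ansible.log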
OpenShift Container Storage (OCS) provides software defined storage as a container for use with OpenShift Container Platform. Use OCS to define persistent volumes (PV) for use with your containers. (BZ#1645358)
CSI allows OpenShift Container Platform to consume storage from storage backends that implement the CSI interface as persistent storage.
See Persistent Storage Using Container Storage Interface (CSI) for more information.
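Storage from a CSI driver is consumed through a StorageClass whose provisioner field names the driver. The driver name below is purely hypothetical; substitute the name registered by the CSI driver deployed in your cluster.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-example
provisioner: csi-driver.example.com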
You can now control the use of the local ephemeral storage feature on your nodes. This helps you manage local storage consumption by setting requests and limits on the ephemeral storage that pods and containers can consume.
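A minimal sketch of how a pod might declare ephemeral storage requests and limits is shown below; the pod name, image, and sizes are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-demo
spec:
  containers:
  - name: app
    image: registry.redhat.io/rhel7/rhel-tools
    command: ["sleep", "infinity"]
    resources:
      requests:
        ephemeral-storage: 1Gi
      limits:
        ephemeral-storage: 2Gi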
OpenShift Container Platform is capable of provisioning PVs using the OpenStack Manila shared file system service.
See Persistent Storage Using OpenStack Manila for more information.
You can expand PV claims online from OpenShift Container Platform for GlusterFS by creating a storage class with
allowVolumeExpansion set to
true, which causes the following to happen:
The PVC uses the storage class and submits a claim.
The PVC specifies a new increased size.
The underlying PV is resized. For file system based volumes, the file system expansion completes when pods referencing the volume are restarted.
Network attached file systems, such as GlusterFS and Azure File, can be expanded without having to restart the referencing pod, as these systems do not require unique file system expansion.
See Expanding Persistent Volumes for more information.
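For example, a GlusterFS storage class that permits online expansion might look like the following; the heketi REST URL is a placeholder for your own endpoint.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-expandable
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"
allowVolumeExpansion: true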
Tenants can now leverage the underlying storage technology backing the PV assigned to them to make a snapshot of their application data. Tenants can also now restore a given snapshot from the past to their current application.
You can use an external provisioner to access EBS, GCE persistent disks, and hostPath. This Technology Preview feature has been tested with EBS and hostPath. The tenant must stop the pods and start them manually.
To use the external provisioner to access EBS and hostPath:
The administrator runs an external provisioner for the cluster. These are images from the Red Hat Container Catalog.
The tenant creates a PV claim and owns a PV from one of the supported storage solutions.
The administrator must create a new
StorageClass in the cluster, for example:
kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: snapshot-promoter provisioner: volumesnapshot.external-storage.k8s.io/snapshot-promoter
The tenant creates a snapshot of a PV claim named
gce-pvc, and the resulting
snapshot is
snapshot-demo, for example:
$ oc create -f snapshot.yaml apiVersion: volumesnapshot.external-storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo namespace: myns spec: persistentVolumeClaimName: gce-pvc
The pod is restored to that snapshot, for example:
$ oc create -f restore.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: snapshot-pv-provisioning-demo annotations: snapshot.alpha.kubernetes.io/snapshot: snapshot-demo spec: storageClassName: snapshot-promoter
Updated guidance around Cluster Maximums for OpenShift Container Platform 3.11 is now available.
New recommended guidance for master
For large or dense clusters, the API server might get overloaded because of the default queries per second (QPS) limits. Edit /etc/origin/master/master-config.yaml and double or quadruple the QPS limits.
See Recommended Practices for OpenShift Container Platform Master Hosts for more information.
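As a hedged sketch, the QPS and burst overrides live under masterClients in master-config.yaml; the doubled values below are illustrative only and should be tuned for your cluster size:

masterClients:
  externalKubernetesClientConnectionOverrides:
    qps: 200        # example: roughly double the previous value
    burst: 400
  openshiftLoopbackClientConnectionOverrides:
    qps: 300        # example values only
    burst: 600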
OpenShift Container Platform exposes metrics that can be collected and stored in backends by the cluster-monitoring-operator. As an OpenShift Container Platform administrator, you can view system resources, containers, and component’s metrics in one dashboard interface, Grafana.
In OpenShift Container Platform 3.11, the cluster monitoring operator is installed by default and is scheduled to nodes labeled
node-role.kubernetes.io/infra=true in your cluster. You can
update this by setting
openshift_cluster_monitoring_operator_node_selector in
the inventory file to your customized node selector. Ensure there is an available
node that matches the selector in your cluster to avoid unexpected failures.
See Scaling Cluster Monitoring Operator for capacity planning details.
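For example, a hedged inventory sketch that keeps the monitoring stack on infra nodes might be the following; the install flag and selector value restate the default behavior described above and are illustrative:

openshift_cluster_monitoring_operator_install=true
openshift_cluster_monitoring_operator_node_selector={"node-role.kubernetes.io/infra":"true"}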
Prometheus cluster monitoring is now fully supported in OpenShift Container Platform and deployed by default into an OpenShift Container Platform cluster.
Query and plot cluster metrics collected by Prometheus.
Receive notifications from pre-packaged alerts, enabling owners to take corrective actions and start troubleshooting problems.
View pre-packaged Grafana dashboards for etcd, cluster state, and many other aspects of cluster health.
See Configuring Prometheus Cluster Monitoring for more information.
Elasticsearch 5 and Kibana 5 are now available. Kibana dashboards can be saved and shared between users. Elasticsearch 5 introduces better resource usage and performance and better resiliency.
Additionally, new numeric types,
half_float and
scaled_float are now added.
There are now instant aggregations in Kibana 5, making it faster. There is also
a new API that returns an explanation of why Elasticsearch shards are unassigned.
Usually called plug-ins or binary extensions, this feature allows you to
extend the default set of
oc commands available and, therefore, allows you to
perform new tasks.
See Extending the CLI for information on how to install and write extensions for the CLI.
See Triggering Builds for more information.
In some scenarios, build operations require credentials or other configuration data to access dependent resources, but it is undesirable for that information to be placed in source control. You can define input secrets and input ConfigMaps for this purpose.
See Build Inputs for additional details.
OpenShift Container Platform always shipped
kubectl for Linux on
the master’s file system, but it is now available in the
oc client downloads.
All container images available through the Red Hat Container Catalog are hosted
on an image registry,
registry.access.redhat.com. The Red Hat Container
Catalog is moving from registry.access.redhat.com to registry.redhat.io.
See Authentication Enabled Red Hat Registry.
See Container Registry for more information.
See Kuryr SDN Administration and Configuring Kuryr SDN for best practices in OpenShift Container Platform and Red Hat OpenStack integration.
The OpenShift Container Platform router is the most common way to get traffic into the cluster. The table below lists the OpenShift Container Platform router (HAProxy) enhancements for 3.11.
Adding basic active/backup HA for project/namespace egress IPs now allows a namespace to have multiple egress IPs hosted on different cluster nodes.
To add basic active/backup HA to an existing project/namespace:
Add two or more egress IPs to its
netnamespace:
$ oc patch netnamespace myproject -p '{"egressIPs":["10.0.0.1","10.0.0.2"]}'
Add the first egress IP to a node in the cluster:
# oc patch hostsubnet node1 -p '{"egressIPs":["10.0.0.1"]}'
Add the second egress IP to a different node in the cluster:
# oc patch hostsubnet node2 -p '{"egressIPs":["10.0.0.2"]}'
The project/namespace uses the first listed egress IP by default (if available) until the node hosting it stops responding, at which point it switches to using the next listed egress IP, and so on. This solution requires at least two egress IPs.
If the original IP eventually comes back, the nodes switch back to using the original egress IP.
See Enabling Static IPs for External Project Traffic for more information.
A fully-automatic HA option is now available. Projects/namespaces are automatically allocated a single egress IP on a node in the cluster, and that IP is automatically migrated from a failed node to a healthy node.
To enable the fully-automatic HA option:
Patch one of the cluster nodes with the
egressCIDRs:
# oc patch hostsubnet node1 -p '{"egressCIDRs":["10.0.0.0/24"]}'
Create a project/namespace and add a single egress IP to its
netnamespace:
# oc patch netnamespace myproject -p '{"egressIPs":["10.0.0.1"]}'
The OpenShift Container Platform SDN overlay VXLAN port is now configurable (default is
4789). VMware modified the VXLAN port used in the VMware NSX SDN (≥v6.2.3) from
8472 to
4789 to adhere to RFC 7348.
When running the OpenShift Container Platform SDN overlay on top of VMware’s NSX SDN underlay, there is a port conflict since both use the same VXLAN port (
4789). With a configurable VXLAN port, users can choose the port configuration of the two products, used in combination, for their particular environment.
To configure the VXLAN port:
Modify the VXLAN port in master-config.yaml with the new port number (for example,
4889 instead of
4789):
vxlanPort: 4889
Delete the default clusternetwork object and restart the master API and controllers:
$ oc delete clusternetwork default $ master-restart api controllers
Restart all SDN pods in the
openshift-sdn project:
$ oc delete pod -n openshift-sdn -l app=sdn
Allow the new port on the firewall on all nodes:
# iptables -I OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 4889 -j ACCEPT
See Pod Priority and Preemption for more information.
The Node Problem Detector monitors the health of your nodes by finding specific problems and reporting them to the API server.
The three problem daemons are:
Kernel Monitor, which monitors the kernel log via journald and reports problems according to regex patterns.
AbrtAdaptor, which monitors the node for kernel problems and application crashes from journald.
CustomerPluginMonitor, which allows you to test for any condition and exit on a
0 or
1 should your condition not be met.
See Node Problem Detector for more information.
You can configure an auto-scaler on your OpenShift Container Platform cluster in Amazon Web Services (AWS) to provide elasticity for your application workload. The auto-scaler ensures that enough nodes are active to run your pods and that the number of active nodes is proportional to current demand.
See Configuring the cluster auto-scaler in AWS for more information.
OpenShift Container Platform 3.11 introduces a cluster administrator console tailored toward application development and cluster administrator personas.
Users have a choice of experience based on their role or technical abilities, including:
An administrator with Containers as a Service (CaaS) experience and with heavy exposure to Kubernetes.
An application developer with Platform as a Service (PaaS) experience and standard OpenShift Container Platform UX.
Sessions are not shared across the consoles, but credentials are.
See Configuring Your Inventory File for details on configuring the cluster console.
OpenShift Container Platform now has an expanded ability to manage and troubleshoot cluster nodes, for example:
Node status events are extremely helpful in diagnosing resource pressure and other failures.
Runs node-exporter as a DaemonSet on all nodes, with a default set of scraped metrics from the kube-state-metrics project.
Metrics are protected by RBAC.
Those with cluster-reader access and above can view metrics.
You can view, edit, and delete the following Kubernetes objects:
Networking
Routes and ingress
Storage
PVs and PV claims
Storage classes
Admin
Projects and namespaces
Nodes
Roles and RoleBindings
CustomResourceDefinition (CRD)
OpenShift Container Platform 3.11 includes visual management of the cluster’s RBAC roles and RoleBindings, which allows you to:
Find users and service accounts with a specific role.
View cluster-wide or namespaced bindings.
Visually audit a role’s verbs and objects.
Project administrators can self-manage roles and bindings scoped to their namespace.
The cluster-wide event stream provides the following ways to help debug events:
All namespaces are accessible by anyone who can list the namespaces and events.
Per-namespace is accessible for all project viewers.
There is an option to filter by category and object type.
You can use this feature to configure cooperating containers in a pod, such as a log handler sidecar container, or to troubleshoot container images that do not include debugging utilities like a shell. When containers in a pod share a process namespace:
Container processes are visible to all other containers in the pod.
Any kill all semantics used within the process are broken.
Any exec processes from other containers show up.
See Expanding Persistent Volumes for more information.
GitHub Enterprise is now an authentication provider.
See Configuring Authentication and User Agent for more information.
oc now supports the Security Support Provider Interface (SSPI) to allow for
single sign-on (SSO) flows on Windows. If you use the request header identity
provider with a GSSAPI-enabled proxy to connect an Active Directory server to
OpenShift Container Platform, users can automatically authenticate to OpenShift Container Platform using
the
oc command line interface from a domain-joined Windows computer.
See Configuring Authentication and User Agent for more information.
Red Hat OpenShift Service Mesh is a platform that provides behavioral insights and operational control over the service mesh, providing a uniform way to connect, secure, and monitor microservice applications.
The term service mesh is often used to describe the network of microservices that make up applications based on a distributed microservice architecture and the interactions between those microservices. As a service mesh grows in size and complexity, it can become harder to understand and manage.
OpenShift Container Platform 3.11 introduces the following notable technical changes.
subjectaccessreviews.authorization.openshift.io and resourceaccessreviews.authorization.openshift.io are now cluster-scoped only. If you need namespace-scoped requests, use localsubjectaccessreviews.authorization.openshift.io and localresourceaccessreviews.authorization.openshift.io.
No new privs flag
Security Context Constraints have two new options to manage use of the (Docker)
no_new_privs flag to prevent containers from gaining new privileges:
The
AllowPrivilegeEscalation flag gates whether or not a user is allowed to set the security context of a container.
The
DefaultAllowPrivilegeEscalation flag sets the default for the
allowPrivilegeEscalation option.
For backward compatibility, the
AllowPrivilegeEscalation flag defaults to
allowed. If that behavior is not desired, this field can be used to default to
disallow, while still permitting pods to request
allowPrivilegeEscalation
explicitly.
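As a minimal sketch (a fragment of a SecurityContextConstraints object, with the other required fields omitted), the two fields might be combined like this to default to disallow while still permitting explicit requests:

# SecurityContextConstraints fragment (other required fields omitted)
allowPrivilegeEscalation: true           # pods may still request escalation explicitly
defaultAllowPrivilegeEscalation: false   # pods that do not ask are denied escalation by default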
Forbidden and unsafe sysctls options
Security Context Constraints have two new options to control which sysctl options can be defined in a pod spec:
The
forbiddenSysctls option excludes specific sysctls.
The
allowedUnsafeSysctls option controls specific needs such as high performance or real-time application tuning.
All safe sysctls are enabled by default; all unsafe sysctls are disabled by default and must be manually allowed by the cluster administrator.
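A hedged SCC fragment illustrating both options might look like the following; the sysctl names are examples only:

# SecurityContextConstraints fragment (other required fields omitted)
forbiddenSysctls:
  - "kernel.msg*"            # forbid this sysctl pattern (example)
allowedUnsafeSysctls:
  - "net.core.somaxconn"     # explicitly allow this unsafe sysctl (example)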
The
oc deploy command is deprecated in OpenShift Container Platform 3.7. The
oc rollout command replaces this command.
The deprecated
oc env and
oc volume commands are now removed. Use
oc set
env and
oc set volume instead.
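For example, the replacement commands can be used as follows; the deployment configuration name, variable, and volume settings are hypothetical:

$ oc set env dc/myapp LOG_LEVEL=debug
$ oc set volume dc/myapp --add --name=tmp-cache --type=emptyDir --mount-path=/tmp/cache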
The
oc ex config patch command will be removed in a future release, as the
oc patch command replaces it.
The
oc export command is deprecated in OpenShift Container Platform 3.10. This command will be removed in a future release, as the
oc get --export command replaces it.
In OpenShift Container Platform 3.11,
oc types is now deprecated. This command will be
removed in a future release. Use the official documentation instead.
The OpenShift Container Platform Pipeline Plug-in is deprecated but continues to work with
OpenShift Container Platform versions up to version 3.11. For later versions of
OpenShift Container Platform, either use the
oc binary directly from your Jenkins
Pipelines or use the OpenShift Container Platform Client Plug-in.
Curator now works with Elasticsearch 5.
See Aggregating Container Logs for additional information.
Hawkular is now deprecated and will be removed in a future release.
Instead of
registry.access.redhat.com, OpenShift Container Platform now uses
registry.redhat.io as the source of images for version 3.11. For access,
registry.redhat.io requires credentials. See Authentication Enabled Red Hat Registry for more information.
Red Hat strongly recommends using the overlayFS storage driver instead of Device Mapper. For better performance, use overlay2 for the Docker engine or overlayFS for CRI-O. Previously, we recommended using Device Mapper.
This release fixes bugs for the following components:
Builds
ConfigMap Build Sources allows you to use ConfigMaps as a build source, which is transparent and easier to maintain than secrets. ConfigMaps can be injected into any OpenShift build. (BZ#1540978)
Information about out of memory (OOM) killed build pods is propagated to a build object. This information simplifies debugging and helps you discover what went wrong if appropriate failure reasons are described to the user. A build controller populates the status reason and message correctly when a build pod is OOM killed. (BZ#1596440)
The logic for updating the build status with the log snippet containing the tail of the build log only ran after the build status changed to the failed state. The build would first transition to a failed state, then get updated again with the log snippet. This meant code watching for the build to enter a failed state would not see the log snippet value populated initially. The code is now changed to populate the log snippet field when the build transitions to failed status, so the build update contains both the failed state and the log snippet. Code that watches the build for a transition to the failed state now sees the log snippet as part of the update that transitioned the build to failed, instead of seeing a subsequent update later. (BZ#1596449)
If a job used the
JenkinsPipelineStrategy build strategy, the prune settings
were ignored. As a result, setting
successfulBuildsHistoryLimit and
failedBuildsHistoryLimit did not correctly prune older jobs. The code has been changed to prune jobs properly.
(BZ#1543916)
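As a sketch, the prune limits are set on the BuildConfig spec; the object name and limit values below are illustrative:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-pipeline              # hypothetical
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile
  successfulBuildsHistoryLimit: 2    # keep only the two most recent successful builds
  failedBuildsHistoryLimit: 2        # keep only the two most recent failed builds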
Cloud Compute
You can now configure NetworkManager for
dns=none during installation. This configuration is commonly used when deploying OpenShift Container Platform on Microsoft Azure, but can also be useful in other scenarios. To configure this, set
openshift_node_dnsmasq_disable_network_manager_dns=true.
(BZ#1535340)
Image
Now, updates to the image stream that result in no new or updated tags that need to be imported will not result in an import API call. With this fix, invalid requests do not go to the import API, and no errors occur in the controller. (BZ#1613979)
Image pruning stopped on encountering any unexpected error while deleting blobs. In the case of an image deletion error, image pruning failed to remove any image object from etcd. Images are now being pruned concurrently in separated jobs. As a result, image pruning does not stop on a single unexpected blob deletion failure. (BZ#1567657)
Installer
When deploying to AWS, the
build_ami play failed to clean /var/lib/cloud. An unclean /var/lib/cloud directory causes cloud-init to skip execution. Skipping execution causes a newly deployed node to fail to bootstrap and auto-register to OpenShift Container Platform. This bug fix cleans the /var/lib/cloud directory during
seal_ami play.
(BZ#1599354)
The installer now enables the router’s extended route validation by default.
This validation performs additional validation and sanitation of routes' TLS
configuration and certificates. Extended route validation was added to the
router in OpenShift Container Platform 3.3 and enhanced with certificate sanitation in
OpenShift Container Platform 3.6. However, the installer did not previously enable extended
route validation. There was initial concern that the validation might be too
strict and reject valid routes and certificates, so it was disabled by default.
But it has been determined to be safe to enable by default on new installs. As a
result, extended route validation is enabled by default on new clusters. It
can be disabled by setting
openshift_hosted_router_extended_validation=False in the Ansible inventory.
Upgrading an existing cluster does not enable extended route validation.
(BZ#1542711)
Without the fully defined azure.conf file when a load balancer service was requested through OpenShift Container Platform, the load balancer would never fully register and provide the external IP address. Now the azure.conf, with all the required variables, allows the load balancer to be deployed and provides the external IP address. (BZ#1613546)
To facilitate using CRI-O as the container-runtime for OpenShift Container Platform, update the node-config.yaml file with the correct endpoint settings. The
openshift_node_groups defaults have been extended to include CRI-O variants
for each of the existing default node groups. To use the CRI-O runtime for a
group of compute nodes, use the following inventory variables:
openshift_use_crio=True
openshift_node_group_name="node-config-compute-crio"
Additionally, to deploy the Docker garbage collector,
docker gc, the following
variable must be set to
True. This bug fix changes the previous variable default value from
True to
False:
openshift_crio_enable_docker_gc=True
(BZ#1615884)
The ansible.cfg file distributed with
openshift-ansible now sets a default log path of ~/openshift-ansible.log. This ensures that logs are written in a predictable location by default. To use the distributed ansible.cfg file, you must first change directories to
/usr/share/ansible/openshift-ansible before running Ansible playbooks. This
ansible.cfg file also sets other options meant to increase the performance
and reliability of
openshift-ansible.
(BZ#1458018)
Installing Prometheus in a multi-zone or region cluster using dynamic storage
provisioning causes the Prometheus pod to become unschedulable in some cases.
The Prometheus pod requires three physical volumes: one for the Prometheus
server, one for the Alertmanager, and one for the alert-buffer. In a multi-zone cluster with dynamic storage, it is possible that one or more of these volumes becomes allocated in a different zone than the others. This causes the Prometheus pod to become unschedulable because each node in the cluster can only access physical volumes in its own zone. Therefore, no node can run the Prometheus pod and access all three physical volumes. The recommended solution is to create a storage class which restricts volumes to a single zone using the
zone: parameter, and to assign this storage class to the Prometheus volumes using the Ansible installer inventory variable,
openshift_prometheus_<COMPONENT>_storage_class=<zone_restricted_storage_class>. With this workaround, all three volumes get created in the same zone or
region, and the Prometheus pod is automatically scheduled to a node in the
same zone.
(BZ#1554921)
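For illustration, a zone-restricted storage class on AWS might look like the sketch below (the class name, EBS volume type, and zone are assumptions); it would then be referenced through the openshift_prometheus_<COMPONENT>_storage_class variables described above:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus-single-zone       # hypothetical name
provisioner: kubernetes.io/aws-ebs   # assumes EBS dynamic provisioning
parameters:
  type: gp2
  zone: us-east-1a                   # restrict volumes to a single zone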
Logging
Previously, the
openshift-ansible installer only supported
shared_ops and
unique as Kibana index methods. This bug fix allows users in a non-ops EFK
cluster to share the default index in Kibana, to share queries, dashboards, and
so on. (BZ#1608984)
As part of installing the ES5 stack, users need to create a sysctl file for the nodes that ES runs on. This bug fix evaluates which nodes/Ansible hosts to run the tasks against. (BZ#1609138)
Additional memory is required to support Prometheus metrics and retry queues and to avoid periodic out-of-memory (OOM) restarts. This bug fix increases the out-of-the-box memory for Fluentd. As a result, Fluentd pods avoid OOM restarts. (BZ#1590920)
Fluentd will now reconnect to Elasticsearch every 100 operations by default. If one Elasticsearch starts before the others in the cluster, the load balancer in the Elasticsearch service will connect to that one and that one only, and so will all of the Fluentd connecting to Elasticsearch. With this enhancement, by having Fluentd reconnect periodically, the load balancer will be able to spread the load evenly among all of the Elasticsearch in the cluster. (BZ#1489533)
The rubygem ffi 1.9.25 reverted a patch, which allowed it to work on systems
with SELinux
deny_execmem=1. This causes Fluentd to crash. This bug fix reverts
the patch reversion and, as a result, Fluentd does not crash when using SELinux
deny_execmem=1.
(BZ#1628407)
Management Console
The log viewer was not accounting for multi-line or partial line responses. If a response contained a multi-line message, it was appended and treated as a single line, causing the line numbers to be incorrect. Similarly, if a partial line were received, it would be treated as a full line, causing longer log lines sometimes to be split into multiple lines, again making the line count incorrect. This bug fix adds logic in the log viewer to account for multi-line and partial line responses. As a result, line numbers are now accurate. (BZ#1607305)
Monitoring
The
9100 port was blocked on all nodes by default. Prometheus could not scrape the
node_exporter service running on the other nodes, which listens on port
9100. This bug fix modifies the firewall configuration to allow incoming TCP traffic for the 9000-10000 port range. As a result, Prometheus can now scrape the
node_exporter services.
(BZ#1563888)
node_exporter starts with the
wifi collector enabled by default. The
wifi collector requires SELinux permissions that are not enabled, which causes AVC denials though it does not stop
node_exporter. This bug fix ensures
node_exporter starts with the
wifi collector being explicitly disabled. As a
result, SELinux no longer reports AVC denials.
(BZ#1593211)
Uninstalling Prometheus currently deletes the entire
openshift-metrics
namespace. This has the potential to delete objects which have been created in
the same namespace but are not part of the Prometheus installation. This bug fix changes the uninstall process to delete only the specific objects which were created by the Prometheus install and delete the namespace if there are no remaining objects, which allows Prometheus to be installed and uninstalled while sharing a namespace with other objects.
(BZ#1569400)
Pod
Previously, a Kubernetes bug caused
kubectl drain to stop when pods returned
an error. With the
Kubernetes fix, the
command no longer hangs if pods return an error.
(BZ#1586120)
Routing
Because dnsmasq was exhausting the available file descriptors after the OpenShift Extended Conformance Tests and the Node Vertical Test, dnsmasq was hanging and new pods were not being created. A change to the code increases the maximum number of open file descriptors so the node can pass the tests. (BZ#1608571)
If 62 or more IP addresses are specified using an
haproxy.router.openshift.io/ip_whitelist annotation on a route, the router
will error due to exceeding the maximum parameters on the command (63). The
router will not reload. The code was changed to use an overflow map if there are too many IPs in the whitelist annotation and pass the map to the HAProxy ACL.
(BZ#1598738)
By design, when a route with several services had a service's route-backend weight set to
0, the router would drop all existing connections and associated end user connections to that service. With this bug fix, a weight of
0 means the server will not participate in load-balancing but will still accept persistent connections.
(BZ#1584701)
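For example, a hedged sketch of setting a zero weight on one backend; the route and service names are hypothetical:

$ oc set route-backends my-route active-svc=100 standby-svc=0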
Because the liveness and readiness probe could not differentiate between a pod
that was alive and one that was ready, a router with
ROUTER_BIND_PORTS_AFTER_SYNC=true was reported as failed. This bug fix splits the liveness and readiness probe into separate probes, one for readiness and one for liveness. As a result, a router pod can be alive but not yet ready.
(BZ#1550007)
When the HAProxy router contains a large number of routes (10,000 or more), the router does not pass the liveness and readiness probes due to low performance, which kills the router repeatedly. The root cause of this issue is likely that a health check cannot be completed within the default readiness and liveness detection cycle. To prevent this problem, increase the interval of the probes. (BZ#1595513)
Service Broker
The deprovision process for Ansible Service Broker was not deleting secrets from the openshift-ansible-service-broker project. With this bug fix, the code was changed to delete all associated secrets upon Ansible Service Broker deprovisioning. (BZ#1585951)
Previously, the broker’s reconciliation feature would delete its image references before getting the updated information from the registry, and there would be a period before the records appeared in the broker’s data store while other jobs were still running. The reconciliation feature was redesigned to do an in-place update for items that have changed. For items removed from the registry, the broker deletes only those not already provisioned. It will also mark those items for deletion, which filters them out of the UI, preventing future provisions of those items. As a result, the broker’s reconciliation feature makes provisioning and deprovisioning more resilient to registry changes. (BZ#1577810)
Previously, users would see an error message when an item was not found, even if
it is normal not to be found. As a result, successful jobs might have an error
message logged, causing the user concern that there might be a problem when
there was none. The logging level of the message has now been changed from
error to
debug, because the message is still useful for debugging purposes, but not useful for a production installation, which usually has the level set to
info or higher. As a result, users will not see an error message when the instance is not found unless there was an actual problem.
(BZ#1583587)
If the cluster is not running or is not reachable, the
svcat version command resulted in an error. The code has been changed to always report the client version, and if the server is reachable, it then reports the server version.
(BZ#1585127)
In some scenarios, using the
svcat deprovision <service-instance-name> --wait command sometimes resulted in the
svcat command terminating with a panic error. When this happened, the
deprovision command got executed, and the program then encountered a code bug when attempting to wait for the instance to be fully deprovisioned. This issue is now resolved.
(BZ#1595065)
Storage
Previously, because the kubelet system containers could not write to the /var/lib/iscsi directory, iSCSI volumes could not be attached. Now, you can mount the host /var/lib/iscsi into the kubelet system container so that iSCSI volumes can be attached. (BZ#1598271).
Due to a change in the authentication for the Kibana web console, you must log back into the console after an upgrade and every 168 hours after initial login. The Kibana console has migrated to oauth-proxy. (BZ#1614255)
A Fluentd dependency on a systemd library is not releasing file handles. Therefore, the host eventually runs out of file handles. As a workaround, periodically recycle Fluentd to force the process to release unused file handles. See Resolving Fluentd journald File Locking Issues for more information on resolving this issue. (BZ#1664744)
Security, bug fix, and enhancement updates for OpenShift Container Platform 3.11 are released as asynchronous errata through the Red Hat Network. Versioned asynchronous releases, for example with the form OpenShift Container Platform 3.11.z, are detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory are detailed in subsections that follow.
Issued: 2018-11-19
OpenShift Container Platform release 3.11.43 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2018:3537 advisory. The container images included in the update are provided by the RHBA-2018:3536 advisory.
Space precluded documenting all of the bug fixes and enhancements for this release in the advisory. See the following sections for notes on upgrading and details on the bug fixes and enhancements included in this release.
Log messages from a CRI-O pod could be split in the middle by nature. As a result, partial log messages were indexed in the Elasticsearch. The newer fluent-plugin-concat supports merging the CRI-O style split messages into one, which is not available for the current fluentd (v0.12) that OpenShift Container Platform logging v3.11 uses. The functionality was backported to the fluentd v0.12. With this bug fix, the CRI-O style split log messages are merged back to the original full message. (BZ#1552304)
The event router intentionally generated duplicate event logs so as not to lose
them. The
elasticsearch_genid plug-in is now extended to
elasticsearch_genid_ext so
that it takes the
alt_key and
alt_tag. If a log message has a tag that matches the
alt_tag value, it uses the
alt_key value as the Elasticsearch primary key. By specifying an appropriate field as the
alt_key, no duplicate event logs are indexed in Elasticsearch. (BZ#1613722)
The Netty dependency does not make efficient use of the heap. Therefore, Elasticsearch begins to fail on the network layer at a high logging volume. With this bug fix, the Netty recycler is disabled and Elasticsearch is more efficient in processing connections. (BZ#1627086)
The installer did not parameterize the configmap used by the Elasticsearch pods.
The operations Elasticsearch pods used the configmap of the non-operations
Elasticsearch pods. Parameterize the template used by the installer so that the
pods use the
logging-es-ops configmap.
(BZ#1627689)
When using docker with the journald log driver, all container logs, including system and plain docker container logs, are logged to the journal, and read by fluentd. Consequently, fluentd does not know how to handle these non-Kubernetes container logs and throws exceptions. Treat non-Kubernetes container logs as logs from other system services (for example, send them to the operations index). Logs from non-Kubernetes containers are now indexed correctly and do not cause any errors. (BZ#1632364)
When using docker with log-driver journald, the setting in
/etc/sysconfig/docker has changed to use
--log-driver journald instead of
--log-driver=journald. Fluentd cannot detect that journald is being used, so
assumes
json-file, and cannot read any Kubernetes metadata because it does not
look for the journald
CONTAINER_NAME field. This results in a lot of fluentd
errors. Change the way Fluentd detects the docker log driver so that it looks
for
--log-driver journald in addition to
--log-driver=journald. Fluentd can
now detect the docker log driver, and can correctly process Kubernetes container
logs.
(BZ#1632648)
When fluentd is configured as a combination of collectors and MUX, event logs
from the event router were supposed to be processed by MUX, not by the collector, for
both
MUX_CLIENT_MODE settings, maximal and minimal. This is because if an event log
is formatted in the collector (and the event record is put under the Kubernetes
key), the log is forwarded to MUX and passed to the k8s-meta plug-in there and
the existing Kubernetes record is overwritten. It wiped out the event
information from the log.
Fix 1:
To avoid the replacement, if the log is from event router, the tag is rewritten
to
${tag}.raw in input-post-forward-mux.conf, which makes the log treated
in the
MUX_CLIENT_MODE=minimal way.
Fix 2:
There was another bug in Ansible. That is, the environment variable
TRANSFORM_EVENTS was not set in MUX even if
openshift_logging_install_eventrouter is set to
true.
With these two bug fixes, the event logs are correctly logged when MUX is
configured with
MUX_CLIENT_MODE=maximal as well as minimal.
(BZ#1632895)
In OpenShift Container Platform 3.10 and newer, the API server runs as a static pod and only
mounted /etc/origin/master and /var/lib/origin inside that pod. CAs
trusted by the host were not trusted by the API server. The API server pod
definition now mounts /etc/pki into the pod. The API server now trusts all
certificate authorities trusted by the host, including those defined by the
installer variable
openshift_additional_ca. This can be used to import image
streams from a registry verified by a private CA.
(BZ#1641657)
The TCP connection is now reused when using the OSB Client Library. (BZ#1641796)
With this bug fix, the timeout is increased to a sufficiently large value to avoid this problem. Artifact reuse should no longer time out. (BZ#1642350)
Previously, the cluster console in OpenShift Container Platform 3.11 would always show the
value
0 for the crashlooping pods count on the cluster status page, even when
there were crashlooping pods. The problem is now fixed and the count now
accurately reflects the count for the selected projects.
(BZ#1643948)
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2018-12-12
OpenShift Container Platform release 3.11.51 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2018:3743 advisory. The container images included in the update are provided by the RHBA-2018:3745 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2018-12-13
OpenShift Container Platform release 3.11 is now available with updates to packages for ppc64le. The list of packages and bug fixes included in the update are documented in the RHBA-2018:3688 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-01-10
OpenShift Container Platform release 3.11.59 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2019:0024 advisory. The container images included in the update are provided by the RHBA-2019:0023 advisory.
Space precluded documenting all of the bug fixes and enhancements for this release in the advisory. See the following sections for notes on upgrading and details on the bug fixes and enhancements included in this release.
The openshift-ansible OpenStack playbook defaulted to the Kuryr-Kubernetes
multi-pool driver, but that functionality was not merged on stable/queens
kuryr-controller. This bug fix adds the option to select the pool driver to use
for versions older than stable/queens. For newer versions, it is sufficient to set
kuryr_openstack_pool_driver to
multi, as described in the documentation.
(BZ#1573128)
The OpenShift Ansible installer did not check whether any CNS were created before
creating a security group. It would create a security group for CNS even when
there were none created. The OpenShift Ansible installer now checks that
openshift_openstack_num_cns is greater than zero before creating a security
group for CNS. CNS security groups are now only created when there is at least
one CNS created.
(BZ#1613438)
The ability to leave swap enabled is now removed and the
openshift_disable_swap variable is deprecated. This variable was never
publicly documented and was only used internally. Documentation has stated that
system swap should be disabled since version 3.4.
(BZ#1623333)
An incorrect
etcdctl command was used during etcd backup for system
containers, causing the etcd backup to fail during upgrade. The etcd system
container is now identified correctly. The upgrade succeeds with etcd in the
system container.
(BZ#1625534)
During etcd scaleup, facts about the etcd cluster are required in order to add new hosts. The necessary tasks are now added to ensure those facts are set before configuring new hosts and, therefore, allow the scale-up to complete as expected. (BZ#1628201)
The sync daemonset did not run on all nodes. The upgrade failed, as some nodes did
not have an annotation set. With this bug fix, the
sync daemonset now tolerates
all taints and runs on all nodes and the upgrade succeeds.
(BZ#1635462)
The sync daemonset did not wait a sufficient amount of time for nodes to restart.
The sync DS verification task failed, as nodes did not come up in time. The number
of retries was increased and the install or upgrade now succeeds.
(BZ#1636914)
A deployment would take longer than some of the infrastructure or API server-related timeouts. Long-running deployments would fail. The deployer is now fixed to tolerate long running deployments by re-establishing the watch. (BZ#1638140)
Ansible 2.7.0 changed the way variables were passed to roles. Some roles did not have necessary variables set, resulting in a failed installation. The required Ansible version is now set to 2.6.5 and the installation succeeds. (BZ#1638699)
Node, pod, and control-plane images were not pre-pulled when CRI-O was used. Tasks timed out, as they included pull time. Images are now pre-pulled when Docker and CRI-O are used and the installation succeeds. (BZ#1639201)
The scale-up playbooks, when used in conjunction with Calico, did not properly configure the Calico certificate paths causing them to fail. The playbooks have been updated to ensure that master scale-up with Calico works properly. (BZ#1644416)
In some cases, CRI-O was restarted before verifying that the image pre-pull was finished. Images were not pre-pulled. Now, CRI-O is restarted before image pre-pull begins and installation succeeds. (BZ#1647288)
The CA was not copied to the master config directory when GitHub Enterprise was
used as an identity provider. The API server failed to start without a CA. New
variables,
openshift_master_github_ca and
openshift_master_github_ca_file,
were introduced to set the GitHub Enterprise CA and installation now succeeds.
(BZ#1647793)
The curator image was built with the wrong version of the python-elasticsearch package and the curator image would not start. Use the correct version of the python-elasticsearch package to build the curator image and the curator image works as expected. (BZ#1648453)
There was improper evaluation of a user’s Kibana index. A minor upgrade in the server version caused an error when the expected configuration object was not as expected; its creation was skipped due to the existence of the Kibana index. The fix removes a user’s Kibana index, evaluates the stored version against the Kibana version, and recreates the configuration object if necessary. With this bug fix, users no longer see the error. (BZ#1652224)
Egress IPs now work reliably. (BZ#1653380)
A bug in earlier releases of cluster-logging introduced Kibana index-patterns
where the title was not properly replaced and was left with the placeholder of
'$TITLE$'. As a result, the user sees a permission error of no permissions for
[indices:data/read/field_caps]. Remove all index-patterns that have the
bad data, either by upgrading or running:
$ oc exec -c elasticsearch -n $NS $pod --es_util \ --query=".kibana.*/_delete_by_query?pretty" -d \ "{\"query\":{\"match\":{\"title\":\"*TITLE*\"}}}"
With this bug fix, the permission error is no longer generated. (BZ#1656086)
A new playbook was added to clean up etcd2 data. If the cluster was upgraded from OpenShift Container Platform 3.5, it might still carry etcd2 data and use up space. The new playbook safely removes etcd2 data. (BZ#1514487)
A new multi-pool driver is added to Kuryr-Kubernetes to support hybrid environments where some nodes are bare metal while others are running inside VMs, therefore having different pod VIF drivers (e.g., neutron and nested-vlan). To make use of this new feature, the available configuration mappings for the different pools and pod_vif drivers need to be specified in the kuryr.conf configmap. In addition, the nodes must be annotated with the correct information about the pod_vif to be used. Otherwise, the default one is used. (BZ#1553070)
Scale-out Ansible playbooks for OpenStack-deployed clusters are now added.
When installing OpenShift on top of OpenStack with the OpenStack provisioning
playbooks (
playbooks/openstack/openshift-cluster/provision_install.yml),
scaling the cluster out required several manual steps such as writing the
inventory by hand and running two extra playbooks. This was more brittle,
required more complex documentation, and did not match the initial deployment
experience. To scale out OpenShift on OpenStack, you can now change the desired
number of nodes and run one of the following playbooks (depending on whether you
want to scale the worker or master nodes):
playbooks/openstack/openshift-cluster/node-scaleup.yml playbooks/openstack/openshift-cluster/master-scaleup.yml
Define the recreate strategy timeout for Elasticsearch. There are examples on AWS OpenShift clusters where rollout of new Elasticsearch pods fail because the cluster is having issues attaching storage. Defining a long recreate timeout allows the cluster more time to attach storage to the new pod. Elasticsearch pods have more time to restart and experience fewer rollbacks. (BZ#1655675)
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-01-31
OpenShift Container Platform release 3.11.69 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2019:0096 advisory. The container images included in the update are provided by the RHBA-2019:0097 advisory.
Space precluded documenting all of the bug fixes and enhancements for this release in the advisory. See the following sections for notes on upgrading and details on the bug fixes and enhancements included in this release.
The location of the master proxy API changed. Since the MetricsApiProxy diagnostic uses this endpoint, it broke. The diagnostic was updated to look at the correct endpoint and it should now work as expected. (BZ#1632983)
Pods would not schedule because they did not have free ports. This issue is now resolved. (BZ#1647674)
Bootstrap v3.3.5 contains a Cross-Site Scripting (XSS) vulnerability. The management console does not allow user input to be displayed via a data-target attribute. Upgrade Bootstrap to v3.4.0, which fixes the vulnerability. With this bug fix, the management console is no longer at risk of possible exploit via the Cross-Site Scripting (XSS) vulnerability in Bootstrap v3.3.5. (BZ#1656438)
Improper error checking ignored errors from object creation during template instantiation. Template instances would report successful instantiation when some objects in the template failed to be created. Errors on creation are now properly checked and the template instance will report failure if any object within it cannot be created. (BZ#1662339)
The rsync package was removed from the registry image, so rsync cannot be used to backup content from the registry container. The rsync package is now added back to the image and can now be used. (BZ#1664853)
This enhancement ensures that OpenShift-on-OpenStack playbook execution will fail at the prerequisites check if the public net ID is not configured when the Kuryr SDN is used. (BZ#1579414)
You can now control the assignment of floating IP addresses for OpenStack cloud provisioning. The playbook responsible for creating the OpenStack virtual servers would always associate a floating IP address with each virtual machine (each OpenShift node). This had two negative implications:
The OpenShift cluster size was limited by the number of floating IPs available to the OpenStack user.
All OpenShift nodes were directly accessible from the outside, increasing the potential attack surface.
A role-based control over which nodes get floating IPs and which do not is now introduced. This is controlled by the following inventory variables:
openshift_openstack_master_floating_ip
openshift_openstack_infra_floating_ip
openshift_openstack_compute_floating_ip
openshift_openstack_load_balancer_floating_ip
They are all boolean and all default to
true. This allows for use cases such as:
A cluster where all the master and infra nodes have floating IPs but the compute nodes do not.
A cluster where none of the nodes have floating IPs, but the load balancers do (so OpenShift is used through the load balancers, but none of the nodes are directly accessible).
If some of the nodes do not have floating IPs (by setting
openshift_openstack_compute_floating_ip = false), the openshift-ansible
playbooks must be run from inside the node network. This is because a server
without a floating IP is only accessible from the network it is in. A common way
to do this is to pre-create the node network and subnet, create a "bastion" host
in it, and run Ansible there:
$ openstack network create openshift $ openstack subnet create --subnet-range 192.168.0.0/24 --dns-nameserver 10.20.30.40 --network openshift openshift $ openstack router create openshift-router $ openstack router set --external-gateway public openshift-router $ openstack router add subnet openshift-router openshift $ openstack server create --wait --image RHEL7 --flavor m1.medium --key-name openshift --network openshift bastion $ openstack floating ip create public $ openstack server add floating ip bastion 172.24.4.10 $ ping 172.24.4.10 $ ssh [email protected]
Then, install openshift-ansible and add the following to the inventory (inventory/group_vars/all.yml):
openshift_openstack_node_network_name: openshift openshift_openstack_router_name: openshift-router openshift_openstack_node_subnet_name: openshift openshift_openstack_master_floating_ip: false openshift_openstack_infra_floating_ip: false openshift_openstack_compute_floating_ip: false openshift_openstack_load_balancer_floating_ip: false
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-02-20
OpenShift Container Platform release 3.11.82 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2019:0326 advisory. The container images included in the update are provided by the RHBA-2019:0327 advisory.
Space precluded documenting all of the bug fixes and enhancements for this release in the advisory. See the following sections for notes on upgrading and details on the bug fixes and enhancements included in this release.
Not all Docker-related packages were removed during the uninstall process, so Docker was not re-installed properly during installation, causing Docker CLI tasks to fail. With this bug fix, all related Docker packages are now removed by the uninstall playbook. Re-installation succeeds after running the uninstall playbook. (BZ#1635254)
Polling of quotas resulted in undesirable toast notifications. Now, quota polling errors are suppressed and users no longer see these notifications. (BZ#1651090)
Previously, running the install playbook multiple times with no changes to the cluster console configuration could cause the cluster console login to stop working. The underlying problem has been fixed, and now running the playbook more than once will correctly roll out a new console deployment. This problem can be worked around without the installer fix by manually deleting the console pods using the command:
$ oc delete --all pods -n openshift-console
Certain certificate expiry check playbooks did not properly call initialization functions, resulting in an error. Those playbooks have been updated to avoid this problem. (BZ#1655183)
Because the OpenShift SDN/OVS DaemonSets were upgraded during control plane
upgrades with an
updateStrategy of
RollingUpdate, an upgrade of the
pods in the entire cluster was performed. This caused unexpected network
and application outages on nodes. This bug fix changed the
updateStrategy for
SDN/OVS pods to
OnDelete in the template, affecting only new
installations. Control plane upgrade tasks were added to modify the SDN/OVS
daemonsets to use the
OnDelete
updateStrategy. Node upgrade tasks were
added to delete all SDN/OVS pods while nodes are drained. Network outages
for nodes should only occur during the node upgrade when nodes are drained.
(BZ#1657019)
Previously, the 3.11 admin console did not correctly display whether a storage class was the default storage class, as it was checking an out-of-date annotation value. The admin console has been updated to use the
storageclass.kubernetes.io/is-default-class=true annotation, and storage classes are now properly marked as default when that value is set.
(BZ#1659976)
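For reference, a storage class can be marked as the default with this annotation; the class name used here is hypothetical:

$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'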
A change introduced in Kubernetes 1.11 affected nodes with many IP addresses in
vSphere deployments. Under vSphere, a node hosting several
Egress IPs or
Router HA addresses would sporadically lose IP addresses and start using one of the other ones, causing networking problems. Now, if a
node IP is specified in the node configuration, it will be used correctly, regardless of how many other IP addresses are assigned to the node.
(BZ#1666820)
A type error in the OpenStack code prevented installation on OpenShift nodes without floating IP addresses. This error has been corrected, and installation proceeds as expected. (BZ#1667270)
Certain certificate expiry check playbooks did not call initialization functions properly, resulting in an error. Those playbooks have been updated to avoid this issue. (BZ#1667618)
The cluster role
system:image-pruner was required for all DELETE
requests to the registry. As a result, the regular client could not cancel
its uploads, and the
S3 multipart uploads were accumulating. Now, the
cluster role
system:image-pruner will accept DELETE requests for uploads
from clients who are allowed to write into them.
(BZ#1668412)
If the specified router certificate, key, or CA did not end with a new line character, the router deployment would fail. A new line is now appended to each of the input files ensuring this problem doesn’t occur. (BZ#1668970)
The
volume-config.yaml was not copied to /etc/origin/node. As a result, volume quotas were not observed, so local storage size was not limited. Now, the
volume-config.yaml is copied to
/etc/origin/node. Volume quotas are observed and local storage size is limited by setting
openshift_node_local_quota_per_fsgroup in the inventory.
(BZ#1669555)
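For example, a hedged inventory entry; the quota value shown is illustrative:

openshift_node_local_quota_per_fsgroup=512Mi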
oc image mirror failed with error
tag: unexpected end of JSON input when attempting to mirror images from the Red Hat registry. This was a result of commits from a dependency being dropped from the product build. The commits have been re-introduced, and the command can now parse the output successfully, as well as mirror from the Red Hat registry.
(BZ#1670551)
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-03-14
OpenShift Container Platform release 3.11.88 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2019:0407 advisory. The container images included in the update are provided by the RHBA-2019:0406 advisory.
With this release, Kuryr is now moved out of Technology Preview and now generally available.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-04-10
An update for jenkins-2-plugin is now available for OpenShift Container Platform 3.11. Details of the update are documented in the RHSA-2019:0739 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-04-11
OpenShift Container Platform release 3.11.98 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2019:0636 advisory. The container images included in the update are provided by the RHBA-2019:0637 advisory.
Space precluded documenting all of the bug fixes and enhancements for this release in the advisory. See the following sections for notes on upgrading and details on the bug fixes and enhancements included in this release.
Administrative users were not able to access the cluster endpoints because permissions were defined incorrectly. Now, the correct permissions have been defined, and administrative users can use the
_cat endpoints.
(BZ#1548640)
Image garbage collection failed to remove an image correctly if it has only one tag but more than one repository associated with the image. This has now been resolved and garbage collection completes successfully. (BZ#1647348)
The
docker registry Health Check would fail if the bucket was empty on AWS S3 environments, returning a
PathNotFound message. Now,
PathNotFound is treated as a success and Health Check works as expected for empty buckets.
(BZ#1655641)
Playbooks ran a check to see if images existed on the disk with specific version tags, but did not ensure the version on the disk was up-to-date to the tagged image in the repo, resulting in skipping the z-stream image pulls, and z-stream upgrades would fail. Now, the on-disk check has been removed, and image pulls are efficient so that there is no need to check whether the image exists on the disk prior to downloading. (BZ#1658387)
Health Check playbooks would fail at checking
Elasticsearch because the exec call would not specify a container. The call failed because the output included incorrectly formatted JSON text. Now, the target container is included in the
exec call and the Health Check succeeds.
(BZ#1660956)
An error in
glusterfs pod mount points prevented the use of
gluster-block. As a result, the provisioner would fail to create devices. The mount points have now been updated and the provisioning process succeeds as expected.
(BZ#1662312)
The
openshift-ansible package was incorrectly checking if a value in the
etcd-servers-overrides was a valid path. Some values were considered invalid by the
openshift-ansible-3.11.51-2.git.0.51c90a3.el7.noarch package. Now,
etcd-servers-overrides does not contain paths, and is ignored during path checks.
(BZ#1666491)
etcd non-master host nodes were excluded from upgrades. Now,
etcd host nodes are able to be upgraded.
(BZ#1668317)
The Ansible variable openshift_master_image_policy_allowed_registries_for_import was incorrectly parsed, causing a corrupted master-config.yaml file. Now, the openshift_master_image_policy_allowed_registries_for_import variable is correctly parsed and a simple registry image policy can be set as expected. (BZ#1670473)
The playbooks and manual configuration steps to redeploy router certificates replaced them with the service serving certificate secret. This would overwrite or miss the router wildcard certificate secret, causing certificate errors because incorrect certificates were redeployed. Now, the playbooks and manual redeployment steps do not overwrite the router certificate secret. The router certificates are redeployed based on the specified subdomain or custom certificates. (BZ#1672011)
The ImageStream used in the BuildConfig editor did not have edit properties, causing runtime errors in the BuildConfig editor. Now, the editor initializes tags and objects even if the ImageStream in the BuildConfig is missing or if the user does not have the correct permissions to use it. (BZ#1672904)
Master pods did not match time zones with worker nodes, which led to errors in logging timestamps. Now, the host’s timezone configuration is mounted into the control plane pods. (BZ#1674170)
When a cluster was installed, the user name in the loopback kubeconfig was the same as the host name of the master. Now, the variable in the playbook is set to a different value. (BZ#1675133)
The Ansible Health Check playbook failed when checking the curator status. This occurred because the Health Check assumed curator was a DeploymentConfig instead of a cronjob, resulting in a failed check. Now, Health Check properly evaluates for a cronjob instead of a DeploymentConfig. (BZ#1676720)
Some namespaces would be missing from oc get projects if more than 1,000 projects were listed. Now, all items correctly appear when looking at large resource lists. (BZ#1677545)
High network latency existed between Kibana and Elasticsearch due to either network issues or under-allocated memory for Elasticsearch. As a result, Kibana would be unusable because of a gateway timeout. Now, changes are backported from Kibana version 6, which allows modification of the ping timeout. Administrators are now able to override the default pingTimeout of 3000ms by setting the ELASTICSEARCH_REQUESTTIMEOUT environment variable. Kibana remains functional until the underlying network issues or under-allocated memory conditions can be resolved. (BZ#1679159)
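For instance, the timeout override is applied as an environment variable on the Kibana deployment. A minimal sketch, assuming a default aggregated logging deployment in the openshift-logging namespace with a deploymentconfig named logging-kibana; the 60000ms value is purely illustrative:
# Raise the Elasticsearch request timeout for Kibana (value in milliseconds is an example only):
$ oc -n openshift-logging set env dc/logging-kibana ELASTICSEARCH_REQUESTTIMEOUT=60000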
The defaultIndex in the Kibana config entry was null, causing the seeding process to fail and the user to be presented with a white screen. Now, the defaultIndex value is evaluated and returns to the default screen if there is a null value. The Kibana seeding process completes successfully. (BZ#1679613)
Previously, the upgrade process for CRI-O would attempt to stop docker on nodes that had been configured to only run CRI-O, resulting in playbook failures. Now, the playbook does not stop docker on nodes that are configured only for CRI-O operation, ensuring successful upgrades. (BZ#1685072)
Using MERGE_JSON_LOG=true would create fields in the record that would cause syntax violations or create too many fields in Elasticsearch, causing severe performance problems. Now, users who experience these problems can tune fluentd to accommodate their log record fields without errors or Elasticsearch performance degradation. (BZ#1685243)
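A minimal sketch of one such adjustment, assuming the default openshift-logging namespace and the standard logging-fluentd daemonset name; disabling JSON merging is the simplest mitigation named above, and other tunables depend on the deployed image:
# Stop merging JSON log bodies into top-level fields to avoid the field explosion:
$ oc -n openshift-logging set env daemonset/logging-fluentd MERGE_JSON_LOG=false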
The SSL and TLS service used Diffie-Hellman groups with insufficient strength (a key size of less than 2048 bits). As a result, the keys were more vulnerable. Now, the key strength has been increased and certificates are more secure. (BZ#1685618)
The fluentd daemonset did not include a tolerate everything toleration. If a node became tainted, the fluentd pod would get evicted. Now, a tolerate everything toleration has been added, and fluentd pods do not get evicted. (BZ#1685970)
Upgrade playbooks ran several oc commands that used resource aliases that may not be immediately available after a restart or for other reasons. Now, the oc suite of commands uses the fully qualified resource name to avoid potential failure. (BZ#1686590)
The files that implemented log rotation functionality were not copied to the correct fluentd directory. As a result, logs were not being rotated. Now, the container build has been changed to inspect the fluentd gem to find out where to install the files. The files that implement log rotation are copied to the correct directory for fluentd usage. (BZ#1686941)
The command oc label --list has been added and shows the resource and name of all the labels. (BZ#1268877)
This enhancement allows the AWS cloud provider to parse additional endpoint configuration and customization of both core Kubernetes and cluster autoscaler environments. AWS now allows custom and private regions, which do not follow the conventions of its public cloud endpoints. OpenShift Container Platform deployments were previously limited to the public AWS cloud regions, which limited adoption of the product in these scenarios. Additional configuration elements can be added to the aws.conf file and will be honored by OpenShift Container Platform as well as the cluster-autoscaler to ensure the correct cloud endpoints are used to automatically provision EBS volumes, load balancers, and EC2 instances. (BZ#1644084)
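As a rough illustration only, custom endpoints are typically expressed as extra override sections in the cloud provider configuration; the file path, section name, region, and URL below are assumptions and should be checked against the AWS cloud provider documentation for this release:
# Append a hypothetical service override to the AWS cloud provider config on each master:
$ sudo tee -a /etc/origin/cloudprovider/aws.conf <<'EOF'
[ServiceOverride "1"]
Service = ec2
Region = us-custom-1
URL = https://ec2.internal.example.com
SigningRegion = us-custom-1
EOF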
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-06-06
OpenShift Container Platform release 3.11.104 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2019:0794 advisory. The container images included in the update are provided by the RHBA-2019:0795 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, use the automated upgrade playbook. See Performing Automated In-place Cluster Upgrades for instructions.
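For reference, a typical invocation of the automated in-place upgrade looks like the following; the inventory path is a placeholder, and the playbook location assumes the RPM-installed openshift-ansible package layout:
$ ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade.yml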
Issued: 2019-06-26
OpenShift Container Platform release 3.11.117 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2019:1605 advisory. The container images included in the update are provided by the RHBA-2019:1606 advisory.
The oc create route dry-run -o yaml command would not output a route object. This has been resolved by implementing the printing of the route object to the command line. (BZ#1418021)
Some .operations index projects were given a value of default openshift-. This has now been changed to kube-system. (BZ#1571190)
On a director-deployed OpenShift environment, the GlusterFS playbooks auto-generate a new heketi secret key for each run. As a result of this, operations such as scale out or configuration changes on CNS deployments fail. As a workaround, complete the following steps:
Post-deployment, retrieve the heketi secret key. Use this command on one of the master nodes:
$ sudo oc get secret heketi-storage-admin-secret --namespace glusterfs -o json | jq -r .data.key | base64 -d
In an environment file, set the openshift_storage_glusterfs_heketi_admin_key and openshift_storage_glusterfs_registry_heketi_admin_key parameters to that value.
As a result of this workaround, operations such as scale out or configuration changes on CNS deployments work as long as the parameters were manually extracted. (BZ#1640382)
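A compact sketch of the workaround above; the file name glusterfs-parameters.yaml is an assumption, and the exact format depends on the deployment tooling already in use for these parameters:
# Capture the existing heketi admin key (run on a master node):
$ HEKETI_KEY=$(sudo oc get secret heketi-storage-admin-secret --namespace glusterfs -o json | jq -r .data.key | base64 -d)
# Persist it so later playbook runs reuse the same key instead of generating a new one:
$ cat >> glusterfs-parameters.yaml <<EOF
openshift_storage_glusterfs_heketi_admin_key: "${HEKETI_KEY}"
openshift_storage_glusterfs_registry_heketi_admin_key: "${HEKETI_KEY}"
EOF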
When a new CA was generated, the certificates on the nodes were not updated and the nodes would not become ready. Now, the redeploy-certificates playbook will copy the certificates and join nodes. Nodes no longer go to a NotReady state when replacing the CA. (BZ#1652746)
The oc_adm_router Ansible module allowed edits to add duplicate environment variables to the router DeploymentConfig. An Ansible inventory file that specified edits to the router DeploymentConfig that added duplicate environment variables could produce a DeploymentConfig with unpredictable behavior. If an edit appends an environment variable to the router DeploymentConfig, and a variable by that name already exists, the oc_adm_router module now deletes the old variable. Using an Ansible inventory file to append environment variables to the router DeploymentConfig now has predictable behavior and allows users to override default environment variable settings. (BZ#1656487)
A playbook which redeployed master certificates did not update web console secrets, causing the web console to fail to start. Now, web console secrets are recreated when the master certificate redeployment playbook is run. (BZ#1667063)
The logging playbooks did not work with Ansible 2.7. The include_role and import_role behavior changed between versions 2.6 and 2.7, which caused issues with logging. As a result, errors with "-ops" suffixes would appear even when not deploying with the ops cluster. To resolve this, the logging playbooks and roles now use include_role instead of import_role. The logging Ansible code works on both Ansible 2.6 and Ansible 2.7. (BZ#1671315)
Undesired DNS IP addresses were selected by the OpenShift service if multiple network cards were present. As a result, DNS requests failed to work from pods. Now, there are sane defaults present for DNS and it follows a similar pattern used by kubelet to fetch routable node IP addresses. (BZ#1680059)
Initialization during upgrades was slow. Sanity checks were using inefficient code to validate host variables. This code has been updated and host variables are now stored in the class. As a result, the host variables are not copied on every check, and the sanity checks and initialization during upgrades take less time to complete. (BZ#1682924)
The oreg_url variable would not function correctly on disconnected installs using Satellite because the etcd image could not be pulled on disconnected installs. Now, guidance and examples have been added to the associated documentation for specifying the etcd image URL using osm_etcd_image. (BZ#1689796)
If a build pod was evicted, the build reported a GenericBuildFailure. Determining the cause of build failures was difficult as a result. Now a new failure reason, BuildPodEvicted, has been added. (BZ#1690066)
Nodes would sometimes panic due to cAdvisor index out of range errors. This has now been resolved by backporting upstream Kubernetes code. (BZ#1691023)
ElasticSearch could not be monitored with Prometheus because the oauth-proxy was not passing a user’s token. Now, the token is exchanged to ElasticSearch and users with proper roles can retrieve metrics in Prometheus. (BZ#1695903)
Deploying nodes would fail in the setup_dns.yaml playbook during multi-node setup. This was resolved by fixing the host name that was passed to the add_host function. Now, multi-node setup proceeds as expected. (BZ#1698922)
Upgrading between minor versions would fail because several OpenShift variables were not used during the upgrade process. Now, api_port and other apiserver-related variables are read during the upgrade process and upgrades complete successfully. (BZ#1699696)
ElasticSearch would fail to start due to invalid certificate dates if hosts had non-UTC timezones. When OpenShift nodes' timezone is not set to UTC, the current non-UTC timestamp is used for the NotBefore checking. If the timezone is ahead of UTC, the NotBefore checking would fail. Now, regardless of the nodes' timezone, the UTC timestamp is set to the start date in the certificates and failures are not reported due to non-UTC timestamps. (BZ#1702544)
CustomResourceDefinition errors were presented in a confusing manner that made troubleshooting difficult. Now, the CRD error messages have been clarified to assist in troubleshooting CRD errors. (BZ#1702693)
There was a missing @ for an instance variable in the Fluentd remote syslog plugin code. In some cases, systemd-journald logged errant values. This resulted in rsyslog forwarding failures. Now, the variable has been corrected and remote logging completes successfully. (BZ#1703904)
Long running Jenkins agents and slave pods would experience defunct process errors, causing a high number of processes to appear in process listings until the pod is terminated. Now, dumb-init is deployed to clean up these defunct processes. (BZ#1707448)
The environment variable JOURNAL_READ_FROM_HEAD was set to an empty string. This caused the default value of read_from_head for the journald input to be true. When Fluentd starts up for the first time on a node, it reads in the entire journal. This could result in hours of delays for system messages to show up in ElasticSearch and Kibana. Now, Fluentd will check if the value is set and is not empty, or will use the default value of false. Fluentd will read from the tail of the journal when it starts on a new node. (BZ#1707524)
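If reading the full journal on a new node is actually desired, the old behavior can still be requested explicitly. A sketch assuming the default openshift-logging namespace and logging-fluentd daemonset name:
# Opt back in to reading the journal from the beginning on first start:
$ oc -n openshift-logging set env daemonset/logging-fluentd JOURNAL_READ_FROM_HEAD=true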
The script 99-origin-dns.sh had a debug flag set to enabled, which would log debug level messages by default. This has been resolved and debug is now set to false. (BZ#1707799)
Kubernetes pod templates were removed at random. This was because the OpenShift Jenkins Sync plugin confused ImageStreams and ConfigMaps with the same name while processing them. An event for one type could delete the pod template created for another type. The plugin has been modified to keep track of which API object type created the pod template of a given name. (BZ#1709626)
The openshift_set_node_ip variable was deprecated, but still included in inventory example files. This has now been removed from example files and code for the openshift_set_node_ip variable has been cleaned up. (BZ#1712488)
Previously, the web console could show an incorrect "Scaling to…" value for stateful sets in the project overview under some conditions. The stateful set desired replicas value now correctly updates in the web console project overview. (BZ#1713211)
Previously, a service would not correctly show up in the project overview when it selected the DeploymentConfig label that is automatically set for pods created by a deployment config. Now, the overview correctly shows services that select the DeploymentConfig label. (BZ#1717028)
The cluster autoscaler did not have the clusterrole permission to evict pods and nodes would not be automatically scaled as a result. Now, eviction permissions have been added to the autoscaler cluster role. Pods can be evicted and nodes can be scaled down. (BZ#1718458)
If a pod using an egress IP tried to contact an external host that was not responding, the egress IP monitoring code may have mistakenly interpreted that as meaning that the node hosting the egress IP was not responding. High-availability egress IPs may have been switched from one node to another spuriously. The monitoring code now distinguishes the case of "egress node not responding" from "final destination not responding". High-availability egress IPs will not be switched between nodes unnecessarily. (BZ#1718542)
Refactoring of openshift_facts caused the MTU to be improperly set. Hosts could not communicate on networks with non-default MTU settings. The openshift_facts.py script was updated to properly detect and set the MTU for the host environment. Hosts now can properly communicate on networks with non-default MTU. (BZ#1720581)
The Cisco ACI CNI plugin is now available. (BZ#1708552)
You can now use an Ansible playbook to perform a certificate rotation for the EFK stack without needing to run the install/upgrade playbook. This playbook deletes the current certificate files, generates new EFK certificates, updates certificate secrets, and restarts ElasticSearch and Kibana. (BZ#1710424)
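A sketch of running that rotation; the playbook path below is an assumption based on the layout of the openshift-ansible package and should be verified against the installed package before use:
$ ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/redeploy-certificates.yml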
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-06-27
An update for atomic-openshift is now available for OpenShift Container Platform 3.11. Details of the update are documented in the RHSA-2019:1633 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, use the automated upgrade playbook. See Performing Automated In-place Cluster Upgrades for instructions.
Issued: 2019-07-23
OpenShift Container Platform release 3.11.129 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2019:1753 advisory. The container images included in the update are provided by the RHBA-2019:1754 advisory.
In OpenShift on Azure environments, conditional arguments were missing that would result in incorrect kubelet node names in certain cases. The conditionals to set nodeName in node-config were added, and now kubelet names can be set as required. (BZ#1656983)
Health check playbooks would assume Curator was a deploymentconfig instead of a cronjob, and would fail the check because the resource type had changed. Now, the health check playbook properly evaluates for a cronjob instead of a deploymentconfig. (BZ#1676720)
Some OpenShift Container Platform installations would fail because the selinux check was occurring in the openshift_node role instead of the init role. Now, the selinux check occurs earlier in the installation process and is completed successfully. (BZ#1710020)
Access to the ElasticSearch root URL was denied from a project’s pod in OpenShift Container Platform 3.11 instances that had been upgraded from version 3.10. This was due to overly strict permissions that denied non-administrative users access to the root endpoints. Now, permissions have been changed so that all users are able to access the root endpoints. (BZ#1710868)
ElasticSearch metrics were unavailable in the Prometheus role. Now, the Prometheus role has been granted access to monitor all ElasticSearch indices. (BZ#1712423)
ImageStreams would fail if not using a hosted managed registry due to an unset referencePolicy field. Now, the dictionary has been changed to read and modify the referencePolicy as needed, and ImageStreams can be used without a hosted managed registry. (BZ#1712496)
The templateinstance controller did not properly manage cluster-level objects in its create path, and as a result failed to create projects specified in templates. Now, the templateinstance controller determines whether the objects in its create path are cluster-scoped and passes correct values in secrets through namespaces. The templateinstance can now create projects as defined in templates. (BZ#1713982)
Redeployment of certificates did not recreate the ansible-service-broker pod secrets, causing the service catalog to fail. A new playbook has been created to support updating the certificates. (BZ#1715322)
The IPv4 dictionary was recently modified and MTU was set incorrectly as a result. This IPv4 conditional has been removed, and now MTU is established correctly. (BZ#1719362)
The pom.xml of some of the OpenShift Jenkins plugins had http:// references instead of https:// references for some of their build time dependencies, and dependency downloads would occur over http instead of the https protocol. The pom.xml references have now been corrected and dependency downloads only occur using the https protocol. (BZ#1719477)
The readiness probe for ElasticSearch curl commands used NSS, which bloated the dentry cache. This would cause ElasticSearch to become unresponsive. To resolve this, the NSS_SDB_USE_CACHE=no flag is now set in the readiness probe to work around the dentry cache bloating. (BZ#1720479)
Previously, the web console showed a misleading warning that metrics might not be configured for horizontal pod autoscalers when only the metrics server had been set up. The warning has been removed. (BZ#1721428)
Previously, the image-signature-import controller would only import up to three signatures, but the registry would often have more than three signatures. This would cause importing signatures to fail. The limit of signatures has been increased, and importing signatures from registry.redhat.io completes successfully. (BZ#1722581)
The prerequisites playbook would fail because default values were not loaded correctly, causing sanity checks to fail. A step to run openshift_facts has been added to load all the default values, and sanity checks complete successfully. (BZ#1724718)
Kibana would present a blank page or timeout if a large number of projects were creating too many calls to the ElasticSearch cluster, resulting in the timeout before a response is returned. Now, API calls are cached and processing is more efficient, reducing the opportunity for page timeouts. (BZ#1726433)
The service catalog did not have a redeploy-certificate playbook. The certificates for the service catalog need to be rotated like other components of OpenShift Container Platform, and a playbook has now been created for the service catalog. (BZ#1702401)
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-08-13
OpenShift Container Platform release 3.11.135 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2019:2352 advisory. The container images included in the update are provided by the RHBA-2019:2353 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-09-03
OpenShift Container Platform release 3.11.141 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2019:2581 advisory. The container images included in the update are provided by the RHBA-2019:2580 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-09-23
OpenShift Container Platform release 3.11.146 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2019:2816 advisory. The container images included in the update are provided by the RHBA-2019:2824 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-10-17
OpenShift Container Platform release 3.11.153 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2019:3138 advisory. The container images included in the update are provided by the RHBA-2019:3139 advisory.
This release updates the Red Hat Container Registry (registry.redhat.io) with the following images:
openshift3/ose-ansible:v3.11.153-3
openshift3/ose-cluster-autoscaler:v3.11.153-2
openshift3/ose-descheduler:v3.11.153-2
openshift3/ose-metrics-server:v3.11.153-2
openshift3/ose-node-problem-detector:v3.11.153-2
openshift3/automation-broker-apb:v3.11.153-2
openshift3/ose-cluster-monitoring-operator:v3.11.153-2
openshift3/ose-configmap-reloader:v3.11.153-2
openshift3/csi-attacher:v3.11.153-2
openshift3/csi-driver-registrar:v3.11.153-2
openshift3/csi-livenessprobe:v3.11.153-2
openshift3/csi-provisioner:v3.11.153-2
openshift3/ose-efs-provisioner:v3.11.153-2
openshift3/oauth-proxy:v3.11.153-2
openshift3/prometheus-alertmanager:v3.11.153-2
openshift3/prometheus-node-exporter:v3.11.153-2
openshift3/prometheus:v3.11.153-2
openshift3/grafana:v3.11.153-2
openshift3/jenkins-agent-maven-35-rhel7:v3.11.153-2
openshift3/jenkins-agent-nodejs-8-rhel7:v3.11.153-2
openshift3/jenkins-slave-base-rhel7:v3.11.153-2
openshift3/jenkins-slave-maven-rhel7:v3.11.153-2
openshift3/jenkins-slave-nodejs-rhel7:v3.11.153-2
openshift3/ose-kube-rbac-proxy:v3.11.153-2
openshift3/ose-kube-state-metrics:v3.11.153-2
openshift3/kuryr-cni:v3.11.153-2
openshift3/ose-logging-curator5:v3.11.153-2
openshift3/ose-logging-elasticsearch5:v3.11.153-2
openshift3/ose-logging-eventrouter:v3.11.153-2
openshift3/ose-logging-fluentd:v3.11.153-2
openshift3/ose-logging-kibana5:v3.11.153-2
openshift3/ose-metrics-cassandra:v3.11.153-2
openshift3/metrics-hawkular-metrics:v3.11.153-2
openshift3/ose-metrics-hawkular-openshift-agent:v3.11.153-2
openshift3/ose-metrics-heapster:v3.11.153-2
openshift3/metrics-schema-installer:v3.11.153-2
openshift3/apb-base:v3.11.153-2
openshift3/apb-tools:v3.11.153-2
openshift3/ose-ansible-service-broker:v3.11.153-2
openshift3/ose-docker-builder:v3.11.153-2
openshift3/ose-cli:v3.11.153-2
openshift3/ose-cluster-capacity:v3.11.153-2
openshift3/ose-console:v3.11.153-2
openshift3/ose-control-plane:v3.11.153-2
openshift3/ose-deployer:v3.11.153-2
openshift3/ose-egress-dns-proxy:v3.11.153-2
openshift3/ose-egress-router:v3.11.153-2
openshift3/ose-haproxy-router:v3.11.153-2
openshift3/ose-hyperkube:v3.11.153-2
openshift3/ose-hypershift:v3.11.153-2
openshift3/ose-keepalived-ipfailover:v3.11.153-2
openshift3/mariadb-apb:v3.11.153-2
openshift3/mediawiki-apb:v3.11.153-2
openshift3/mediawiki:v3.11.153-2
openshift3/mysql-apb:v3.11.153-2
openshift3/node:v3.11.153-2
openshift3/ose-pod:v3.11.153-2
openshift3/postgresql-apb:v3.11.153-2
openshift3/ose-recycler:v3.11.153-2
openshift3/ose-docker-registry:v3.11.153-2
openshift3/ose-service-catalog:v3.11.153-2
openshift3/ose-tests:v3.11.153-2
openshift3/jenkins-2-rhel7:v3.11.153-2
openshift3/local-storage-provisioner:v3.11.153-2
openshift3/manila-provisioner:v3.11.153-2
openshift3/ose-operator-lifecycle-manager:v3.11.153-2
openshift3/ose-web-console:v3.11.153-2
openshift3/ose-egress-http-proxy:v3.11.153-2
openshift3/kuryr-controller:v3.11.153-2
openshift3/ose-ovn-kubernetes:v3.11.153-2
openshift3/ose-prometheus-config-reloader:v3.11.153-2
openshift3/ose-prometheus-operator:v3.11.153-2
openshift3/registry-console:v3.11.153-2
openshift3/snapshot-controller:v3.11.153-2
openshift3/snapshot-provisioner:v3.11.153-2
openshift3/ose-template-service-broker:v3.11.153-2
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-10-18
An update for atomic-openshift is now available for OpenShift Container Platform 3.11. Details of the update are documented in the RHSA-2019:3143 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-11-18
OpenShift Container Platform release 3.11.154 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2019:3817 advisory. The container images included in the update are provided by the RHBA-2019:3818 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-11-18
An update for atomic-openshift is now available for OpenShift Container Platform 3.11. Details of the update are documented in the RHSA-2019:3905 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-12-10
OpenShift Container Platform release 3.11.157 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2019:4050 advisory. The container images included in the update are provided by the RHBA-2019:4051 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-12-16
An update for openshift-enterprise-console-container is now available for OpenShift Container Platform 3.11. Details of the update are documented in the RHSA-2019:4053 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2020-01-09
OpenShift Container Platform release 3.11.161 is now available. The list of packages and bug fixes included in the update is documented in the RHBA-2020:0017 advisory. The container images included in the update are provided by the RHBA-2020:0018 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions.
Issued: 2019-12-16
An update for atomic-openshift is now available for OpenShift Container Platform 3.11. Details of the update are documented in the RHSA-2020:0020 advisory.
To upgrade an existing OpenShift Container Platform 3.10 or 3.11 cluster to this latest release, see Upgrade methods and strategies for instructions. | https://docs.openshift.com/container-platform/3.11/release_notes/ocp_3_11_release_notes.html | 2020-01-17T17:03:53 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.openshift.com |
REST API
Each DeepPavlov model can be easily made available for inference as a REST web service. The general method is:
python -m deeppavlov riseapi <config_path> [-d] [-p <port>]
-d: downloads model-specific data before starting the service.
-p <port>: sets the port to <port>, overriding the default value from the server settings.
The value of the server_utils label in the metadata/labels section of the model config should match a properties key from the model_defaults section of server_config.json. For example, the metadata/labels/server_utils tag from go_bot/gobot_dstc2.json references the GoalOrientedBot section of server_config.json. Therefore, all parameters with non-empty (i.e. not "", not [] etc.) values from model_defaults/GoalOrientedBot override the corresponding defaults.
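Once the service is up, it can be exercised with a plain HTTP request. A minimal sketch, assuming the default /model endpoint, the host and port printed at startup, and a placeholder input argument name ("x" below stands in for whatever input the chosen model config declares):
# Start the service for a chosen config, downloading its data first:
$ python -m deeppavlov riseapi <config_path> -d -p 5000
# Query it; the JSON key must match the model's declared input argument name:
$ curl -X POST http://127.0.0.1:5000/model \
    -H 'Content-Type: application/json' \
    -d '{"x": ["hello"]}'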
Building a Secure Foundation for IoT
You may have heard that IoT stands for Internet of Threats; however, this doesn't mean that you can't have a secure foundation for adopting IoT (which really stands for Internet of Things). Security for IoT is not a single button that you press and then declare: now we are secure. There are many variables to consider, and due to the vast number of elements involved in IoT, you must think broadly while architecting your solution. This week we released three new articles that provide guidelines on how to enhance your security for IoT adoption.
On this page, you will see a new Security option with links to these three new articles:
The new articles are:
- Internet of Things (IoT) security architecture
- Securing your Internet of Things from the ground up
- Internet of Things (IoT) security best practices
Stay safe! | https://docs.microsoft.com/en-us/archive/blogs/yuridiogenes/building-a-secure-foundation-for-iot | 2020-01-17T17:43:32 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.microsoft.com |
Connect(); 2017
Volume 32 Number 13
Joseph Sirosh.
Larry O'Brien
Deliver cutting-edge AI/ML solutions on mobile devices using Xamarin technologies and native libraries such as CoreML and Tensorflow Android Inference.
Matt Gibbs
App Center streamlines your mobile app workflow. Learn how to automate your app lifecycle in a few easy steps, by connecting your iOS, Android and Windows apps to App Center for continuous integration, automated testing, distribution, monitoring and engagement.
Willy-Peter Schaub.
Alex Karcher
This article explores common Serverless API design patterns, showcasing Azure Functions Proxies as the cornerstone feature in developing a Serverless API. Readers will learn how to design a Serverless API and master common patterns.
Julie Lerman
Take a tour of the new SQL Operations Studio, a free, standalone tool that works with Azure SQL Database, Azure SQL Data Warehouse and SQL Server running anywhere. Even better—it’s cross-platform!
Immo Landwerth.
Stephen Toub
Span<T> is a new type in .NET that enables efficient access to contiguous regions of arbitrary memory. This article introduces Span<T>, Memory<T>, and related functionality, and provides details on how they are quickly permeating their way throughout the .NET ecosystem.
Michael Saunders
Whether you’re a data scientist or the developer of an analytics-related service, Excel has the tools for you to build your own functions. Learn how they work and why you should use them!
Mike Ammerlaan
Learn how to leverage Azure Functions to build human and technical processes on top of data within Microsoft Graph. This allows you to build out the full API to your organization, and transform productivity for all.
Michael Desmond
Artificial intelligence, DevOps, and cross-platform development were all on display at the Microsoft Connect(); 2017 event in November. This special issue of MSDN Magazine explores the tools, technologies and techniques highlighted at the conference. | https://docs.microsoft.com/en-us/archive/msdn-magazine/2017/connect/connect-;-2017 | 2020-01-17T16:56:03 | CC-MAIN-2020-05 | 1579250589861.0 | [] | docs.microsoft.com |