Bash history reverse search
Clear the terminal instantly
Clear bash history
Find any Unix / Linux command
Terminal based browser | {
"pile_set_name": "Github"
} |
=pod
=head1 NAME
pkey - public or private key processing tool
=head1 SYNOPSIS
B<openssl> B<pkey>
[B<-inform PEM|DER>]
[B<-outform PEM|DER>]
[B<-in filename>]
[B<-passin arg>]
[B<-out filename>]
[B<-passout arg>]
[B<-cipher>]
[B<-text>]
[B<-text_pub>]
[B<-noout>]
[B<-pubin>]
[B<-pubout>]
[B<-engine id>]
=head1 DESCRIPTION
The B<pkey> command processes public or private keys. They can be converted
between various forms and their components printed out.
=head1 COMMAND OPTIONS
=over 4
=item B<-inform DER|PEM>
This specifies the input format DER or PEM.
=item B<-outform DER|PEM>
This specifies the output format; the options have the same meaning as the
B<-inform> option.
=item B<-in filename>
This specifies the input filename to read a key from or standard input if this
option is not specified. If the key is encrypted a pass phrase will be
prompted for.
=item B<-passin arg>
the input file password source. For more information about the format of B<arg>
see the B<PASS PHRASE ARGUMENTS> section in L<openssl(1)|openssl(1)>.
=item B<-out filename>
This specifies the output filename to write a key to or standard output if this
option is not specified. If any encryption options are set then a pass phrase
will be prompted for. The output filename should B<not> be the same as the input
filename.
=item B<-passout arg>
the output file password source. For more information about the format of B<arg>
see the B<PASS PHRASE ARGUMENTS> section in L<openssl(1)|openssl(1)>.
=item B<-cipher>
These options encrypt the private key with the supplied cipher. Any algorithm
name accepted by EVP_get_cipherbyname() is acceptable, such as B<des3>.
=item B<-text>
prints out the various public or private key components in
plain text in addition to the encoded version.
=item B<-text_pub>
print out only public key components even if a private key is being processed.
=item B<-noout>
do not output the encoded version of the key.
=item B<-pubin>
by default a private key is read from the input file: with this
option a public key is read instead.
=item B<-pubout>
by default a private key is output: with this option a public
key will be output instead. This option is automatically set if
the input is a public key.
=item B<-engine id>
specifying an engine (by its unique B<id> string) will cause B<pkey>
to attempt to obtain a functional reference to the specified engine,
thus initialising it if needed. The engine will then be set as the default
for all available algorithms.
=back
=head1 EXAMPLES
To remove the pass phrase on an RSA private key:
openssl pkey -in key.pem -out keyout.pem
To encrypt a private key using triple DES:
openssl pkey -in key.pem -des3 -out keyout.pem
To convert a private key from PEM to DER format:
openssl pkey -in key.pem -outform DER -out keyout.der
To print out the components of a private key to standard output:
openssl pkey -in key.pem -text -noout
To print out the public components of a private key to standard output:
openssl pkey -in key.pem -text_pub -noout
To just output the public part of a private key:
openssl pkey -in key.pem -pubout -out pubkey.pem
=head1 SEE ALSO
L<genpkey(1)|genpkey(1)>, L<rsa(1)|rsa(1)>, L<pkcs8(1)|pkcs8(1)>,
L<dsa(1)|dsa(1)>, L<genrsa(1)|genrsa(1)>, L<gendsa(1)|gendsa(1)>
=cut
{
"images" : [
{
"idiom" : "tv"
}
],
"info" : {
"version" : 1,
"author" : "xcode"
}
}
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
---
loader: taskgraph.loader.transform:loader
transforms:
- taskgraph.transforms.release_deps:transforms
- taskgraph.transforms.partner_attribution:transforms
- taskgraph.transforms.job:transforms
- taskgraph.transforms.task:transforms
kind-dependencies:
- repackage-signing
- repackage-signing-l10n
# move this into the single job ??
job-defaults:
name: partner-attribution
description: Release Promotion partner attribution
run-on-projects: [] # to make sure this never runs as part of CI
shipping-product: firefox
shipping-phase: promote
worker-type: b-linux
worker:
docker-image:
in-tree: "partner-repack"
chain-of-trust: true
max-run-time: 1800
run:
using: mach
mach: python python/mozrelease/mozrelease/partner_attribution.py
jobs:
partner-attribution:
attributes:
build_platform: linux-shippable
build_type: opt
artifact_prefix: releng/partner
shippable: true
/*
* Copyright 2018 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package androidx.work.testing.workers;
import android.content.Context;
import android.util.Log;
import androidx.annotation.NonNull;
import androidx.work.Logger;
import androidx.work.Worker;
import androidx.work.WorkerParameters;
/** A test {@link Worker} that prints a log and returns a successful result. */
public class TestWorker extends Worker {
private static final String TAG = Logger.tagWithPrefix("TestWorker");
public TestWorker(@NonNull Context context, @NonNull WorkerParameters workerParams) {
super(context, workerParams);
}
@Override
public @NonNull Result doWork() {
Log.i(TAG, "Doing work.");
return Result.success();
}
}
/**
* @public
*/
export default class TestAccessClassPublic {}
/**
* @protected
*/
export class TestAccessClassProtected {}
/**
* @package
*/
export class TestAccessClassPackage {}
/**
* @private
*/
export class TestAccessClassPrivate {}
'use strict';
/**
* Copyright (c) 2013-present, Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the BSD-style license found in the
* LICENSE file in the root directory of this source tree. An additional grant
* of patent rights can be found in the PATENTS file in the same directory.
*
*
*/
var isTextNode = require('./isTextNode');
/*eslint-disable no-bitwise */
/**
* Checks if a given DOM node contains or is another DOM node.
*/
function containsNode(outerNode, innerNode) {
if (!outerNode || !innerNode) {
return false;
} else if (outerNode === innerNode) {
return true;
} else if (isTextNode(outerNode)) {
return false;
} else if (isTextNode(innerNode)) {
return containsNode(outerNode, innerNode.parentNode);
} else if ('contains' in outerNode) {
return outerNode.contains(innerNode);
} else if (outerNode.compareDocumentPosition) {
return !!(outerNode.compareDocumentPosition(innerNode) & 16);
} else {
return false;
}
}
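The containment logic above can be sketched in Python on a minimal tree (a hypothetical `Node` class, not part of fbjs): the same order of checks applies, including walking a text node up to its parent before testing containment.

```python
class Node:
    """Minimal stand-in for a DOM node: a parent pointer, children, and a text flag."""

    def __init__(self, is_text=False):
        self.is_text = is_text
        self.parent = None
        self.children = []

    def append(self, child):
        child.parent = self
        self.children.append(child)

    def contains(self, other):
        # Mirrors DOM outerNode.contains(innerNode): true for self or any descendant.
        if other is self:
            return True
        return any(c.contains(other) for c in self.children)


def contains_node(outer, inner):
    """Same decision order as the JS containsNode above."""
    if outer is None or inner is None:
        return False
    if outer is inner:
        return True
    if outer.is_text:
        return False  # a text node cannot contain another node
    if inner.is_text:
        # Climb to the text node's parent element, as the JS version does.
        return contains_node(outer, inner.parent)
    return outer.contains(inner)
```

The `compareDocumentPosition` fallback in the JS version masks bit 16 (`DOCUMENT_POSITION_CONTAINED_BY`), which is what `contains` reports directly in the sketch.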
module.exports = containsNode;
# Adapted for numpy/ma/cdms2 by convertcdms.py
# First import necessary modules
import sys,os,thermo,vcs,cdms2 as cdms
# Initialize the VCS Canvas and create the "Thermodynamic Diagram" graphic method
x=vcs.init()
x.portrait()
th=thermo.Gth(x=x,name='test')
## List settable items
th.list()
## Setting type of thermodynamic diagram, you can choose from: 'emagram', 'tephigram', 'stuve' or 'skewT'
th.type='skewT'
## Skewness of the plot
## th.skewness=-35.
## Graphic finesse; higher numbers give better quality but render more slowly
th.detail=75
## World Coordinates
## Temperatures at the bottom of the graph (in C)
th.datawc_x1=-50.
th.datawc_x2=50.
## Pressure at bottom and top of graph (in hPa)
## WARNING: world coordinates here are set in hPa, but the data's level axis must be in Pa; the units are not consistent
th.datawc_y1=1050.
th.datawc_y2=100.
## Drawing of paper, decide what to draw or not (1:yes , 0: no)
th.drawisothermsfilled=1
th.drawisotherms=1
th.drawisobars=1
th.drawdryadiabats=1
th.drawpseudoadiabats=1
th.drawmixingratio=1
## Create a template for T(P) i.e skewT paper
template=x.createtemplate('new')
template.data.x1=.1
template.data.x2=.85
template.data.y1=.1
template.data.y2=.9
template.box1.x1=template.data.x1
template.box1.x2=template.data.x2
template.box1.y1=template.data.y1
template.box1.y2=template.data.y2
template.xlabel1.y=template.data.y1*.9
template.ylabel1.y=template.data.x1*.9
## Now open the sample dataset and read in the temperature data as a function of level
## Open the file, read the T
f=cdms.open(os.path.join(vcs.sample_data,'thermo.nc'))
t=f('t')
# In this example we need to redefine the "level" axis on "ta" because it needs to be in Pa
## WARNING: the axis is in Pa while the world coordinates are set in hPa; not consistent!
p=t.getLevel()
p=cdms.createAxis(p[:]*100)
p.id='level'
t.setAxis(1,p) ## Reset the axis on T
# Now we are good to go and plot t
th.plot_TP(t,template=template)
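The unit mismatch the warnings point out (world coordinates in hPa, level axis in Pa) comes down to a factor of 100. A minimal standalone sketch of the conversion applied to the level values, without cdms2 (hypothetical helper):

```python
def hpa_to_pa(levels_hpa):
    """Convert pressure levels from hPa to Pa (1 hPa = 100 Pa),
    as done above with cdms.createAxis(p[:]*100)."""
    return [p * 100.0 for p in levels_hpa]
```

For example, the 1050-100 hPa world-coordinate range used for datawc_y1/datawc_y2 corresponds to 105000-10000 Pa on the data axis.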
// Common/Alloc.h
#ifndef __COMMON_ALLOC_H
#define __COMMON_ALLOC_H
#include <stddef.h>
void *MyAlloc(size_t size) throw();
void MyFree(void *address) throw();
#ifdef _WIN32
bool SetLargePageSize();
void *MidAlloc(size_t size) throw();
void MidFree(void *address) throw();
void *BigAlloc(size_t size) throw();
void BigFree(void *address) throw();
#else
#define MidAlloc(size) MyAlloc(size)
#define MidFree(address) MyFree(address)
#define BigAlloc(size) MyAlloc(size)
#define BigFree(address) MyFree(address)
#endif
#endif
/*
Copyright 2018 The Knative Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"github.com/knative/pkg/apis/istio/common/v1alpha1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Policy
type Policy struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec PolicySpec `json:"spec"`
}
// Policy defines what authentication methods can be accepted on workload(s),
// and if authenticated, which method/certificate will set the request principal
// (i.e request.auth.principal attribute).
//
// Authentication policy is composed of 2-part authentication:
// - peer: verify caller service credentials. This part will set source.user
// (peer identity).
// - origin: verify the origin credentials. This part will set request.auth.user
// (origin identity), as well as other attributes like request.auth.presenter,
// request.auth.audiences and raw claims. Note that the identity could be
// end-user, service account, device etc.
//
// Last but not least, the principal binding rule defines which identity (peer
// or origin) should be used as principal. By default, it uses peer.
//
// Examples:
//
// Policy to enable mTLS for all services in namespace frod
//
// apiVersion: authentication.istio.io/v1alpha1
// kind: Policy
// metadata:
// name: mTLS_enable
// namespace: frod
// spec:
// peers:
// - mtls:
//
// Policy to disable mTLS for "productpage" service
//
// apiVersion: authentication.istio.io/v1alpha1
// kind: Policy
// metadata:
// name: mTLS_disable
// namespace: frod
// spec:
// targets:
// - name: productpage
//
// Policy to require mTLS for peer authentication, and JWT for origin authentication
// for productpage:9000. Principal is set from origin identity.
//
// apiVersion: authentication.istio.io/v1alpha1
// kind: Policy
// metadata:
// name: mTLS_enable
// namespace: frod
// spec:
// target:
// - name: productpage
// ports:
// - number: 9000
// peers:
// - mtls:
// origins:
// - jwt:
// issuer: "https://securetoken.google.com"
// audiences:
// - "productpage"
// jwksUri: "https://www.googleapis.com/oauth2/v1/certs"
// jwt_headers:
// - "x-goog-iap-jwt-assertion"
// principalBinding: USE_ORIGIN
//
// Policy to require mTLS for peer authentication, and JWT for origin authentication
// for productpage:9000, but allow origin authentication to fail. Principal is set
// from origin identity.
// Note: this example can be used for use cases where we want to allow requests from
// certain peers, given it comes with an appropriate authorization policy to check
// and reject requests accordingly.
//
// apiVersion: authentication.istio.io/v1alpha1
// kind: Policy
// metadata:
// name: mTLS_enable
// namespace: frod
// spec:
// target:
// - name: productpage
// ports:
// - number: 9000
// peers:
// - mtls:
// origins:
// - jwt:
// issuer: "https://securetoken.google.com"
// audiences:
// - "productpage"
// jwksUri: "https://www.googleapis.com/oauth2/v1/certs"
// jwt_headers:
// - "x-goog-iap-jwt-assertion"
// originIsOptional: true
// principalBinding: USE_ORIGIN
type PolicySpec struct {
// List rules to select destinations that the policy should be applied on.
// If empty, policy will be used on all destinations in the same namespace.
Targets []TargetSelector `json:"targets,omitempty"`
// List of authentication methods that can be used for peer authentication.
// They will be evaluated in order; the first valid one will be used to
// set peer identity (source.user) and other peer attributes. If none of
// these methods pass, and the peer_is_optional flag is false (see below),
// the request will be rejected with an authentication failed error (401).
// Leave the list empty if peer authentication is not required.
Peers []PeerAuthenticationMethod `json:"peers,omitempty"`
// Set this flag to true to accept request (for peer authentication perspective),
// even when none of the peer authentication methods defined above satisfied.
// Typically, this is used to delay the rejection decision to next layer (e.g
// authorization).
// This flag is ignored if no authentication defined for peer (peers field is empty).
PeerIsOptional bool `json:"peerIsOptional,omitempty"`
// List of authentication methods that can be used for origin authentication.
// Similar to peers, these will be evaluated in order; the first valid one
// will be used to set origin identity and attributes (i.e request.auth.user,
// request.auth.issuer etc). If none of these methods pass, and origin_is_optional
// is false (see below), request will be rejected with authentication failed
// error (401).
// Leave the list empty if origin authentication is not required.
Origins []OriginAuthenticationMethod `json:"origins,omitempty"`
// Set this flag to true to accept request (for origin authentication perspective),
// even when none of the origin authentication methods defined above satisfied.
// Typically, this is used to delay the rejection decision to next layer (e.g
// authorization).
// This flag is ignored if no authentication defined for origin (origins field is empty).
OriginIsOptional bool `json:"originIsOptional,omitempty"`
// Define whether the peer or origin identity should be used as the principal. Default
// value is USE_PEER.
// If peer (or origin) identity is not available, either because of peer/origin
// authentication is not defined, or failed, principal will be left unset.
// In other words, binding rule does not affect the decision to accept or
// reject request.
PrincipalBinding PrincipalBinding `json:"principalBinding,omitempty"`
}
// TargetSelector defines a matching rule to a service/destination.
type TargetSelector struct {
// REQUIRED. The name must be a short name from the service registry. The
// fully qualified domain name will be resolved in a platform specific manner.
Name string `json:"name"`
// Specifies the ports on the destination. Leave empty to match all ports
// that are exposed.
Ports []PortSelector `json:"ports,omitempty"`
}
// PortSelector specifies the name or number of a port to be used for
// matching targets for authentication policy. This is copied from
// networking API to avoid dependency.
type PortSelector struct {
// It is required to specify exactly one of the fields:
// Number or Name
// Valid port number
Number uint32 `json:"number,omitempty"`
// Port name
Name string `json:"name,omitempty"`
}
// PeerAuthenticationMethod defines one particular type of authentication, e.g
// mutual TLS, JWT etc, (no authentication is one type by itself) that can
// be used for peer authentication.
// The type can be programmatically determined by checking the type of the
// "params" field.
type PeerAuthenticationMethod struct {
// It is required to specify exactly one of the fields:
// Mtls or Jwt
// Set if mTLS is used.
Mtls *MutualTLS `json:"mtls,omitempty"`
// Set if JWT is used. This option is not yet available.
Jwt *Jwt `json:"jwt,omitempty"`
}
// Defines the acceptable connection TLS mode.
type Mode string
const (
// Client cert must be presented, connection is in TLS.
ModeStrict Mode = "STRICT"
// Connection can be either plaintext or TLS, and client cert can be omitted.
ModePermissive Mode = "PERMISSIVE"
)
// TLS authentication params.
type MutualTLS struct {
// WILL BE DEPRECATED; if set, it translates to `TLS_PERMISSIVE` mode.
// Set this flag to true to allow regular TLS (i.e without client x509
// certificate). If request carries client certificate, identity will be
// extracted and used (set to peer identity). Otherwise, peer identity will
// be left unset.
// When the flag is false (default), request must have client certificate.
AllowTLS bool `json:"allowTls,omitempty"`
// Defines the mode of mTLS authentication.
Mode Mode `json:"mode,omitempty"`
}
// JSON Web Token (JWT) token format for authentication as defined by
// https://tools.ietf.org/html/rfc7519. See [OAuth
// 2.0](https://tools.ietf.org/html/rfc6749) and [OIDC
// 1.0](http://openid.net/connect) for how this is used in the whole
// authentication flow.
//
// Example,
//
// issuer: https://example.com
// audiences:
// - bookstore_android.apps.googleusercontent.com
// bookstore_web.apps.googleusercontent.com
// jwksUri: https://example.com/.well-known/jwks.json
//
type Jwt struct {
// Identifies the issuer that issued the JWT. See
// [issuer](https://tools.ietf.org/html/rfc7519#section-4.1.1)
// Usually a URL or an email address.
//
// Example: https://securetoken.google.com
// Example: [email protected]
Issuer string `json:"issuer,omitempty"`
// The list of JWT
// [audiences](https://tools.ietf.org/html/rfc7519#section-4.1.3).
// that are allowed to access. A JWT containing any of these
// audiences will be accepted.
//
// The service name will be accepted if audiences is empty.
//
// Example:
//
// ```yaml
// audiences:
// - bookstore_android.apps.googleusercontent.com
// bookstore_web.apps.googleusercontent.com
// ```
Audiences []string `json:"audiences,omitempty"`
// URL of the provider's public key set to validate signature of the
// JWT. See [OpenID
// Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata).
//
// Optional if the key set document can either (a) be retrieved from
// [OpenID
// Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html) of
// the issuer or (b) inferred from the email domain of the issuer (e.g. a
// Google service account).
//
// Example: https://www.googleapis.com/oauth2/v1/certs
JwksURI string `json:"jwksUri,omitempty"`
// Two fields below define where to extract the JWT from an HTTP request.
//
// If no explicit location is specified the following default
// locations are tried in order:
//
// 1) The Authorization header using the Bearer schema,
// e.g. Authorization: Bearer <token>. (see
// [Authorization Request Header
// Field](https://tools.ietf.org/html/rfc6750#section-2.1))
//
// 2) `access_token` query parameter (see
// [URI Query Parameter](https://tools.ietf.org/html/rfc6750#section-2.3))
// JWT is sent in a request header. `header` represents the
// header name.
//
// For example, if `header=x-goog-iap-jwt-assertion`, the header
// format will be x-goog-iap-jwt-assertion: <JWT>.
JwtHeaders []string `json:"jwtHeaders,omitempty"`
// JWT is sent in a query parameter. `query` represents the
// query parameter name.
//
// For example, `query=jwt_token`.
JwtParams []string `json:"jwtParams,omitempty"`
// URL paths that should be excluded from the JWT validation. If the request path is matched,
// the JWT validation will be skipped and the request will proceed regardless.
// This is useful to keep a couple of URLs public for external health checks.
// Example: "/health_check", "/status/cpu_usage".
ExcludedPaths []v1alpha1.StringMatch `json:"excludedPaths,omitempty"`
}
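The two default token locations described in the `Jwt` comments (the `Authorization` header with the Bearer scheme, then the `access_token` query parameter, per RFC 6750) can be sketched with a small helper. This is a hypothetical illustration, not Istio code:

```python
from urllib.parse import parse_qs, urlparse


def extract_jwt(headers, url):
    """Try the default token locations in order:
    1) Authorization header with the Bearer scheme (RFC 6750 section 2.1)
    2) `access_token` query parameter (RFC 6750 section 2.3)
    Returns the raw token string, or None if neither location has one."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return auth[len("Bearer "):]
    tokens = parse_qs(urlparse(url).query).get("access_token")
    return tokens[0] if tokens else None
```

A custom `jwtHeaders` entry such as `x-goog-iap-jwt-assertion` would simply replace the `Authorization` lookup with that header name.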
// OriginAuthenticationMethod defines authentication method/params for origin
// authentication. Origin could be end-user, device, delegate service etc.
// Currently, only JWT is supported for origin authentication.
type OriginAuthenticationMethod struct {
// Jwt params for the method.
Jwt *Jwt `json:"jwt,omitempty"`
}
// Associates authentication with request principal.
type PrincipalBinding string
const (
// Principal will be set to the identity from peer authentication.
PrincipalBindingUserPeer PrincipalBinding = "USE_PEER"
// Principal will be set to the identity from origin authentication.
PrincipalBindingUserOrigin PrincipalBinding = "USE_ORIGIN"
)
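The `PrincipalBinding` semantics described above (USE_PEER by default; the principal is left unset if the selected identity is unavailable; the binding never causes a rejection) can be sketched as a hypothetical helper:

```python
def resolve_principal(peer_identity, origin_identity, binding="USE_PEER"):
    """Pick the request principal per the binding rule.
    Returns None (principal left unset) when the selected identity is
    unavailable; it never rejects the request on its own."""
    if binding == "USE_ORIGIN":
        return origin_identity
    return peer_identity  # USE_PEER is the default
```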
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// PolicyList is a list of Policy resources.
type PolicyList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata"`
Items []Policy `json:"items"`
}
{
"types": "./datePicker.tsx"
}
0:a:
0:b:
1:?:
<?php
/*
* This file is part of Sulu.
*
* (c) Sulu GmbH
*
* This source file is subject to the MIT license that is bundled
* with this source code in the file LICENSE.
*/
namespace Sulu\Bundle\MediaBundle\Tests\Functional\Controller;
use Prophecy\Argument;
use Sulu\Bundle\AdminBundle\Admin\Admin;
use Sulu\Bundle\ContactBundle\Entity\Contact;
use Sulu\Bundle\MediaBundle\Admin\MediaAdmin;
use Sulu\Bundle\MediaBundle\DataFixtures\ORM\LoadCollectionTypes;
use Sulu\Bundle\MediaBundle\DataFixtures\ORM\LoadMediaTypes;
use Sulu\Bundle\MediaBundle\Entity\Collection;
use Sulu\Bundle\SecurityBundle\Entity\Permission;
use Sulu\Bundle\SecurityBundle\Entity\Role;
use Sulu\Bundle\SecurityBundle\Entity\User;
use Sulu\Bundle\SecurityBundle\Entity\UserRole;
use Sulu\Bundle\TestBundle\Testing\SuluTestCase;
use Sulu\Component\Security\Authorization\PermissionTypes;
use Sulu\Component\Security\Authorization\SecurityCheckerInterface;
use Sulu\Component\Security\Authorization\SecurityCondition;
use Symfony\Component\HttpFoundation\File\UploadedFile;
class MediaStreamControllerAdminTest extends SuluTestCase
{
public function testDownloadActionCheckPermissionCalled()
{
$this->initDatabase();
$filePath = $this->createMediaFile('test.jpg');
$media = $this->createMedia($filePath, 'file-without-extension');
// teardown needed to set a mocked service
$this->tearDown();
$client = $this->createAuthenticatedClient();
$securityChecker = $this->prophesize(SecurityCheckerInterface::class);
$securityChecker->checkPermission(Argument::that(function(SecurityCondition $securityCondition) use ($media) {
$this->assertSame(MediaAdmin::SECURITY_CONTEXT, $securityCondition->getSecurityContext());
$this->assertSame(null, $securityCondition->getLocale());
$this->assertSame(Collection::class, $securityCondition->getObjectType());
$this->assertSame($media->getCollection(), $securityCondition->getObjectId());
return true;
}), PermissionTypes::VIEW)
->shouldBeCalled();
self::$container->set('sulu_security.security_checker', $securityChecker->reveal());
$client->request('GET', $media->getAdminUrl());
$response = $client->getResponse();
$this->assertHttpStatusCode(200, $response);
}
public function testDownloadActionCheckPermissionDenied()
{
$client = $this->createAuthenticatedClient([], [
'PHP_AUTH_USER' => 'secured_user',
'PHP_AUTH_PW' => 'secured_user',
]);
$this->initDatabase();
$this->createTestUser('secured_user', false);
$this->getEntityManager()->flush();
$this->getEntityManager()->clear();
$filePath = $this->createMediaFile('test.jpg');
$media = $this->createMedia($filePath, 'file-without-extension');
$client->request('GET', $media->getAdminUrl());
$response = $client->getResponse();
$this->assertHttpStatusCode(403, $response);
}
public function testDownloadActionCheckPermissionAllowed()
{
$client = $this->createAuthenticatedClient([], [
'PHP_AUTH_USER' => 'secured_user',
'PHP_AUTH_PW' => 'secured_user',
]);
$this->initDatabase();
$this->createTestUser('secured_user', true);
$this->getEntityManager()->flush();
$this->getEntityManager()->clear();
$filePath = $this->createMediaFile('test.jpg');
$media = $this->createMedia($filePath, 'file-without-extension');
$client->request('GET', $media->getAdminUrl());
$response = $client->getResponse();
$this->assertHttpStatusCode(200, $response);
}
private function createTestUser(string $username, bool $hasPermission)
{
$contact = new Contact();
$this->getEntityManager()->persist($contact);
$contact->setFirstName('Max');
$contact->setLastName('Mustermann');
$user = new User();
$this->getEntityManager()->persist($user);
$user->setUsername($username);
$user->setContact($contact);
$user->setSalt('');
$encoder = self::$container->get('security.encoder_factory')->getEncoder($user);
$user->setPassword($encoder->encodePassword($username, $user->getSalt()));
$user->setLocale('en');
$role = new Role();
$this->getEntityManager()->persist($role);
$role->setName('Secure Test Role');
$role->setSystem(Admin::SULU_ADMIN_SECURITY_SYSTEM);
$role->setAnonymous(false);
$userRole = new UserRole();
$this->getEntityManager()->persist($userRole);
$user->addUserRole($userRole);
$userRole->setUser($user);
$userRole->setRole($role);
$userRole->setLocale('["en"]');
if ($hasPermission) {
$permission = new Permission();
$this->getEntityManager()->persist($permission);
$permission->setRole($role);
$permission->setContext(MediaAdmin::SECURITY_CONTEXT);
$permission->setPermissions(64);
$role->addPermission($permission);
}
}
private function initDatabase(): void
{
$this->purgeDatabase();
$collectionTypes = new LoadCollectionTypes();
$collectionTypes->load($this->getEntityManager());
$mediaTypes = new LoadMediaTypes();
$mediaTypes->load($this->getEntityManager());
}
private function createUploadedFile($path)
{
return new UploadedFile($path, \basename($path), \mime_content_type($path));
}
private function createCollection($title = 'Test')
{
$collection = $this->getCollectionManager()->save(
[
'title' => $title,
'locale' => 'en',
'type' => ['id' => 1],
],
1
);
return $collection->getId();
}
private function createMedia($path, $title)
{
return $this->getMediaManager()->save(
$this->createUploadedFile($path),
[
'title' => $title,
'collection' => $this->createCollection(),
'locale' => 'en',
],
null
);
}
private function createMediaVersion($id, $path, $title)
{
return $this->getMediaManager()->save(
$this->createUploadedFile($path),
[
'id' => $id,
'title' => $title,
'collection' => $this->createCollection(),
'locale' => 'en',
],
null
);
}
private function getMediaManager()
{
return $this->getContainer()->get('sulu_media.media_manager');
}
private function getCollectionManager()
{
return $this->getContainer()->get('sulu_media.collection_manager');
}
private function createMediaFile(string $name, string $fileName = 'photo.jpeg')
{
$filePath = \sys_get_temp_dir() . '/' . $name;
\copy(__DIR__ . '/../../Fixtures/files/' . $fileName, $filePath);
return $filePath;
}
}
/*
* SonarQube Java
* Copyright (C) 2012-2020 SonarSource SA
* mailto:info AT sonarsource DOT com
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 3 of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package org.sonar.java.model.declaration;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;
import org.sonar.java.model.JParserTestUtils;
import org.sonar.plugins.java.api.tree.BaseTreeVisitor;
import org.sonar.plugins.java.api.tree.CompilationUnitTree;
import org.sonar.plugins.java.api.tree.ExportsDirectiveTree;
import org.sonar.plugins.java.api.tree.ExpressionTree;
import org.sonar.plugins.java.api.tree.IdentifierTree;
import org.sonar.plugins.java.api.tree.ListTree;
import org.sonar.plugins.java.api.tree.MemberSelectExpressionTree;
import org.sonar.plugins.java.api.tree.ModuleNameTree;
import org.sonar.plugins.java.api.tree.RequiresDirectiveTree;
import org.sonar.plugins.java.api.tree.Tree;
import static org.assertj.core.api.Assertions.assertThat;
class ExportsDirectiveTreeImplTest {
private static ExportsDirectiveTree exportsDirective(String exportsDirective) {
CompilationUnitTree compilationUnitTree = JParserTestUtils.parseModule("module org.foo {\n " + exportsDirective + "\n}");
return (ExportsDirectiveTree) compilationUnitTree.moduleDeclaration().moduleDirectives().get(0);
}
@Test
void simple_exports() {
ExportsDirectiveTree exports = exportsDirective("exports foo;");
assertThat(exports.kind()).isEqualTo(Tree.Kind.EXPORTS_DIRECTIVE);
assertThat(exports.directiveKeyword().text()).isEqualTo("exports");
assertThat(((IdentifierTree) exports.packageName()).name()).isEqualTo("foo");
assertThat(exports.toKeyword()).isNull();
assertThat(exports.moduleNames()).isEmpty();
assertThat(exports.semicolonToken().text()).isEqualTo(";");
}
@Test
void exports_with_modules() {
ExportsDirectiveTree exports = exportsDirective("exports org.foo to com.module1, module2;");
assertThat(exports.kind()).isEqualTo(Tree.Kind.EXPORTS_DIRECTIVE);
assertThat(exports.directiveKeyword().text()).isEqualTo("exports");
ExpressionTree packageName = exports.packageName();
assertThat(packageName.is(Tree.Kind.MEMBER_SELECT)).isTrue();
MemberSelectExpressionTree mset = (MemberSelectExpressionTree) packageName;
assertThat(((IdentifierTree) mset.expression()).name()).isEqualTo("org");
assertThat(mset.identifier().name()).isEqualTo("foo");
assertThat(exports.toKeyword().text()).isEqualTo("to");
ListTree<ModuleNameTree> moduleNames = exports.moduleNames();
assertThat(moduleNames).hasSize(2);
assertThat(moduleNames.get(0).stream().map(IdentifierTree::name)).containsExactly("com", "module1");
assertThat(moduleNames.separators()).hasSize(1);
assertThat(moduleNames.separators().iterator().next().text()).isEqualTo(",");
assertThat(moduleNames.get(1).stream().map(IdentifierTree::name)).containsExactly("module2");
assertThat(exports.semicolonToken().text()).isEqualTo(";");
}
@Test
void test_BaseTreeVisitor() {
CompilationUnitTree cut = JParserTestUtils.parseModule(
"@org.foo.Bar",
"open module com.greetings {",
" exports foo;",
" requires org.gul;",
" exports org.bar to com.module1, module2;",
"}");
ExportsDirectiveVisitor moduleDeclarationVisitor = new ExportsDirectiveVisitor();
cut.accept(moduleDeclarationVisitor);
assertThat(moduleDeclarationVisitor.visited).isTrue();
assertThat(moduleDeclarationVisitor.directives).hasSize(2);
assertThat(moduleDeclarationVisitor.identifiers).containsExactly("com", "greetings", "foo", "com", "module1", "module2");
}
private static class ExportsDirectiveVisitor extends BaseTreeVisitor {
boolean visited = false;
List<ExportsDirectiveTree> directives = new ArrayList<>();
List<String> identifiers = new ArrayList<>();
@Override
public void visitRequiresDirectiveTree(RequiresDirectiveTree tree) {
// skip requires directives
}
@Override
public void visitExportsDirectiveTree(ExportsDirectiveTree tree) {
visited = true;
directives.add(tree);
super.visitExportsDirectiveTree(tree);
}
@Override
public void visitMemberSelectExpression(MemberSelectExpressionTree tree) {
// skip member selects
}
@Override
public void visitIdentifier(IdentifierTree tree) {
identifiers.add(tree.name());
}
}
}
| {
"pile_set_name": "Github"
} |
/* iCheck plugin Flat skin
----------------------------------- */
.icheckbox_flat,
.iradio_flat {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(flat.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat {
background-position: 0 0;
}
.icheckbox_flat.checked {
background-position: -22px 0;
}
.icheckbox_flat.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat.checked.disabled {
background-position: -66px 0;
}
.iradio_flat {
background-position: -88px 0;
}
.iradio_flat.checked {
background-position: -110px 0;
}
.iradio_flat.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat,
.iradio_flat {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
/* red */
.icheckbox_flat-red,
.iradio_flat-red {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(red.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat-red {
background-position: 0 0;
}
.icheckbox_flat-red.checked {
background-position: -22px 0;
}
.icheckbox_flat-red.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat-red.checked.disabled {
background-position: -66px 0;
}
.iradio_flat-red {
background-position: -88px 0;
}
.iradio_flat-red.checked {
background-position: -110px 0;
}
.iradio_flat-red.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat-red.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat-red,
.iradio_flat-red {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
/* green */
.icheckbox_flat-green,
.iradio_flat-green {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(green.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat-green {
background-position: 0 0;
}
.icheckbox_flat-green.checked {
background-position: -22px 0;
}
.icheckbox_flat-green.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat-green.checked.disabled {
background-position: -66px 0;
}
.iradio_flat-green {
background-position: -88px 0;
}
.iradio_flat-green.checked {
background-position: -110px 0;
}
.iradio_flat-green.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat-green.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat-green,
.iradio_flat-green {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
/* blue */
.icheckbox_flat-blue,
.iradio_flat-blue {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(blue.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat-blue {
background-position: 0 0;
}
.icheckbox_flat-blue.checked {
background-position: -22px 0;
}
.icheckbox_flat-blue.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat-blue.checked.disabled {
background-position: -66px 0;
}
.iradio_flat-blue {
background-position: -88px 0;
}
.iradio_flat-blue.checked {
background-position: -110px 0;
}
.iradio_flat-blue.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat-blue.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat-blue,
.iradio_flat-blue {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
/* aero */
.icheckbox_flat-aero,
.iradio_flat-aero {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(aero.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat-aero {
background-position: 0 0;
}
.icheckbox_flat-aero.checked {
background-position: -22px 0;
}
.icheckbox_flat-aero.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat-aero.checked.disabled {
background-position: -66px 0;
}
.iradio_flat-aero {
background-position: -88px 0;
}
.iradio_flat-aero.checked {
background-position: -110px 0;
}
.iradio_flat-aero.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat-aero.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat-aero,
.iradio_flat-aero {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
/* grey */
.icheckbox_flat-grey,
.iradio_flat-grey {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(grey.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat-grey {
background-position: 0 0;
}
.icheckbox_flat-grey.checked {
background-position: -22px 0;
}
.icheckbox_flat-grey.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat-grey.checked.disabled {
background-position: -66px 0;
}
.iradio_flat-grey {
background-position: -88px 0;
}
.iradio_flat-grey.checked {
background-position: -110px 0;
}
.iradio_flat-grey.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat-grey.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat-grey,
.iradio_flat-grey {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
/* orange */
.icheckbox_flat-orange,
.iradio_flat-orange {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(orange.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat-orange {
background-position: 0 0;
}
.icheckbox_flat-orange.checked {
background-position: -22px 0;
}
.icheckbox_flat-orange.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat-orange.checked.disabled {
background-position: -66px 0;
}
.iradio_flat-orange {
background-position: -88px 0;
}
.iradio_flat-orange.checked {
background-position: -110px 0;
}
.iradio_flat-orange.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat-orange.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat-orange,
.iradio_flat-orange {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
/* yellow */
.icheckbox_flat-yellow,
.iradio_flat-yellow {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(yellow.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat-yellow {
background-position: 0 0;
}
.icheckbox_flat-yellow.checked {
background-position: -22px 0;
}
.icheckbox_flat-yellow.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat-yellow.checked.disabled {
background-position: -66px 0;
}
.iradio_flat-yellow {
background-position: -88px 0;
}
.iradio_flat-yellow.checked {
background-position: -110px 0;
}
.iradio_flat-yellow.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat-yellow.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat-yellow,
.iradio_flat-yellow {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
/* pink */
.icheckbox_flat-pink,
.iradio_flat-pink {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(pink.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat-pink {
background-position: 0 0;
}
.icheckbox_flat-pink.checked {
background-position: -22px 0;
}
.icheckbox_flat-pink.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat-pink.checked.disabled {
background-position: -66px 0;
}
.iradio_flat-pink {
background-position: -88px 0;
}
.iradio_flat-pink.checked {
background-position: -110px 0;
}
.iradio_flat-pink.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat-pink.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat-pink,
.iradio_flat-pink {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
/* purple */
.icheckbox_flat-purple,
.iradio_flat-purple {
display: inline-block;
*display: inline;
vertical-align: middle;
margin: 0;
padding: 0;
width: 20px;
height: 20px;
background: url(purple.png) no-repeat;
border: none;
cursor: pointer;
}
.icheckbox_flat-purple {
background-position: 0 0;
}
.icheckbox_flat-purple.checked {
background-position: -22px 0;
}
.icheckbox_flat-purple.disabled {
background-position: -44px 0;
cursor: default;
}
.icheckbox_flat-purple.checked.disabled {
background-position: -66px 0;
}
.iradio_flat-purple {
background-position: -88px 0;
}
.iradio_flat-purple.checked {
background-position: -110px 0;
}
.iradio_flat-purple.disabled {
background-position: -132px 0;
cursor: default;
}
.iradio_flat-purple.checked.disabled {
background-position: -154px 0;
}
/* Retina support */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-moz-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min-device-pixel-ratio: 1.5) {
.icheckbox_flat-purple,
.iradio_flat-purple {
background-image: url([email protected]);
-webkit-background-size: 176px 22px;
background-size: 176px 22px;
}
}
<?xml version="1.0"?>
<ZopeData>
<record id="1" aka="AAAAAAAAAAE=">
<pickle>
<global name="Category" module="erp5.portal_type"/>
</pickle>
<pickle>
<dictionary>
<item>
<key> <string>_count</string> </key>
<value>
<persistent> <string encoding="base64">AAAAAAAAAAI=</string> </persistent>
</value>
</item>
<item>
<key> <string>_mt_index</string> </key>
<value>
<persistent> <string encoding="base64">AAAAAAAAAAM=</string> </persistent>
</value>
</item>
<item>
<key> <string>_tree</string> </key>
<value>
<persistent> <string encoding="base64">AAAAAAAAAAQ=</string> </persistent>
</value>
</item>
<item>
<key> <string>categories</string> </key>
<value>
<tuple>
<string>gap/ohada/syscohada/9/90/905</string>
</tuple>
</value>
</item>
<item>
<key> <string>id</string> </key>
<value> <string>905</string> </value>
</item>
<item>
<key> <string>portal_type</string> </key>
<value> <string>Category</string> </value>
</item>
<item>
<key> <string>title</string> </key>
<value> <string>Engagements de financement accordés</string> </value>
</item>
</dictionary>
</pickle>
</record>
<record id="2" aka="AAAAAAAAAAI=">
<pickle>
<global name="Length" module="BTrees.Length"/>
</pickle>
<pickle> <int>0</int> </pickle>
</record>
<record id="3" aka="AAAAAAAAAAM=">
<pickle>
<global name="OOBTree" module="BTrees.OOBTree"/>
</pickle>
<pickle>
<none/>
</pickle>
</record>
<record id="4" aka="AAAAAAAAAAQ=">
<pickle>
<global name="OOBTree" module="BTrees.OOBTree"/>
</pickle>
<pickle>
<none/>
</pickle>
</record>
</ZopeData>
/*
** $Id: lfunc.h,v 2.4.1.1 2007/12/27 13:02:25 roberto Exp $
** Auxiliary functions to manipulate prototypes and closures
** See Copyright Notice in lua.h
*/
#ifndef lfunc_h
#define lfunc_h
#include "lobject.h"
#define sizeCclosure(n) (cast(int, sizeof(CClosure)) + \
cast(int, sizeof(TValue)*((n)-1)))
#define sizeLclosure(n) (cast(int, sizeof(LClosure)) + \
cast(int, sizeof(TValue *)*((n)-1)))
LUAI_FUNC Proto *luaF_newproto (lua_State *L);
LUAI_FUNC Closure *luaF_newCclosure (lua_State *L, int nelems, Table *e);
LUAI_FUNC Closure *luaF_newLclosure (lua_State *L, int nelems, Table *e);
LUAI_FUNC UpVal *luaF_newupval (lua_State *L);
LUAI_FUNC UpVal *luaF_findupval (lua_State *L, StkId level);
LUAI_FUNC void luaF_close (lua_State *L, StkId level);
LUAI_FUNC void luaF_freeproto (lua_State *L, Proto *f);
LUAI_FUNC void luaF_freeclosure (lua_State *L, Closure *c);
LUAI_FUNC void luaF_freeupval (lua_State *L, UpVal *uv);
LUAI_FUNC const char *luaF_getlocalname (const Proto *func, int local_number,
int pc);
#endif
package gw.gosudoc.doc
uses com.sun.javadoc.RootDoc
uses com.sun.javadoc.ClassDoc
uses com.sun.javadoc.PackageDoc
uses com.sun.javadoc.SourcePosition
uses gw.gosudoc.filter.*
uses gw.gosudoc.type.GSArrayTypeImpl
uses gw.gosudoc.type.GSClassTypeImpl
uses gw.gosudoc.type.GSFunctionalTypeImpl
uses gw.gosudoc.type.GSParameterizedTypeImpl
uses gw.gosudoc.type.GSPrimitiveTypeImpl
uses gw.gosudoc.type.GSTypeImpl
uses gw.gosudoc.type.GSTypeVariableImpl
uses gw.gosudoc.type.GSVoidTypeImpl
uses gw.lang.reflect.IConstructorInfo
uses gw.lang.reflect.IFunctionType
uses gw.lang.reflect.IMethodInfo
uses gw.lang.reflect.IPropertyInfo
uses gw.lang.reflect.IType
uses gw.lang.reflect.ITypeVariableType
uses gw.lang.reflect.TypeSystem
uses gw.lang.reflect.gs.IGosuClass
uses java.io.File
uses java.lang.System
uses java.util.ArrayList
uses java.util.HashMap
uses java.util.IdentityHashMap
uses java.util.regex.Pattern
class GSRootDocImpl extends GSDocImpl implements RootDoc{
static var GOSU_SOURCE_EXTENSIONS : Set<String> = {"gs", "gsx"}
var _voidType = new GSVoidTypeImpl( this )
var _typesByType = new IdentityHashMap<IType, GSTypeImpl>()
var _packagesByName = new HashMap<String, GSPackageDocImpl>().toAutoMap( \name -> new GSPackageDocImpl( name, this ) )
// Config info
var _externalDocs : List<String> as ExternalDocs = {}
private final var _filesToDoc: Set<String>
var _exclusions : List<Pattern> as Exclusions = {}
var _outputDirectory : File as OutputDirectory
var _typeFilters : List<TypeFilter> as readonly TypeFilters = {}
var _methodFilters : List<MethodFilter> as readonly MethodFilters = {}
var _propertyFilters : List<PropertyFilter> as readonly PropertyFilters = {}
var _constructorFilters : List<ConstructorFilter> as readonly ConstructorFilters = {}
var _featureFilters : List<FeatureFilter> as readonly FeatureFilters = {}
var _pathFilters : List<PathFilter> as readonly PathFilters = {}
var _verbose: boolean
construct( inputDirs : List<File>, outputDir: File, filters : List = null,
externalDocs : List<String> = null, verbose = false ){
super( "Root", null )
_verbose = verbose
_outputDirectory = outputDir
_externalDocs = externalDocs
if(filters != null) {
initFilters(filters)
}
_filesToDoc = typesToDoc(inputDirs)
}
private function typesToDoc(inputDirs: List<File>): Set<String> {
    debug(\-> "Beginning source file scan")
var types = new HashSet<String>()
for(file in inputDirs) {
addTypesToDoc(file, file, types);
}
    debug(\-> "Finished source file scan")
return types
}
private function addTypesToDoc(root : File, file: File, types: HashSet<String>) {
if(shouldIncludeFile(file.AbsolutePath)) {
if(file.Directory) {
debug(\-> " Scanning " + file.AbsolutePath)
for(child in file.Children) {
addTypesToDoc(root, child, types)
}
} else {
if(GOSU_SOURCE_EXTENSIONS.contains(file.Extension)) {
var typeName = computeTypeName(root, file)
debug(\-> " Adding " + file.AbsolutePath + " as " + typeName)
types.add(typeName)
}
}
} else {
debug(\-> " Ignoring " + file.AbsolutePath)
}
}
private function debug(s: block():String) {
if(_verbose) {
print(s())
}
}
private function computeTypeName(root: File, file: File): String {
var absolutePath = file.AbsolutePath
var relativePath = absolutePath.substring(root.AbsolutePath.length )
var withoutExtension = relativePath.substring(1, relativePath.length - file.Extension.length - 1 )
var asTypeName = withoutExtension.replace(File.separatorChar, '.')
return asTypeName
}
private function shouldIncludeFile(absolutePath: String): boolean {
return not _pathFilters.hasMatch(\ f -> not f.shouldIncludePath( absolutePath ) )
}
private function initFilters( filters: List<Object> ){
for(o in filters) {
if(o typeis TypeFilter) TypeFilters.add(o)
if(o typeis MethodFilter ) MethodFilters.add(o)
if(o typeis PropertyFilter ) PropertyFilters.add(o)
if(o typeis ConstructorFilter ) ConstructorFilters.add(o)
if(o typeis FeatureFilter ) FeatureFilters.add(o)
}
}
function shouldDocumentType( iType : Type ): boolean{
var shouldDoc = shouldDocumentTypeImpl(iType);
debug(\ -> iType.Name + " - document : " + shouldDoc)
return shouldDoc;
}
private function shouldDocumentTypeImpl(iType: Type): boolean{
if(isExcluded( iType.Name )) {
return false
}
for(filter in TypeFilters) {
if(not filter.shouldIncludeType( iType )) {
return false
}
}
if(iType typeis IGosuClass ) {
if(_filesToDoc.contains(iType.Name)) {
return true;
}
if(iType.EnclosingType typeis Type) {
return shouldDocumentType(iType.EnclosingType)
}
}
return false
}
function isExcluded( name: String ): boolean{
return _exclusions.hasMatch( \p -> p.matcher( name ).find() ) || isSynthetic( name )
}
function isSynthetic( name: String ): boolean{
return name.endsWith( ".PLACEHOLDER" ) ||
name.startsWith( "_proxy_" ) ||
name.startsWith( "OSGI-OPT" ) ||
name.contains( ".block_" ) ||
name.contains( ".AnonymouS__" ) ||
name.contains( ".ProxyFor_" ) ||
name.equals( 'Key' )
}
override function options(): String[][]{
var l = new ArrayList<String[]>()
for( externalJavadoc in _externalDocs ){
l.add( {"-link", externalJavadoc} )
}
l.add( {"-d", _outputDirectory.getCanonicalPath()} )
return l.toTypedArray()
}
function genDocs(){
printNotice( "Loading types to generate GosuDoc" )
for( typeName in TypeSystem.getAllTypeNames() ){
try{
if(!isExcluded( typeName.toString() )) {
var iType = TypeSystem.getByFullName( typeName.toString() )
if( shouldDocumentType( iType ) ){
getOrCreateClass( iType )
}
}
} catch( e ){
printWarning( "Could not load type ${typeName}: ${e.Message}" )
}
}
}
override function specifiedPackages(): PackageDoc[]{
return getAllPackages()
}
override function specifiedClasses(): ClassDoc[]{
return classes()
}
override function classes(): ClassDoc[]{
return specifiedPackages().flatMap( \elt -> elt.allClasses() )
}
override function packageNamed( name: String ): GSPackageDocImpl{
return _packagesByName.get( name )
}
override function classNamed( qualifiedName: String ): ClassDoc{
var packageName = getPackageNameFromTypeName( qualifiedName )
return packageNamed( packageName ).findClass( qualifiedName )
}
override function printError( msg: String ){
    System.err.println( "ERROR: ${msg}" )
}
override function printError( pos: SourcePosition, msg: String ){
System.err.println( "ERROR: ${msg} ${processPos( pos )}" )
}
override function printWarning( msg: String ){
if( warningIsUnexpected( msg ) ){
      System.err.println( "WARNING: ${msg}" )
}
}
override function printWarning( pos: SourcePosition, msg: String ){
if( warningIsUnexpected( msg ) ){
System.err.println( "WARNING: ${msg} ${processPos( pos )}" )
}
}
override function printNotice( msg: String ){
System.out.println( msg )
}
override function printNotice( pos: SourcePosition, msg: String ){
System.out.println( "${msg} ${pos}" )
}
function getOrCreateClass( iType: IType ): GSClassDocImpl {
var baseType = getBaseClassType( iType )
var className = iType.Name
var pack = packageNamed( baseType.getNamespace() ?: "" )
var clazz = pack.findClass( className )
if( clazz == null ){
clazz = pack.addClass( baseType )
}
return clazz
}
// Returns empty string for default package.
function getPackageNameFromTypeName( fqtn: String ): String{
var i = fqtn.lastIndexOf( "." )
var packagePortion = i >= 0 ? fqtn.substring( 0, i ) : ""
return packagePortion
}
function getType( type: IType, owner: GSProgramElementDocImpl ): GSTypeImpl {
if( type == null ){
// there are some weird methods that have null return type.
return _voidType
}
var typeImpl = _typesByType.get( type )
if( typeImpl == null ){
if( type.isArray() ){
typeImpl = new GSArrayTypeImpl( type, this, owner )
} else if( isVoid( type ) ){
typeImpl = _voidType
} else if( type.isPrimitive() ){
typeImpl = new GSPrimitiveTypeImpl( type, this, owner )
} else if( type typeis IFunctionType ){ // handle blocks, etc.
typeImpl = new GSFunctionalTypeImpl( type, this, owner )
} else if( type typeis ITypeVariableType ){
typeImpl = new GSTypeVariableImpl( type.Name, type.BoundingType, this, owner )
} else if( type.isParameterizedType() ){
typeImpl = new GSParameterizedTypeImpl( type, this, owner )
} else{
typeImpl = new GSClassTypeImpl( type, this, owner )
}
_typesByType.put( type, typeImpl )
typeImpl.initialize()
}
return typeImpl
}
function getAllPackages(): PackageDoc[]{
return _packagesByName.values().where( \elt -> elt.Included ).toTypedArray()
}
private function getBaseClassType( iType: IType ): IType {
if( iType == null ){
return null
}
if( iType.isArray() ){
return getBaseClassType( iType.getComponentType() )
}
if( iType.isPrimitive() ){
return iType
}
if( iType typeis ITypeVariableType ){
return getBaseClassType( iType.BoundingType )
}
if( iType.isParameterizedType() ){
return getBaseClassType( iType.getGenericType() )
}
return iType
}
private function processPos( pos: SourcePosition ) : Object {
return pos ?: ""
}
private function warningIsUnexpected( msg: String ): boolean{
return !(msg.startsWith( "Parameter" ) && msg.contains( " is documented more than once" ))
}
function shouldDocumentConstructor( iConstructorInfo: IConstructorInfo ): boolean{
for(filter in ConstructorFilters) {
if(not filter.shouldIncludeConstructor( iConstructorInfo )) {
return false
}
}
for(filter in FeatureFilters) {
if(not filter.shouldIncludeFeature( iConstructorInfo )) {
return false
}
}
return true
}
function shouldDocumentProperty( iPropertyInfo: IPropertyInfo ): boolean{
for(filter in PropertyFilters) {
if(not filter.shouldIncludeProperty( iPropertyInfo )) {
return false
}
}
for(filter in FeatureFilters) {
if(not filter.shouldIncludeFeature( iPropertyInfo )) {
return false
}
}
return true
}
function shouldDocumentMethod( iMethodInfo: IMethodInfo ): boolean{
for(filter in MethodFilters) {
if(not filter.shouldIncludeMethod( iMethodInfo )) {
return false
}
}
for(filter in FeatureFilters) {
if(not filter.shouldIncludeFeature( iMethodInfo )) {
return false
}
}
return true
}
}
{
"index": 14,
"lineNumber": 1,
"column": 15,
"message": "Error: Line 1: Unexpected token -"
}
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Color Palettes in Scribus (1)</title>
</head>
<style>
@import "manual.css";
</style>
<body>
<h2>Color Palettes in Scribus (1): Open Source Palettes</h2>
<p>Note that all color palettes described here and in subsequent sections include three basic colors: 100% CMYK Black (C: 100, M: 100, Y: 100, K: 100), 100% CMYK White (C: 0, M: 0, Y: 0, K: 0), and the <a href="color1.html">Registration Color</a>. These will not be counted as separate colors in the tables.</p>
<p>It is also important to note that no physical reference (color fan, color chips) exists for the colors in the palettes listed on this page so verification of color correctness is impossible.</p><br>
<table border="1" cellpadding="8" cellspacing="0" bordercolor="#888888">
<tbody>
<tr valign=top>
<td bgcolor="#eeeeee" align="center" valign="middle"><p><b>Name</b></p></td>
<td bgcolor="#eeeeee" align="center" valign="middle"><p><b>Description</b></p></td>
<td bgcolor="#eeeeee" align="center" valign="middle"><p><b>Number of Colors</b></p></td>
<td bgcolor="#eeeeee" align="center" valign="middle"><p><b>Color Model</b></p></td>
<td bgcolor="#eeeeee" align="center" valign="middle"><p><b>Spot</b></p></td>
</tr>
<tr valign=top>
<td><p><b>Scribus Small</b></p></td>
<td><p>A set of primary CMYK colors. These are based on a mix of percentages of the standard CMYK inks.</p></td>
<td><p>6</p></td>
<td><p>CMYK</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>Scribus Basic</b></p></td>
<td><p>A set of primary CMYK and RGB colors, as well as three variants of black, namely “Cool Black”, “Rich Black” and “Warm Black”. The latter consist of a mix of C, M, Y and K colors instead of just 100% K (K=Black) and the values are the result of intensive discussions between the Scribus Team and commercial printers. The “Rich Black” variants may not meet every printer’s needs, but they can serve as a basis when it comes to printing a rich black. More “Rich Black” versions are available in the <a href="color7a.html">Galaxy Gauge™ Neutrals and Rich Blacks</a> palette.</p></td>
<td><p>9</p></td>
<td><p>RGB and CMYK</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>All Color</b></p></td>
<td><p>A collection of a wide range of RGB colors as traditionally available in image editing software like Photoshop.</p></td>
<td><p>734</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>Android TM</b></p></td>
<td><p>The default color scheme used for <a href="http://developer.android.com/design/downloads/index.html">Android™</a> development and useful in Android-related publications.</p></td>
<td><p>18</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>Classic Kit</b></p></td>
<td><p>A color set that tries to mimic the colors used in traditional art.</p></td>
<td><p>78</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>Creative Commons 2013</b></p></td>
<td><p>The color scheme used by <a href="http://wiki.creativecommons.org/Colors">Creative Commons</a> on their website and publications.</p></td>
<td><p>42</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<!-- <tr valign=top>
<td><p><b>Fedora RGB</b></p></td>
<td><p>A set of named RGB colors that are mandatory for publications of the <a href="http://fedoraproject.org/wiki/Logo/UsageGuidelines">Fedora Linux distribution</a>. </p></td>
<td><p>50</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>Fedora CMYK</b></p></td>
<td><p>A set of named CMYK colors that are mandatory for publications of the <a href="http://fedoraproject.org/wiki/Logo/UsageGuidelines">Fedora Linux distribution</a>. </p></td>
<td><p>50</p></td>
<td><p>CMYK</p></td>
<td><p>No</p></td>
</tr> -->
<tr valign=top>
<td><p><b>Gnome</b></p></td>
<td><p>A set of named RGB colors that are mandatory for <a href="http://developer.gnome.org/hig-book/3.0/design-color.html.en">Gnome</a> desktop applications. </p></td>
<td><p>32</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>Inkscape</b></p></td>
<td><p>The default RGB color set of <a href="http://inkscape.org">Inkscape</a>. </p></td>
<td><p>431</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>LaTeX-Beamer</b></p></td>
<td><p>The default RGB color set of the <a href="http://latex-beamer.sourceforge.net/">LaTeX Beamer</a> class used in LaTeX presentations. </p></td>
<td><p>136</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>LibreOffice</b></p></td>
<td><p>A set of named RGB colors used for project “branding” by <a href="http://wiki.documentfoundation.org/Marketing/Branding#Colors">LibreOffice</a>. </p></td>
<td><p>30</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>OpenOffice dot org CMYK</b></p></td>
<td><p>A set of CMYK colors based on the “CMYK” palette found in <a href="http://en.openoffice.org">OpenOffice.org</a>.</p></td>
<td><p>94</p></td>
<td><p>CMYK</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>OpenOffice dot org Galaxy</b></p></td>
<td><p>A set of RGB colors used in <a href="http://en.openoffice.org">OpenOffice.org</a> since version 3.x.</p></td>
<td><p>53</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>OpenSUSE</b></p></td>
<td><p>A set of named RGB colors that are mandatory for publications of the <a href="http://en.opensuse.org/openSUSE:Artwork_guidelines#Colors">OpenSUSE Linux distribution</a>.</p></td>
<td><p>15</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>Oxygen</b></p></td>
<td><p>A set of named RGB colors used in KDE 4’s <a href="http://techbase.kde.org/Projects/Oxygen/Style#Color_Usage">Oxygen</a> desktop theme.</p></td>
<td><p>126</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>Scribus Splash</b> </p></td>
<td><p>The CMYK version of the colors used in the Scribus logo and the splashscreen. Its main purpose is to guarantee a consistent use of colors in Scribus-related publications.</p></td>
<td><p>19</p></td>
<td><p>CMYK</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>Shades of K</b> </p></td>
<td><p>A color palette with different degrees of shades of process color black (K), ranging from 1% to 99% K.</p></td>
<td><p>100</p></td>
<td><p>CMYK</p></td>
<td><p>No</p></td>
</tr>
<tr>
<td><p><b>SVG</b></p></td>
<td><p>A set of RGB colors based on the named colors defined in the <a href="http://www.w3.org/TR/css3-color/#svg-color">SVG specification</a>.</p></td>
<td><p>149</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr>
<td><p><b>Tango</b></p></td>
<td><p>A set of the RGB colors used in the <a href="http://tango.freedesktop.org/Tango_Icon_Theme_Guidelines">Tango icon project</a> for Free desktops. Since Scribus uses the Tango icon set, this color set may also be interesting for Scribus-related projects.</p></td>
<td><p>68</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr>
<td><p><b>Ubuntu RGB</b></p></td>
<td><p>A set of named RGB colors that are mandatory for publications of the <a href="http://design.canonical.com/the-toolkit/ubuntu-brand-guidelines/">Ubuntu Linux distribution</a>.</p></td>
<td><p>4</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr>
<td><p><b>Ubuntu CMYK</b></p></td>
<td><p>A set of named CMYK colors that are mandatory for publications of the <a href="http://design.canonical.com/the-toolkit/ubuntu-brand-guidelines/">Ubuntu Linux distribution</a>.</p></td>
<td><p>4</p></td>
<td><p>CMYK</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>X11</b></p></td>
<td><p>A set of RGB colors based on the named colors from <a href="http://www.xfree86.org/current/X.7.html#toc10">X-Window</a>.</p></td>
<td><p>550</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr valign=top>
<td><p><b>X11 Grey</b></p></td>
<td><p>A set of grayscale shades based on the named shades from X-Window.</p></td>
<td><p>110</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
<tr>
<td><p><b>Xfig</b></p></td>
<td><p>A set of the named RGB colors used by <a href="http://www.xfig.org/">Xfig</a>.</p></td>
<td><p>32</p></td>
<td><p>RGB</p></td>
<td><p>No</p></td>
</tr>
</tbody>
</table>
<br>
<br>
</body>
</html>
| {
"pile_set_name": "Github"
} |
#include "bsmemory/bsmemory.hpp"
#include "sufamiturbo/sufamiturbo.hpp"
package org.camunda.bpm.example.invoice;
import org.camunda.bpm.engine.delegate.CaseExecutionListener;
import org.camunda.bpm.engine.delegate.DelegateCaseExecution;
import org.camunda.bpm.example.cmis.CMISConnector;
public class CaseGetDocuments implements CaseExecutionListener {
@Override
public void notify(DelegateCaseExecution execution) throws Exception {
String folderName = (String) execution.getVariable("folderName");
if (folderName != null) {
CMISConnector cmis = new CMISConnector();
String linkObject = cmis.getFileLinks(folderName);
execution.setVariable("files", linkObject);
} else {
execution.setVariable("files", "nofiles");
}
}
}
@keyframes scaleUp {
0% {
transform: scaleY(.8);
}
100% {
transform: scaleY(1);
}
}
/***************************************************************
Copyright (C) 2006-2009 Hewlett-Packard Development Company, L.P.
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
version 2 as published by the Free Software Foundation.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
***************************************************************/
#ifndef _PARSE_H
#define _PARSE_H
#include "doctorBuffer_utils.h"
char *parseLicenses(char *filetext, int size, scanres_t *scp, int isML, int isPS);
#endif /* _PARSE_H */
// +build !luminous,!mimic
//
// Ceph Nautilus is the first release that includes rbd_namespace_create(),
// rbd_namespace_remove(), rbd_namespace_exists() and rbd_namespace_list().
package rbd
// #cgo LDFLAGS: -lrbd
// #include <rados/librados.h>
// #include <rbd/librbd.h>
// #include <stdlib.h>
// #include <errno.h>
import "C"
import (
"unsafe"
"github.com/ceph/go-ceph/internal/cutil"
"github.com/ceph/go-ceph/internal/retry"
"github.com/ceph/go-ceph/rados"
)
// NamespaceCreate creates the namespace for a given Rados IOContext.
//
// Implements:
// int rbd_namespace_create(rados_ioctx_t io, const char *namespace_name);
func NamespaceCreate(ioctx *rados.IOContext, namespaceName string) error {
if ioctx == nil {
return ErrNoIOContext
}
if namespaceName == "" {
return ErrNoNamespaceName
}
cNamespaceName := C.CString(namespaceName)
defer C.free(unsafe.Pointer(cNamespaceName))
ret := C.rbd_namespace_create(cephIoctx(ioctx), cNamespaceName)
return getError(ret)
}
// NamespaceRemove removes a given namespace.
//
// Implements:
// int rbd_namespace_remove(rados_ioctx_t io, const char *namespace_name);
func NamespaceRemove(ioctx *rados.IOContext, namespaceName string) error {
if ioctx == nil {
return ErrNoIOContext
}
if namespaceName == "" {
return ErrNoNamespaceName
}
cNamespaceName := C.CString(namespaceName)
defer C.free(unsafe.Pointer(cNamespaceName))
ret := C.rbd_namespace_remove(cephIoctx(ioctx), cNamespaceName)
return getError(ret)
}
// NamespaceExists checks whether a given namespace exists or not.
//
// Implements:
// int rbd_namespace_exists(rados_ioctx_t io, const char *namespace_name, bool *exists);
func NamespaceExists(ioctx *rados.IOContext, namespaceName string) (bool, error) {
if ioctx == nil {
return false, ErrNoIOContext
}
if namespaceName == "" {
return false, ErrNoNamespaceName
}
cNamespaceName := C.CString(namespaceName)
defer C.free(unsafe.Pointer(cNamespaceName))
var exists C.bool
ret := C.rbd_namespace_exists(cephIoctx(ioctx), cNamespaceName, &exists)
return bool(exists), getErrorIfNegative(ret)
}
// NamespaceList returns a slice containing the names of existing rbd namespaces.
//
// Implements:
// int rbd_namespace_list(rados_ioctx_t io, char *namespace_names, size_t *size);
func NamespaceList(ioctx *rados.IOContext) (names []string, err error) {
if ioctx == nil {
return nil, ErrNoIOContext
}
var (
buf []byte
cSize C.size_t
)
retry.WithSizes(4096, 262144, func(size int) retry.Hint {
cSize = C.size_t(size)
buf = make([]byte, cSize)
ret := C.rbd_namespace_list(cephIoctx(ioctx),
(*C.char)(unsafe.Pointer(&buf[0])),
&cSize)
err = getErrorIfNegative(ret)
return retry.Size(int(cSize)).If(err == errRange)
})
if err != nil {
return nil, err
}
names = cutil.SplitSparseBuffer(buf[:cSize])
return names, nil
}
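The NamespaceList wrapper above uses a grow-the-buffer retry loop: the C call reports the required size through its size argument and returns -ERANGE when the caller's buffer is too small, so the Go code retries with the reported size. A minimal Python sketch of the same pattern follows; the `list_names` stub standing in for `rbd_namespace_list` is a hypothetical example, not the real binding:

```python
ERANGE = 34

def list_names(buf_size, names=("ns1", "ns2", "ns3")):
    """Stand-in for rbd_namespace_list: returns (errno, needed, payload)."""
    payload = b"".join(n.encode() + b"\x00" for n in names)
    if buf_size < len(payload):
        return -ERANGE, len(payload), b""
    return 0, len(payload), payload

def namespace_list(start=8, limit=262144):
    size = start
    while size <= limit:
        ret, needed, payload = list_names(size)
        if ret == -ERANGE:
            # retry with at least the size the call asked for
            size = max(needed, size * 2)
            continue
        if ret < 0:
            raise OSError(-ret, "rbd_namespace_list failed")
        # split the NUL-separated sparse buffer into names
        return [s.decode() for s in payload.split(b"\x00") if s]
    raise OSError(ERANGE, "buffer limit exceeded")
```

The two-bound loop mirrors `retry.WithSizes(4096, 262144, ...)`: a cheap first attempt, then growth capped at a hard limit so a misbehaving call cannot allocate unboundedly.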
namespace Ice\I18n\Plural;
/**
* Plural rules for Polish language:
*
* Locales: pl
*
* Rules:
* one → n is 1;
* few → n mod 10 in 2..4 and n mod 100 not in 12..14 and n mod 100 not in 22..24;
* other → everything else (fractions)
*
* Reference CLDR Version 21 (2012-03-01 03:27:30 GMT)
* @see http://unicode.org/repos/cldr-tmp/trunk/diff/supplemental/language_plural_rules.html
* @see http://unicode.org/repos/cldr/trunk/common/supplemental/plurals.xml
*
* @package Ice/I18n
* @category Plural rules
* @author Ice Team
* @copyright (c) 2014-2020 Ice Team
* @license http://iceframework.org/license
*/
class Polish implements PluralInterface
{
public function getCategory(int count) -> string
{
var isInt;
int i10, i100;
let isInt = this->isInt(count),
i10 = count % 10,
i100 = count % 100;
if count == 1 {
return "one";
} elseif isInt && i10 >= 2 && i10 <= 4 && !(i100 >= 12 && i100 <= 14) && !(i100 >= 22 && i100 <= 24) {
return "few";
} else {
return "other";
}
}
protected function isInt(value)
{
return is_numeric(value) && value - intval(value) == 0;
}
}
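The one/few/other decision documented above can be mirrored outside Zephir. A hedged Python sketch of the same rule follows (like the class above, it intentionally omits CLDR's "many" category and routes fractions to "other"):

```python
def polish_category(count):
    """Mirror of Polish.getCategory: returns 'one', 'few' or 'other'."""
    is_int = float(count) == int(count)
    i10, i100 = int(count) % 10, int(count) % 100
    if count == 1:
        return "one"
    # n mod 10 in 2..4, excluding n mod 100 in 12..14 and 22..24
    if is_int and 2 <= i10 <= 4 and not (12 <= i100 <= 14) and not (22 <= i100 <= 24):
        return "few"
    return "other"
```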
/*
* Copyright 2016-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.vault.authentication;
import java.net.URI;
import java.util.Arrays;
import java.util.UUID;
import org.springframework.lang.Nullable;
import org.springframework.util.Assert;
/**
* Authentication options for {@link AwsEc2Authentication}.
* <p>
* Authentication options provide the path, the Identity Document URI and an optional
* role. {@link AwsEc2AuthenticationOptions} can be constructed using {@link #builder()}.
* Instances of this class are immutable once constructed.
*
* @author Mark Paluch
* @see AwsEc2Authentication
* @see #builder()
*/
public class AwsEc2AuthenticationOptions {
public static final URI DEFAULT_PKCS7_IDENTITY_DOCUMENT_URI = URI
.create("http://169.254.169.254/latest/dynamic/instance-identity/pkcs7");
public static final String DEFAULT_AWS_AUTHENTICATION_PATH = "aws-ec2";
/**
* Default {@link AwsEc2AuthenticationOptions} using
* {@link #DEFAULT_AWS_AUTHENTICATION_PATH} and
* {@link #DEFAULT_PKCS7_IDENTITY_DOCUMENT_URI}.
*/
public static final AwsEc2AuthenticationOptions DEFAULT = new AwsEc2AuthenticationOptions();
/**
* Path of the aws-ec2 authentication backend mount.
*/
private final String path;
/**
* {@link URI} to the AWS EC2 PKCS#7-signed identity document.
*/
private final URI identityDocumentUri;
/**
* EC2 instance role name. May be {@literal null} if none.
*/
@Nullable
private final String role;
/**
* Authentication nonce.
*/
private final Nonce nonce;
private AwsEc2AuthenticationOptions() {
this(DEFAULT_AWS_AUTHENTICATION_PATH, DEFAULT_PKCS7_IDENTITY_DOCUMENT_URI, "", Nonce.generated());
}
private AwsEc2AuthenticationOptions(String path, URI identityDocumentUri, @Nullable String role, Nonce nonce) {
this.path = path;
this.identityDocumentUri = identityDocumentUri;
this.role = role;
this.nonce = nonce;
}
/**
* @return a new {@link AwsEc2AuthenticationOptionsBuilder}.
*/
public static AwsEc2AuthenticationOptionsBuilder builder() {
return new AwsEc2AuthenticationOptionsBuilder();
}
/**
* @return the path of the aws-ec2 authentication backend mount.
*/
public String getPath() {
return this.path;
}
/**
* @return the {@link URI} to the AWS EC2 PKCS#7-signed identity document.
*/
public URI getIdentityDocumentUri() {
return this.identityDocumentUri;
}
/**
* @return the role, may be {@literal null} if none.
*/
@Nullable
public String getRole() {
return this.role;
}
/**
* @return the configured {@link Nonce}.
*/
public Nonce getNonce() {
return this.nonce;
}
/**
* Builder for {@link AwsEc2AuthenticationOptionsBuilder}.
*/
public static class AwsEc2AuthenticationOptionsBuilder {
private String path = DEFAULT_AWS_AUTHENTICATION_PATH;
private URI identityDocumentUri = DEFAULT_PKCS7_IDENTITY_DOCUMENT_URI;
@Nullable
private String role;
private Nonce nonce = Nonce.generated();
AwsEc2AuthenticationOptionsBuilder() {
}
/**
* Configure the mount path.
* @param path must not be empty or {@literal null}.
* @return {@code this} {@link AwsEc2AuthenticationOptionsBuilder}.
*/
public AwsEc2AuthenticationOptionsBuilder path(String path) {
Assert.hasText(path, "Path must not be empty");
this.path = path;
return this;
}
/**
* Configure the Identity Document {@link URI}.
* @param identityDocumentUri must not be {@literal null}.
* @return {@code this} {@link AwsEc2AuthenticationOptionsBuilder}.
* @see #DEFAULT_PKCS7_IDENTITY_DOCUMENT_URI
*/
public AwsEc2AuthenticationOptionsBuilder identityDocumentUri(URI identityDocumentUri) {
Assert.notNull(identityDocumentUri, "Identity document URI must not be null");
this.identityDocumentUri = identityDocumentUri;
return this;
}
/**
	 * Configure the name of the role against which the login is being attempted. If
* role is not specified, then the login endpoint looks for a role bearing the
* name of the AMI ID of the EC2 instance that is trying to login.
* @param role may be empty or {@literal null}.
* @return {@code this} {@link AwsEc2AuthenticationOptionsBuilder}.
*/
public AwsEc2AuthenticationOptionsBuilder role(@Nullable String role) {
this.role = role;
return this;
}
/**
* Configure a {@link Nonce} for login requests. Defaults to
* {@link Nonce#generated()}.
* @param nonce must not be {@literal null}.
* @return {@code this} {@link AwsEc2AuthenticationOptionsBuilder}.
* @since 1.1
*/
public AwsEc2AuthenticationOptionsBuilder nonce(Nonce nonce) {
Assert.notNull(nonce, "Nonce must not be null");
this.nonce = nonce;
return this;
}
/**
* Build a new {@link AwsEc2AuthenticationOptions} instance.
* @return a new {@link AwsEc2AuthenticationOptions}.
*/
public AwsEc2AuthenticationOptions build() {
Assert.notNull(this.identityDocumentUri, "IdentityDocumentUri must not be null");
return new AwsEc2AuthenticationOptions(this.path, this.identityDocumentUri, this.role, this.nonce);
}
}
/**
* Value object for an authentication nonce.
*
* @since 1.1
*/
public static class Nonce {
private final char[] value;
protected Nonce(char[] value) {
this.value = value;
}
/**
* Create a new generated {@link Nonce} using {@link UUID}.
* @return a new generated {@link Nonce} using {@link UUID}.
*/
public static Nonce generated() {
return new Generated();
}
/**
	 * Create a wrapped {@link Nonce} given a {@code nonce} value.
	 * @param nonce the nonce value, must not be {@literal null}.
	 * @return a wrapped {@link Nonce} for the given {@code nonce} value.
*/
public static Nonce provided(char[] nonce) {
Assert.notNull(nonce, "Nonce must not be null");
return new Provided(Arrays.copyOf(nonce, nonce.length));
}
/**
* @return the nonce value.
*/
public char[] getValue() {
return this.value;
}
static class Generated extends Nonce {
Generated() {
super(UUID.randomUUID().toString().toCharArray());
}
}
static class Provided extends Nonce {
Provided(char[] nonce) {
super(nonce);
}
}
}
}
import Vapor
extension Request {
var leaf: LeafRenderer {
var userInfo = self.application.leaf.userInfo
userInfo["request"] = self
userInfo["application"] = self.application
return .init(
configuration: self.application.leaf.configuration,
tags: self.application.leaf.tags,
cache: self.application.leaf.cache,
sources: self.application.leaf.sources,
eventLoop: self.eventLoop,
userInfo: userInfo
)
}
}
extension LeafContext {
public var request: Request? {
self.userInfo["request"] as? Request
}
}
/*
* Scalar fixed time AES core transform
*
* Copyright (C) 2017 Linaro Ltd <[email protected]>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <crypto/aes.h>
#include <linux/crypto.h>
#include <linux/module.h>
#include <asm/unaligned.h>
/*
* Emit the sbox as volatile const to prevent the compiler from doing
* constant folding on sbox references involving fixed indexes.
*/
static volatile const u8 __cacheline_aligned __aesti_sbox[] = {
0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5,
0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76,
0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0,
0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0,
0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc,
0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15,
0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a,
0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75,
0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0,
0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84,
0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b,
0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf,
0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85,
0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8,
0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5,
0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2,
0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17,
0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73,
0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88,
0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb,
0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c,
0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79,
0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9,
0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08,
0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6,
0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a,
0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e,
0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e,
0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94,
0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf,
0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68,
0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16,
};
static volatile const u8 __cacheline_aligned __aesti_inv_sbox[] = {
0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38,
0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb,
0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87,
0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb,
0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d,
0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e,
0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2,
0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25,
0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16,
0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92,
0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda,
0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84,
0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a,
0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06,
0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02,
0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b,
0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea,
0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73,
0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85,
0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e,
0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89,
0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b,
0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20,
0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4,
0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31,
0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f,
0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d,
0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef,
0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0,
0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61,
0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26,
0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d,
};
static u32 mul_by_x(u32 w)
{
u32 x = w & 0x7f7f7f7f;
u32 y = w & 0x80808080;
/* multiply by polynomial 'x' (0b10) in GF(2^8) */
return (x << 1) ^ (y >> 7) * 0x1b;
}
static u32 mul_by_x2(u32 w)
{
u32 x = w & 0x3f3f3f3f;
u32 y = w & 0x80808080;
u32 z = w & 0x40404040;
/* multiply by polynomial 'x^2' (0b100) in GF(2^8) */
return (x << 2) ^ (y >> 7) * 0x36 ^ (z >> 6) * 0x1b;
}
static u32 mix_columns(u32 x)
{
/*
* Perform the following matrix multiplication in GF(2^8)
*
* | 0x2 0x3 0x1 0x1 | | x[0] |
* | 0x1 0x2 0x3 0x1 | | x[1] |
* | 0x1 0x1 0x2 0x3 | x | x[2] |
* | 0x3 0x1 0x1 0x2 | | x[3] |
*/
u32 y = mul_by_x(x) ^ ror32(x, 16);
return y ^ ror32(x ^ y, 8);
}
static u32 inv_mix_columns(u32 x)
{
/*
* Perform the following matrix multiplication in GF(2^8)
*
* | 0xe 0xb 0xd 0x9 | | x[0] |
* | 0x9 0xe 0xb 0xd | | x[1] |
* | 0xd 0x9 0xe 0xb | x | x[2] |
* | 0xb 0xd 0x9 0xe | | x[3] |
*
* which can conveniently be reduced to
*
* | 0x2 0x3 0x1 0x1 | | 0x5 0x0 0x4 0x0 | | x[0] |
* | 0x1 0x2 0x3 0x1 | | 0x0 0x5 0x0 0x4 | | x[1] |
* | 0x1 0x1 0x2 0x3 | x | 0x4 0x0 0x5 0x0 | x | x[2] |
* | 0x3 0x1 0x1 0x2 | | 0x0 0x4 0x0 0x5 | | x[3] |
*/
u32 y = mul_by_x2(x);
return mix_columns(x ^ y ^ ror32(y, 16));
}
static __always_inline u32 subshift(u32 in[], int pos)
{
return (__aesti_sbox[in[pos] & 0xff]) ^
(__aesti_sbox[(in[(pos + 1) % 4] >> 8) & 0xff] << 8) ^
(__aesti_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^
(__aesti_sbox[(in[(pos + 3) % 4] >> 24) & 0xff] << 24);
}
static __always_inline u32 inv_subshift(u32 in[], int pos)
{
return (__aesti_inv_sbox[in[pos] & 0xff]) ^
(__aesti_inv_sbox[(in[(pos + 3) % 4] >> 8) & 0xff] << 8) ^
(__aesti_inv_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^
(__aesti_inv_sbox[(in[(pos + 1) % 4] >> 24) & 0xff] << 24);
}
static u32 subw(u32 in)
{
return (__aesti_sbox[in & 0xff]) ^
(__aesti_sbox[(in >> 8) & 0xff] << 8) ^
(__aesti_sbox[(in >> 16) & 0xff] << 16) ^
(__aesti_sbox[(in >> 24) & 0xff] << 24);
}
static int aesti_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
unsigned int key_len)
{
u32 kwords = key_len / sizeof(u32);
u32 rc, i, j;
if (key_len != AES_KEYSIZE_128 &&
key_len != AES_KEYSIZE_192 &&
key_len != AES_KEYSIZE_256)
return -EINVAL;
ctx->key_length = key_len;
for (i = 0; i < kwords; i++)
ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32));
for (i = 0, rc = 1; i < 10; i++, rc = mul_by_x(rc)) {
u32 *rki = ctx->key_enc + (i * kwords);
u32 *rko = rki + kwords;
rko[0] = ror32(subw(rki[kwords - 1]), 8) ^ rc ^ rki[0];
rko[1] = rko[0] ^ rki[1];
rko[2] = rko[1] ^ rki[2];
rko[3] = rko[2] ^ rki[3];
if (key_len == 24) {
if (i >= 7)
break;
rko[4] = rko[3] ^ rki[4];
rko[5] = rko[4] ^ rki[5];
} else if (key_len == 32) {
if (i >= 6)
break;
rko[4] = subw(rko[3]) ^ rki[4];
rko[5] = rko[4] ^ rki[5];
rko[6] = rko[5] ^ rki[6];
rko[7] = rko[6] ^ rki[7];
}
}
/*
* Generate the decryption keys for the Equivalent Inverse Cipher.
* This involves reversing the order of the round keys, and applying
* the Inverse Mix Columns transformation to all but the first and
* the last one.
*/
ctx->key_dec[0] = ctx->key_enc[key_len + 24];
ctx->key_dec[1] = ctx->key_enc[key_len + 25];
ctx->key_dec[2] = ctx->key_enc[key_len + 26];
ctx->key_dec[3] = ctx->key_enc[key_len + 27];
for (i = 4, j = key_len + 20; j > 0; i += 4, j -= 4) {
ctx->key_dec[i] = inv_mix_columns(ctx->key_enc[j]);
ctx->key_dec[i + 1] = inv_mix_columns(ctx->key_enc[j + 1]);
ctx->key_dec[i + 2] = inv_mix_columns(ctx->key_enc[j + 2]);
ctx->key_dec[i + 3] = inv_mix_columns(ctx->key_enc[j + 3]);
}
ctx->key_dec[i] = ctx->key_enc[0];
ctx->key_dec[i + 1] = ctx->key_enc[1];
ctx->key_dec[i + 2] = ctx->key_enc[2];
ctx->key_dec[i + 3] = ctx->key_enc[3];
return 0;
}
static int aesti_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
{
struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
int err;
err = aesti_expand_key(ctx, in_key, key_len);
if (err)
return err;
/*
* In order to force the compiler to emit data independent Sbox lookups
* at the start of each block, xor the first round key with values at
* fixed indexes in the Sbox. This will need to be repeated each time
* the key is used, which will pull the entire Sbox into the D-cache
* before any data dependent Sbox lookups are performed.
*/
ctx->key_enc[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128];
ctx->key_enc[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160];
ctx->key_enc[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192];
ctx->key_enc[3] ^= __aesti_sbox[96] ^ __aesti_sbox[224];
ctx->key_dec[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128];
ctx->key_dec[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160];
ctx->key_dec[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192];
ctx->key_dec[3] ^= __aesti_inv_sbox[96] ^ __aesti_inv_sbox[224];
return 0;
}
static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
const u32 *rkp = ctx->key_enc + 4;
int rounds = 6 + ctx->key_length / 4;
u32 st0[4], st1[4];
int round;
st0[0] = ctx->key_enc[0] ^ get_unaligned_le32(in);
st0[1] = ctx->key_enc[1] ^ get_unaligned_le32(in + 4);
st0[2] = ctx->key_enc[2] ^ get_unaligned_le32(in + 8);
st0[3] = ctx->key_enc[3] ^ get_unaligned_le32(in + 12);
st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128];
st0[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160];
st0[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192];
st0[3] ^= __aesti_sbox[96] ^ __aesti_sbox[224];
for (round = 0;; round += 2, rkp += 8) {
st1[0] = mix_columns(subshift(st0, 0)) ^ rkp[0];
st1[1] = mix_columns(subshift(st0, 1)) ^ rkp[1];
st1[2] = mix_columns(subshift(st0, 2)) ^ rkp[2];
st1[3] = mix_columns(subshift(st0, 3)) ^ rkp[3];
if (round == rounds - 2)
break;
st0[0] = mix_columns(subshift(st1, 0)) ^ rkp[4];
st0[1] = mix_columns(subshift(st1, 1)) ^ rkp[5];
st0[2] = mix_columns(subshift(st1, 2)) ^ rkp[6];
st0[3] = mix_columns(subshift(st1, 3)) ^ rkp[7];
}
put_unaligned_le32(subshift(st1, 0) ^ rkp[4], out);
put_unaligned_le32(subshift(st1, 1) ^ rkp[5], out + 4);
put_unaligned_le32(subshift(st1, 2) ^ rkp[6], out + 8);
put_unaligned_le32(subshift(st1, 3) ^ rkp[7], out + 12);
}
static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
const u32 *rkp = ctx->key_dec + 4;
int rounds = 6 + ctx->key_length / 4;
u32 st0[4], st1[4];
int round;
st0[0] = ctx->key_dec[0] ^ get_unaligned_le32(in);
st0[1] = ctx->key_dec[1] ^ get_unaligned_le32(in + 4);
st0[2] = ctx->key_dec[2] ^ get_unaligned_le32(in + 8);
st0[3] = ctx->key_dec[3] ^ get_unaligned_le32(in + 12);
st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128];
st0[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160];
st0[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192];
st0[3] ^= __aesti_inv_sbox[96] ^ __aesti_inv_sbox[224];
for (round = 0;; round += 2, rkp += 8) {
st1[0] = inv_mix_columns(inv_subshift(st0, 0)) ^ rkp[0];
st1[1] = inv_mix_columns(inv_subshift(st0, 1)) ^ rkp[1];
st1[2] = inv_mix_columns(inv_subshift(st0, 2)) ^ rkp[2];
st1[3] = inv_mix_columns(inv_subshift(st0, 3)) ^ rkp[3];
if (round == rounds - 2)
break;
st0[0] = inv_mix_columns(inv_subshift(st1, 0)) ^ rkp[4];
st0[1] = inv_mix_columns(inv_subshift(st1, 1)) ^ rkp[5];
st0[2] = inv_mix_columns(inv_subshift(st1, 2)) ^ rkp[6];
st0[3] = inv_mix_columns(inv_subshift(st1, 3)) ^ rkp[7];
}
put_unaligned_le32(inv_subshift(st1, 0) ^ rkp[4], out);
put_unaligned_le32(inv_subshift(st1, 1) ^ rkp[5], out + 4);
put_unaligned_le32(inv_subshift(st1, 2) ^ rkp[6], out + 8);
put_unaligned_le32(inv_subshift(st1, 3) ^ rkp[7], out + 12);
}
static struct crypto_alg aes_alg = {
.cra_name = "aes",
.cra_driver_name = "aes-fixed-time",
.cra_priority = 100 + 1,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_module = THIS_MODULE,
.cra_cipher.cia_min_keysize = AES_MIN_KEY_SIZE,
.cra_cipher.cia_max_keysize = AES_MAX_KEY_SIZE,
.cra_cipher.cia_setkey = aesti_set_key,
.cra_cipher.cia_encrypt = aesti_encrypt,
.cra_cipher.cia_decrypt = aesti_decrypt
};
static int __init aes_init(void)
{
return crypto_register_alg(&aes_alg);
}
static void __exit aes_fini(void)
{
crypto_unregister_alg(&aes_alg);
}
module_init(aes_init);
module_exit(aes_fini);
MODULE_DESCRIPTION("Generic fixed time AES");
MODULE_AUTHOR("Ard Biesheuvel <[email protected]>");
MODULE_LICENSE("GPL v2");
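`mul_by_x` above packs four GF(2^8) field elements into one u32 and doubles them all at once, reducing overflow bits with the AES polynomial 0x11b. A small Python sketch, illustrative only, checks the packed trick against a straightforward per-byte `xtime`:

```python
MASK32 = 0xFFFFFFFF

def mul_by_x_packed(w):
    """Double four GF(2^8) elements packed in a 32-bit word (mirrors mul_by_x)."""
    x = w & 0x7F7F7F7F          # low seven bits of each byte: safe to shift left
    y = w & 0x80808080          # top bits that would overflow on << 1
    return ((x << 1) ^ (y >> 7) * 0x1B) & MASK32

def xtime(b):
    """Reference per-byte doubling in GF(2^8) mod x^8 + x^4 + x^3 + x + 1."""
    b <<= 1
    if b & 0x100:
        b ^= 0x11B              # reduce the overflow bit
    return b & 0xFF
```

The multiplication `(y >> 7) * 0x1B` replicates the reduction constant into exactly the bytes whose top bit overflowed, which is why the four lanes never interfere.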
// Copyright (c) 2015-present Mattermost, Inc. All Rights Reserved.
// See LICENSE.txt for license information.
package sqlstore
import (
"database/sql"
"fmt"
"github.com/mattermost/gorp"
"github.com/mattermost/mattermost-server/v5/model"
"github.com/mattermost/mattermost-server/v5/store"
"github.com/pkg/errors"
)
type SqlOAuthStore struct {
SqlStore
}
func newSqlOAuthStore(sqlStore SqlStore) store.OAuthStore {
as := &SqlOAuthStore{sqlStore}
for _, db := range sqlStore.GetAllConns() {
table := db.AddTableWithName(model.OAuthApp{}, "OAuthApps").SetKeys(false, "Id")
table.ColMap("Id").SetMaxSize(26)
table.ColMap("CreatorId").SetMaxSize(26)
table.ColMap("ClientSecret").SetMaxSize(128)
table.ColMap("Name").SetMaxSize(64)
table.ColMap("Description").SetMaxSize(512)
table.ColMap("CallbackUrls").SetMaxSize(1024)
table.ColMap("Homepage").SetMaxSize(256)
table.ColMap("IconURL").SetMaxSize(512)
tableAuth := db.AddTableWithName(model.AuthData{}, "OAuthAuthData").SetKeys(false, "Code")
tableAuth.ColMap("UserId").SetMaxSize(26)
tableAuth.ColMap("ClientId").SetMaxSize(26)
tableAuth.ColMap("Code").SetMaxSize(128)
tableAuth.ColMap("RedirectUri").SetMaxSize(256)
tableAuth.ColMap("State").SetMaxSize(1024)
tableAuth.ColMap("Scope").SetMaxSize(128)
tableAccess := db.AddTableWithName(model.AccessData{}, "OAuthAccessData").SetKeys(false, "Token")
tableAccess.ColMap("ClientId").SetMaxSize(26)
tableAccess.ColMap("UserId").SetMaxSize(26)
tableAccess.ColMap("Token").SetMaxSize(26)
tableAccess.ColMap("RefreshToken").SetMaxSize(26)
tableAccess.ColMap("RedirectUri").SetMaxSize(256)
tableAccess.ColMap("Scope").SetMaxSize(128)
tableAccess.SetUniqueTogether("ClientId", "UserId")
}
return as
}
func (as SqlOAuthStore) createIndexesIfNotExists() {
as.CreateIndexIfNotExists("idx_oauthapps_creator_id", "OAuthApps", "CreatorId")
as.CreateIndexIfNotExists("idx_oauthaccessdata_client_id", "OAuthAccessData", "ClientId")
as.CreateIndexIfNotExists("idx_oauthaccessdata_user_id", "OAuthAccessData", "UserId")
as.CreateIndexIfNotExists("idx_oauthaccessdata_refresh_token", "OAuthAccessData", "RefreshToken")
as.CreateIndexIfNotExists("idx_oauthauthdata_client_id", "OAuthAuthData", "Code")
}
func (as SqlOAuthStore) SaveApp(app *model.OAuthApp) (*model.OAuthApp, error) {
if len(app.Id) > 0 {
return nil, store.NewErrInvalidInput("OAuthApp", "Id", app.Id)
}
app.PreSave()
if err := app.IsValid(); err != nil {
return nil, err
}
if err := as.GetMaster().Insert(app); err != nil {
return nil, errors.Wrap(err, "failed to save OAuthApp")
}
return app, nil
}
func (as SqlOAuthStore) UpdateApp(app *model.OAuthApp) (*model.OAuthApp, error) {
app.PreUpdate()
if err := app.IsValid(); err != nil {
return nil, err
}
oldAppResult, err := as.GetMaster().Get(model.OAuthApp{}, app.Id)
if err != nil {
return nil, errors.Wrapf(err, "failed to get OAuthApp with id=%s", app.Id)
}
if oldAppResult == nil {
return nil, store.NewErrInvalidInput("OAuthApp", "Id", app.Id)
}
oldApp := oldAppResult.(*model.OAuthApp)
app.CreateAt = oldApp.CreateAt
app.CreatorId = oldApp.CreatorId
count, err := as.GetMaster().Update(app)
if err != nil {
return nil, errors.Wrapf(err, "failed to update OAuthApp with id=%s", app.Id)
}
if count > 1 {
return nil, store.NewErrInvalidInput("OAuthApp", "Id", app.Id)
}
return app, nil
}
func (as SqlOAuthStore) GetApp(id string) (*model.OAuthApp, error) {
obj, err := as.GetReplica().Get(model.OAuthApp{}, id)
if err != nil {
return nil, errors.Wrapf(err, "failed to get OAuthApp with id=%s", id)
}
if obj == nil {
return nil, store.NewErrNotFound("OAuthApp", id)
}
return obj.(*model.OAuthApp), nil
}
func (as SqlOAuthStore) GetAppByUser(userId string, offset, limit int) ([]*model.OAuthApp, error) {
var apps []*model.OAuthApp
if _, err := as.GetReplica().Select(&apps, "SELECT * FROM OAuthApps WHERE CreatorId = :UserId LIMIT :Limit OFFSET :Offset", map[string]interface{}{"UserId": userId, "Offset": offset, "Limit": limit}); err != nil {
return nil, errors.Wrapf(err, "failed to find OAuthApps with userId=%s", userId)
}
return apps, nil
}
func (as SqlOAuthStore) GetApps(offset, limit int) ([]*model.OAuthApp, error) {
var apps []*model.OAuthApp
if _, err := as.GetReplica().Select(&apps, "SELECT * FROM OAuthApps LIMIT :Limit OFFSET :Offset", map[string]interface{}{"Offset": offset, "Limit": limit}); err != nil {
return nil, errors.Wrap(err, "failed to find OAuthApps")
}
return apps, nil
}
func (as SqlOAuthStore) GetAuthorizedApps(userId string, offset, limit int) ([]*model.OAuthApp, error) {
var apps []*model.OAuthApp
if _, err := as.GetReplica().Select(&apps,
`SELECT o.* FROM OAuthApps AS o INNER JOIN
Preferences AS p ON p.Name=o.Id AND p.UserId=:UserId LIMIT :Limit OFFSET :Offset`, map[string]interface{}{"UserId": userId, "Offset": offset, "Limit": limit}); err != nil {
return nil, errors.Wrapf(err, "failed to find OAuthApps with userId=%s", userId)
}
return apps, nil
}
func (as SqlOAuthStore) DeleteApp(id string) error {
// wrap in a transaction so that if one fails, everything fails
transaction, err := as.GetMaster().Begin()
if err != nil {
return errors.Wrap(err, "begin_transaction")
}
defer finalizeTransaction(transaction)
if err := as.deleteApp(transaction, id); err != nil {
return err
}
if err := transaction.Commit(); err != nil {
// don't need to rollback here since the transaction is already closed
return errors.Wrap(err, "commit_transaction")
}
return nil
}
func (as SqlOAuthStore) SaveAccessData(accessData *model.AccessData) (*model.AccessData, error) {
if err := accessData.IsValid(); err != nil {
return nil, err
}
if err := as.GetMaster().Insert(accessData); err != nil {
return nil, errors.Wrap(err, "failed to save AccessData")
}
return accessData, nil
}
func (as SqlOAuthStore) GetAccessData(token string) (*model.AccessData, error) {
accessData := model.AccessData{}
if err := as.GetReplica().SelectOne(&accessData, "SELECT * FROM OAuthAccessData WHERE Token = :Token", map[string]interface{}{"Token": token}); err != nil {
return nil, errors.Wrapf(err, "failed to get OAuthAccessData with token=%s", token)
}
return &accessData, nil
}
func (as SqlOAuthStore) GetAccessDataByUserForApp(userId, clientId string) ([]*model.AccessData, error) {
var accessData []*model.AccessData
if _, err := as.GetReplica().Select(&accessData,
"SELECT * FROM OAuthAccessData WHERE UserId = :UserId AND ClientId = :ClientId",
map[string]interface{}{"UserId": userId, "ClientId": clientId}); err != nil {
		return nil, errors.Wrapf(err, "failed to find OAuthAccessData with userId=%s and clientId=%s", userId, clientId)
}
return accessData, nil
}
func (as SqlOAuthStore) GetAccessDataByRefreshToken(token string) (*model.AccessData, error) {
accessData := model.AccessData{}
if err := as.GetReplica().SelectOne(&accessData, "SELECT * FROM OAuthAccessData WHERE RefreshToken = :Token", map[string]interface{}{"Token": token}); err != nil {
return nil, errors.Wrapf(err, "failed to find OAuthAccessData with refreshToken=%s", token)
}
return &accessData, nil
}
func (as SqlOAuthStore) GetPreviousAccessData(userId, clientId string) (*model.AccessData, error) {
accessData := model.AccessData{}
if err := as.GetReplica().SelectOne(&accessData, "SELECT * FROM OAuthAccessData WHERE ClientId = :ClientId AND UserId = :UserId",
map[string]interface{}{"ClientId": clientId, "UserId": userId}); err != nil {
if err == sql.ErrNoRows {
return nil, nil
}
return nil, errors.Wrapf(err, "failed to get AccessData with clientId=%s and userId=%s", clientId, userId)
}
return &accessData, nil
}
func (as SqlOAuthStore) UpdateAccessData(accessData *model.AccessData) (*model.AccessData, error) {
if err := accessData.IsValid(); err != nil {
return nil, err
}
	if _, err := as.GetMaster().Exec("UPDATE OAuthAccessData SET Token = :Token, ExpiresAt = :ExpiresAt, RefreshToken = :RefreshToken WHERE ClientId = :ClientId AND UserId = :UserId",
map[string]interface{}{"Token": accessData.Token, "ExpiresAt": accessData.ExpiresAt, "RefreshToken": accessData.RefreshToken, "ClientId": accessData.ClientId, "UserId": accessData.UserId}); err != nil {
return nil, errors.Wrapf(err, "failed to update OAuthAccessData with userId=%s and clientId=%s", accessData.UserId, accessData.ClientId)
}
return accessData, nil
}
func (as SqlOAuthStore) RemoveAccessData(token string) error {
if _, err := as.GetMaster().Exec("DELETE FROM OAuthAccessData WHERE Token = :Token", map[string]interface{}{"Token": token}); err != nil {
return errors.Wrapf(err, "failed to delete OAuthAccessData with token=%s", token)
}
return nil
}
func (as SqlOAuthStore) RemoveAllAccessData() error {
if _, err := as.GetMaster().Exec("DELETE FROM OAuthAccessData", map[string]interface{}{}); err != nil {
return errors.Wrap(err, "failed to delete OAuthAccessData")
}
return nil
}
func (as SqlOAuthStore) SaveAuthData(authData *model.AuthData) (*model.AuthData, error) {
authData.PreSave()
if err := authData.IsValid(); err != nil {
return nil, err
}
if err := as.GetMaster().Insert(authData); err != nil {
return nil, errors.Wrap(err, "failed to save AuthData")
}
return authData, nil
}
func (as SqlOAuthStore) GetAuthData(code string) (*model.AuthData, error) {
obj, err := as.GetReplica().Get(model.AuthData{}, code)
if err != nil {
return nil, errors.Wrapf(err, "failed to get AuthData with code=%s", code)
}
if obj == nil {
return nil, store.NewErrNotFound("AuthData", fmt.Sprintf("code=%s", code))
}
return obj.(*model.AuthData), nil
}
func (as SqlOAuthStore) RemoveAuthData(code string) error {
_, err := as.GetMaster().Exec("DELETE FROM OAuthAuthData WHERE Code = :Code", map[string]interface{}{"Code": code})
if err != nil {
return errors.Wrapf(err, "failed to delete AuthData with code=%s", code)
}
return nil
}
func (as SqlOAuthStore) PermanentDeleteAuthDataByUser(userId string) error {
_, err := as.GetMaster().Exec("DELETE FROM OAuthAccessData WHERE UserId = :UserId", map[string]interface{}{"UserId": userId})
if err != nil {
return errors.Wrapf(err, "failed to delete OAuthAccessData with userId=%s", userId)
}
return nil
}
func (as SqlOAuthStore) deleteApp(transaction *gorp.Transaction, clientId string) error {
if _, err := transaction.Exec("DELETE FROM OAuthApps WHERE Id = :Id", map[string]interface{}{"Id": clientId}); err != nil {
return errors.Wrapf(err, "failed to delete OAuthApp with id=%s", clientId)
}
return as.deleteOAuthAppSessions(transaction, clientId)
}
func (as SqlOAuthStore) deleteOAuthAppSessions(transaction *gorp.Transaction, clientId string) error {
query := ""
if as.DriverName() == model.DATABASE_DRIVER_POSTGRES {
query = "DELETE FROM Sessions s USING OAuthAccessData o WHERE o.Token = s.Token AND o.ClientId = :Id"
} else if as.DriverName() == model.DATABASE_DRIVER_MYSQL {
query = "DELETE s.* FROM Sessions s INNER JOIN OAuthAccessData o ON o.Token = s.Token WHERE o.ClientId = :Id"
}
if _, err := transaction.Exec(query, map[string]interface{}{"Id": clientId}); err != nil {
		return errors.Wrapf(err, "failed to delete Sessions with OAuthAccessData.ClientId=%s", clientId)
}
return as.deleteOAuthTokens(transaction, clientId)
}
func (as SqlOAuthStore) deleteOAuthTokens(transaction *gorp.Transaction, clientId string) error {
if _, err := transaction.Exec("DELETE FROM OAuthAccessData WHERE ClientId = :Id", map[string]interface{}{"Id": clientId}); err != nil {
		return errors.Wrapf(err, "failed to delete OAuthAccessData with clientId=%s", clientId)
}
return as.deleteAppExtras(transaction, clientId)
}
func (as SqlOAuthStore) deleteAppExtras(transaction *gorp.Transaction, clientId string) error {
if _, err := transaction.Exec(
`DELETE FROM
Preferences
WHERE
Category = :Category
AND Name = :Name`, map[string]interface{}{"Category": model.PREFERENCE_CATEGORY_AUTHORIZED_OAUTH_APP, "Name": clientId}); err != nil {
return errors.Wrapf(err, "failed to delete Preferences with name=%s", clientId)
}
return nil
}
| {
"pile_set_name": "Github"
} |
/*
Copyright (c) 2009-2016 mingw-w64 project
Contributing authors: Kai Tietz, Jonathan Yong
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
*/
#ifndef _LIBMANGLE_HXX
#define _LIBMANGLE_HXX
#ifdef __cplusplus
extern "C" {
#endif
/**
* Garbage collector elements.
* Tracks allocated memory and points to the next element from the same context.
* Opaque structure.
* @see libmangle_gc_context_t
*/
typedef void *libmangle_gc_t;
/**
* Garbage collector context.
* Tracks first and last of elements in gc context.
* @see generate_gc()
* @see release_gc()
*/
typedef struct libmangle_gc_context_t {
libmangle_gc_t head; /**< Pointer to first gc element in context.*/
libmangle_gc_t tail; /**< Pointer to last gc element in context. */
} libmangle_gc_context_t;
/**
* Generic token instances.
* Type of token determined by base descriptor in members.
* Opaque structure.
* @see gen_tok()
*/
typedef void *libmangle_tokens_t;
/**
* Releases memory tracked by context.
* @param[in] gc Garbage collection context to work on.
* @see libmangle_generate_gc()
*/
void libmangle_release_gc (libmangle_gc_context_t *gc);
/**
* Constructs a garbage collection context token.
* @return Pointer to context.
* @see libmangle_release_gc()
*/
libmangle_gc_context_t *libmangle_generate_gc (void);
/**
* Dumps pMToken to a file descriptor for debugging.
* @param[in] fp File descriptor to print the token to.
* @param[in] p libmangle_tokens_t chain to print.
*/
void libmangle_dump_tok (FILE *fp, libmangle_tokens_t p);
/**
* Prints C++ name to file descriptor.
* @param[in] fp Output file descriptor.
* @param[in] p Token containing information about the C++ name.
* @see libmangle_decode_ms_name()
*/
void libmangle_print_decl (FILE *fp, libmangle_tokens_t p);
/**
* Get pointer to decoded C++ name string.
* Use free() to release returned string.
* @param[in] r C++ name token.
* @return pointer to decoded C++ name string.
* @see libmangle_decode_ms_name()
*/
char *libmangle_sprint_decl (libmangle_tokens_t r);
/**
* Decodes an MSVC export name.
* @param[in] gc libmangle_gc_context_t pointer for collecting memory allocations.
* @param[in] name MSVC C++ mangled export string.
* @see libmangle_sprint_decl()
* @see libmangle_release_gc()
* @see libmangle_tokens_t
* @return Token containing information about the mangled string,
* use libmangle_release_gc() to free after use.
*/
libmangle_tokens_t libmangle_decode_ms_name (libmangle_gc_context_t *gc, const char *name);
char *libmangle_encode_ms_name (libmangle_gc_context_t *gc, libmangle_tokens_t tok);
#ifdef __cplusplus
}
#endif
#endif
// AUTOGENERATED FILE - DO NOT MODIFY!
// This file generated by Djinni from client_interface.djinni
package com.dropbox.djinni.test;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.annotation.CheckForNull;
import javax.annotation.Nonnull;
public interface ReverseClientInterface {
@Nonnull
public String returnStr();
@Nonnull
public String methTakingInterface(@CheckForNull ReverseClientInterface i);
@Nonnull
public String methTakingOptionalInterface(@CheckForNull ReverseClientInterface i);
@CheckForNull
public static ReverseClientInterface create()
{
return CppProxy.create();
}
static final class CppProxy implements ReverseClientInterface
{
private final long nativeRef;
private final AtomicBoolean destroyed = new AtomicBoolean(false);
private CppProxy(long nativeRef)
{
if (nativeRef == 0) throw new RuntimeException("nativeRef is zero");
this.nativeRef = nativeRef;
}
private native void nativeDestroy(long nativeRef);
public void _djinni_private_destroy()
{
boolean destroyed = this.destroyed.getAndSet(true);
if (!destroyed) nativeDestroy(this.nativeRef);
}
protected void finalize() throws java.lang.Throwable
{
_djinni_private_destroy();
super.finalize();
}
@Override
public String returnStr()
{
assert !this.destroyed.get() : "trying to use a destroyed object";
return native_returnStr(this.nativeRef);
}
private native String native_returnStr(long _nativeRef);
@Override
public String methTakingInterface(ReverseClientInterface i)
{
assert !this.destroyed.get() : "trying to use a destroyed object";
return native_methTakingInterface(this.nativeRef, i);
}
private native String native_methTakingInterface(long _nativeRef, ReverseClientInterface i);
@Override
public String methTakingOptionalInterface(ReverseClientInterface i)
{
assert !this.destroyed.get() : "trying to use a destroyed object";
return native_methTakingOptionalInterface(this.nativeRef, i);
}
private native String native_methTakingOptionalInterface(long _nativeRef, ReverseClientInterface i);
@CheckForNull
public static native ReverseClientInterface create();
}
}
/*-
* <<
* UAVStack
* ==
* Copyright (C) 2016 - 2017 UAVStack
* ==
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* >>
*/
package com.creditease.uav.monitorframework.adaptors;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;
import javassist.CannotCompileException;
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.LoaderClassPath;
import javassist.NotFoundException;
public class TtlAdaptor {
private static final String RUNNABLE_CLASS_NAME = "java.lang.Runnable";
private static final String CALLABLE_CLASS_NAME = "java.util.concurrent.Callable";
private static final String TTL_RUNNABLE_CLASS_NAME = "com.alibaba.ttl.TtlRunnable";
private static final String TTL_CALLABLE_CLASS_NAME = "com.alibaba.ttl.TtlCallable";
private static final String THREAD_POOL_CLASS_FILE = "java.util.concurrent.ThreadPoolExecutor".replace('.', '/');
private static final String SCHEDULER_CLASS_FILE = "java.util.concurrent.ScheduledThreadPoolExecutor".replace('.',
'/');
private static final String TIMER_TASK_CLASS_FILE = "java.util.TimerTask".replace('.', '/');
private static final byte[] EMPTY_BYTE_ARRAY = {};
public byte[] adaptorTransform(byte[] classFileBuffer, ClassLoader classLoader, String classFile) {
try {
// Lambda has no class file, no need to transform, just return.
if (classFile == null) {
return EMPTY_BYTE_ARRAY;
}
if (THREAD_POOL_CLASS_FILE.equals(classFile) || SCHEDULER_CLASS_FILE.equals(classFile)) {
CtClass clazz = getCtClass(classFileBuffer, classLoader);
for (CtMethod method : clazz.getDeclaredMethods()) {
updateMethod(clazz, method);
}
return clazz.toBytecode();
}
else if (TIMER_TASK_CLASS_FILE.equals(classFile)) {
                CtClass clazz = getCtClass(classFileBuffer, classLoader);
                // Walk up the superclass chain. Note CtClass#getName() returns a
                // dotted name, so compare with "java.util.TimerTask" rather than the
                // '/'-separated class-file name, and advance up the hierarchy so the
                // loop terminates at java.lang.Object.
                CtClass superClazz = clazz.getSuperclass();
                while (superClazz != null) {
                    String name = superClazz.getName();
                    if (Object.class.getName().equals(name)) {
                        break;
                    }
                    if ("java.util.TimerTask".equals(name)) {
                        // FIXME add code here
                        return EMPTY_BYTE_ARRAY;
                    }
                    superClazz = superClazz.getSuperclass();
                }
}
}
catch (Throwable t) {
StringWriter stringWriter = new StringWriter();
PrintWriter printWriter = new PrintWriter(stringWriter);
t.printStackTrace(printWriter);
String msg = "Fail to transform class " + classFile + ", cause: " + stringWriter.toString();
throw new IllegalStateException(msg, t);
}
return EMPTY_BYTE_ARRAY;
}
private CtClass getCtClass(byte[] classFileBuffer, ClassLoader classLoader) throws IOException {
ClassPool classPool = new ClassPool(true);
if (null != classLoader) {
classPool.appendClassPath(new LoaderClassPath(classLoader));
}
CtClass clazz = classPool.makeClass(new ByteArrayInputStream(classFileBuffer), false);
clazz.defrost();
return clazz;
}
static final Set<String> updateMethodNames = new HashSet<String>();
static {
updateMethodNames.add("execute");
updateMethodNames.add("submit");
updateMethodNames.add("schedule");
updateMethodNames.add("scheduleAtFixedRate");
updateMethodNames.add("scheduleWithFixedDelay");
}
static void updateMethod(CtClass clazz, CtMethod method) throws NotFoundException, CannotCompileException {
if (!updateMethodNames.contains(method.getName())) {
return;
}
if (method.getDeclaringClass() != clazz) {
return;
}
final int modifiers = method.getModifiers();
if (!Modifier.isPublic(modifiers) || Modifier.isStatic(modifiers)) {
return;
}
CtClass[] parameterTypes = method.getParameterTypes();
StringBuilder insertCode = new StringBuilder();
for (int i = 0; i < parameterTypes.length; i++) {
CtClass paraType = parameterTypes[i];
if (RUNNABLE_CLASS_NAME.equals(paraType.getName())) {
String code = String.format("$%d = %s.get($%d, false, true);", i + 1, TTL_RUNNABLE_CLASS_NAME, i + 1);
insertCode.append(code);
}
else if (CALLABLE_CLASS_NAME.equals(paraType.getName())) {
String code = String.format("$%d = %s.get($%d, false, true);", i + 1, TTL_CALLABLE_CLASS_NAME, i + 1);
insertCode.append(code);
}
}
if (insertCode.length() > 0) {
method.insertBefore(insertCode.toString());
}
}
}
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Barnaby gets to sit</title>
<script>
function Dog(name, breed, weight) {
this.name = name;
this.breed = breed;
this.weight = weight;
}
Dog.prototype.species = "Canine";
Dog.prototype.bark = function() {
if (this.weight > 25) {
console.log(this.name + " says Woof!");
} else {
console.log(this.name + " says Yip!");
}
};
Dog.prototype.run = function() {
console.log("Run!");
};
Dog.prototype.wag = function() {
console.log("Wag!");
};
var fido = new Dog("Fido", "Mixed", 38);
var fluffy = new Dog("Fluffy", "Poodle", 30);
var spot = new Dog("Spot", "Chihuahua", 10);
spot.bark = function() {
console.log(this.name + " says WOOF!");
};
fido.bark();
fido.run();
fido.wag();
fluffy.bark();
fluffy.run();
fluffy.wag();
spot.bark();
spot.run();
spot.wag();
var barnaby = new Dog("Barnaby", "Basset Hound", 55);
Dog.prototype.sit = function() {
console.log(this.name + " is now sitting");
};
barnaby.sit();
</script>
</head>
<body>
</body>
</html>
package slimeknights.tconstruct.tables;
import net.minecraft.client.gui.ScreenManager;
import net.minecraft.resources.IReloadableResourceManager;
import net.minecraftforge.api.distmarker.Dist;
import net.minecraftforge.client.event.ModelRegistryEvent;
import net.minecraftforge.client.model.ModelLoaderRegistry;
import net.minecraftforge.eventbus.api.SubscribeEvent;
import net.minecraftforge.fml.client.registry.ClientRegistry;
import net.minecraftforge.fml.common.Mod.EventBusSubscriber;
import net.minecraftforge.fml.common.Mod.EventBusSubscriber.Bus;
import net.minecraftforge.fml.event.lifecycle.FMLClientSetupEvent;
import slimeknights.mantle.client.render.InventoryTileEntityRenderer;
import slimeknights.tconstruct.TConstruct;
import slimeknights.tconstruct.common.ClientEventBase;
import slimeknights.tconstruct.library.Util;
import slimeknights.tconstruct.library.client.model.block.TableModel;
import slimeknights.tconstruct.tables.client.PatternGuiTextureLoader;
import slimeknights.tconstruct.tables.client.SlotInformationLoader;
import slimeknights.tconstruct.tables.client.TableTileEntityRenderer;
import slimeknights.tconstruct.tables.client.inventory.chest.PartChestScreen;
import slimeknights.tconstruct.tables.client.inventory.chest.PatternChestScreen;
import slimeknights.tconstruct.tables.client.inventory.table.CraftingStationScreen;
import slimeknights.tconstruct.tables.client.inventory.table.PartBuilderScreen;
import slimeknights.tconstruct.tables.client.inventory.table.TinkerStationScreen;
@SuppressWarnings("unused")
@EventBusSubscriber(modid=TConstruct.modID, value=Dist.CLIENT, bus=Bus.MOD)
public class TableClientEvents extends ClientEventBase {
/**
* Called by TinkerClient to add the resource listeners, runs during constructor
*/
public static void addResourceListener(IReloadableResourceManager manager) {
manager.addReloadListener(PatternGuiTextureLoader.INSTANCE);
manager.addReloadListener(SlotInformationLoader.INSTANCE);
}
@SubscribeEvent
static void registerModelLoader(ModelRegistryEvent event) {
ModelLoaderRegistry.registerLoader(Util.getResource("table"), TableModel.LOADER);
}
@SubscribeEvent
static void setupClient(final FMLClientSetupEvent event) {
ScreenManager.registerFactory(TinkerTables.craftingStationContainer.get(), CraftingStationScreen::new);
ScreenManager.registerFactory(TinkerTables.tinkerStationContainer.get(), TinkerStationScreen::new);
ScreenManager.registerFactory(TinkerTables.partBuilderContainer.get(), PartBuilderScreen::new);
ScreenManager.registerFactory(TinkerTables.patternChestContainer.get(), PatternChestScreen::new);
ScreenManager.registerFactory(TinkerTables.partChestContainer.get(), PartChestScreen::new);
ClientRegistry.bindTileEntityRenderer(TinkerTables.craftingStationTile.get(), InventoryTileEntityRenderer::new);
ClientRegistry.bindTileEntityRenderer(TinkerTables.tinkerStationTile.get(), TableTileEntityRenderer::new);
ClientRegistry.bindTileEntityRenderer(TinkerTables.partBuilderTile.get(), TableTileEntityRenderer::new);
}
}
/*
* Copyright (c) 1995, 2013, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
package java.lang;
/**
* Thrown to indicate that an attempt has been made to store the
* wrong type of object into an array of objects. For example, the
* following code generates an <code>ArrayStoreException</code>:
* <blockquote><pre>
* Object x[] = new String[3];
* x[0] = new Integer(0);
* </pre></blockquote>
*
* @author unascribed
* @since 1.0
*/
public
class ArrayStoreException extends RuntimeException {
private static final long serialVersionUID = -4522193890499838241L;
/**
* Constructs an <code>ArrayStoreException</code> with no detail message.
*/
public ArrayStoreException() {
super();
}
/**
* Constructs an <code>ArrayStoreException</code> with the specified
* detail message.
*
* @param s the detail message.
*/
public ArrayStoreException(String s) {
super(s);
}
}
package events
import (
"fmt"
nraySchema "github.com/nray-scanner/nray/schemas"
"github.com/golang/protobuf/jsonpb"
"github.com/spf13/viper"
"github.com/thedevsaddam/gojsonq"
)
// RegisteredHandlers contains all handlers that may be configured by a user
var RegisteredHandlers = []string{"json-file", "terminal", "elasticsearch"}
var protomarshaller = jsonpb.Marshaler{
EnumsAsInts: false,
EmitDefaults: true,
Indent: "",
OrigName: true,
AnyResolver: nil,
}
// GetEventHandler returns the correct event handler for a event handler name
func GetEventHandler(EventHandlerName string) EventHandler {
switch EventHandlerName {
case "json-file":
return &JSONFileEventHandler{}
case "terminal":
return &TerminalEventHandler{}
case "elasticsearch":
return &ElasticsearchEventHandler{}
default:
return nil
}
}
// EventHandler is the interface each type of handling events has to implement
type EventHandler interface {
Configure(*viper.Viper) error
ProcessEvents([]*nraySchema.Event)
ProcessEventStream(<-chan *nraySchema.Event)
Close() error
}
// FilterMatchesEvent returns true if the event has the filter string and its value matches the provided value
func FilterMatchesEvent(event string, filter string, value interface{}) bool {
if value == nil {
return gojsonq.New().JSONString(event).Find(filter) != nil
}
return fmt.Sprintf("%#v", gojsonq.New().JSONString(event).Find(filter)) == fmt.Sprintf("%#v", value)
}
# Be sure to restart your server when you modify this file.
Rails.application.config.session_store :cookie_store, key: '_wei_session'
from __future__ import division
import sys
from data_iterator import TextIterator
import cPickle as pkl
from os.path import expanduser
import torch
import numpy
from model_batch import *
import time
from datetime import timedelta
from torchtext.vocab import load_word_vectors
# batch preparation
def prepare_data(seqs_x, seqs_y, labels, maxlen=None):
lengths_x = [len(s) for s in seqs_x]
lengths_y = [len(s) for s in seqs_y]
if maxlen is not None:
new_seqs_x = []
new_seqs_y = []
new_lengths_x = []
new_lengths_y = []
new_labels = []
for l_x, s_x, l_y, s_y, l in zip(lengths_x, seqs_x, lengths_y, seqs_y, labels):
if l_x < maxlen and l_y < maxlen:
new_seqs_x.append(s_x)
new_lengths_x.append(l_x)
new_seqs_y.append(s_y)
new_lengths_y.append(l_y)
new_labels.append(l)
lengths_x = new_lengths_x
seqs_x = new_seqs_x
lengths_y = new_lengths_y
seqs_y = new_seqs_y
labels = new_labels
if len(lengths_x) < 1 or len(lengths_y) < 1:
return None, None, None, None, None
n_samples = len(seqs_x)
maxlen_x = numpy.max(lengths_x)
maxlen_y = numpy.max(lengths_y)
x = numpy.zeros((maxlen_x, n_samples)).astype('int64')
y = numpy.zeros((maxlen_y, n_samples)).astype('int64')
x_mask = numpy.zeros((maxlen_x, n_samples)).astype('float32')
y_mask = numpy.zeros((maxlen_y, n_samples)).astype('float32')
l = numpy.zeros((n_samples,)).astype('int64')
for idx, [s_x, s_y, ll] in enumerate(zip(seqs_x, seqs_y, labels)):
x[:lengths_x[idx], idx] = s_x
x_mask[:lengths_x[idx], idx] = 1.
y[:lengths_y[idx], idx] = s_y
y_mask[:lengths_y[idx], idx] = 1.
l[idx] = ll
if torch.cuda.is_available():
x=Variable(torch.LongTensor(x)).cuda()
x_mask=Variable(torch.Tensor(x_mask)).cuda()
y=Variable(torch.LongTensor(y)).cuda()
y_mask=Variable(torch.Tensor(y_mask)).cuda()
l=Variable(torch.LongTensor(l)).cuda()
else:
x = Variable(torch.LongTensor(x))
x_mask = Variable(torch.FloatTensor(x_mask))
y = Variable(torch.LongTensor(y))
y_mask = Variable(torch.FloatTensor(y_mask))
l = Variable(torch.LongTensor(l))
return x, x_mask, y, y_mask, l
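# For reference, a minimal numpy-only sketch (hypothetical toy data, no torch
# required) of the time-major padding and 0/1 masking that prepare_data()
# builds for each batch:

```python
import numpy

# Two toy sequences of different lengths, packed into a (maxlen, n_samples)
# matrix with a parallel float mask, the same layout used above.
seqs = [[5, 2, 9], [7, 4]]
lengths = [len(s) for s in seqs]
maxlen, n_samples = max(lengths), len(seqs)
x = numpy.zeros((maxlen, n_samples), dtype='int64')
x_mask = numpy.zeros((maxlen, n_samples), dtype='float32')
for idx, s in enumerate(seqs):
    x[:lengths[idx], idx] = s
    x_mask[:lengths[idx], idx] = 1.
assert x[:, 1].tolist() == [7, 4, 0]             # shorter sequence is zero-padded
assert x_mask[:, 1].tolist() == [1.0, 1.0, 0.0]  # mask marks the real tokens
```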
# some utilities
def ortho_weight(ndim):
"""
Random orthogonal weights
Used by norm_weights(below), in which case, we
are ensuring that the rows are orthogonal
(i.e W = U \Sigma V, U has the same
# of rows, V has the same # of cols)
"""
W = numpy.random.randn(ndim, ndim)
u, s, v = numpy.linalg.svd(W)
return u.astype('float32')
def norm_weight(nin, nout=None, scale=0.01, ortho=True):
"""
Random weights drawn from a Gaussian
"""
if nout is None:
nout = nin
if nout == nin and ortho:
W = ortho_weight(nin)
else:
W = scale * numpy.random.randn(nin, nout)
return W.astype('float32')
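# As a quick standalone sanity check (numpy only), the SVD trick in
# ortho_weight() indeed yields a matrix with orthonormal rows, i.e.
# W.dot(W.T) is the identity up to float32 precision:

```python
import numpy

def ortho_weight(ndim):
    # same construction as above: U from the SVD of a random Gaussian matrix
    W = numpy.random.randn(ndim, ndim)
    u, s, v = numpy.linalg.svd(W)
    return u.astype('float32')

W = ortho_weight(8)
identity = numpy.eye(8, dtype='float32')
assert numpy.allclose(W.dot(W.T), identity, atol=1e-4)
```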
if torch.cuda.is_available():
print('CUDA is available!')
base_path = expanduser("~") + '/pytorch/ESIM'
embedding_path = expanduser("~") + '/pytorch/DeepPairWiseWord/VDPWI-NN-Torch/data/glove'
else:
base_path = expanduser("~") + '/Documents/research/pytorch/ESIM'
embedding_path = expanduser("~") + '/Documents/research/pytorch/DeepPairWiseWord/VDPWI-NN-Torch/data/glove'
task='snli'
print('task: '+task)
if task=='snli':
dictionary=base_path+'/data/word_sequence/snli_vocab_cased.pkl'
datasets = [base_path+'/data/word_sequence/premise_snli_1.0_train.txt',
base_path+'/data/word_sequence/hypothesis_snli_1.0_train.txt',
base_path+'/data/word_sequence/label_snli_1.0_train.txt']
valid_datasets = [base_path+'/data/word_sequence/premise_snli_1.0_dev.txt',
base_path+'/data/word_sequence/hypothesis_snli_1.0_dev.txt',
base_path+'/data/word_sequence/label_snli_1.0_dev.txt']
test_datasets = [base_path+'/data/word_sequence/premise_snli_1.0_test.txt',
base_path+'/data/word_sequence/hypothesis_snli_1.0_test.txt',
base_path+'/data/word_sequence/label_snli_1.0_test.txt']
elif task=='mnli':
dic = {'entailment': '0', 'neutral': '1', 'contradiction': '2'}
dictionary = base_path + '/data/word_sequence/mnli_vocab_cased.pkl'
datasets = [base_path + '/data/word_sequence/premise_multinli_1.0_train.txt',
base_path + '/data/word_sequence/hypothesis_multinli_1.0_train.txt',
base_path + '/data/word_sequence/label_multinli_1.0_train.txt']
valid_datasets_m = [base_path + '/data/word_sequence/premise_multinli_1.0_dev_matched.txt',
base_path + '/data/word_sequence/hypothesis_multinli_1.0_dev_matched.txt',
base_path + '/data/word_sequence/label_multinli_1.0_dev_matched.txt']
valid_datasets_um = [base_path + '/data/word_sequence/premise_multinli_1.0_dev_mismatched.txt',
base_path + '/data/word_sequence/hypothesis_multinli_1.0_dev_mismatched.txt',
base_path + '/data/word_sequence/label_multinli_1.0_dev_mismatched.txt']
test_datasets_m = [base_path + '/data/word_sequence/premise_multinli_1.0_test_matched.txt',
base_path + '/data/word_sequence/hypothesis_multinli_1.0_test_matched.txt',
base_path + '/data/word_sequence/label_multinli_1.0_test_matched.txt']
test_datasets_um = [base_path + '/data/word_sequence/premise_multinli_1.0_test_mismatched.txt',
base_path + '/data/word_sequence/hypothesis_multinli_1.0_test_mismatched.txt',
base_path + '/data/word_sequence/label_multinli_1.0_test_mismatched.txt']
#n_words=42394
dim_word=300
batch_size=32
num_epochs=500
valid_batch_size=1
print('Loading data')
with open(dictionary, 'rb') as f:
worddicts = pkl.load(f)
n_words=len(worddicts)
wv_dict, wv_arr, wv_size = load_word_vectors(embedding_path, 'glove.840B', dim_word)
pretrained_emb=norm_weight(n_words, dim_word)
for word in worddicts.keys():
try:
pretrained_emb[worddicts[word]]=wv_arr[wv_dict[word]].numpy()
except:
pretrained_emb[worddicts[word]] = torch.normal(torch.zeros(dim_word),std=1).numpy()
train = TextIterator(datasets[0], datasets[1], datasets[2],
dictionary,
n_words=n_words,
batch_size=batch_size)
'''
train_valid = TextIterator(datasets[0], datasets[1], datasets[2],
dictionary,
n_words=n_words,
batch_size=valid_batch_size,
shuffle=False)
'''
'''
valid_m = TextIterator(valid_datasets_m[0], valid_datasets_m[1], valid_datasets_m[2],
dictionary,
n_words=n_words,
batch_size=valid_batch_size,
shuffle=False)
valid_um = TextIterator(valid_datasets_um[0], valid_datasets_um[1], valid_datasets_um[2],
dictionary,
n_words=n_words,
batch_size=valid_batch_size,
shuffle=False)
test_m = TextIterator(test_datasets_m[0], test_datasets_m[1], test_datasets_m[2],
dictionary,
n_words=n_words,
batch_size=valid_batch_size,
shuffle=False)
test_um = TextIterator(test_datasets_um[0], test_datasets_um[1], test_datasets_um[2],
dictionary,
n_words=n_words,
batch_size=valid_batch_size,
shuffle=False)
'''
valid = TextIterator(valid_datasets[0],valid_datasets[1],valid_datasets[2],
dictionary,
n_words=n_words,
batch_size=valid_batch_size,
shuffle=False)
test = TextIterator(test_datasets[0], test_datasets[1], test_datasets[2],
dictionary,
n_words=n_words,
batch_size=valid_batch_size,
shuffle=False)
criterion = torch.nn.CrossEntropyLoss()
model = ESIM(dim_word, 3, n_words, dim_word, pretrained_emb)
if torch.cuda.is_available():
model = model.cuda()
criterion = criterion.cuda()
optimizer = torch.optim.Adam(model.parameters(),lr=0.0004)
print('start training...')
accumulated_loss=0
batch_counter=0
report_interval = 1000
best_dev_loss=10e10
best_dev_loss2=10e10
clip_c=10
max_len=500
max_result=0
model.train()
for epoch in range(num_epochs):
accumulated_loss = 0
model.train()
print('--' * 20)
start_time = time.time()
train_sents_scaned = 0
train_num_correct = 0
batch_counter=0
train = TextIterator(datasets[0], datasets[1], datasets[2],
dictionary,
n_words=n_words,
batch_size=batch_size)
for x1, x2, y in train:
x1, x1_mask, x2, x2_mask, y = prepare_data(x1, x2, y, maxlen=max_len)
train_sents_scaned += len(y)
optimizer.zero_grad()
output = model(x1, x1_mask, x2, x2_mask)
result = output.data.cpu().numpy()
        a = numpy.argmax(result, axis=1)
        b = y.data.cpu().numpy()
        train_num_correct += numpy.sum(a == b)
loss = criterion(output, y)
loss.backward()
        # clip the global gradient norm at clip_c
        grad_norm = 0.
        for m in list(model.parameters()):
            grad_norm += m.grad.data.norm() ** 2
        for m in list(model.parameters()):
            if grad_norm > clip_c ** 2:
                try:
                    m.grad.data = m.grad.data / torch.sqrt(grad_norm) * clip_c
                except:
                    pass
optimizer.step()
accumulated_loss += loss.data[0]
batch_counter += 1
#print(batch_counter)
if batch_counter % report_interval == 0:
msg = '%d completed epochs, %d batches' % (epoch, batch_counter)
msg += '\t training batch loss: %f' % (accumulated_loss / train_sents_scaned)
msg += '\t train accuracy: %f' % (train_num_correct / train_sents_scaned)
print(msg)
#if batch_counter>(int(17168/64)):
# break
msg = '%d completed epochs, %d batches' % (epoch, batch_counter)
msg += '\t training batch loss: %f' % (accumulated_loss / train_sents_scaned)
msg += '\t train accuracy: %f' % (train_num_correct / train_sents_scaned)
print(msg)
# valid after each epoch
model.eval()
msg = '%d completed epochs, %d batches' % (epoch, batch_counter)
accumulated_loss = 0
dev_num_correct = 0
n_done=0
for dev_x1, dev_x2, dev_y in valid:
n_done += len(dev_x1)
x1, x1_mask, x2, x2_mask, y = prepare_data(dev_x1, dev_x2, dev_y, maxlen=max_len)
with torch.no_grad():
output = model(x1, x1_mask, x2, x2_mask)
result = output.data.cpu().numpy()
loss = criterion(output, y)
        accumulated_loss += loss.item()
a = numpy.argmax(result, axis=1)
b = y.data.cpu().numpy()
dev_num_correct += numpy.sum(a == b)
msg += '\t dev loss: %f' % (accumulated_loss/n_done)
dev_acc = dev_num_correct / n_done
msg += '\t dev accuracy: %f' % dev_acc
print(msg)
if dev_acc>max_result:
max_result=dev_acc
msg = '%d completed epochs, %d batches' % (epoch, batch_counter)
accumulated_loss = 0
dev_num_correct = 0
n_done = 0
pred=[]
error_analysis = []
for test_x1, test_x2, test_y in test:
n_done+=len(test_y)
x1, x1_mask, x2, x2_mask, y = prepare_data(test_x1, test_x2, test_y, maxlen=max_len)
with torch.no_grad():
output = model(x1, x1_mask, x2, x2_mask)
result = output.data.cpu().numpy()
loss = criterion(output, y)
            accumulated_loss += loss.item()
a = numpy.argmax(result, axis=1)
b = y.data.cpu().numpy()
dev_num_correct += numpy.sum(a == b)
pred.extend(result)
            if a[0] != b[0]:  # error analysis assumes test batches of size 1
my_dict = {}
s1 = ''
for word in x1:
                    s1 += list(worddicts.keys())[word.data[0]] + ' '
# print(s1)
my_dict['s1'] = s1
s2 = ''
for word in x2:
                    s2 += list(worddicts.keys())[word.data[0]] + ' '
# print(s2)
my_dict['s2'] = s2
# print(model.alpha[:,:,0].data.cpu().numpy())
# print(model.beta[:,:,0].data.cpu().numpy())
my_dict['alpha'] = model.alpha[:, :, 0].data.cpu().numpy()
my_dict['beta'] = model.beta[:, :, 0].data.cpu().numpy()
my_dict['pred_label']=a[0]
my_dict['true_label']=b[0]
error_analysis.append(my_dict)
pkl.dump(error_analysis, open(base_path + '/ESIM_snli_error_analysis.pkl', 'wb'))
msg += '\t test loss: %f' % (accumulated_loss / n_done)
test_acc = dev_num_correct / n_done
msg += '\t test accuracy: %f' % test_acc
print(msg)
torch.save(model, base_path + '/model_ESIM_snli_visualize' + '.pkl')
# max_result=test_acc
# with open(base_path+'/result_ESIM_prob.txt','w') as f:
# for item in pred:
# f.writelines(str(item[0])+'\t'+str(item[1])+'\t'+str(item[2])+'\n')
elapsed_time = time.time() - start_time
print('Epoch ' + str(epoch) + ' finished within ' + str(timedelta(seconds=elapsed_time))) | {
"pile_set_name": "Github"
} |
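The ESIM training script above clips gradients by their global L2 norm before `optimizer.step()`: it sums the squared norms of all parameter gradients and rescales them when that sum exceeds `clip_c ** 2`. A minimal pure-Python sketch of the same computation, with a hypothetical `grads` list of plain float vectors standing in for the model's parameter gradients:

```python
import math

def clip_by_global_norm(grads, clip_c):
    """Rescale all gradient vectors so their combined L2 norm is at most clip_c."""
    # sum of squared entries across every gradient vector (grad_norm in the script)
    global_sq = sum(x * x for g in grads for x in g)
    if global_sq > clip_c ** 2:  # same threshold test as the training loop
        scale = clip_c / math.sqrt(global_sq)
        grads = [[x * scale for x in g] for g in grads]
    return grads

# two gradient vectors with global norm sqrt(9 + 16 + 144) = 13
grads = [[3.0, 4.0], [12.0]]
clipped = clip_by_global_norm(grads, clip_c=10.0)
norm = math.sqrt(sum(x * x for g in clipped for x in g))
print(round(norm, 6))  # 10.0
```

In modern PyTorch the same effect is available as a one-liner, `torch.nn.utils.clip_grad_norm_(model.parameters(), clip_c)`.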
<?php
/**
* Smarty plugin
* @package Smarty
* @subpackage plugins
*/
/**
* Smarty count_sentences modifier plugin
*
* Type: modifier<br>
* Name: count_sentences
* Purpose: count the number of sentences in a text
 * @link http://smarty.php.net/manual/en/language.modifier.count.sentences.php
* count_sentences (Smarty online manual)
* @author Monte Ohrt <monte at ohrt dot com>
* @param string
* @return integer
*/
function smarty_modifier_count_sentences($string)
{
// find periods with a word before but not after.
return preg_match_all('/[^\s]\.(?!\w)/', $string, $match);
}
/* vim: set expandtab: */
?>
| {
"pile_set_name": "Github"
} |
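The `count_sentences` modifier above counts occurrences of a period that has a non-whitespace character before it and no word character after it. A Python port of the same pattern, for a quick illustration of its behavior: `[^\s]` skips stray isolated periods, and the `(?!\w)` lookahead keeps decimals like `3.5` from being counted (dotted abbreviations such as `Dr.` still count, since a space follows the period):

```python
import re

# Python port of the Smarty modifier's regex: a non-whitespace character,
# then a period that is not immediately followed by a word character.
SENTENCE_END = re.compile(r'[^\s]\.(?!\w)')

def count_sentences(text: str) -> int:
    return len(SENTENCE_END.findall(text))

print(count_sentences("One. Two point 3.5 is not a break. End."))  # 3
```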
point image
10,20 http://boston.openguides.org/markers/ORANGE.png
15,25 http://boston.openguides.org/markers/ORANGE.png
| {
"pile_set_name": "Github"
} |
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/
package org.apache.logging.log4j.mongodb4;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.Core;
import org.apache.logging.log4j.core.appender.nosql.NoSqlProvider;
import org.apache.logging.log4j.core.config.plugins.PluginBuilderAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginBuilderFactory;
import org.apache.logging.log4j.core.filter.AbstractFilterable;
import org.apache.logging.log4j.plugins.Plugin;
import org.apache.logging.log4j.plugins.validation.constraints.Required;
import org.apache.logging.log4j.status.StatusLogger;
import org.bson.codecs.configuration.CodecRegistries;
import org.bson.codecs.configuration.CodecRegistry;
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
/**
* The MongoDB implementation of {@link NoSqlProvider} using the MongoDB driver
* version 4 API.
*/
@Plugin(name = "MongoDb4", category = Core.CATEGORY_NAME, printObject = true)
public final class MongoDb4Provider implements NoSqlProvider<MongoDb4Connection> {
public static class Builder<B extends Builder<B>> extends AbstractFilterable.Builder<B>
implements org.apache.logging.log4j.core.util.Builder<MongoDb4Provider> {
@PluginBuilderAttribute(value = "connection")
@Required(message = "No connection string provided")
private String connection;
@PluginBuilderAttribute
private int collectionSize = DEFAULT_COLLECTION_SIZE;
@PluginBuilderAttribute("capped")
private boolean capped = false;
@Override
public MongoDb4Provider build() {
return new MongoDb4Provider(connection, capped, collectionSize);
}
public B setCapped(final boolean isCapped) {
this.capped = isCapped;
return asBuilder();
}
public B setCollectionSize(final int collectionSize) {
this.collectionSize = collectionSize;
return asBuilder();
}
}
private static final Logger LOGGER = StatusLogger.getLogger();
// @formatter:off
private static final CodecRegistry CODEC_REGISTRIES = CodecRegistries.fromRegistries(
MongoClientSettings.getDefaultCodecRegistry(),
CodecRegistries.fromCodecs(MongoDb4LevelCodec.INSTANCE));
// @formatter:on
// TODO Where does this number come from?
private static final int DEFAULT_COLLECTION_SIZE = 536_870_912;
@PluginBuilderFactory
public static <B extends Builder<B>> B newBuilder() {
return new Builder<B>().asBuilder();
}
private final Integer collectionSize;
private final boolean isCapped;
private final MongoClient mongoClient;
private final MongoDatabase mongoDatabase;
private final ConnectionString connectionString;
private MongoDb4Provider(final String connectionStringSource, final boolean isCapped,
final Integer collectionSize) {
LOGGER.debug("Creating ConnectionString {}...", connectionStringSource);
this.connectionString = new ConnectionString(connectionStringSource);
LOGGER.debug("Created ConnectionString {}", connectionString);
LOGGER.debug("Creating MongoClientSettings...");
// @formatter:off
final MongoClientSettings settings = MongoClientSettings.builder()
.applyConnectionString(this.connectionString)
.codecRegistry(CODEC_REGISTRIES)
.build();
// @formatter:on
LOGGER.debug("Created MongoClientSettings {}", settings);
LOGGER.debug("Creating MongoClient {}...", settings);
this.mongoClient = MongoClients.create(settings);
LOGGER.debug("Created MongoClient {}", mongoClient);
String databaseName = this.connectionString.getDatabase();
LOGGER.debug("Getting MongoDatabase {}...", databaseName);
this.mongoDatabase = this.mongoClient.getDatabase(databaseName);
LOGGER.debug("Got MongoDatabase {}", mongoDatabase);
this.isCapped = isCapped;
this.collectionSize = collectionSize;
}
@Override
public MongoDb4Connection getConnection() {
return new MongoDb4Connection(connectionString, mongoClient, mongoDatabase, isCapped, collectionSize);
}
@Override
public String toString() {
return String.format(
"%s [connectionString=%s, collectionSize=%s, isCapped=%s, mongoClient=%s, mongoDatabase=%s]",
MongoDb4Provider.class.getSimpleName(), connectionString, collectionSize, isCapped, mongoClient,
mongoDatabase);
}
}
| {
"pile_set_name": "Github"
} |
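The `MongoDb4Provider` above is constructed only through its fluent builder (`newBuilder().setCapped(...).setCollectionSize(...).build()`), with the connection string marked `@Required`. The shape of that pattern, sketched in Python with hypothetical `Provider`/`Builder` stand-ins (no MongoDB calls involved, just the builder mechanics):

```python
class Provider:
    """Stand-in for MongoDb4Provider: fields are fixed once built."""
    def __init__(self, connection, capped, collection_size):
        self.connection = connection
        self.capped = capped
        self.collection_size = collection_size

class Builder:
    DEFAULT_COLLECTION_SIZE = 536_870_912  # same default as the Java class

    def __init__(self):
        self._connection = None
        self._capped = False
        self._collection_size = self.DEFAULT_COLLECTION_SIZE

    def set_connection(self, connection):
        self._connection = connection
        return self  # returning self is what makes the calls chainable

    def set_capped(self, capped):
        self._capped = capped
        return self

    def set_collection_size(self, size):
        self._collection_size = size
        return self

    def build(self):
        if not self._connection:
            raise ValueError("No connection string provided")  # mirrors @Required
        return Provider(self._connection, self._capped, self._collection_size)

p = Builder().set_connection("mongodb://localhost/logs").set_capped(True).build()
print(p.capped, p.collection_size)  # True 536870912
```

The builder keeps the provider itself free of partially-initialized states: validation happens once, in `build()`.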
// Copyright (c) Brock Allen & Dominick Baier. All rights reserved.
// Licensed under the Apache License, Version 2.0. See LICENSE in the project root for license information.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using IdentityServer4.Configuration;
using IdentityServer4.Events;
using IdentityServer4.Extensions;
using IdentityServer4.Models;
using IdentityServer4.Services;
using IdentityServer4.Validation;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
namespace IdentityServerHost.Quickstart.UI
{
[Authorize]
[SecurityHeaders]
public class DeviceController : Controller
{
private readonly IDeviceFlowInteractionService _interaction;
private readonly IEventService _events;
private readonly IOptions<IdentityServerOptions> _options;
private readonly ILogger<DeviceController> _logger;
public DeviceController(
IDeviceFlowInteractionService interaction,
IEventService eventService,
IOptions<IdentityServerOptions> options,
ILogger<DeviceController> logger)
{
_interaction = interaction;
_events = eventService;
_options = options;
_logger = logger;
}
[HttpGet]
public async Task<IActionResult> Index()
{
string userCodeParamName = _options.Value.UserInteraction.DeviceVerificationUserCodeParameter;
string userCode = Request.Query[userCodeParamName];
if (string.IsNullOrWhiteSpace(userCode)) return View("UserCodeCapture");
var vm = await BuildViewModelAsync(userCode);
if (vm == null) return View("Error");
vm.ConfirmUserCode = true;
return View("UserCodeConfirmation", vm);
}
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> UserCodeCapture(string userCode)
{
var vm = await BuildViewModelAsync(userCode);
if (vm == null) return View("Error");
return View("UserCodeConfirmation", vm);
}
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Callback(DeviceAuthorizationInputModel model)
{
if (model == null) throw new ArgumentNullException(nameof(model));
var result = await ProcessConsent(model);
if (result.HasValidationError) return View("Error");
return View("Success");
}
private async Task<ProcessConsentResult> ProcessConsent(DeviceAuthorizationInputModel model)
{
var result = new ProcessConsentResult();
var request = await _interaction.GetAuthorizationContextAsync(model.UserCode);
if (request == null) return result;
ConsentResponse grantedConsent = null;
// user clicked 'no' - send back the standard 'access_denied' response
if (model.Button == "no")
{
grantedConsent = new ConsentResponse { Error = AuthorizationError.AccessDenied };
// emit event
await _events.RaiseAsync(new ConsentDeniedEvent(User.GetSubjectId(), request.Client.ClientId, request.ValidatedResources.RawScopeValues));
}
// user clicked 'yes' - validate the data
else if (model.Button == "yes")
{
// if the user consented to some scope, build the response model
if (model.ScopesConsented != null && model.ScopesConsented.Any())
{
var scopes = model.ScopesConsented;
if (ConsentOptions.EnableOfflineAccess == false)
{
scopes = scopes.Where(x => x != IdentityServer4.IdentityServerConstants.StandardScopes.OfflineAccess);
}
grantedConsent = new ConsentResponse
{
RememberConsent = model.RememberConsent,
ScopesValuesConsented = scopes.ToArray(),
Description = model.Description
};
// emit event
await _events.RaiseAsync(new ConsentGrantedEvent(User.GetSubjectId(), request.Client.ClientId, request.ValidatedResources.RawScopeValues, grantedConsent.ScopesValuesConsented, grantedConsent.RememberConsent));
}
else
{
result.ValidationError = ConsentOptions.MustChooseOneErrorMessage;
}
}
else
{
result.ValidationError = ConsentOptions.InvalidSelectionErrorMessage;
}
if (grantedConsent != null)
{
// communicate outcome of consent back to identityserver
await _interaction.HandleRequestAsync(model.UserCode, grantedConsent);
                // indicate that it's ok to redirect back to the authorization endpoint
result.RedirectUri = model.ReturnUrl;
result.Client = request.Client;
}
else
{
// we need to redisplay the consent UI
result.ViewModel = await BuildViewModelAsync(model.UserCode, model);
}
return result;
}
private async Task<DeviceAuthorizationViewModel> BuildViewModelAsync(string userCode, DeviceAuthorizationInputModel model = null)
{
var request = await _interaction.GetAuthorizationContextAsync(userCode);
if (request != null)
{
return CreateConsentViewModel(userCode, model, request);
}
return null;
}
private DeviceAuthorizationViewModel CreateConsentViewModel(string userCode, DeviceAuthorizationInputModel model, DeviceFlowAuthorizationRequest request)
{
var vm = new DeviceAuthorizationViewModel
{
UserCode = userCode,
Description = model?.Description,
RememberConsent = model?.RememberConsent ?? true,
ScopesConsented = model?.ScopesConsented ?? Enumerable.Empty<string>(),
ClientName = request.Client.ClientName ?? request.Client.ClientId,
ClientUrl = request.Client.ClientUri,
ClientLogoUrl = request.Client.LogoUri,
AllowRememberConsent = request.Client.AllowRememberConsent
};
vm.IdentityScopes = request.ValidatedResources.Resources.IdentityResources.Select(x => CreateScopeViewModel(x, vm.ScopesConsented.Contains(x.Name) || model == null)).ToArray();
var apiScopes = new List<ScopeViewModel>();
foreach (var parsedScope in request.ValidatedResources.ParsedScopes)
{
var apiScope = request.ValidatedResources.Resources.FindApiScope(parsedScope.ParsedName);
if (apiScope != null)
{
var scopeVm = CreateScopeViewModel(parsedScope, apiScope, vm.ScopesConsented.Contains(parsedScope.RawValue) || model == null);
apiScopes.Add(scopeVm);
}
}
if (ConsentOptions.EnableOfflineAccess && request.ValidatedResources.Resources.OfflineAccess)
{
apiScopes.Add(GetOfflineAccessScope(vm.ScopesConsented.Contains(IdentityServer4.IdentityServerConstants.StandardScopes.OfflineAccess) || model == null));
}
vm.ApiScopes = apiScopes;
return vm;
}
private ScopeViewModel CreateScopeViewModel(IdentityResource identity, bool check)
{
return new ScopeViewModel
{
Value = identity.Name,
DisplayName = identity.DisplayName ?? identity.Name,
Description = identity.Description,
Emphasize = identity.Emphasize,
Required = identity.Required,
Checked = check || identity.Required
};
}
public ScopeViewModel CreateScopeViewModel(ParsedScopeValue parsedScopeValue, ApiScope apiScope, bool check)
{
return new ScopeViewModel
{
Value = parsedScopeValue.RawValue,
// todo: use the parsed scope value in the display?
DisplayName = apiScope.DisplayName ?? apiScope.Name,
Description = apiScope.Description,
Emphasize = apiScope.Emphasize,
Required = apiScope.Required,
Checked = check || apiScope.Required
};
}
private ScopeViewModel GetOfflineAccessScope(bool check)
{
return new ScopeViewModel
{
Value = IdentityServer4.IdentityServerConstants.StandardScopes.OfflineAccess,
DisplayName = ConsentOptions.OfflineAccessDisplayName,
Description = ConsentOptions.OfflineAccessDescription,
Emphasize = true,
Checked = check
};
}
}
} | {
"pile_set_name": "Github"
} |
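`ProcessConsent` above grants only the scopes the user ticked, strips `offline_access` when `ConsentOptions.EnableOfflineAccess` is off, and treats an empty selection as a validation error. That filtering step, reduced to a small Python sketch (function and parameter names are illustrative, not IdentityServer APIs):

```python
OFFLINE_ACCESS = "offline_access"

def filter_consented_scopes(scopes_consented, enable_offline_access):
    """Return the scope values to grant, mirroring the 'yes' branch above."""
    if not scopes_consented:
        return None  # caller reports the "must choose at least one" validation error
    if not enable_offline_access:
        scopes_consented = [s for s in scopes_consented if s != OFFLINE_ACCESS]
    return list(scopes_consented)

print(filter_consented_scopes(["api1", "offline_access"], enable_offline_access=False))
# ['api1']
```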
/*
* Power BI Visualizations
*
* Copyright (c) Microsoft Corporation
* All rights reserved.
* MIT License
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the ""Software""), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
module powerbi.extensibility.visual {
"use strict";
import DataViewObjectsParser = powerbi.extensibility.utils.dataview.DataViewObjectsParser;
export class VisualSettings extends DataViewObjectsParser {
public rcv_script: rcv_scriptSettings = new rcv_scriptSettings();
}
    export class rcv_scriptSettings {
        // fields generated for the R script visual; string types assumed here
        public provider: string;
        public source: string;
    }
}
| {
"pile_set_name": "Github"
} |
{
"name" : "1610.05858.pdf",
"metadata" : {
"source" : "CRF",
"title" : null,
"authors" : [ ],
"emails" : [ "[email protected]", "[email protected]", "[email protected]" ],
"sections" : [ {
"heading" : null,
    "text" : "arXiv:1610.05858v1 [cs.CL] 19 Oct 2016"
}, {
"heading" : "1 Introduction",
    "text" : "Patient clinical records contain a longitudinal record of patient health, disease, tests conducted and response to treatment, often useful for epidemiologic and clinical research. Thus extracting this information has been of immense value both for clinical practice and for improving the quality of patient care provided while reducing healthcare costs. Concept extraction (CE) aims to identify medical concept mentions such as problems, tests and treatments in clinical records (e.g. discharge summaries, progress reports) and classify them into pre-defined categories. The concepts in clinical records are often expressed with unstructured free text, rendering their extraction a daunting task for clinical Natural Language Processing (NLP) systems. The CE problem is analogous to the well-studied Named Entity Recognition (NER) task in the general NLP domain. Traditional approaches to extracting concepts relied on rule based systems or dictionaries (lexicons) using string comparison to recognise concepts of interest. The concepts represent drug names, anatomical nomenclature, and other specialised names and phrases which are not part of mundane English vocabulary. For instance ”resp status” should be interpreted as ”response status”. Furthermore the use of abbreviated phrases is very common among the medical fraternity and many of these abbreviations have alternative meanings in other genres of English. Intrinsically, rule based systems are hard to scale, and ineffective in the presence of informal sentences and abbreviated phrases (Liu et al., 2015). Dictionary based systems perform a fast look-up from medical ontologies such as the Unified Medical Language System (UMLS) to extract concepts (Kipper-Schuler et al., 2008). Although these systems achieve high precision, they suffer from low recall (i.e. they may not identify a significant number of concepts) due to misspelled words or medical jargon not present in dictionaries. 
To overcome these limitations various supervised and semi-supervised machine learning (ML) approaches and their variants have been proposed utilizing conditional random fields (CRF), maximum entropy and support vector machine (SVM) models which utilize both textual and contextual information while reducing the dependency on lexicon lookup (Lafferty et al., 2001; Berger et al., 1996; Joachims, 1998). However these state-of-the-art ML approaches follow a two-step process of domain-specific feature engineering and classification, resulting in highly dedicated hand-crafted systems that require labour-intensive expert knowledge. For this reason, this paper employs a bidirectional LSTM-CRF initialized with general purpose off-the-shelf neural word embeddings derived from Glove (Pennington et al., 2014) and Word2Vec (Mikolov et al., 2013) for automatic feature learning, thus avoiding time-consuming feature engineering while delivering system performance comparable to the best submissions from the 2010 i2b2/VA challenge.\nMost of the research to date has formulated CE as a sequence labelling NER problem, employing various supervised and semi-supervised ML algorithms with focussed domain-dependent attributes and specialized text features (Uzuner et al., 2011). Similarly hybrid models obtained by cascading CRF and SVM algorithms along with several pattern matching rules are shown to produce effective results (Boag et al., 2015). The efficacy of including pre-processing techniques (such as truecasing and annotation combination) along with a CRF based NER system to improve concept extraction performance was exemplified by (Fu and Ananiadou, 2014). The best performing system for the 2010 i2b2/VA concept extraction task adopted unsupervised feature representations derived from unlabeled corpora using the Brown clustering technique along with semi-supervised Markov HMM models (de Bruijn et al., 2011). 
However, the unsupervised one-hot word feature representations derived from Brown clustering fail to capture multi-aspect relations between words. Subsequently (Jonnalagadda et al., 2012) demonstrated that a random indexing model with distributional word representations improves clinical concept extraction. With the recent success of incorporating word embeddings derived from the entire English Wikipedia in various NER tasks (Collobert et al., 2011), binarized word embeddings derived from domain specific corpora (e.g. the Monitoring in Intensive Care (MIMIC) II corpus) have improved the performance of CRF based concept extraction systems (Wu et al., 2015). In the broader field of machine learning, recent years have witnessed a proliferation of deep neural networks, with unprecedented results in tasks such as vision, speech and NER. One of the main advantages of neural networks is that they learn features automatically, thus avoiding laborious feature engineering. Given these promising results, the main goal of this paper is to employ a bidirectional LSTM CRF initialized with general off-the-shelf unsupervised word embeddings derived from Glove and Word2Vec models and evaluate its performance. The experimental results obtained on the 2010 i2b2/VA reference standard corpora without the use of any extensive feature engineering or domain specific resources are very encouraging."
}, {
"heading" : "3 The Proposed Approach",
    "text" : "CE can be formulated as a joint segmentation and classification task over a predefined set of classes. As an example, consider the input sentence provided in Table 1. The notation follows the widely adopted in/out/begin (IOB) entity representation with, in this instance, HCT as the test, 2U PRBC as the treatment. In this paper, we approach the CE task by bidirectional LSTM CRF and we therefore provide a brief description hereafter. In a bidirectional LSTM CRF, each word in the input sentence is first mapped to a random real-valued vector of arbitrary dimension, d. Then, a measurement for the word, noted as x(t), is formed by concatenating the word’s own vector with a window of preceding and following vectors (the context window):\nw_3(t) = [His, HCT, dropped],\n\n‘His′ → x_His ∈ R^d,\n\n‘HCT′ → x_HCT ∈ R^d,\n\n‘dropped′ → x_dropped ∈ R^d,\n\nx(t) = [x_His, x_HCT, x_dropped] ∈ R^3d\n(1)\n\nwhere w_3(t) is the context window centered around the t-th word, ‘HCT′, and x_word represents the numerical vector for the word."
}, {
"heading" : "3.1 Word Embeddings",
    "text" : "Word embeddings are dense vector representations of natural language words that preserve the semantic and syntactic similarities between them. The vector representations can be generated by either count-based models such as Hellinger-PCA (Lebret and Collobert, 2013) or direct prediction models such as Word2Vec, comprising Skip-gram and Continuous Bag of Words (CBOW), or Glove word embeddings. Glove vector representations capture complex patterns beyond word similarity by combining efficient use of word co-occurrence statistics, and generate a global vector representation for any given word."
}, {
"heading" : "3.2 Bidirectional LSTM-CRF Networks",
"text" : "The LSTM was designed to overcome this limitation by incorporating a gated memory-cell to capture long-range dependencies within the data (Hochreiter and Schmidhuber, 1997). In the bidirectional LSTM, for any given sentence, the network computes both a left, −→ h (t), and a right, ←− h (t), representations of the sentence context at every input, x(t). The final representation is created by concatenating them as h(t) = [ −→ h (t); ←− h (t)]. All these networks utilize the h(t) layer as an implicit feature for entity class prediction: although this model has proved effective in many cases, it is not able to provide joint decoding of the outputs in a Viterbi-style manner (e.g., an I-group cannot follow a B-brand; etc). Thus, another modification to the bidirectional LSTM is the addition of a conditional random field (CRF) (Lafferty et al., 2001) as the output layer to provide optimal sequential decoding. The resulting network is commonly referred to as the bidirectional LSTM-CRF (Lample et al., 2016)."
}, {
"heading" : "4 Experiments",
"text" : ""
}, {
"heading" : "4.1 Datasets",
"text" : "The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks, one among them is concept extraction task focused on the extraction of medical concepts from patient reports. A total of 394 training reports, 477 test reports, and 877 unannotated reports\nwere de-identified and released to challenge participants with data use agreements (Uzuner et al., 2011). However part of that data set is no longer being distributed due to Institutional Review Board (IRB) restrictions. Table 2 summarizes the basic statistics of the training and test datasets used in our experiments. We split training dataset into a training and validation sets with approximately 70% of sentences for training and the remaining for validation."
}, {
"heading" : "4.2 Evaluation Methodology",
    "text" : "Our models have been blindly evaluated on unseen 2010 i2b2/VA CE test data using the strict evaluation metrics. With this evaluation, the predicted entities have to match the ground-truth entities exactly, both in boundary and class. To facilitate the replication of our experimental results, we have used a publicly-available library for the implementation (i.e., the Theano neural network toolkit (Bergstra et al., 2010)) and we publicly release our code1. The experiments have been run over a range of values for the hyper-parameters, using the validation set for selection (Bergstra and Bengio, 2012). The hyper-parameters include the number of hidden-layer nodes, H ∈ {25, 50, 100}, the context window size, s ∈ {1, 3, 5}, and the embedding dimension, d ∈ {50, 100, 300, 500, 1000}. Two additional parameters, the learning and drop-out rates, were sampled from a uniform distribution in the range [0.05, 0.1]. To begin with, the embedding and initial weight matrices were all randomly initialized from the uniform distribution within the range [−1, 1]; subsequently, word embeddings with d = 300 derived from Word2Vec and Glove were utilized in the experiments. Early training stopping was set to 100 epochs to mollify over-fitting, and the model that gave the best performance on the validation set was retained. The accuracy is reported in terms of micro-average F1 score computed using the CoNLL score function (Nadeau and Sekine, 2007)."
}, {
"heading" : "4.3 Results and Analysis",
    "text" : "Table 3 shows the performance comparison between the employed bidirectional LSTM-CRF and state-of-the-art CE systems. As an overall note, the bidirectional LSTM-CRF has not reached the same accuracy as the top system, a semi-supervised Markov HMM (de Bruijn et al., 2011). However, our approach has achieved the second-best score on 2010 i2b2/VA. These results seem interesting on the ground that the bidirectional LSTM-CRF provides CE without utilizing any manually-engineered features. Given that our system learns entirely from the data, it is also robust to any new concept or unseen word additions. In our current experimental setting, about 20% of tokens were either alpha-numeric or abbreviated strings whose Word2Vec or Glove pretrained vector embeddings were not available. These special strings in text were randomly initialized with d = 300 vector embeddings and input to the bidirectional LSTM-CRF system. Subsequently the system was able to learn meaningful representations with the remaining 80% of pre-trained vector embeddings and produce comparable results to the state-of-the-art CE systems.\nConclusion\nThis paper has used the contemporary bidirectional LSTM-CRF for clinical concept extraction. The most appealing feature of this system is its ability to provide end-to-end recognition initialized with general purpose off-the-shelf word embeddings, sparing effort from laborious feature construction. To the best of our knowledge, ours is the first paper to adopt a bidirectional LSTM-CRF for concept extraction from clinical records. The experimental results over the 2010 i2b2/VA reference standard corpora look promising, with the bidirectional LSTM-CRF ranking closely to the state of the art. 
A potential way to further improve its performance would be to initialize its training with unsupervised word embeddings such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) trained with domain specific resources such as Monitoring in Intensive Care (MIMIC) II corpora. This approach has proved effective in many other domains and still dispenses with expert annotation effort; we plan this exploration for the near future.\n1https://github.com/raghavchalapathy/Bidirectional-LSTM-CRF-for-Clinical-Concept-Extraction\n[Berger et al.1996] Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational linguistics, 22(1):39–71.\n[Bergstra and Bengio2012] James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13:281–305.\n[Bergstra et al.2010] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: A CPU and GPU math compiler in Python. In The 9th Python in Science Conference, pages 1–7.\n[Boag et al.2015] William Boag, Kevin Wacome, and MS Tristan Naumann. 2015. Cliner: A lightweight tool for clinical named entity recognition.\n[Collobert et al.2011] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537.\n[de Bruijn et al.2011] Berry de Bruijn, Colin Cherry, Svetlana Kiritchenko, Joel Martin, and Xiaodan Zhu. 2011. Machine-learned solutions for three stages of clinical information extraction: the state of the art at i2b2 2010. Journal of the American Medical Informatics Association, 18(5):557–562.\n[Fu and Ananiadou2014] Xiao Fu and Sophia Ananiadou. 2014. Improving the extraction of clinical concepts from clinical records. 
Proceedings of BioTxtM14.\n[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.\n[Joachims1998] Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In European conference on machine learning, pages 137–142. Springer.\n[Jonnalagadda et al.2012] Siddhartha Jonnalagadda, Trevor Cohen, Stephen Wu, and Graciela Gonzalez. 2012. Enhancing clinical concept extraction with distributional semantics. Journal of biomedical informatics, 45(1):129–140.\n[Kipper-Schuler et al.2008] Karin Kipper-Schuler, Vinod Kaggal, James Masanz, Philip Ogren, and Guergana Savova. 2008. System evaluation on a named entity corpus from clinical notes. In Language Resources and Evaluation Conference, LREC, pages 3001–3007.\n[Lafferty et al.2001] John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289.\n[Lample et al.2016] Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL-HLT.\n[Lebret and Collobert2013] Rémi Lebret and Ronan Collobert. 2013. Word emdeddings through hellinger pca. arXiv preprint arXiv:1312.5542.\n[Liu et al.2015] Shengyu Liu, Buzhou Tang, Qingcai Chen, and Xiaolong Wang. 2015. Drug name recognition: Approaches and resources. Information, 6(4):790–810.\n[Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119.\n[Nadeau and Sekine2007] David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Linguisticae Investigationes, 30(1):3–26.\n[Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. 
GloVe: Global vectors for word representation. In EMNLP, pages 1532–1543.\n[Uzuner et al.2011] Özlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552–556.\n[Wu et al.2015] Yonghui Wu, Jun Xu, Min Jiang, Yaoyun Zhang, and Hua Xu. 2015. A study of neural word embeddings for named entity recognition in clinical text. In AMIA Annual Symposium Proceedings, volume 2015, page 1326. American Medical Informatics Association."
} ],
"references" : [ {
"title" : "A maximum entropy approach to natural language processing",
"author" : [ "Berger et al.1996] Adam L Berger", "Vincent J Della Pietra", "Stephen A Della Pietra" ],
"venue" : null,
"citeRegEx" : "Berger et al\\.,? \\Q1996\\E",
"shortCiteRegEx" : "Berger et al\\.",
"year" : 1996
}, {
"title" : "Random search for hyper-parameter optimization",
"author" : [ "Bergstra", "Bengio2012] James Bergstra", "Yoshua Bengio" ],
"venue" : "Journal of Machine Learning Research,",
"citeRegEx" : "Bergstra et al\\.,? \\Q2012\\E",
"shortCiteRegEx" : "Bergstra et al\\.",
"year" : 2012
}, {
"title" : "Theano: A CPU and GPU math compiler in Python",
"author" : [ "Olivier Breuleux", "Frédéric Bastien", "Pascal Lamblin", "Razvan Pascanu", "Guillaume Desjardins", "Joseph Turian", "David Warde-Farley", "Yoshua Bengio" ],
"venue" : "In The 9th Python in Science Conference,",
"citeRegEx" : "Bergstra et al\\.,? \\Q2010\\E",
"shortCiteRegEx" : "Bergstra et al\\.",
"year" : 2010
}, {
"title" : "Cliner: A lightweight tool for clinical named entity recognition",
"author" : [ "Boag et al.2015] William Boag", "Kevin Wacome", "MS Tristan Naumann" ],
"venue" : null,
"citeRegEx" : "Boag et al\\.,? \\Q2015\\E",
"shortCiteRegEx" : "Boag et al\\.",
"year" : 2015
}, {
"title" : "Natural language processing (almost) from scratch",
"author" : [ "Jason Weston", "Léon Bottou", "Michael Karlen", "Koray Kavukcuoglu", "Pavel Kuksa" ],
"venue" : "Journal of Machine Learning Research,",
"citeRegEx" : "Collobert et al\\.,? \\Q2011\\E",
"shortCiteRegEx" : "Collobert et al\\.",
"year" : 2011
}, {
"title" : "Machine-learned solutions for three stages of clinical information extraction: the state of the art at i2b2",
"author" : [ "Colin Cherry", "Svetlana Kiritchenko", "Joel Martin", "Xiaodan Zhu" ],
"venue" : "Journal of the American Medical Informatics Association,",
"citeRegEx" : "Bruijn et al\\.,? \\Q2011\\E",
"shortCiteRegEx" : "Bruijn et al\\.",
"year" : 2011
}, {
"title" : "Improving the extraction of clinical concepts from clinical records",
"author" : [ "Fu", "Ananiadou2014] Xiao Fu", "Sophia Ananiadou" ],
"venue" : "Proceedings of BioTxtM14",
"citeRegEx" : "Fu et al\\.,? \\Q2014\\E",
"shortCiteRegEx" : "Fu et al\\.",
"year" : 2014
}, {
"title" : "Long short-term memory",
"author" : [ "Hochreiter", "Schmidhuber1997] Sepp Hochreiter", "Jürgen Schmidhuber" ],
"venue" : "Neural Computation,",
"citeRegEx" : "Hochreiter et al\\.,? \\Q1997\\E",
"shortCiteRegEx" : "Hochreiter et al\\.",
"year" : 1997
}, {
"title" : "Text categorization with support vector machines: Learning with many relevant features",
"author" : [ "Thorsten Joachims" ],
"venue" : "In European conference on machine learning,",
"citeRegEx" : "Joachims.,? \\Q1998\\E",
"shortCiteRegEx" : "Joachims.",
"year" : 1998
}, {
"title" : "Enhancing clinical concept extraction with distributional semantics",
"author" : [ "Trevor Cohen", "Stephen Wu", "Graciela Gonzalez" ],
"venue" : "Journal of biomedical informatics,",
"citeRegEx" : "Jonnalagadda et al\\.,? \\Q2012\\E",
"shortCiteRegEx" : "Jonnalagadda et al\\.",
"year" : 2012
}, {
"title" : "System evaluation on a named entity corpus from clinical notes",
"author" : [ "Vinod Kaggal", "James Masanz", "Philip Ogren", "Guergana Savova" ],
"venue" : "In Language Resources and Evaluation Conference,",
"citeRegEx" : "Kipper.Schuler et al\\.,? \\Q2008\\E",
"shortCiteRegEx" : "Kipper.Schuler et al\\.",
"year" : 2008
}, {
"title" : "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"author" : [ "Andrew McCallum", "Fernando Pereira" ],
"venue" : "In ICML,",
"citeRegEx" : "Lafferty et al\\.,? \\Q2001\\E",
"shortCiteRegEx" : "Lafferty et al\\.",
"year" : 2001
}, {
"title" : "Neural architectures for named entity recognition",
"author" : [ "Miguel Ballesteros", "Sandeep Subramanian", "Kazuya Kawakami", "Chris Dyer" ],
"venue" : null,
"citeRegEx" : "Lample et al\\.,? \\Q2016\\E",
"shortCiteRegEx" : "Lample et al\\.",
"year" : 2016
}, {
"title" : "Word emdeddings through hellinger pca. arXiv preprint arXiv:1312.5542",
"author" : [ "Lebret", "Collobert2013] Rémi Lebret", "Ronan Collobert" ],
"venue" : null,
"citeRegEx" : "Lebret et al\\.,? \\Q2013\\E",
"shortCiteRegEx" : "Lebret et al\\.",
"year" : 2013
}, {
"title" : "Distributed representations of words and phrases and their compositionality",
"author" : [ "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ],
"venue" : null,
"citeRegEx" : "Mikolov et al\\.,? \\Q2013\\E",
"shortCiteRegEx" : "Mikolov et al\\.",
"year" : 2013
}, {
"title" : "A survey of named entity recognition and classification",
"author" : [ "Nadeau", "Sekine2007] David Nadeau", "Satoshi Sekine" ],
"venue" : "Linguisticae Investigationes,",
"citeRegEx" : "Nadeau et al\\.,? \\Q2007\\E",
"shortCiteRegEx" : "Nadeau et al\\.",
"year" : 2007
}, {
"title" : "GloVe: Global vectors for word representation",
"author" : [ "Richard Socher", "Christopher D. Manning" ],
"venue" : "In EMNLP,",
"citeRegEx" : "Pennington et al\\.,? \\Q2014\\E",
"shortCiteRegEx" : "Pennington et al\\.",
"year" : 2014
}, {
"title" : "A study of neural word embeddings for named entity recognition in clinical text",
"author" : [ "Wu et al.2015] Yonghui Wu", "Jun Xu", "Min Jiang", "Yaoyun Zhang", "Hua Xu" ],
"venue" : "In AMIA Annual Symposium Proceedings,",
"citeRegEx" : "Wu et al\\.,? \\Q2015\\E",
"shortCiteRegEx" : "Wu et al\\.",
"year" : 2015
} ],
"referenceMentions" : [ {
"referenceID" : 10,
"context" : "Dictionary based systems perform a fast look-up from medical ontologies such as Unified Medical Language System (UMLS) to extract concepts (Kipper-Schuler et al., 2008).",
"startOffset" : 139,
"endOffset" : 168
}, {
"referenceID" : 11,
"context" : "To overcome these limitations various supervised and semi-supervised machine learning (ML) approaches and its variants have been proposed utilizing conditional random fields (CRF), maximum entropy and support vector machines (SVM) models which utilize both textual and contextual information while reducing the dependency on lexicon lookup (Lafferty et al., 2001; Berger et al., 1996; Joachims, 1998).",
"startOffset" : 340,
"endOffset" : 400
}, {
"referenceID" : 0,
"context" : "To overcome these limitations various supervised and semi-supervised machine learning (ML) approaches and its variants have been proposed utilizing conditional random fields (CRF), maximum entropy and support vector machines (SVM) models which utilize both textual and contextual information while reducing the dependency on lexicon lookup (Lafferty et al., 2001; Berger et al., 1996; Joachims, 1998).",
"startOffset" : 340,
"endOffset" : 400
}, {
"referenceID" : 8,
"context" : "To overcome these limitations various supervised and semi-supervised machine learning (ML) approaches and its variants have been proposed utilizing conditional random fields (CRF), maximum entropy and support vector machines (SVM) models which utilize both textual and contextual information while reducing the dependency on lexicon lookup (Lafferty et al., 2001; Berger et al., 1996; Joachims, 1998).",
"startOffset" : 340,
"endOffset" : 400
}, {
"referenceID" : 16,
"context" : "For this reason, this paper employs bidirectional LSTM-CRF intialized with general purpose off-the-shelf neural word embeddings derived from Glove (Pennington et al., 2014) and Word2Vec (Mikolov et al.",
"startOffset" : 147,
"endOffset" : 172
}, {
"referenceID" : 14,
"context" : ", 2014) and Word2Vec (Mikolov et al., 2013) for automatic feature learning thus avoiding time-consuming feature engineering, which deliver system performance comparable to the best submissions from the 2010 i2b2/VA challenge.",
"startOffset" : 21,
"endOffset" : 43
}, {
"referenceID" : 3,
"context" : "Similarly hybrid models obtained by cascading CRF and SVM algorithms along with several pattern matching rules are shown to produce effective results (Boag et al., 2015).",
"startOffset" : 150,
"endOffset" : 169
}, {
"referenceID" : 9,
"context" : "Subsequently (Jonnalagadda et al., 2012) demonstrated that random indexing model with distributional word representations improve clinical concept extraction.",
"startOffset" : 13,
"endOffset" : 40
}, {
"referenceID" : 4,
"context" : "With recent success of incorporating word embeddings derived from the entire English wikipedia in various NER task (Collobert et al., 2011), binarized word embeddings derived from domain specific copora (Eg: Monitoring in Intensive Care (MIMIC) II corpus) has improved performance of CRF based concept extraction system (Wu et al.",
"startOffset" : 115,
"endOffset" : 139
}, {
"referenceID" : 17,
"context" : ", 2011), binarized word embeddings derived from domain specific copora (Eg: Monitoring in Intensive Care (MIMIC) II corpus) has improved performance of CRF based concept extraction system (Wu et al., 2015).",
"startOffset" : 188,
"endOffset" : 205
}, {
"referenceID" : 11,
"context" : "Thus, another modification to the bidirectional LSTM is the addition of a conditional random field (CRF) (Lafferty et al., 2001) as the output layer to provide optimal sequential decoding.",
"startOffset" : 105,
"endOffset" : 128
}, {
"referenceID" : 12,
"context" : "The resulting network is commonly referred to as the bidirectional LSTM-CRF (Lample et al., 2016).",
"startOffset" : 76,
"endOffset" : 97
}, {
"referenceID" : 9,
"context" : "23 distributonal semantics-CRF (Jonnalagadda et al., 2012) 85.",
"startOffset" : 31,
"endOffset" : 58
}, {
"referenceID" : 17,
"context" : "70 binarized neural embedding CRF(Wu et al., 2015) 85.",
"startOffset" : 33,
"endOffset" : 50
}, {
"referenceID" : 3,
"context" : "80 CliNER (Boag et al., 2015) 79.",
"startOffset" : 10,
"endOffset" : 29
}, {
"referenceID" : 2,
"context" : ", the Theano neural network toolkit (Bergstra et al., 2010)) and we publicly release our code1.",
"startOffset" : 36,
"endOffset" : 59
}, {
"referenceID" : 14,
"context" : "A potential way to further improve its performance would be to initialize its training with unsupervised word embeddings such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al.",
"startOffset" : 138,
"endOffset" : 160
}, {
"referenceID" : 16,
"context" : ", 2013) and GloVe (Pennington et al., 2014) trained with domain specific resources such as Monitoring in Intensive Care (MIMIC) II copora.",
"startOffset" : 18,
"endOffset" : 43
} ],
"year" : 2016,
"abstractText" : "Extraction of concepts present in patient clinical records is an essential step in clinical research. The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for clinical records presented concept extraction (CE) task, with aim to identify concepts (such as treatments, tests, problems) and classify them into predefined categories. State-of-the-art CE approaches heavily rely on hand crafted features and domain specific resources which are hard to collect and tune. For this reason, this paper employs bidirectional LSTM with CRF decoding initialized with general purpose off-the-shelf word embeddings for CE. The experimental results achieved on 2010 i2b2/VA reference standard corpora using bidirectional LSTM CRF ranks closely with top ranked systems.",
"creator" : "LaTeX with hyperref package"
}
} | {
"pile_set_name": "Github"
} |
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
using System.Diagnostics.ContractsLight;
using System.IO;
namespace BuildXL.Utilities
{
/// <summary>
/// Static helper class to format path
/// </summary>
public static class PathFormatter
{
private static readonly char s_hostOsSeparator = Path.DirectorySeparatorChar;
/// <summary>
/// Returns the separator for the given pathformat
/// </summary>
public static char GetPathSeparator(PathFormat pathFormat)
{
switch (pathFormat)
{
case PathFormat.HostOs:
return s_hostOsSeparator;
case PathFormat.Script:
return '/';
case PathFormat.Windows:
return '\\';
case PathFormat.Unix:
return '/';
default:
Contract.Assert(false, "Unexpected pathFormat");
return '\0';
}
}
}
}
| {
"pile_set_name": "Github"
} |
/*
* Copyright (c) 2012, Michael Lehn
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1) Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2) Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* 3) Neither the name of the FLENS development group nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef CXXBLAS_TINYLEVEL1_GESCAL_H
#define CXXBLAS_TINYLEVEL1_GESCAL_H 1
#include "xflens/cxxblas/typedefs.h"
namespace cxxblas {
template <int m, int n, typename ALPHA, typename MA, int ldA>
void
gescal(const ALPHA &alpha, MA *A);
} // namespace cxxblas
#endif // CXXBLAS_TINYLEVEL1_GESCAL_H
| {
"pile_set_name": "Github"
} |
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="https://schema.phpunit.de/6.3/phpunit.xsd"
backupGlobals="false"
bootstrap="tests/bootstrap.php"
backupStaticAttributes="false"
cacheTokens="false"
colors="false"
processIsolation="false"
stopOnError="false"
stopOnFailure="false"
stopOnIncomplete="false"
stopOnSkipped="false"
stopOnRisky="false"
verbose="false"
>
<php>
<ini name="error_reporting" value="-1" />
<server name="PANTHER_WEB_SERVER_DIR" value="interface"/>
<!-- ATTENTION use only on local machine you can not activate chrome visual mode on travis. -->
<!-- to disable browser's headless mode (will display the testing window, useful to debug) -->
<!--<server name="PANTHER_NO_HEADLESS" value="1"/>-->
</php>
<testsuites>
<testsuite name="openemr">
<directory>tests/Tests/Unit</directory>
<directory>tests/Tests/E2e</directory>
<directory>tests/Tests/Api</directory>
<directory>tests/Tests/Fixtures</directory>
<directory>tests/Tests/Services</directory>
<directory>tests/Tests/Validators</directory>
<directory>tests/Tests/RestControllers</directory>
<directory>tests/Tests/Common</directory>
</testsuite>
<testsuite name="unit">
<directory>tests/Tests/Unit</directory>
</testsuite>
<testsuite name="e2e">
<directory>tests/Tests/E2e</directory>
</testsuite>
<testsuite name="api">
<directory>tests/Tests/Api</directory>
</testsuite>
<testsuite name="fixtures">
<directory>tests/Tests/Fixtures</directory>
</testsuite>
<testsuite name="services">
<directory>tests/Tests/Services</directory>
</testsuite>
<testsuite name="validators">
<directory>tests/Tests/Validators</directory>
</testsuite>
<testsuite name="controllers">
<directory>tests/Tests/RestControllers</directory>
</testsuite>
<testsuite name="common">
<directory>tests/Tests/Common</directory>
</testsuite>
</testsuites>
</phpunit>
| {
"pile_set_name": "Github"
} |
package parse
import (
"testing"
)
func TestKustoDatabasePrincipalId(t *testing.T) {
testData := []struct {
Name string
Input string
Expected *KustoDatabasePrincipalId
}{
{
Name: "Empty",
Input: "",
Expected: nil,
},
{
Name: "Missing FQN",
Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Kusto/Databases/database1",
Expected: nil,
},
{
Name: "Missing Role",
Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Kusto/Clusters/cluster1/Databases/database1/FQN/aaduser=;",
Expected: nil,
},
{
Name: "Database Principal ID",
Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Kusto/Clusters/cluster1/Databases/database1/Role/Viewer/FQN/aaduser=00000000-0000-0000-0000-000000000000;00000000-0000-0000-0000-000000000000",
Expected: &KustoDatabasePrincipalId{
Name: "aaduser=00000000-0000-0000-0000-000000000000;00000000-0000-0000-0000-000000000000",
Role: "Viewer",
Database: "database1",
Cluster: "cluster1",
ResourceGroup: "group1",
},
},
}
for _, v := range testData {
t.Logf("[DEBUG] Testing %q", v.Name)
actual, err := KustoDatabasePrincipalID(v.Input)
if err != nil {
if v.Expected == nil {
continue
}
t.Fatalf("Expected a value but got an error: %s", err)
}
if actual.Name != v.Expected.Name {
t.Fatalf("Expected %q but got %q for Name", v.Expected.Name, actual.Name)
}
if actual.Role != v.Expected.Role {
t.Fatalf("Expected %q but got %q for Role", v.Expected.Role, actual.Role)
}
if actual.Database != v.Expected.Database {
t.Fatalf("Expected %q but got %q for Database", v.Expected.Database, actual.Database)
}
if actual.Cluster != v.Expected.Cluster {
t.Fatalf("Expected %q but got %q for Cluster", v.Expected.Cluster, actual.Cluster)
}
if actual.ResourceGroup != v.Expected.ResourceGroup {
t.Fatalf("Expected %q but got %q for Resource Group", v.Expected.ResourceGroup, actual.ResourceGroup)
}
}
}
| {
"pile_set_name": "Github"
} |
// © 2016 and later: Unicode, Inc. and others.
// License & terms of use: http://www.unicode.org/copyright.html#License
es_CR{
%%Parent{"es_419"}
Countries{
BA{"Bosnia y Herzegovina"}
TA{"Tristán de Acuña"}
TL{"Timor-Leste"}
UM{"Islas menores alejadas de EE. UU."}
}
Countries%short{
GB{"RU"}
}
Version{"2.1.47.70"}
}
| {
"pile_set_name": "Github"
} |
<?xml version="1.0" ?>
<annotation>
<folder>widerface</folder>
<filename>12--Group_12_Group_Group_12_Group_Group_12_447.jpg</filename>
<source>
<database>wider face Database</database>
<annotation>PASCAL VOC2007</annotation>
<image>flickr</image>
<flickrid>-1</flickrid>
</source>
<owner>
<flickrid>yanyu</flickrid>
<name>yanyu</name>
</owner>
<size>
<width>1024</width>
<height>751</height>
<depth>3</depth>
</size>
<segmented>0</segmented>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>843</xmin>
<ymin>215</ymin>
<xmax>894</xmax>
<ymax>274</ymax>
</bndbox>
<lm>
<x1>851.125</x1>
<y1>236.125</y1>
<x2>872.125</x2>
<y2>235.75</y2>
<x3>860.5</x3>
<y3>248.5</y3>
<x4>855.25</x4>
<y4>260.125</y4>
<x5>874.0</x5>
<y5>258.25</y5>
<visible>0</visible>
<blur>0.67</blur>
</lm>
<has_lm>1</has_lm>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>427</xmin>
<ymin>101</ymin>
<xmax>471</xmax>
<ymax>158</ymax>
</bndbox>
<lm>
<x1>434.214</x1>
<y1>126.857</y1>
<x2>461.0</x2>
<y2>123.643</y2>
<x3>447.071</x3>
<y3>137.571</y3>
<x4>440.286</x4>
<y4>144.357</y4>
<x5>458.857</x5>
<y5>142.571</y5>
<visible>0</visible>
<blur>0.39</blur>
</lm>
<has_lm>1</has_lm>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>431</xmin>
<ymin>221</ymin>
<xmax>483</xmax>
<ymax>271</ymax>
</bndbox>
<lm>
<x1>458.129</x1>
<y1>237.027</y1>
<x2>478.009</x2>
<y2>241.915</y2>
<x3>469.21</x3>
<y3>252.018</y3>
<x4>453.567</x4>
<y4>259.188</y4>
<x5>461.714</x5>
<y5>260.817</y5>
<visible>1</visible>
<blur>0.28</blur>
</lm>
<has_lm>1</has_lm>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>425</xmin>
<ymin>320</ymin>
<xmax>474</xmax>
<ymax>372</ymax>
</bndbox>
<lm>
<x1>435.0</x1>
<y1>341.0</y1>
<x2>457.0</x2>
<y2>336.0</y2>
<x3>446.0</x3>
<y3>352.0</y3>
<x4>441.0</x4>
<y4>359.0</y4>
<x5>458.0</x5>
<y5>357.0</y5>
<visible>0</visible>
<blur>0.23</blur>
</lm>
<has_lm>1</has_lm>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>256</xmin>
<ymin>248</ymin>
<xmax>303</xmax>
<ymax>306</ymax>
</bndbox>
<lm>
<x1>274.683</x1>
<y1>272.424</y1>
<x2>297.656</x2>
<y2>273.165</y2>
<x3>287.281</x3>
<y3>285.393</y3>
<x4>274.683</x4>
<y4>289.469</y4>
<x5>291.728</x5>
<y5>292.062</y5>
<visible>0</visible>
<blur>0.65</blur>
</lm>
<has_lm>1</has_lm>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>160</xmin>
<ymin>264</ymin>
<xmax>209</xmax>
<ymax>321</ymax>
</bndbox>
<lm>
<x1>180.384</x1>
<y1>283.183</y1>
<x2>200.996</x2>
<y2>283.545</y2>
<x3>193.402</x3>
<y3>297.647</y3>
<x4>180.746</x4>
<y4>306.326</y4>
<x5>198.464</x5>
<y5>305.603</y5>
<visible>0</visible>
<blur>0.66</blur>
</lm>
<has_lm>1</has_lm>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>71</xmin>
<ymin>227</ymin>
<xmax>132</xmax>
<ymax>300</ymax>
</bndbox>
<lm>
<x1>95.464</x1>
<y1>252.321</y1>
<x2>121.464</x2>
<y2>254.179</y2>
<x3>111.25</x3>
<y3>267.643</y3>
<x4>95.0</x4>
<y4>277.393</y4>
<x5>116.821</x5>
<y5>279.714</y5>
<visible>0</visible>
<blur>0.71</blur>
</lm>
<has_lm>1</has_lm>
</object>
</annotation>
| {
"pile_set_name": "Github"
} |
define(["dojo/_base/declare", "dojo/data/ItemFileWriteStore", "./AndOrReadStore"],
function(declare, ItemFileWriteStore, AndOrReadStore){
// module:
// dojox/data/AndOrWriteStore
// summary:
// TODOC
return declare("dojox.data.AndOrWriteStore", [ItemFileWriteStore, AndOrReadStore], {});
});
| {
"pile_set_name": "Github"
} |
/*
* Copyright 2000-2009 JetBrains s.r.o.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.intellij.codeEditor.printing;
import com.intellij.CommonBundle;
import com.intellij.openapi.actionSystem.*;
import com.intellij.openapi.project.Project;
import com.intellij.psi.PsiDirectory;
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiFile;
import javax.annotation.Nonnull;
import consulo.ui.annotation.RequiredUIAccess;
import javax.swing.*;
import java.io.FileNotFoundException;
public class ExportToHTMLAction extends AnAction {
@RequiredUIAccess
@Override
public void actionPerformed(@Nonnull AnActionEvent e) {
DataContext dataContext = e.getDataContext();
Project project = dataContext.getData(CommonDataKeys.PROJECT);
if (project == null) {
return;
}
try {
ExportToHTMLManager.executeExport(dataContext);
}
catch (FileNotFoundException ex) {
JOptionPane.showMessageDialog(null, CodeEditorBundle.message("file.not.found", ex.getMessage()), CommonBundle.getErrorTitle(), JOptionPane.ERROR_MESSAGE);
}
}
@RequiredUIAccess
@Override
public void update(@Nonnull AnActionEvent event) {
Presentation presentation = event.getPresentation();
DataContext dataContext = event.getDataContext();
PsiElement psiElement = dataContext.getData(LangDataKeys.PSI_ELEMENT);
if (psiElement instanceof PsiDirectory) {
presentation.setEnabled(true);
return;
}
PsiFile psiFile = dataContext.getData(LangDataKeys.PSI_FILE);
presentation.setEnabled(psiFile != null && psiFile.getContainingDirectory() != null);
}
} | {
"pile_set_name": "Github"
} |
<?php
/*
* Copyright 2016 Google Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not
* use this file except in compliance with the License. You may obtain a copy of
* the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations under
* the License.
*/
/**
* The "globalOperations" collection of methods.
* Typical usage is:
* <code>
* $computeService = new Google_Service_Compute(...);
* $globalOperations = $computeService->globalOperations;
* </code>
*/
class Google_Service_Compute_Resource_GlobalOperations extends Google_Service_Resource
{
/**
* Retrieves an aggregated list of all operations.
* (globalOperations.aggregatedList)
*
* @param string $project Project ID for this request.
* @param array $optParams Optional parameters.
*
* @opt_param string filter Sets a filter expression for filtering listed
* resources, in the form filter={expression}. Your {expression} must be in the
* format: field_name comparison_string literal_string.
*
* The field_name is the name of the field you want to compare. Only atomic
* field types are supported (string, number, boolean). The comparison_string
* must be either eq (equals) or ne (not equals). The literal_string is the
* string value to filter to. The literal value must be valid for the type of
* field you are filtering by (string, number, boolean). For string fields, the
* literal value is interpreted as a regular expression using RE2 syntax. The
* literal value must match the entire field.
*
* For example, to filter for instances that do not have a name of example-
* instance, you would use filter=name ne example-instance.
*
* Compute Engine Beta API Only: When filtering in the Beta API, you can also
* filter on nested fields. For example, you could filter on instances that have
* set the scheduling.automaticRestart field to true. Use filtering on nested
* fields to take advantage of labels to organize and search for results based
* on label values.
*
* The Beta API also supports filtering on multiple expressions by providing
* each separate expression within parentheses. For example,
* (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple
* expressions are treated as AND expressions, meaning that resources must match
* all expressions to pass the filters.
* @opt_param string maxResults The maximum number of results per page that
* should be returned. If the number of available results is larger than
* maxResults, Compute Engine returns a nextPageToken that can be used to get
* the next page of results in subsequent list requests.
* @opt_param string pageToken Specifies a page token to use. Set pageToken to
* the nextPageToken returned by a previous list request to get the next page of
* results.
* @return Google_Service_Compute_OperationAggregatedList
*/
public function aggregatedList($project, $optParams = array())
{
$params = array('project' => $project);
$params = array_merge($params, $optParams);
return $this->call('aggregatedList', array($params), "Google_Service_Compute_OperationAggregatedList");
}
/**
* Deletes the specified Operations resource. (globalOperations.delete)
*
* @param string $project Project ID for this request.
* @param string $operation Name of the Operations resource to delete.
* @param array $optParams Optional parameters.
*/
public function delete($project, $operation, $optParams = array())
{
$params = array('project' => $project, 'operation' => $operation);
$params = array_merge($params, $optParams);
return $this->call('delete', array($params));
}
/**
* Retrieves the specified Operations resource. Get a list of operations by
* making a list() request. (globalOperations.get)
*
* @param string $project Project ID for this request.
* @param string $operation Name of the Operations resource to return.
* @param array $optParams Optional parameters.
* @return Google_Service_Compute_Operation
*/
public function get($project, $operation, $optParams = array())
{
$params = array('project' => $project, 'operation' => $operation);
$params = array_merge($params, $optParams);
return $this->call('get', array($params), "Google_Service_Compute_Operation");
}
/**
* Retrieves a list of Operation resources contained within the specified
* project. (globalOperations.listGlobalOperations)
*
* @param string $project Project ID for this request.
* @param array $optParams Optional parameters.
*
* @opt_param string filter Sets a filter expression for filtering listed
* resources, in the form filter={expression}. Your {expression} must be in the
* format: field_name comparison_string literal_string.
*
* The field_name is the name of the field you want to compare. Only atomic
* field types are supported (string, number, boolean). The comparison_string
* must be either eq (equals) or ne (not equals). The literal_string is the
* string value to filter to. The literal value must be valid for the type of
* field you are filtering by (string, number, boolean). For string fields, the
* literal value is interpreted as a regular expression using RE2 syntax. The
* literal value must match the entire field.
*
* For example, to filter for instances that do not have a name of example-
* instance, you would use filter=name ne example-instance.
*
* Compute Engine Beta API Only: When filtering in the Beta API, you can also
* filter on nested fields. For example, you could filter on instances that have
* set the scheduling.automaticRestart field to true. Use filtering on nested
* fields to take advantage of labels to organize and search for results based
* on label values.
*
* The Beta API also supports filtering on multiple expressions by providing
* each separate expression within parentheses. For example,
* (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple
* expressions are treated as AND expressions, meaning that resources must match
* all expressions to pass the filters.
* @opt_param string maxResults The maximum number of results per page that
* should be returned. If the number of available results is larger than
* maxResults, Compute Engine returns a nextPageToken that can be used to get
* the next page of results in subsequent list requests.
* @opt_param string pageToken Specifies a page token to use. Set pageToken to
* the nextPageToken returned by a previous list request to get the next page of
* results.
* @return Google_Service_Compute_OperationList
*/
public function listGlobalOperations($project, $optParams = array())
{
$params = array('project' => $project);
$params = array_merge($params, $optParams);
return $this->call('list', array($params), "Google_Service_Compute_OperationList");
}
}
| {
"pile_set_name": "Github"
} |
Published at: https://studygolang.com/articles/12339
# Let's Make an NTP Client in Go
While doing some research on network programming, I came across an article titled "Let's Make a NTP Client in C" by David Lettier (Lettier). That article inspired me to do something similar in Go.
> All the code mentioned in this post lives at [https://github.com/vladimirvivien/go-ntp-client](https://github.com/vladimirvivien/go-ntp-client).
This post describes the structure of a (real) NTP client written in Go. It uses the encoding/binary package to marshal, unmarshal, send, and receive UDP-based NTP packets to and from a remote NTP server.
You can learn more about the NTP protocol [here](http://www.ntp.org/), read the [RFC5905](https://tools.ietf.org/html/rfc5905) specification, or study a client that implements more features and is (probably) better than this Go NTP client: [https://github.com/beevik/ntp](https://github.com/beevik/ntp).
## NTP Packet Structure
The concepts behind time synchronization are quite complex — I do not fully understand them myself, and they are beyond the scope of this post. Fortunately, the packet format NTP uses is simple, and small enough for a client. The figure below shows the NTP v4 packet format. For this post we only care about the first 48 bytes and ignore the v4 extension fields.

NTP v4 data format (abbreviated) — https://tools.ietf.org/html/rfc5905
## The NTP packet
Clients and their servers both use the packet format described above. The following struct defines the NTP packet and its fields, matching that format one-to-one.
```go
type packet struct {
Settings uint8 // leap yr indicator, ver number, and mode
Stratum uint8 // stratum of local clock
Poll int8 // poll exponent
Precision int8 // precision exponent
RootDelay uint32 // root delay
RootDispersion uint32 // root dispersion
ReferenceID uint32 // reference id
RefTimeSec uint32 // reference timestamp sec
RefTimeFrac uint32 // reference timestamp fractional
OrigTimeSec uint32 // origin time secs
OrigTimeFrac uint32 // origin time fractional
RxTimeSec uint32 // receive time secs
RxTimeFrac uint32 // receive time frac
TxTimeSec uint32 // transmit time secs
TxTimeFrac uint32 // transmit time frac
}
```
## Setting up the UDP connection
Next, we use the net.Dial function to open a socket to the NTP server over UDP, and set a deadline of 15 seconds.
```go
conn, err := net.Dial("udp", host)
if err != nil {
log.Fatal("failed to connect:", err)
}
defer conn.Close()
if err := conn.SetDeadline(time.Now().Add(15 * time.Second)); err != nil {
log.Fatal("failed to set deadline: ", err)
}
```
## Requesting the time from the server
Before sending the request packet to the server, the first byte is set to configure the exchange. Here we use 0x1B (binary 00011011), which specifies client mode (3), NTP version 3, and a leap indicator of 0, as shown below:
```go
// configure request settings by specifying the first byte as
// 00 011 011 (or 0x1B)
// | | +-- client mode (3)
// | + ----- version (3)
// + -------- leap year indicator, 0 no warning
req := &packet{Settings: 0x1B}
```
Next, we use the binary package to automatically marshal the packet struct into a byte stream and send it out in big-endian byte order.
```go
if err := binary.Write(conn, binary.BigEndian, req); err != nil {
log.Fatalf("failed to send request: %v", err)
}
```
## Reading the time from the server
Next, we use the binary package again to automatically unmarshal the bytes read from the server into the corresponding packet struct.
```go
rsp := &packet{}
if err := binary.Read(conn, binary.BigEndian, rsp); err != nil {
log.Fatalf("failed to read server response: %v", err)
}
```
## Parsing the time
In this bare-bones example we are only interested in the Transmit Time fields (rsp.TxTimeSec and rsp.TxTimeFrac), the time at which the packet left the server. We cannot use them directly, however; they must first be converted to Unix time.
Unix time is an epoch that starts in 1970 (that is, a count of seconds since 1970). NTP, however, uses a different epoch: seconds counted from 1900. So to convert a value obtained from an NTP server into correct Unix time, we must subtract the seconds spanning those 70 years (1970-1900), which is 2208988800 seconds.
```go
const ntpEpochOffset = 2208988800
...
secs := float64(rsp.TxTimeSec) - ntpEpochOffset
nanos := (int64(rsp.TxTimeFrac) * 1e9) >> 32
```
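As a quick sanity check (my addition, not part of the original client), the 2208988800 constant can be recomputed with Go's standard time package; the `ntpEpochOffset` helper name below is my own:

```go
package main

import (
	"fmt"
	"time"
)

// ntpEpochOffset returns the number of seconds between the NTP epoch
// (1900-01-01 UTC) and the Unix epoch (1970-01-01 UTC).
func ntpEpochOffset() int64 {
	ntpEpoch := time.Date(1900, 1, 1, 0, 0, 0, 0, time.UTC)
	unixEpoch := time.Date(1970, 1, 1, 0, 0, 0, 0, time.UTC)
	// Unix() yields exact integer seconds, avoiding float rounding.
	return unixEpoch.Unix() - ntpEpoch.Unix()
}

func main() {
	// 70 years = 70*365 days + 17 leap days = 25567 days = 2208988800 seconds.
	fmt.Println(ntpEpochOffset())
}
```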
The fractional part of the NTP value is converted to nanoseconds. In such a trivial example this is optional; it is shown only for completeness.
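The shift-based conversion can also be checked in isolation. This is a small standalone sketch (the `fracToNanos` helper is my own name, not from the article): an NTP fraction counts 1/2^32-second units, so 0x80000000 corresponds to exactly half a second.

```go
package main

import "fmt"

// fracToNanos converts a 32-bit NTP fractional-seconds value to nanoseconds:
// scale the 1/2^32-second units by 1e9, then divide by 2^32 via a right shift.
func fracToNanos(frac uint32) int64 {
	return (int64(frac) * 1e9) >> 32
}

func main() {
	fmt.Println(fracToNanos(0x80000000)) // half a second -> 500000000 ns
	fmt.Println(fracToNanos(0))          // zero fraction -> 0 ns
}
```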
## Displaying the time
Finally, the time.Unix function is used to build a time value whose seconds part comes from secs and whose fractional part comes from the nanos value. That time is then printed to the console.
```go
fmt.Printf("%v\n", time.Unix(int64(secs), nanos))
```
## Conclusion
This post walked through a trivial example of an NTP client. It showed how the encoding/binary package makes it easy to turn a struct into its byte representation and, conversely, to turn a byte stream back into the corresponding struct values.
This NTP client is not a production-ready product; it is missing many features required by the NTP specification, and most of the fields returned by the server are ignored. You can find a much more complete NTP client written in Go [here](https://github.com/beevik/ntp).
---
via: https://medium.com/learning-the-go-programming-language/lets-make-an-ntp-client-in-go-287c4b9a969f
Author: [Vladimir Vivien](https://twitter.com/VladimirVivien)
Translator: [gogeof](https://github.com/gogeof)
Proofreader: [polaris1119](https://github.com/polaris1119)
This article was translated by [GCTT](https://github.com/studygolang/GCTT) and proudly published by [Go 中文网](https://studygolang.com/).
| {
"pile_set_name": "Github"
} |
#
# OpenSSL/crypto/md/Makefile
#
DIR= md2
TOP= ../..
CC= cc
INCLUDES=
CFLAG=-g
MAKEFILE= Makefile
AR= ar r
CFLAGS= $(INCLUDES) $(CFLAG)
GENERAL=Makefile
TEST=md2test.c
APPS=
LIB=$(TOP)/libcrypto.a
LIBSRC=md2_dgst.c md2_one.c
LIBOBJ=md2_dgst.o md2_one.o
SRC= $(LIBSRC)
EXHEADER= md2.h
HEADER= $(EXHEADER)
ALL= $(GENERAL) $(SRC) $(HEADER)
top:
(cd ../..; $(MAKE) DIRS=crypto SDIRS=$(DIR) sub_all)
all: lib
lib: $(LIBOBJ)
$(AR) $(LIB) $(LIBOBJ)
$(RANLIB) $(LIB) || echo Never mind.
@touch lib
files:
$(PERL) $(TOP)/util/files.pl Makefile >> $(TOP)/MINFO
links:
@$(PERL) $(TOP)/util/mklink.pl ../../include/openssl $(EXHEADER)
@$(PERL) $(TOP)/util/mklink.pl ../../test $(TEST)
@$(PERL) $(TOP)/util/mklink.pl ../../apps $(APPS)
install:
@[ -n "$(INSTALLTOP)" ] # should be set by top Makefile...
@headerlist="$(EXHEADER)"; for i in $$headerlist ; \
do \
(cp $$i $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i; \
chmod 644 $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i ); \
done;
tags:
ctags $(SRC)
tests:
lint:
lint -DLINT $(INCLUDES) $(SRC)>fluff
update: depend
depend:
@[ -n "$(MAKEDEPEND)" ] # should be set by upper Makefile...
$(MAKEDEPEND) -- $(CFLAG) $(INCLUDES) $(DEPFLAG) -- $(PROGS) $(LIBSRC)
dclean:
$(PERL) -pe 'if (/^# DO NOT DELETE THIS LINE/) {print; exit(0);}' $(MAKEFILE) >Makefile.new
mv -f Makefile.new $(MAKEFILE)
clean:
rm -f *.o *.obj lib tags core .pure .nfs* *.old *.bak fluff
# DO NOT DELETE THIS LINE -- make depend depends on it.
md2_dgst.o: ../../include/openssl/crypto.h ../../include/openssl/e_os2.h
md2_dgst.o: ../../include/openssl/md2.h ../../include/openssl/opensslconf.h
md2_dgst.o: ../../include/openssl/opensslv.h ../../include/openssl/ossl_typ.h
md2_dgst.o: ../../include/openssl/safestack.h ../../include/openssl/stack.h
md2_dgst.o: ../../include/openssl/symhacks.h md2_dgst.c
md2_one.o: ../../e_os.h ../../include/openssl/bio.h
md2_one.o: ../../include/openssl/buffer.h ../../include/openssl/crypto.h
md2_one.o: ../../include/openssl/e_os2.h ../../include/openssl/err.h
md2_one.o: ../../include/openssl/lhash.h ../../include/openssl/md2.h
md2_one.o: ../../include/openssl/opensslconf.h ../../include/openssl/opensslv.h
md2_one.o: ../../include/openssl/ossl_typ.h ../../include/openssl/safestack.h
md2_one.o: ../../include/openssl/stack.h ../../include/openssl/symhacks.h
md2_one.o: ../cryptlib.h md2_one.c
| {
"pile_set_name": "Github"
} |
// Package static serves static files like CSS, JavaScript, and images.
package static
import (
"net/http"
"os"
"path"
"github.com/blue-jay/blueprint/controller/status"
"github.com/blue-jay/blueprint/lib/flight"
"github.com/blue-jay/core/router"
)
// Load the routes.
func Load() {
// Serve static files
router.Get("/static/*filepath", Index)
}
// Index maps static files.
func Index(w http.ResponseWriter, r *http.Request) {
c := flight.Context(w, r)
// Build the file path without shadowing the imported path package
filePath := path.Join(c.Config.Asset.Folder, r.URL.Path[1:])
// Only serve regular files, never directories
if fi, err := os.Stat(filePath); err == nil && !fi.IsDir() {
http.ServeFile(w, r, filePath)
return
}
status.Error404(w, r)
}
| {
"pile_set_name": "Github"
} |
[[etcd-keys-component]]
= Etcd Keys Component
:docTitle: Etcd Keys
:artifactId: camel-etcd
:description: Get, set or delete keys in etcd key-value store.
:since: 2.18
:supportLevel: Stable
:component-header: Only producer is supported
include::{cq-version}@camel-quarkus:ROOT:partial$reference/components/etcd-keys.adoc[]
*Since Camel {since}*
*{component-header}*
The Camel Etcd component allows you to work with Etcd, a distributed, reliable key-value store.
== URI Format
[source,java]
----------------------------
etcd-keys:path[?options]
----------------------------
== URI Options
// component options: START
The Etcd Keys component supports 12 options, which are listed below.
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *configuration* (producer) | Component configuration. | | EtcdConfiguration
| *lazyStartProducer* (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean
| *recursive* (producer) | To apply an action recursively. | false | boolean
| *servicePath* (producer) | The path to look for service discovery. | /services/ | String
| *timeout* (producer) | To set the maximum time an action could take to complete. | | Long
| *uris* (common) | To set the URIs the client connects. | http://localhost:2379,http://localhost:4001 | String
| *timeToLive* (producer) | To set the lifespan of a key in milliseconds. | | Integer
| *basicPropertyBinding* (advanced) | *Deprecated* Whether the component should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | boolean
| *password* (security) | The password to use for basic authentication. | | String
| *sslContextParameters* (security) | To configure security using SSLContextParameters. | | SSLContextParameters
| *useGlobalSslContextParameters* (security) | Enable usage of global SSL context parameters. | false | boolean
| *userName* (security) | The user name to use for basic authentication. | | String
|===
// component options: END
// endpoint options: START
The Etcd Keys endpoint is configured using URI syntax:
----
etcd-keys:path
----
with the following path and query parameters:
=== Path Parameters (1 parameter):
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *path* | The path the endpoint refers to | | String
|===
=== Query Parameters (11 parameters):
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *recursive* (producer) | To apply an action recursively. | false | boolean
| *servicePath* (producer) | The path to look for service discovery. | /services/ | String
| *timeout* (producer) | To set the maximum time an action could take to complete. | | Long
| *uris* (common) | To set the URIs the client connects. | http://localhost:2379,http://localhost:4001 | String
| *lazyStartProducer* (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean
| *timeToLive* (producer) | To set the lifespan of a key in milliseconds. | | Integer
| *basicPropertyBinding* (advanced) | Whether the endpoint should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | boolean
| *synchronous* (advanced) | Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). | false | boolean
| *password* (security) | The password to use for basic authentication. | | String
| *sslContextParameters* (security) | To configure security using SSLContextParameters. | | SSLContextParameters
| *userName* (security) | The user name to use for basic authentication. | | String
|===
// endpoint options: END
include::camel-spring-boot::page$etcd-starter.adoc[]
| {
"pile_set_name": "Github"
} |
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.thrift.transport;
import java.io.IOException;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
public abstract class TNonblockingTransport extends TTransport {
/**
* Non-blocking connection initialization.
* @see java.nio.channels.SocketChannel#connect(SocketAddress remote)
*/
public abstract boolean startConnect() throws IOException;
/**
* Non-blocking connection completion.
* @see java.nio.channels.SocketChannel#finishConnect()
*/
public abstract boolean finishConnect() throws IOException;
public abstract SelectionKey registerSelector(Selector selector, int interests) throws IOException;
public abstract int read(ByteBuffer buffer) throws IOException;
public abstract int write(ByteBuffer buffer) throws IOException;
}
| {
"pile_set_name": "Github"
} |
<?xml version="1.0" encoding="utf-8"?>
<!-- Translations provided by Milly -->
<!-- Translations provided by euphory -->
<content xml:space="preserve">
<!-- NOTE: This file must appear LAST in the list of content files
so that these items override the matching items in the
stock content files! -->
<!-- reference_content.xml overrides -->
<item id="rootTopicTitle">{@RootNamespaceTitle}</item>
<item id="rootTopicTitleLocalized">名前空間</item>
<item id="rootLink"><referenceLink target="R:Project">{@RootNamespaceTitle}</referenceLink></item>
<item id="productTitle">{@HtmlEncHelpTitle}</item>
<item id="runningHeaderText">{@HtmlEncHelpTitle}</item>
<item id="locationInformation">アセンブリ: {0} (モジュール: {1}) バージョン: {2}</item>
<item id="assemblyNameAndModule">{0} ({1}.{2} 内) バージョン: {3}</item>
<!-- shared_content.xml overrides -->
<item id="copyCode">コピー</item>
<item id="locale">{@Locale}</item>
<item id="preliminary"><p style="color: #dc143c; font-size: 8.5pt; font-weight: bold;">[これは仮のドキュメントであり、予告なく変更されます。]</p></item>
<item id="comments"><p/>このトピックに対してコメントを送信
<a id="HT_MailLink" href="mailto:{@UrlEncFeedbackEMailAddress}?Subject={@UrlEncHelpTitle}">{@HtmlEncFeedbackEMailAddress}</a>
<script type="text/javascript">
var HT_mailLink = document.getElementById("HT_MailLink");
var HT_mailLinkText = HT_mailLink.innerHTML;
HT_mailLink.href += ": " + document.title;
HT_mailLink.innerHTML = HT_mailLinkText;
</script></item>
<!-- To format the copyright HREF and/or copyright text into a message of
your choosing, you can specify @HtmlEncCopyrightHref and/or
@HtmlEncCopyrightText in braces -->
<item id="copyright">{@HtmlEncCopyrightInfo}</item>
<!-- SHFB Show Missing Component messages -->
<item id="shfbAutoDocConstructor"><see cref="T:{0}"/> クラスの新しいインスタンスを初期化します。</item>
<item id="shfbAutoDocStaticConstructor"><see cref="T:{0}"/> クラスの静的フィールドを初期化します。</item>
<item id="shfbAutoDocDispose"><see cref="T:{0}"/> によって使用された全てのリソースを解放します。</item>
<item id="shfbAutoDocDisposeBool"><see cref="T:{0}"/> によって使用されたアンマネージド リソースを解放し、任意でマネージド リソースを解放します。</item>
<item id="shfbAutoDocDisposeParam">True でマネージド リソースとアンマネージド リソースの両方とも解放し、False でアンマネージド リソースのみを解放します。</item>
<item id="shfbMissingTag"><p style="color: #dc143c; font-size: 8.5pt; font-weight: bold;">["{1}" の為の説明文 &lt;{0}&gt; が欠落しています。]</p></item>
<item id="shfbMissingParamTag"><p style="color: #dc143c; font-size: 8.5pt; font-weight: bold;">["{2}" の為の説明文 &lt;{0} name="{1}"/&gt; が欠落しています。]</p></item>
<item id="shfbMissingIncludeTarget"><p style="color: #dc143c; font-size: 8.5pt; font-weight: bold;">['{0}'の中の &lt;include&gt; 対象の説明文が欠落しています。ファイル: '{1}' パス: '{2}']</p></item>
</content>
| {
"pile_set_name": "Github"
} |
#ifndef INIT_H
#define INIT_H
#define MTIME (*(volatile long long *)(0x02000000 + 0xbff8))
#define MTIMECMP ((volatile long long *)(0x02000000 + 0x4000))
typedef void* (*trap_handler_t)(unsigned hartid, unsigned mcause, void *mepc,
void *sp);
void set_trap_handler(trap_handler_t handler);
void enable_timer_interrupts();
#endif
| {
"pile_set_name": "Github"
} |
#include "pack.h"
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <assert.h>
#include <sys/stat.h>
#include <sys/mman.h>
#include <arpa/inet.h>
static struct packed_git packroot;
static bool loaded_pack = false;
/* ---- Pack Lookup ---- */
/**
* Note:
* The CRC checksum table has been omitted
*/
static off_t nth_packed_object_offset(uint32_t n)
{
const unsigned char *index = packroot.idx_data;
uint32_t off;
index += 4 * 256;
index += 8 + packroot.nr * 20;
off = ntohl(*((uint32_t *)(index + 4 * n)));
if (!(off & 0x80000000))
return off;
index += packroot.nr * 4 + (off & 0x7fffffff) * 8;
return (((uint64_t)ntohl(*((uint32_t *)(index + 0)))) << 32) |
ntohl(*((uint32_t *)(index + 4)));
}
static int sha1_entry_pos(const void *table,
size_t elem_size,
size_t key_offset,
unsigned lo, unsigned hi, unsigned nr,
const unsigned char *key)
{
const unsigned char *base = table;
const unsigned char *hi_key, *lo_key;
unsigned ofs_0;
if (!nr || lo >= hi)
return -1;
if (nr == hi)
hi_key = NULL;
else
hi_key = base + elem_size * hi + key_offset;
lo_key = base + elem_size * lo + key_offset;
ofs_0 = 0;
do {
int cmp;
unsigned ofs, mi, range;
unsigned lov, hiv, kyv;
const unsigned char *mi_key;
range = hi - lo;
if (hi_key) {
for (ofs = ofs_0; ofs < 20; ofs++)
if (lo_key[ofs] != hi_key[ofs])
break;
ofs_0 = ofs;
/*
* byte 0 thru (ofs-1) are the same between
* lo and hi; ofs is the first byte that is
* different.
*/
hiv = hi_key[ofs_0];
if (ofs_0 < 19)
hiv = (hiv << 8) | hi_key[ofs_0+1];
} else {
hiv = 256;
if (ofs_0 < 19)
hiv <<= 8;
}
lov = lo_key[ofs_0];
kyv = key[ofs_0];
if (ofs_0 < 19) {
lov = (lov << 8) | lo_key[ofs_0+1];
kyv = (kyv << 8) | key[ofs_0+1];
}
assert(lov < hiv);
if (kyv < lov)
return -1 - lo;
if (hiv < kyv)
return -1 - hi;
/*
* Even if we know the target is much closer to 'hi'
* than 'lo', if we pick too precisely and overshoot
* (e.g. when we know 'mi' is closer to 'hi' than to
* 'lo', pick 'mi' that is higher than the target), we
* end up narrowing the search space by a smaller
* amount (i.e. the distance between 'mi' and 'hi')
* than what we would have (i.e. about half of 'lo'
* and 'hi'). Hedge our bets to pick 'mi' less
* aggressively, i.e. make 'mi' a bit closer to the
* middle than we would otherwise pick.
*/
kyv = (kyv * 6 + lov + hiv) / 8;
if (lov < hiv - 1) {
if (kyv == lov)
kyv++;
else if (kyv == hiv)
kyv--;
}
mi = (range - 1) * (kyv - lov) / (hiv - lov) + lo;
if (!(lo <= mi && mi < hi))
die("assertion failure lo %u mi %u hi %u",
lo, mi, hi);
mi_key = base + elem_size * mi + key_offset;
cmp = memcmp(mi_key + ofs_0, key + ofs_0, 20 - ofs_0);
if (!cmp)
return mi;
if (cmp > 0) {
hi = mi;
hi_key = mi_key;
} else {
lo = mi + 1;
lo_key = mi_key + elem_size;
}
} while (lo < hi);
return -lo-1;
}
off_t find_pack_entry(const unsigned char *sha1)
{
const uint32_t *fanout = packroot.idx_data;
const unsigned char *sha1_idx = packroot.idx_data;
unsigned hi, lo, stride;
char sha1_digest[40];
int pos;
/* Skip the header and go to the SHA1 section */
fanout += 2;
sha1_idx += 8;
sha1_idx += 4 * 256;
hi = ntohl(fanout[*sha1]);
lo = ((*sha1 == 0x0) ? 0 : ntohl(fanout[*sha1 - 1]));
stride = 20;
print_sha1(sha1_digest, sha1);
PHOENIXFS_DBG("find_pack_entry:: %s %u %u %u", sha1_digest,
lo, hi, packroot.nr);
pos = sha1_entry_pos(sha1_idx, stride, 0,
lo, hi, packroot.nr, sha1);
PHOENIXFS_DBG("find_pack_entry:: pos = %d", pos);
if (pos < 0)
return 0;
return nth_packed_object_offset(pos);
}
/**
* Format:
* <> = [sha1 | delta_bit | size | data]
*/
int unpack_entry(unsigned char *sha1, const char *loosedir)
{
unsigned char read_sha1[20];
off_t obj_offset, size;
char sha1_digest[40];
char xpath[PATH_MAX];
FILE *loosefh;
bool delta;
print_sha1(sha1_digest, sha1);
/* Convert SHA1 to offset and make sure we were successful */
if (!(obj_offset = find_pack_entry(sha1))) {
PHOENIXFS_DBG("unpack_entry:: missing %s", sha1_digest);
return -1;
}
PHOENIXFS_DBG("unpack_entry:: %s %lld", sha1_digest,
(long long int)obj_offset);
if (fseek(packroot.packfh, obj_offset, SEEK_SET) < 0)
die("Seek error: packroot.packfh");
if (fread(&read_sha1, 20 * sizeof(unsigned char), 1, packroot.packfh) < 1)
die("Read error: read_sha1");
assert(memcmp(sha1, read_sha1, 20) == 0);
if (fread(&delta, sizeof(bool), 1, packroot.packfh) < 1)
die("Read error: delta");
sprintf(xpath, "%s/%s", loosedir, sha1_digest);
if (!(loosefh = fopen(xpath, "wb+"))) {
PHOENIXFS_DBG("unpack_entry:: can't open %s", xpath);
return -errno;
}
if (!delta) {
if (fread(&size, sizeof(off_t), 1, packroot.packfh) < 1)
die("Read error: %lld", (long long int)size);
PHOENIXFS_DBG("unpack_entry:: non-delta %s", sha1_digest);
buffer_copy_bytes(packroot.packfh, loosefh, size);
}
else
PHOENIXFS_DBG("unpack_entry:: delta %s", sha1_digest);
fclose(loosefh);
return 0;
}
/* ---- Pack Read ---- */
int map_pack_idx(FILE *src)
{
int srcfd;
void *idx_map;
size_t idx_size;
uint32_t n, nr, i, *index;
struct stat st;
struct pack_idx_header *hdr;
if ((srcfd = fileno(src)) < 0)
return -errno;
fstat(srcfd, &st);
idx_size = st.st_size;
if (idx_size < 4 * 256) {
close(srcfd);
die("Pack index too small");
}
idx_map = mmap(NULL, idx_size, PROT_READ, MAP_PRIVATE, srcfd, 0);
close(srcfd);
fclose(src);
hdr = idx_map;
if (hdr->signature != htonl(PACK_IDX_SIGNATURE)) {
munmap(idx_map, idx_size);
die("Corrupt pack index signature");
}
else if (hdr->version != htonl(PACK_IDX_VERSION)) {
munmap(idx_map, idx_size);
die("Wrong pack index version");
}
index = (uint32_t *) hdr + 2;
for (i = 0, nr = 0; i < 256; i++) {
n = ntohl(index[i]);
if (n < nr) {
munmap(idx_map, idx_size);
die("Non-monotonic index");
}
nr = n;
}
/*
* Minimum size:
* - 8 bytes of header
* - 256 index entries 4 bytes each
* - 20-byte sha1 entry * nr
* - 4-byte offset entry * nr
*
* Omitted fields:
* - 4-byte crc entry * nr
* - 20-byte SHA1 of the packfile
* - 20-byte SHA1 file checksum
*/
unsigned long min_size = 8 + 4*256 + nr*(20 + 4);
unsigned long max_size = min_size;
if (nr)
max_size += (nr - 1) * 8;
if (idx_size < min_size || idx_size > max_size) {
munmap(idx_map, idx_size);
die("Wrong packfile index file size");
}
if (idx_size != min_size && (sizeof(off_t) <= 4)) {
munmap(idx_map, idx_size);
die("Pack too large for current definition of off_t");
}
packroot.idx_data = idx_map;
packroot.idx_size = idx_size;
packroot.nr = nr;
return 0;
}
static int load_pack_idx(const char *path)
{
FILE *idx_file;
if (packroot.idx_data)
return -1;
if (!(idx_file = fopen(path, "rb")))
return -errno;
return map_pack_idx(idx_file);
}
int load_packing_info(const char *pack_path, const char *idx_path,
bool existing_pack)
{
struct stat st;
struct pack_header hdr;
PHOENIXFS_DBG("load_packing_info:: %s %s %d", pack_path, idx_path, existing_pack);
if (!(loaded_pack = existing_pack)) {
/* Nothing to load */
strcpy(packroot.pack_path, pack_path);
strcpy(packroot.idx_path, idx_path);
return 0;
}
if (load_pack_idx(idx_path) < 0)
die("Packfile index %s missing", idx_path);
if (!(packroot.packfh = fopen(pack_path, "ab+")) ||
(stat(pack_path, &st) < 0))
die("Can't open pack %s", pack_path);
packroot.pack_size = st.st_size;
strcpy(packroot.pack_path, pack_path);
strcpy(packroot.idx_path, idx_path);
/*
* Minimum size:
* - 8 bytes of header
*
* Omitted fields:
* - 4 bytes of entries
* - 20-byte packfile SHA1 checksum
*/
if (packroot.pack_size < 8)
die("Wrong packfile file size");
/* Verify we recognize this pack file format */
rewind(packroot.packfh);
if (fread(&hdr, sizeof(hdr), 1, packroot.packfh) < 1)
die("Read error: hdr");
if (hdr.signature != htonl(PACK_SIGNATURE))
die("Corrupt pack signature: %d", ntohl(hdr.signature));
if (hdr.version != htonl(PACK_VERSION))
die("Wrong pack version: %d", ntohl(hdr.version));
fseek(packroot.packfh, 0L, SEEK_END);
/* Omit entries */
return 0;
}
/* ---- Pack Write ---- */
static int sha1_compare(const void *_a, const void *_b)
{
struct pack_idx_entry *a = *(struct pack_idx_entry **)_a;
struct pack_idx_entry *b = *(struct pack_idx_entry **)_b;
return memcmp(a->sha1, b->sha1, 20);
}
void unmap_write_idx(struct pack_idx_entry **objects, int nr_objects)
{
struct pack_idx_entry **sorted_by_sha, **list, **last;
unsigned int nr_large_offset;
struct pack_idx_header hdr;
off_t last_obj_offset = 0;
char sha1_digest[40];
uint32_t array[256];
register int i;
FILE *idxfh;
if (nr_objects) {
sorted_by_sha = objects;
list = sorted_by_sha;
last = sorted_by_sha + nr_objects;
for (i = 0; i < nr_objects; ++i) {
if (objects[i]->offset > last_obj_offset)
last_obj_offset = objects[i]->offset;
}
qsort(sorted_by_sha, nr_objects, sizeof(sorted_by_sha[0]),
sha1_compare);
}
else
sorted_by_sha = list = last = NULL;
if (loaded_pack)
munmap((void *) packroot.idx_data, packroot.idx_size);
if (!(idxfh = fopen(packroot.idx_path, "wb+")))
die("Can't create idx file: %s", packroot.idx_path);
hdr.signature = htonl(PACK_IDX_SIGNATURE);
hdr.version = htonl(PACK_IDX_VERSION);
fwrite(&hdr, sizeof(hdr), 1, idxfh);
/* Write the fanout table */
for (i = 0; i < 256; i++) {
struct pack_idx_entry **next = list;
while (next < last) {
struct pack_idx_entry *obj = *next;
if (obj->sha1[0] != i)
break;
next++;
}
array[i] = htonl(next - sorted_by_sha);
list = next;
}
fwrite(&array, 256 * sizeof(uint32_t), 1, idxfh);
/* Write the actual SHA1 entries: 20 * nr */
list = sorted_by_sha;
for (i = 0; i < nr_objects; i++) {
struct pack_idx_entry *obj = *list++;
fwrite(obj->sha1, 20, 1, idxfh);
}
/* Omit the crc32 table: 4 * nr */
/* Write the 32-bit offset table: 4 * nr */
nr_large_offset = 0;
list = sorted_by_sha;
for (i = 0; i < nr_objects; i++) {
struct pack_idx_entry *obj = *list++;
uint32_t offset = (obj->offset <= pack_idx_off32_limit) ?
obj->offset : (0x80000000 | nr_large_offset++);
offset = htonl(offset);
print_sha1(sha1_digest, obj->sha1);
PHOENIXFS_DBG("unmap_write_idx:: %s %llu", sha1_digest,
(long long int)obj->offset);
fwrite(&offset, sizeof(uint32_t), 1, idxfh);
}
/* Write the 64-bit offset table: 8 * nr */
list = sorted_by_sha;
while (nr_large_offset) {
struct pack_idx_entry *obj = *list++;
uint64_t offset = obj->offset;
if (offset > pack_idx_off32_limit) {
uint32_t split[2];
split[0] = htonl(offset >> 32);
split[1] = htonl(offset & 0xffffffff);
fwrite(split, sizeof(uint64_t), 1, idxfh);
nr_large_offset--;
}
}
/* Omit the checksum trailer: 2 * 20 */
for (i = 0; i < nr_objects; i++)
free(objects[i]);
fclose(idxfh);
}
static int write_pack_hdr(const char *pack_path)
{
FILE *packfh;
struct stat st;
struct pack_header hdr;
if (!(packfh = fopen(pack_path, "wb+")) ||
(stat(pack_path, &st) < 0))
return -errno;
packroot.packfh = packfh;
packroot.pack_size = st.st_size;
hdr.signature = htonl(PACK_SIGNATURE);
hdr.version = htonl(PACK_VERSION);
fwrite(&hdr, sizeof(hdr), 1, packfh);
return 0;
}
void dump_packing_info(const char *loosedir)
{
PHOENIXFS_DBG("dump_packing_info:: %s", loaded_pack ? "append" : "create");
if (!loaded_pack)
write_pack_hdr(packroot.pack_path);
packup_loose_objects(packroot.packfh, packroot.idx_data,
packroot.nr, loosedir);
fclose(packroot.packfh);
}
void mark_for_packing(const unsigned char *sha1, size_t size)
{
add_loose_entry((unsigned char *)sha1, size);
}
| {
"pile_set_name": "Github"
} |
# -*- coding: utf-8 -*-
#
# Copyright (C) 2017 Red Hat, Inc.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions of
# the GNU General Public License v.2, or (at your option) any later version.
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY expressed or implied, including the implied warranties of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details. You should have received a copy of the
# GNU General Public License along with this program; if not, write to the
# Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA. Any Red Hat trademarks that are incorporated in the
# source code or documentation are not subject to the GNU General Public
# License and may only be used or replicated with the express permission of
# Red Hat, Inc.
#
# Red Hat Author(s): Vendula Poncova <[email protected]>
#
import unittest
import pyanaconda.modules.storage.checker.utils as checks
from blivet.size import Size
from pyanaconda.modules.storage.checker.utils import StorageChecker
class StorageCheckerTests(unittest.TestCase):
def setUp(self):
self.maxDiff = None
def nocheck_test(self):
"""Test a check with no checks."""
checker = StorageChecker()
report = checker.check(None)
self.assertEqual(report.success, True)
self.assertEqual(report.failure, False)
self.assertListEqual(report.errors, [])
self.assertListEqual(report.warnings, [])
def simple_error_test(self):
"""Test a simple error report."""
checker = StorageChecker()
def check(storage, constraints, report_error, report_warning):
report_error("error")
checker.add_check(check)
report = checker.check(None)
self.assertEqual(report.success, False)
self.assertListEqual(report.errors, ["error"])
self.assertListEqual(report.warnings, [])
def simple_warning_test(self):
"""Test a simple warning report."""
checker = StorageChecker()
def check(storage, constraints, report_error, report_warning):
report_warning("warning")
checker.add_check(check)
report = checker.check(None)
self.assertEqual(report.success, False)
self.assertListEqual(report.errors, [])
self.assertListEqual(report.warnings, ["warning"])
def simple_info_test(self):
"""Test simple info messages. """
checker = StorageChecker()
report = checker.check(None)
self.assertListEqual(report.info, [
"Storage check started with constraints {}.",
"Storage check finished with success."
])
def info_test(self):
"""Test info messages. """
checker = StorageChecker()
def error_check(storage, constraints, report_error, report_warning):
report_error("error")
def warning_check(storage, constraints, report_error, report_warning):
report_warning("warning")
def skipped_check(storage, constraints, report_error, report_warning):
report_warning("skipped")
checker.add_constraint("x", None)
checker.add_check(error_check)
checker.add_check(warning_check)
checker.add_check(skipped_check)
report = checker.check(None, skip=(skipped_check,))
self.assertListEqual(report.info, [
"Storage check started with constraints {'x': None}.",
"Run sanity check error_check.",
"Found sanity error: error",
"Run sanity check warning_check.",
"Found sanity warning: warning",
"Skipped sanity check skipped_check.",
"Storage check finished with failure(s)."
])
def simple_constraints_test(self):
"""Test simple constraint adding."""
checker = StorageChecker()
# Try to add a new constraint with a wrong method.
self.assertRaises(KeyError, checker.set_constraint, "x", None)
# Try to add a new constraint two times.
checker.add_constraint("x", None)
self.assertRaises(KeyError, checker.add_constraint, "x", None)
def check_constraints_test(self):
"""Test constraints checking."""
checker = StorageChecker()
def check(storage, constraints, report_error, report_warning):
report_warning("%s" % constraints)
checker.add_check(check)
report = checker.check(None)
self.assertListEqual(report.warnings, ["{}"])
checker.add_constraint("x", 1)
report = checker.check(None)
self.assertListEqual(report.warnings, ["{'x': 1}"])
checker.set_constraint("x", 0)
report = checker.check(None)
self.assertListEqual(report.warnings, ["{'x': 0}"])
def dictionary_constraints_test(self):
"""Test the dictionary constraints."""
checker = StorageChecker()
checker.add_constraint("x", {"a": 1, "b": 2, "c": 3})
self.assertIn("x", checker.constraints)
self.assertEqual(checker.constraints["x"], {"a": 1, "b": 2, "c": 3})
checker.set_constraint("x", {"e": 4, "f": 5})
self.assertIn("x", checker.constraints)
self.assertEqual(checker.constraints["x"], {"e": 4, "f": 5})
def complicated_test(self):
"""Run a complicated check."""
checker = StorageChecker()
# Set the checks.
def check_x(storage, constraints, report_error, report_warning):
if constraints["x"] != 1:
report_error("x is not equal to 1")
def check_y(storage, constraints, report_error, report_warning):
if constraints["y"] != 2:
report_error("y is not equal to 2")
def check_z(storage, constraints, report_error, report_warning):
if constraints["z"] != 3:
report_error("z is not equal to 3")
checker.add_check(check_x)
checker.add_check(check_y)
checker.add_check(check_z)
# Set the constraints.
checker.add_constraint("x", 1)
checker.add_constraint("y", 2)
checker.add_constraint("z", 3)
# Run the checker. OK
report = checker.check(None)
self.assertEqual(report.success, True)
self.assertListEqual(report.errors, [])
self.assertListEqual(report.warnings, [])
# Set constraints to different values.
checker.set_constraint("x", 0)
checker.set_constraint("y", 1)
checker.set_constraint("z", 2)
# Run the checker. FAIL
report = checker.check(None)
self.assertEqual(report.success, False)
self.assertListEqual(report.errors, [
"x is not equal to 1",
"y is not equal to 2",
"z is not equal to 3"
])
self.assertListEqual(report.warnings, [])
# Run the checker. Test SKIP.
report = checker.check(None, skip=(check_y,))
self.assertEqual(report.success, False)
self.assertListEqual(report.errors, [
"x is not equal to 1",
"z is not equal to 3"
])
self.assertListEqual(report.warnings, [])
# Run the checker. Test CONSTRAINTS.
constraints = {"x": 1, "y": 2, "z": 3}
report = checker.check(None, constraints=constraints)
self.assertEqual(report.success, True)
self.assertListEqual(report.errors, [])
self.assertListEqual(report.warnings, [])
# Remove checks.
checker.remove_check(check_x)
checker.remove_check(check_y)
checker.remove_check(check_z)
report = checker.check(None)
self.assertEqual(report.success, True)
self.assertListEqual(report.errors, [])
self.assertListEqual(report.warnings, [])
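# A minimal usage sketch distilled from the test above (hedged: the exact
# StorageChecker API is inferred from how these tests exercise it, not from
# its own documentation):
#
#     checker = StorageChecker()
#     checker.add_check(check_x)          # register a check callback
#     checker.add_constraint("x", 1)      # declare a constraint value
#     report = checker.check(storage)     # run every registered check
#     if not report.success:
#         print(report.errors, report.warnings)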
def default_settings_test(self):
"""Check the default storage checker."""
checker = StorageChecker()
checker.set_default_constraints()
checker.set_default_checks()
self.assertEqual(checker.constraints, {
checks.STORAGE_MIN_RAM: Size("320 MiB"),
checks.STORAGE_ROOT_DEVICE_TYPES: set(),
checks.STORAGE_MIN_PARTITION_SIZES: {
'/': Size("250 MiB"),
'/usr': Size("250 MiB"),
'/tmp': Size("50 MiB"),
'/var': Size("384 MiB"),
'/home': Size("100 MiB"),
'/boot': Size("200 MiB")
},
checks.STORAGE_REQ_PARTITION_SIZES: dict(),
checks.STORAGE_MUST_BE_ON_LINUXFS: {
'/', '/var', '/tmp', '/usr', '/home', '/usr/share', '/usr/lib'
},
checks.STORAGE_MUST_BE_ON_ROOT: {
'/bin', '/dev', '/sbin', '/etc', '/lib', '/root', '/mnt', 'lost+found', '/proc'
},
checks.STORAGE_MUST_NOT_BE_ON_ROOT: set(),
checks.STORAGE_REFORMAT_ALLOWLIST: {
'/boot', '/var', '/tmp', '/usr'
},
checks.STORAGE_REFORMAT_BLOCKLIST: {
'/home', '/usr/local', '/opt', '/var/www'
},
checks.STORAGE_SWAP_IS_RECOMMENDED: False,
checks.STORAGE_LUKS2_MIN_RAM: Size("128 MiB"),
})
self.assertEqual(checker.checks, [
checks.verify_root,
checks.verify_s390_constraints,
checks.verify_partition_formatting,
checks.verify_partition_sizes,
checks.verify_partition_format_sizes,
checks.verify_bootloader,
checks.verify_gpt_biosboot,
checks.verify_swap,
checks.verify_swap_uuid,
checks.verify_mountpoints_on_linuxfs,
checks.verify_mountpoints_on_root,
checks.verify_mountpoints_not_on_root,
checks.verify_unlocked_devices_have_key,
checks.verify_luks_devices_have_key,
checks.verify_luks2_memory_requirements,
checks.verify_mounted_partitions,
checks.verify_lvm_destruction,
])
module.exports = require('./overEvery');
// Copyright 2014 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef V8_TYPES_H_
#define V8_TYPES_H_
#include "src/conversions.h"
#include "src/factory.h"
#include "src/handles.h"
#include "src/ostreams.h"
namespace v8 {
namespace internal {
// SUMMARY
//
// A simple type system for compiler-internal use. It is based entirely on
// union types, and all subtyping hence amounts to set inclusion. Besides the
// obvious primitive types and some predefined unions, the type language also
// can express class types (a.k.a. specific maps) and singleton types (i.e.,
// concrete constants).
//
// Types consist of two dimensions: semantic (value range) and representation.
// Both are related through subtyping.
//
//
// SEMANTIC DIMENSION
//
// The following equations and inequations hold for the semantic axis:
//
// None <= T
// T <= Any
//
// Number = Signed32 \/ Unsigned32 \/ Double
// Smi <= Signed32
// Name = String \/ Symbol
// UniqueName = InternalizedString \/ Symbol
// InternalizedString < String
//
// Receiver = Object \/ Proxy
// Array < Object
// Function < Object
// RegExp < Object
// Undetectable < Object
// Detectable = Receiver \/ Number \/ Name - Undetectable
//
// Class(map) < T iff instance_type(map) < T
// Constant(x) < T iff instance_type(map(x)) < T
// Array(T) < Array
// Function(R, S, T0, T1, ...) < Function
// Context(T) < Internal
//
// Both structural Array and Function types are invariant in all parameters;
// relaxing this would make Union and Intersect operations more involved.
// There is no subtyping relation between Array, Function, or Context types
// and respective Constant types, since these types cannot be reconstructed
// for arbitrary heap values.
// Note also that Constant(x) < Class(map(x)) does _not_ hold, since x's map can
// change! (Its instance type cannot, however.)
// TODO(rossberg): the latter is not currently true for proxies, because of fix,
// but will hold once we implement direct proxies.
// However, we also define a 'temporal' variant of the subtyping relation that
// considers the _current_ state only, i.e., Constant(x) <_now Class(map(x)).
//
//
// REPRESENTATIONAL DIMENSION
//
// For the representation axis, the following holds:
//
// None <= R
// R <= Any
//
// UntaggedInt = UntaggedInt1 \/ UntaggedInt8 \/
// UntaggedInt16 \/ UntaggedInt32
// UntaggedFloat = UntaggedFloat32 \/ UntaggedFloat64
// UntaggedNumber = UntaggedInt \/ UntaggedFloat
// Untagged = UntaggedNumber \/ UntaggedPtr
// Tagged = TaggedInt \/ TaggedPtr
//
// Subtyping relates the two dimensions, for example:
//
// Number <= Tagged \/ UntaggedNumber
// Object <= TaggedPtr \/ UntaggedPtr
//
// That holds because the semantic type constructors defined by the API create
// types that allow for all possible representations, and dually, the ones for
// representation types initially include all semantic ranges. Representations
// can then e.g. be narrowed for a given semantic type using intersection:
//
// SignedSmall /\ TaggedInt (a 'smi')
// Number /\ TaggedPtr (a heap number)
//
//
// RANGE TYPES
//
// A range type represents a continuous integer interval by its minimum and
// maximum value. Either value might be an infinity.
//
// Constant(v) is considered a subtype of Range(x..y) if v happens to be an
// integer between x and y.
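//
// For example (an illustration of the rule above, not part of the formal
// spec): Constant(5) <= Range(0..10), whereas Constant(5.5) is not a
// subtype of any Range type, since 5.5 is not an integer.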
//
//
// PREDICATES
//
// There are two main functions for testing types:
//
// T1->Is(T2) -- tests whether T1 is included in T2 (i.e., T1 <= T2)
// T1->Maybe(T2) -- tests whether T1 and T2 overlap (i.e., T1 /\ T2 =/= 0)
//
// Typically, the former is to be used to select representations (e.g., via
// T->Is(SignedSmall())), and the latter to check whether a specific case needs
// handling (e.g., via T->Maybe(Number())).
//
// There is no functionality to discover whether a type is a leaf in the
// lattice. That is intentional. It should always be possible to refine the
// lattice (e.g., splitting up number types further) without invalidating any
// existing assumptions or tests.
// Consequently, do not normally use Equals for type tests, always use Is!
//
// The NowIs operator implements state-sensitive subtyping, as described above.
// Any compilation decision based on such temporary properties requires runtime
// guarding!
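//
// An illustrative (non-normative) use of these predicates in compiler code:
//
//   if (type->Is(Type::SignedSmall())) {
//     // safe to select a Smi fast path
//   } else if (type->Maybe(Type::Number())) {
//     // a numeric case can occur and needs handling
//   }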
//
//
// PROPERTIES
//
// Various formal properties hold for constructors, operators, and predicates
// over types. For example, constructors are injective and subtyping is a
// complete partial order.
//
// See test/cctest/test-types.cc for a comprehensive executable specification,
// especially with respect to the properties of the more exotic 'temporal'
// constructors and predicates (those prefixed 'Now').
//
//
// IMPLEMENTATION
//
// Internally, all 'primitive' types, and their unions, are represented as
// bitsets. Bit 0 is reserved for tagging. Class is a heap pointer to the
// respective map. Only structured types require allocation.
// Note that the bitset representation is closed under both Union and Intersect.
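// (Concretely, as the static Union/Intersect overloads below show, combining
// two pure bitset types is just bitwise OR resp. AND of their bitsets.)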
//
// There are two type representations, using different allocation:
//
// - class Type (zone-allocated, for compiler and concurrent compilation)
// - class HeapType (heap-allocated, for persistent types)
//
// Both provide the same API, and the Convert method can be used to interconvert
// them. For zone types, no query method touches the heap, only constructors do.
// -----------------------------------------------------------------------------
// Values for bitset types
#define MASK_BITSET_TYPE_LIST(V) \
V(Representation, 0xff800000u) \
V(Semantic, 0x007ffffeu)
#define REPRESENTATION(k) ((k) & BitsetType::kRepresentation)
#define SEMANTIC(k) ((k) & BitsetType::kSemantic)
#define REPRESENTATION_BITSET_TYPE_LIST(V) \
V(None, 0) \
V(UntaggedInt1, 1u << 23 | kSemantic) \
V(UntaggedInt8, 1u << 24 | kSemantic) \
V(UntaggedInt16, 1u << 25 | kSemantic) \
V(UntaggedInt32, 1u << 26 | kSemantic) \
V(UntaggedFloat32, 1u << 27 | kSemantic) \
V(UntaggedFloat64, 1u << 28 | kSemantic) \
V(UntaggedPtr, 1u << 29 | kSemantic) \
V(TaggedInt, 1u << 30 | kSemantic) \
V(TaggedPtr, 1u << 31 | kSemantic) \
\
V(UntaggedInt, kUntaggedInt1 | kUntaggedInt8 | \
kUntaggedInt16 | kUntaggedInt32) \
V(UntaggedFloat, kUntaggedFloat32 | kUntaggedFloat64) \
V(UntaggedNumber, kUntaggedInt | kUntaggedFloat) \
V(Untagged, kUntaggedNumber | kUntaggedPtr) \
V(Tagged, kTaggedInt | kTaggedPtr)
#define SEMANTIC_BITSET_TYPE_LIST(V) \
V(Null, 1u << 1 | REPRESENTATION(kTaggedPtr)) \
V(Undefined, 1u << 2 | REPRESENTATION(kTaggedPtr)) \
V(Boolean, 1u << 3 | REPRESENTATION(kTaggedPtr)) \
V(UnsignedSmall, 1u << 4 | REPRESENTATION(kTagged | kUntaggedNumber)) \
V(OtherSignedSmall, 1u << 5 | REPRESENTATION(kTagged | kUntaggedNumber)) \
V(OtherUnsigned31, 1u << 6 | REPRESENTATION(kTagged | kUntaggedNumber)) \
V(OtherUnsigned32, 1u << 7 | REPRESENTATION(kTagged | kUntaggedNumber)) \
V(OtherSigned32, 1u << 8 | REPRESENTATION(kTagged | kUntaggedNumber)) \
V(MinusZero, 1u << 9 | REPRESENTATION(kTagged | kUntaggedNumber)) \
V(NaN, 1u << 10 | REPRESENTATION(kTagged | kUntaggedNumber)) \
V(OtherNumber, 1u << 11 | REPRESENTATION(kTagged | kUntaggedNumber)) \
V(Symbol, 1u << 12 | REPRESENTATION(kTaggedPtr)) \
V(InternalizedString, 1u << 13 | REPRESENTATION(kTaggedPtr)) \
V(OtherString, 1u << 14 | REPRESENTATION(kTaggedPtr)) \
V(Undetectable, 1u << 15 | REPRESENTATION(kTaggedPtr)) \
V(Array, 1u << 16 | REPRESENTATION(kTaggedPtr)) \
V(Buffer, 1u << 17 | REPRESENTATION(kTaggedPtr)) \
V(Function, 1u << 18 | REPRESENTATION(kTaggedPtr)) \
V(RegExp, 1u << 19 | REPRESENTATION(kTaggedPtr)) \
V(OtherObject, 1u << 20 | REPRESENTATION(kTaggedPtr)) \
V(Proxy, 1u << 21 | REPRESENTATION(kTaggedPtr)) \
V(Internal, 1u << 22 | REPRESENTATION(kTagged | kUntagged)) \
\
V(SignedSmall, kUnsignedSmall | kOtherSignedSmall) \
V(Signed32, kSignedSmall | kOtherUnsigned31 | kOtherSigned32) \
V(Unsigned32, kUnsignedSmall | kOtherUnsigned31 | kOtherUnsigned32) \
V(Integral32, kSigned32 | kUnsigned32) \
V(OrderedNumber, kIntegral32 | kMinusZero | kOtherNumber) \
V(Number, kOrderedNumber | kNaN) \
V(String, kInternalizedString | kOtherString) \
V(UniqueName, kSymbol | kInternalizedString) \
V(Name, kSymbol | kString) \
V(NumberOrString, kNumber | kString) \
V(Primitive, kNumber | kName | kBoolean | kNull | kUndefined) \
V(DetectableObject, kArray | kFunction | kRegExp | kOtherObject) \
V(DetectableReceiver, kDetectableObject | kProxy) \
V(Detectable, kDetectableReceiver | kNumber | kName) \
V(Object, kDetectableObject | kUndetectable) \
V(Receiver, kObject | kProxy) \
V(Unique, kBoolean | kUniqueName | kNull | kUndefined | \
kReceiver) \
V(NonNumber, kUnique | kString | kInternal) \
V(Any, 0xfffffffeu)
/*
* The following diagrams show how integers (in the mathematical sense) are
* divided among the different atomic numerical types.
*
* If SmiValuesAre31Bits():
*
* ON OS32 OSS US OU31 OU32 ON
* ______[_______[_______[_______[_______[_______[_______
* -2^31 -2^30 0 2^30 2^31 2^32
*
* Otherwise:
*
* ON OSS US OU32 ON
* ______[_______________[_______________[_______[_______
* -2^31 0 2^31 2^32
*
*
* E.g., OtherUnsigned32 (OU32) covers all integers from 2^31 to 2^32-1.
*
 * NOTE: OtherSigned32 (OS32) and OtherUnsigned31 (OU31) are empty if Smis are
* 32-bit wide. They should thus never be used directly, only indirectly
* via e.g. Number.
*/
#define PROPER_BITSET_TYPE_LIST(V) \
REPRESENTATION_BITSET_TYPE_LIST(V) \
SEMANTIC_BITSET_TYPE_LIST(V)
#define BITSET_TYPE_LIST(V) \
MASK_BITSET_TYPE_LIST(V) \
PROPER_BITSET_TYPE_LIST(V)
// -----------------------------------------------------------------------------
// The abstract Type class, parameterized over the low-level representation.
// struct Config {
// typedef TypeImpl<Config> Type;
// typedef Base;
// typedef Struct;
// typedef Region;
// template<class> struct Handle { typedef type; } // No template typedefs...
// template<class T> static Handle<T>::type null_handle();
// template<class T> static Handle<T>::type handle(T* t); // !is_bitset(t)
// template<class T> static Handle<T>::type cast(Handle<Type>::type);
// static bool is_bitset(Type*);
// static bool is_class(Type*);
// static bool is_struct(Type*, int tag);
// static bitset as_bitset(Type*);
// static i::Handle<i::Map> as_class(Type*);
// static Handle<Struct>::type as_struct(Type*);
// static Type* from_bitset(bitset);
// static Handle<Type>::type from_bitset(bitset, Region*);
// static Handle<Type>::type from_class(i::Handle<Map>, Region*);
// static Handle<Type>::type from_struct(Handle<Struct>::type, int tag);
// static Handle<Struct>::type struct_create(int tag, int length, Region*);
// static void struct_shrink(Handle<Struct>::type, int length);
// static int struct_tag(Handle<Struct>::type);
// static int struct_length(Handle<Struct>::type);
// static Handle<Type>::type struct_get(Handle<Struct>::type, int);
// static void struct_set(Handle<Struct>::type, int, Handle<Type>::type);
// template<class V>
// static i::Handle<V> struct_get_value(Handle<Struct>::type, int);
// template<class V>
// static void struct_set_value(Handle<Struct>::type, int, i::Handle<V>);
// }
template<class Config>
class TypeImpl : public Config::Base {
public:
// Auxiliary types.
typedef uint32_t bitset; // Internal
class BitsetType; // Internal
class StructuralType; // Internal
class UnionType; // Internal
class ClassType;
class ConstantType;
class RangeType;
class ContextType;
class ArrayType;
class FunctionType;
typedef typename Config::template Handle<TypeImpl>::type TypeHandle;
typedef typename Config::template Handle<ClassType>::type ClassHandle;
typedef typename Config::template Handle<ConstantType>::type ConstantHandle;
typedef typename Config::template Handle<RangeType>::type RangeHandle;
typedef typename Config::template Handle<ContextType>::type ContextHandle;
typedef typename Config::template Handle<ArrayType>::type ArrayHandle;
typedef typename Config::template Handle<FunctionType>::type FunctionHandle;
typedef typename Config::template Handle<UnionType>::type UnionHandle;
typedef typename Config::Region Region;
// Constructors.
#define DEFINE_TYPE_CONSTRUCTOR(type, value) \
static TypeImpl* type() { \
return BitsetType::New(BitsetType::k##type); \
} \
static TypeHandle type(Region* region) { \
return BitsetType::New(BitsetType::k##type, region); \
}
PROPER_BITSET_TYPE_LIST(DEFINE_TYPE_CONSTRUCTOR)
#undef DEFINE_TYPE_CONSTRUCTOR
static TypeHandle Class(i::Handle<i::Map> map, Region* region) {
return ClassType::New(map, region);
}
static TypeHandle Constant(i::Handle<i::Object> value, Region* region) {
return ConstantType::New(value, region);
}
static TypeHandle Range(
i::Handle<i::Object> min, i::Handle<i::Object> max, Region* region) {
return RangeType::New(min, max, region);
}
static TypeHandle Context(TypeHandle outer, Region* region) {
return ContextType::New(outer, region);
}
static TypeHandle Array(TypeHandle element, Region* region) {
return ArrayType::New(element, region);
}
static FunctionHandle Function(
TypeHandle result, TypeHandle receiver, int arity, Region* region) {
return FunctionType::New(result, receiver, arity, region);
}
static TypeHandle Function(TypeHandle result, Region* region) {
return Function(result, Any(region), 0, region);
}
static TypeHandle Function(
TypeHandle result, TypeHandle param0, Region* region) {
FunctionHandle function = Function(result, Any(region), 1, region);
function->InitParameter(0, param0);
return function;
}
static TypeHandle Function(
TypeHandle result, TypeHandle param0, TypeHandle param1, Region* region) {
FunctionHandle function = Function(result, Any(region), 2, region);
function->InitParameter(0, param0);
function->InitParameter(1, param1);
return function;
}
static TypeHandle Function(
TypeHandle result, TypeHandle param0, TypeHandle param1,
TypeHandle param2, Region* region) {
FunctionHandle function = Function(result, Any(region), 3, region);
function->InitParameter(0, param0);
function->InitParameter(1, param1);
function->InitParameter(2, param2);
return function;
}
static TypeHandle Union(TypeHandle type1, TypeHandle type2, Region* reg);
static TypeHandle Intersect(TypeHandle type1, TypeHandle type2, Region* reg);
static TypeImpl* Union(TypeImpl* type1, TypeImpl* type2) {
return BitsetType::New(type1->AsBitset() | type2->AsBitset());
}
static TypeImpl* Intersect(TypeImpl* type1, TypeImpl* type2) {
return BitsetType::New(type1->AsBitset() & type2->AsBitset());
}
static TypeHandle Of(double value, Region* region) {
return Config::from_bitset(BitsetType::Lub(value), region);
}
static TypeHandle Of(i::Object* value, Region* region) {
return Config::from_bitset(BitsetType::Lub(value), region);
}
static TypeHandle Of(i::Handle<i::Object> value, Region* region) {
return Of(*value, region);
}
// Predicates.
bool IsInhabited() { return BitsetType::IsInhabited(this->BitsetLub()); }
bool Is(TypeImpl* that) { return this == that || this->SlowIs(that); }
template<class TypeHandle>
bool Is(TypeHandle that) { return this->Is(*that); }
bool Maybe(TypeImpl* that);
template<class TypeHandle>
bool Maybe(TypeHandle that) { return this->Maybe(*that); }
bool Equals(TypeImpl* that) { return this->Is(that) && that->Is(this); }
template<class TypeHandle>
bool Equals(TypeHandle that) { return this->Equals(*that); }
// Equivalent to Constant(val)->Is(this), but avoiding allocation.
bool Contains(i::Object* val);
bool Contains(i::Handle<i::Object> val) { return this->Contains(*val); }
// State-dependent versions of the above that consider subtyping between
// a constant and its map class.
inline static TypeHandle NowOf(i::Object* value, Region* region);
static TypeHandle NowOf(i::Handle<i::Object> value, Region* region) {
return NowOf(*value, region);
}
bool NowIs(TypeImpl* that);
template<class TypeHandle>
bool NowIs(TypeHandle that) { return this->NowIs(*that); }
inline bool NowContains(i::Object* val);
bool NowContains(i::Handle<i::Object> val) { return this->NowContains(*val); }
bool NowStable();
// Inspection.
bool IsClass() {
return Config::is_class(this)
|| Config::is_struct(this, StructuralType::kClassTag);
}
bool IsConstant() {
return Config::is_struct(this, StructuralType::kConstantTag);
}
bool IsRange() {
return Config::is_struct(this, StructuralType::kRangeTag);
}
bool IsContext() {
return Config::is_struct(this, StructuralType::kContextTag);
}
bool IsArray() {
return Config::is_struct(this, StructuralType::kArrayTag);
}
bool IsFunction() {
return Config::is_struct(this, StructuralType::kFunctionTag);
}
ClassType* AsClass() { return ClassType::cast(this); }
ConstantType* AsConstant() { return ConstantType::cast(this); }
RangeType* AsRange() { return RangeType::cast(this); }
ContextType* AsContext() { return ContextType::cast(this); }
ArrayType* AsArray() { return ArrayType::cast(this); }
FunctionType* AsFunction() { return FunctionType::cast(this); }
// Minimum and maximum of a numeric type.
// These functions do not distinguish between -0 and +0. If the type equals
// kNaN, they return NaN; otherwise kNaN is ignored. Only call these
// functions on subtypes of Number.
double Min();
double Max();
int NumClasses();
int NumConstants();
template<class T> class Iterator;
Iterator<i::Map> Classes() {
if (this->IsBitset()) return Iterator<i::Map>();
return Iterator<i::Map>(Config::handle(this));
}
Iterator<i::Object> Constants() {
if (this->IsBitset()) return Iterator<i::Object>();
return Iterator<i::Object>(Config::handle(this));
}
// Casting and conversion.
static inline TypeImpl* cast(typename Config::Base* object);
template<class OtherTypeImpl>
static TypeHandle Convert(
typename OtherTypeImpl::TypeHandle type, Region* region);
// Printing.
enum PrintDimension { BOTH_DIMS, SEMANTIC_DIM, REPRESENTATION_DIM };
void PrintTo(std::ostream& os, PrintDimension dim = BOTH_DIMS); // NOLINT
#ifdef DEBUG
void Print();
#endif
protected:
// Friends.
template<class> friend class Iterator;
template<class> friend class TypeImpl;
// Handle conversion.
template<class T>
static typename Config::template Handle<T>::type handle(T* type) {
return Config::handle(type);
}
TypeImpl* unhandle() { return this; }
// Internal inspection.
bool IsNone() { return this == None(); }
bool IsAny() { return this == Any(); }
bool IsBitset() { return Config::is_bitset(this); }
bool IsUnion() { return Config::is_struct(this, StructuralType::kUnionTag); }
bitset AsBitset() {
DCHECK(this->IsBitset());
return static_cast<BitsetType*>(this)->Bitset();
}
UnionType* AsUnion() { return UnionType::cast(this); }
// Auxiliary functions.
bitset BitsetGlb() { return BitsetType::Glb(this); }
bitset BitsetLub() { return BitsetType::Lub(this); }
bool SlowIs(TypeImpl* that);
static bool IsInteger(double x) {
return nearbyint(x) == x && !i::IsMinusZero(x); // Allows for infinities.
}
static bool IsInteger(i::Object* x) {
return x->IsNumber() && IsInteger(x->Number());
}
struct Limits {
i::Handle<i::Object> min;
i::Handle<i::Object> max;
Limits(i::Handle<i::Object> min, i::Handle<i::Object> max) :
min(min), max(max) {}
explicit Limits(RangeType* range) :
min(range->Min()), max(range->Max()) {}
};
static Limits Intersect(Limits lhs, Limits rhs);
static Limits Union(Limits lhs, Limits rhs);
static bool Overlap(RangeType* lhs, RangeType* rhs);
static bool Contains(RangeType* lhs, RangeType* rhs);
static bool Contains(RangeType* range, i::Object* val);
RangeType* GetRange();
static int UpdateRange(
RangeHandle type, UnionHandle result, int size, Region* region);
bool SimplyEquals(TypeImpl* that);
template<class TypeHandle>
bool SimplyEquals(TypeHandle that) { return this->SimplyEquals(*that); }
static int AddToUnion(
TypeHandle type, UnionHandle result, int size, Region* region);
static int IntersectAux(
TypeHandle type, TypeHandle other,
UnionHandle result, int size, Region* region);
static TypeHandle NormalizeUnion(UnionHandle unioned, int size);
};
// -----------------------------------------------------------------------------
// Bitset types (internal).
template<class Config>
class TypeImpl<Config>::BitsetType : public TypeImpl<Config> {
protected:
friend class TypeImpl<Config>;
enum {
#define DECLARE_TYPE(type, value) k##type = (value),
BITSET_TYPE_LIST(DECLARE_TYPE)
#undef DECLARE_TYPE
kUnusedEOL = 0
};
bitset Bitset() { return Config::as_bitset(this); }
static TypeImpl* New(bitset bits) {
DCHECK(bits == kNone || IsInhabited(bits));
return Config::from_bitset(bits);
}
static TypeHandle New(bitset bits, Region* region) {
DCHECK(bits == kNone || IsInhabited(bits));
return Config::from_bitset(bits, region);
}
// TODO(neis): Eventually allow again for types with empty semantics
// part and modify intersection and possibly subtyping accordingly.
static bool IsInhabited(bitset bits) {
return bits & kSemantic;
}
static bool Is(bitset bits1, bitset bits2) {
return (bits1 | bits2) == bits2;
}
static double Min(bitset);
static double Max(bitset);
static bitset Glb(TypeImpl* type); // greatest lower bound that's a bitset
static bitset Lub(TypeImpl* type); // least upper bound that's a bitset
static bitset Lub(i::Map* map);
static bitset Lub(i::Object* value);
static bitset Lub(double value);
static bitset Lub(double min, double max);
static const char* Name(bitset);
static void Print(std::ostream& os, bitset); // NOLINT
#ifdef DEBUG
static void Print(bitset);
#endif
private:
  struct BitsetMin {
bitset bits;
double min;
};
static const BitsetMin BitsetMins31[];
static const BitsetMin BitsetMins32[];
static const BitsetMin* BitsetMins() {
return i::SmiValuesAre31Bits() ? BitsetMins31 : BitsetMins32;
}
static size_t BitsetMinsSize() {
return i::SmiValuesAre31Bits() ? 7 : 5;
/* arraysize(BitsetMins31) : arraysize(BitsetMins32); */
// Using arraysize here doesn't compile on Windows.
}
};
// -----------------------------------------------------------------------------
// Superclass for non-bitset types (internal).
// Contains a tag and a variable number of type or value fields.
template<class Config>
class TypeImpl<Config>::StructuralType : public TypeImpl<Config> {
protected:
template<class> friend class TypeImpl;
friend struct ZoneTypeConfig; // For tags.
friend struct HeapTypeConfig;
enum Tag {
kClassTag,
kConstantTag,
kRangeTag,
kContextTag,
kArrayTag,
kFunctionTag,
kUnionTag
};
int Length() {
return Config::struct_length(Config::as_struct(this));
}
TypeHandle Get(int i) {
DCHECK(0 <= i && i < this->Length());
return Config::struct_get(Config::as_struct(this), i);
}
void Set(int i, TypeHandle type) {
DCHECK(0 <= i && i < this->Length());
Config::struct_set(Config::as_struct(this), i, type);
}
void Shrink(int length) {
DCHECK(2 <= length && length <= this->Length());
Config::struct_shrink(Config::as_struct(this), length);
}
template<class V> i::Handle<V> GetValue(int i) {
DCHECK(0 <= i && i < this->Length());
return Config::template struct_get_value<V>(Config::as_struct(this), i);
}
template<class V> void SetValue(int i, i::Handle<V> x) {
DCHECK(0 <= i && i < this->Length());
Config::struct_set_value(Config::as_struct(this), i, x);
}
static TypeHandle New(Tag tag, int length, Region* region) {
DCHECK(1 <= length);
return Config::from_struct(Config::struct_create(tag, length, region));
}
};
// -----------------------------------------------------------------------------
// Union types (internal).
// A union is a structured type with the following invariants:
// - its length is at least 2
// - at most one field is a bitset, and it must go into index 0
// - no field is a union
// - no field is a subtype of any other field
template<class Config>
class TypeImpl<Config>::UnionType : public StructuralType {
public:
static UnionHandle New(int length, Region* region) {
return Config::template cast<UnionType>(
StructuralType::New(StructuralType::kUnionTag, length, region));
}
static UnionType* cast(TypeImpl* type) {
DCHECK(type->IsUnion());
return static_cast<UnionType*>(type);
}
bool Wellformed();
};
// -----------------------------------------------------------------------------
// Class types.
template<class Config>
class TypeImpl<Config>::ClassType : public StructuralType {
public:
TypeHandle Bound(Region* region) {
return Config::is_class(this) ?
BitsetType::New(BitsetType::Lub(*Config::as_class(this)), region) :
this->Get(0);
}
i::Handle<i::Map> Map() {
return Config::is_class(this) ? Config::as_class(this) :
this->template GetValue<i::Map>(1);
}
static ClassHandle New(i::Handle<i::Map> map, Region* region) {
ClassHandle type =
Config::template cast<ClassType>(Config::from_class(map, region));
if (!type->IsClass()) {
type = Config::template cast<ClassType>(
StructuralType::New(StructuralType::kClassTag, 2, region));
type->Set(0, BitsetType::New(BitsetType::Lub(*map), region));
type->SetValue(1, map);
}
return type;
}
static ClassType* cast(TypeImpl* type) {
DCHECK(type->IsClass());
return static_cast<ClassType*>(type);
}
};
// -----------------------------------------------------------------------------
// Constant types.
template<class Config>
class TypeImpl<Config>::ConstantType : public StructuralType {
public:
TypeHandle Bound() { return this->Get(0); }
i::Handle<i::Object> Value() { return this->template GetValue<i::Object>(1); }
static ConstantHandle New(i::Handle<i::Object> value, Region* region) {
ConstantHandle type = Config::template cast<ConstantType>(
StructuralType::New(StructuralType::kConstantTag, 2, region));
type->Set(0, BitsetType::New(BitsetType::Lub(*value), region));
type->SetValue(1, value);
return type;
}
static ConstantType* cast(TypeImpl* type) {
DCHECK(type->IsConstant());
return static_cast<ConstantType*>(type);
}
};
// TODO(neis): Also cache value if numerical.
// TODO(neis): Allow restricting the representation.
// -----------------------------------------------------------------------------
// Range types.
template<class Config>
class TypeImpl<Config>::RangeType : public StructuralType {
public:
int BitsetLub() { return this->Get(0)->AsBitset(); }
i::Handle<i::Object> Min() { return this->template GetValue<i::Object>(1); }
i::Handle<i::Object> Max() { return this->template GetValue<i::Object>(2); }
static RangeHandle New(
i::Handle<i::Object> min, i::Handle<i::Object> max, Region* region) {
DCHECK(IsInteger(min->Number()) && IsInteger(max->Number()));
DCHECK(min->Number() <= max->Number());
RangeHandle type = Config::template cast<RangeType>(
StructuralType::New(StructuralType::kRangeTag, 3, region));
type->Set(0, BitsetType::New(
BitsetType::Lub(min->Number(), max->Number()), region));
type->SetValue(1, min);
type->SetValue(2, max);
return type;
}
static RangeHandle New(Limits lim, Region* region) {
return New(lim.min, lim.max, region);
}
static RangeType* cast(TypeImpl* type) {
DCHECK(type->IsRange());
return static_cast<RangeType*>(type);
}
};
// TODO(neis): Also cache min and max values.
// TODO(neis): Allow restricting the representation.
// -----------------------------------------------------------------------------
// Context types.
template<class Config>
class TypeImpl<Config>::ContextType : public StructuralType {
public:
TypeHandle Outer() { return this->Get(0); }
static ContextHandle New(TypeHandle outer, Region* region) {
ContextHandle type = Config::template cast<ContextType>(
StructuralType::New(StructuralType::kContextTag, 1, region));
type->Set(0, outer);
return type;
}
static ContextType* cast(TypeImpl* type) {
DCHECK(type->IsContext());
return static_cast<ContextType*>(type);
}
};
// -----------------------------------------------------------------------------
// Array types.
template<class Config>
class TypeImpl<Config>::ArrayType : public StructuralType {
public:
TypeHandle Element() { return this->Get(0); }
static ArrayHandle New(TypeHandle element, Region* region) {
ArrayHandle type = Config::template cast<ArrayType>(
StructuralType::New(StructuralType::kArrayTag, 1, region));
type->Set(0, element);
return type;
}
static ArrayType* cast(TypeImpl* type) {
DCHECK(type->IsArray());
return static_cast<ArrayType*>(type);
}
};
// -----------------------------------------------------------------------------
// Function types.
template<class Config>
class TypeImpl<Config>::FunctionType : public StructuralType {
public:
int Arity() { return this->Length() - 2; }
TypeHandle Result() { return this->Get(0); }
TypeHandle Receiver() { return this->Get(1); }
TypeHandle Parameter(int i) { return this->Get(2 + i); }
void InitParameter(int i, TypeHandle type) { this->Set(2 + i, type); }
static FunctionHandle New(
TypeHandle result, TypeHandle receiver, int arity, Region* region) {
FunctionHandle type = Config::template cast<FunctionType>(
StructuralType::New(StructuralType::kFunctionTag, 2 + arity, region));
type->Set(0, result);
type->Set(1, receiver);
return type;
}
static FunctionType* cast(TypeImpl* type) {
DCHECK(type->IsFunction());
return static_cast<FunctionType*>(type);
}
};
// -----------------------------------------------------------------------------
// Type iterators.
template<class Config> template<class T>
class TypeImpl<Config>::Iterator {
public:
bool Done() const { return index_ < 0; }
i::Handle<T> Current();
void Advance();
private:
template<class> friend class TypeImpl;
Iterator() : index_(-1) {}
explicit Iterator(TypeHandle type) : type_(type), index_(-1) {
Advance();
}
inline bool matches(TypeHandle type);
inline TypeHandle get_type();
TypeHandle type_;
int index_;
};
// -----------------------------------------------------------------------------
// Zone-allocated types; they are either (odd) integers to represent bitsets, or
// (even) pointers to structures for everything else.
struct ZoneTypeConfig {
typedef TypeImpl<ZoneTypeConfig> Type;
class Base {};
typedef void* Struct;
typedef i::Zone Region;
template<class T> struct Handle { typedef T* type; };
template<class T> static inline T* null_handle();
template<class T> static inline T* handle(T* type);
template<class T> static inline T* cast(Type* type);
static inline bool is_bitset(Type* type);
static inline bool is_class(Type* type);
static inline bool is_struct(Type* type, int tag);
static inline Type::bitset as_bitset(Type* type);
static inline i::Handle<i::Map> as_class(Type* type);
static inline Struct* as_struct(Type* type);
static inline Type* from_bitset(Type::bitset);
static inline Type* from_bitset(Type::bitset, Zone* zone);
static inline Type* from_class(i::Handle<i::Map> map, Zone* zone);
static inline Type* from_struct(Struct* structured);
static inline Struct* struct_create(int tag, int length, Zone* zone);
static inline void struct_shrink(Struct* structure, int length);
static inline int struct_tag(Struct* structure);
static inline int struct_length(Struct* structure);
static inline Type* struct_get(Struct* structure, int i);
static inline void struct_set(Struct* structure, int i, Type* type);
template<class V>
static inline i::Handle<V> struct_get_value(Struct* structure, int i);
template<class V> static inline void struct_set_value(
Struct* structure, int i, i::Handle<V> x);
};
typedef TypeImpl<ZoneTypeConfig> Type;
// -----------------------------------------------------------------------------
// Heap-allocated types; either smis for bitsets, maps for classes, boxes for
// constants, or fixed arrays for unions.
struct HeapTypeConfig {
typedef TypeImpl<HeapTypeConfig> Type;
typedef i::Object Base;
typedef i::FixedArray Struct;
typedef i::Isolate Region;
template<class T> struct Handle { typedef i::Handle<T> type; };
template<class T> static inline i::Handle<T> null_handle();
template<class T> static inline i::Handle<T> handle(T* type);
template<class T> static inline i::Handle<T> cast(i::Handle<Type> type);
static inline bool is_bitset(Type* type);
static inline bool is_class(Type* type);
static inline bool is_struct(Type* type, int tag);
static inline Type::bitset as_bitset(Type* type);
static inline i::Handle<i::Map> as_class(Type* type);
static inline i::Handle<Struct> as_struct(Type* type);
static inline Type* from_bitset(Type::bitset);
static inline i::Handle<Type> from_bitset(Type::bitset, Isolate* isolate);
static inline i::Handle<Type> from_class(
i::Handle<i::Map> map, Isolate* isolate);
static inline i::Handle<Type> from_struct(i::Handle<Struct> structure);
static inline i::Handle<Struct> struct_create(
int tag, int length, Isolate* isolate);
static inline void struct_shrink(i::Handle<Struct> structure, int length);
static inline int struct_tag(i::Handle<Struct> structure);
static inline int struct_length(i::Handle<Struct> structure);
static inline i::Handle<Type> struct_get(i::Handle<Struct> structure, int i);
static inline void struct_set(
i::Handle<Struct> structure, int i, i::Handle<Type> type);
template<class V>
static inline i::Handle<V> struct_get_value(
i::Handle<Struct> structure, int i);
template<class V>
static inline void struct_set_value(
i::Handle<Struct> structure, int i, i::Handle<V> x);
};
typedef TypeImpl<HeapTypeConfig> HeapType;
// -----------------------------------------------------------------------------
// Type bounds. A simple struct to represent a pair of lower/upper types.
template<class Config>
struct BoundsImpl {
typedef TypeImpl<Config> Type;
typedef typename Type::TypeHandle TypeHandle;
typedef typename Type::Region Region;
TypeHandle lower;
TypeHandle upper;
BoundsImpl() : // Make sure accessing uninitialized bounds crashes big-time.
lower(Config::template null_handle<Type>()),
upper(Config::template null_handle<Type>()) {}
explicit BoundsImpl(TypeHandle t) : lower(t), upper(t) {}
BoundsImpl(TypeHandle l, TypeHandle u) : lower(l), upper(u) {
DCHECK(lower->Is(upper));
}
// Unrestricted bounds.
static BoundsImpl Unbounded(Region* region) {
return BoundsImpl(Type::None(region), Type::Any(region));
}
// Meet: both b1 and b2 are known to hold.
static BoundsImpl Both(BoundsImpl b1, BoundsImpl b2, Region* region) {
TypeHandle lower = Type::Union(b1.lower, b2.lower, region);
TypeHandle upper = Type::Intersect(b1.upper, b2.upper, region);
// Lower bounds are considered approximate, correct as necessary.
lower = Type::Intersect(lower, upper, region);
return BoundsImpl(lower, upper);
}
// Join: either b1 or b2 is known to hold.
static BoundsImpl Either(BoundsImpl b1, BoundsImpl b2, Region* region) {
TypeHandle lower = Type::Intersect(b1.lower, b2.lower, region);
TypeHandle upper = Type::Union(b1.upper, b2.upper, region);
return BoundsImpl(lower, upper);
}
static BoundsImpl NarrowLower(BoundsImpl b, TypeHandle t, Region* region) {
// Lower bounds are considered approximate, correct as necessary.
t = Type::Intersect(t, b.upper, region);
TypeHandle lower = Type::Union(b.lower, t, region);
return BoundsImpl(lower, b.upper);
}
static BoundsImpl NarrowUpper(BoundsImpl b, TypeHandle t, Region* region) {
TypeHandle lower = Type::Intersect(b.lower, t, region);
TypeHandle upper = Type::Intersect(b.upper, t, region);
return BoundsImpl(lower, upper);
}
bool Narrows(BoundsImpl that) {
return that.lower->Is(this->lower) && this->upper->Is(that.upper);
}
};
typedef BoundsImpl<ZoneTypeConfig> Bounds;
} } // namespace v8::internal
#endif // V8_TYPES_H_
<!DOCTYPE HTML>
<title>Test for privacy restrictions on :visited (Bug 147777)</title>
<style type="text/css">
body > a { text-decoration: underline }
body > a > span { text-decoration: overline }
body > a > span > span { text-decoration: line-through }
:link { color: olive }
:link > span { color: fuchsia }
:link > span > span { color: black }
:visited { color: gray }
:visited > span { color: maroon }
:visited > span > span { color: purple }
</style>
<a href="unvisited-page.html"><span><span>unvisited</span></span></a>
<a href="visited-page.html"><span><span>visited</span></span></a>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE pkgmetadata SYSTEM "http://www.gentoo.org/dtd/metadata.dtd">
<pkgmetadata>
<maintainer type="project">
<email>[email protected]</email>
<name>Fonts</name>
</maintainer>
<use>
<flag name="extras">Install experimental extra tools: wince_info and
wince_rename for examining and processing Windows CE installation cabinet
header files; cabinfo for examining the structure of a cab file.
</flag>
</use>
</pkgmetadata>
# frozen_string_literal: false
require 'test/unit'
require 'matrix'
class SubMatrix < Matrix
end
class TestMatrix < Test::Unit::TestCase
def setup
@m1 = Matrix[[1,2,3], [4,5,6]]
@m2 = Matrix[[1,2,3], [4,5,6]]
@m3 = @m1.clone
@m4 = Matrix[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
@n1 = Matrix[[2,3,4], [5,6,7]]
@c1 = Matrix[[Complex(1,2), Complex(0,1), 0], [1, 2, 3]]
@e1 = Matrix.empty(2,0)
@e2 = Matrix.empty(0,3)
@a3 = Matrix[[4, 1, -3], [0, 3, 7], [11, -4, 2]]
@a5 = Matrix[[2, 0, 9, 3, 9], [8, 7, 0, 1, 9], [7, 5, 6, 6, 5], [0, 7, 8, 3, 0], [7, 8, 2, 3, 1]]
@b3 = Matrix[[-7, 7, -10], [9, -3, -2], [-1, 3, 9]]
end
def test_matrix
assert_equal(1, @m1[0, 0])
assert_equal(2, @m1[0, 1])
assert_equal(3, @m1[0, 2])
assert_equal(4, @m1[1, 0])
assert_equal(5, @m1[1, 1])
assert_equal(6, @m1[1, 2])
end
def test_identity
assert_same @m1, @m1
assert_not_same @m1, @m2
assert_not_same @m1, @m3
assert_not_same @m1, @m4
assert_not_same @m1, @n1
end
def test_equality
assert_equal @m1, @m1
assert_equal @m1, @m2
assert_equal @m1, @m3
assert_equal @m1, @m4
assert_not_equal @m1, @n1
end
def test_hash_equality
assert @m1.eql?(@m1)
assert @m1.eql?(@m2)
assert @m1.eql?(@m3)
    assert !@m1.eql?(@m4)
    assert !@m1.eql?(@n1)
hash = { @m1 => :value }
assert hash.key?(@m1)
assert hash.key?(@m2)
assert hash.key?(@m3)
assert !hash.key?(@m4)
assert !hash.key?(@n1)
end
def test_hash
assert_equal @m1.hash, @m1.hash
assert_equal @m1.hash, @m2.hash
assert_equal @m1.hash, @m3.hash
end
def test_uplus
assert_equal(@m1, +@m1)
end
def test_negate
assert_equal(Matrix[[-1, -2, -3], [-4, -5, -6]], -@m1)
assert_equal(@m1, -(-@m1))
end
def test_rank
[
[[0]],
[[0], [0]],
[[0, 0], [0, 0]],
[[0, 0], [0, 0], [0, 0]],
[[0, 0, 0]],
[[0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]],
].each do |rows|
assert_equal 0, Matrix[*rows].rank
end
[
[[1], [0]],
[[1, 0], [0, 0]],
[[1, 0], [1, 0]],
[[0, 0], [1, 0]],
[[1, 0], [0, 0], [0, 0]],
[[0, 0], [1, 0], [0, 0]],
[[0, 0], [0, 0], [1, 0]],
[[1, 0], [1, 0], [0, 0]],
[[0, 0], [1, 0], [1, 0]],
[[1, 0], [1, 0], [1, 0]],
[[1, 0, 0]],
[[1, 0, 0], [0, 0, 0]],
[[0, 0, 0], [1, 0, 0]],
[[1, 0, 0], [1, 0, 0]],
[[1, 0, 0], [1, 0, 0]],
[[1, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [1, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [1, 0, 0]],
[[1, 0, 0], [1, 0, 0], [0, 0, 0]],
[[0, 0, 0], [1, 0, 0], [1, 0, 0]],
[[1, 0, 0], [0, 0, 0], [1, 0, 0]],
[[1, 0, 0], [1, 0, 0], [1, 0, 0]],
[[1, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[1, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[1, 0, 0], [1, 0, 0], [0, 0, 0], [0, 0, 0]],
[[1, 0, 0], [0, 0, 0], [1, 0, 0], [0, 0, 0]],
[[1, 0, 0], [0, 0, 0], [0, 0, 0], [1, 0, 0]],
[[1, 0, 0], [1, 0, 0], [1, 0, 0], [0, 0, 0]],
[[1, 0, 0], [0, 0, 0], [1, 0, 0], [1, 0, 0]],
[[1, 0, 0], [1, 0, 0], [0, 0, 0], [1, 0, 0]],
[[1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0]],
[[1]],
[[1], [1]],
[[1, 1]],
[[1, 1], [1, 1]],
[[1, 1], [1, 1], [1, 1]],
[[1, 1, 1]],
[[1, 1, 1], [1, 1, 1]],
[[1, 1, 1], [1, 1, 1], [1, 1, 1]],
[[1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]],
].each do |rows|
matrix = Matrix[*rows]
assert_equal 1, matrix.rank
assert_equal 1, matrix.transpose.rank
end
[
[[1, 0], [0, 1]],
[[1, 0], [0, 1], [0, 0]],
[[1, 0], [0, 1], [0, 1]],
[[1, 0], [0, 1], [1, 1]],
[[1, 0, 0], [0, 1, 0]],
[[1, 0, 0], [0, 0, 1]],
[[1, 0, 0], [0, 1, 0], [0, 0, 0]],
[[1, 0, 0], [0, 0, 1], [0, 0, 0]],
[[1, 0, 0], [0, 0, 0], [0, 1, 0]],
[[1, 0, 0], [0, 0, 0], [0, 0, 1]],
[[1, 0], [1, 1]],
[[1, 2], [1, 1]],
[[1, 2], [0, 1], [1, 1]],
].each do |rows|
m = Matrix[*rows]
assert_equal 2, m.rank
assert_equal 2, m.transpose.rank
end
[
[[1, 0, 0], [0, 1, 0], [0, 0, 1]],
[[1, 1, 0], [0, 1, 1], [1, 0, 1]],
[[1, 1, 0], [0, 1, 1], [1, 0, 1]],
[[1, 1, 0], [0, 1, 1], [1, 0, 1], [0, 0, 0]],
[[1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]],
[[1, 1, 1], [1, 1, 2], [1, 3, 1], [4, 1, 1]],
].each do |rows|
m = Matrix[*rows]
assert_equal 3, m.rank
assert_equal 3, m.transpose.rank
end
end
def test_inverse
assert_equal(Matrix.empty(0, 0), Matrix.empty.inverse)
assert_equal(Matrix[[-1, 1], [0, -1]], Matrix[[-1, -1], [0, -1]].inverse)
assert_raise(ExceptionForMatrix::ErrDimensionMismatch) { @m1.inverse }
end
def test_determinant
assert_equal(0, Matrix[[0,0],[0,0]].determinant)
assert_equal(45, Matrix[[7,6], [3,9]].determinant)
assert_equal(-18, Matrix[[2,0,1],[0,-2,2],[1,2,3]].determinant)
assert_equal(-7, Matrix[[0,0,1],[0,7,6],[1,3,9]].determinant)
assert_equal(42, Matrix[[7,0,1,0,12],[8,1,1,9,1],[4,0,0,-7,17],[-1,0,0,-4,8],[10,1,1,8,6]].determinant)
end
def test_new_matrix
assert_raise(TypeError) { Matrix[Object.new] }
o = Object.new
def o.to_ary; [1,2,3]; end
assert_equal(@m1, Matrix[o, [4,5,6]])
end
def test_round
a = Matrix[[1.0111, 2.32320, 3.04343], [4.81, 5.0, 6.997]]
b = Matrix[[1.01, 2.32, 3.04], [4.81, 5.0, 7.0]]
assert_equal(a.round(2), b)
end
def test_rows
assert_equal(@m1, Matrix.rows([[1, 2, 3], [4, 5, 6]]))
end
def test_rows_copy
rows1 = [[1], [1]]
rows2 = [[1], [1]]
m1 = Matrix.rows(rows1, copy = false)
m2 = Matrix.rows(rows2, copy = true)
rows1.uniq!
rows2.uniq!
assert_equal([[1]], m1.to_a)
assert_equal([[1], [1]], m2.to_a)
end
def test_to_matrix
assert @m1.equal? @m1.to_matrix
end
def test_columns
assert_equal(@m1, Matrix.columns([[1, 4], [2, 5], [3, 6]]))
end
def test_diagonal
assert_equal(Matrix.empty(0, 0), Matrix.diagonal( ))
assert_equal(Matrix[[3,0,0],[0,2,0],[0,0,1]], Matrix.diagonal(3, 2, 1))
assert_equal(Matrix[[4,0,0,0],[0,3,0,0],[0,0,2,0],[0,0,0,1]], Matrix.diagonal(4, 3, 2, 1))
end
def test_scalar
assert_equal(Matrix.empty(0, 0), Matrix.scalar(0, 1))
assert_equal(Matrix[[2,0,0],[0,2,0],[0,0,2]], Matrix.scalar(3, 2))
assert_equal(Matrix[[2,0,0,0],[0,2,0,0],[0,0,2,0],[0,0,0,2]], Matrix.scalar(4, 2))
end
def test_identity2
assert_equal(Matrix[[1,0,0],[0,1,0],[0,0,1]], Matrix.identity(3))
assert_equal(Matrix[[1,0,0],[0,1,0],[0,0,1]], Matrix.unit(3))
assert_equal(Matrix[[1,0,0],[0,1,0],[0,0,1]], Matrix.I(3))
assert_equal(Matrix[[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]], Matrix.identity(4))
end
def test_zero
assert_equal(Matrix[[0,0,0],[0,0,0],[0,0,0]], Matrix.zero(3))
assert_equal(Matrix[[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]], Matrix.zero(4))
assert_equal(Matrix[[0]], Matrix.zero(1))
end
def test_row_vector
assert_equal(Matrix[[1,2,3,4]], Matrix.row_vector([1,2,3,4]))
end
def test_column_vector
assert_equal(Matrix[[1],[2],[3],[4]], Matrix.column_vector([1,2,3,4]))
end
def test_empty
m = Matrix.empty(2, 0)
assert_equal(Matrix[ [], [] ], m)
n = Matrix.empty(0, 3)
assert_equal(Matrix.columns([ [], [], [] ]), n)
assert_equal(Matrix[[0, 0, 0], [0, 0, 0]], m * n)
end
def test_row
assert_equal(Vector[1, 2, 3], @m1.row(0))
assert_equal(Vector[4, 5, 6], @m1.row(1))
a = []; @m1.row(0) {|x| a << x }
assert_equal([1, 2, 3], a)
end
def test_column
assert_equal(Vector[1, 4], @m1.column(0))
assert_equal(Vector[2, 5], @m1.column(1))
assert_equal(Vector[3, 6], @m1.column(2))
a = []; @m1.column(0) {|x| a << x }
assert_equal([1, 4], a)
end
def test_collect
m1 = Matrix.zero(2,2)
m2 = Matrix.build(3,4){|row, col| 1}
assert_equal(Matrix[[5, 5, 5, 5], [5, 5, 5, 5], [5, 5, 5, 5]], m2.collect{|e| e * 5})
assert_equal(Matrix[[7, 0],[0, 7]], m1.collect(:diagonal){|e| e + 7})
assert_equal(Matrix[[0, 5],[5, 0]], m1.collect(:off_diagonal){|e| e + 5})
assert_equal(Matrix[[8, 1, 1, 1], [8, 8, 1, 1], [8, 8, 8, 1]], m2.collect(:lower){|e| e + 7})
assert_equal(Matrix[[1, 1, 1, 1], [-11, 1, 1, 1], [-11, -11, 1, 1]], m2.collect(:strict_lower){|e| e - 12})
assert_equal(Matrix[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], m2.collect(:strict_upper){|e| e ** 2})
assert_equal(Matrix[[-1, -1, -1, -1], [1, -1, -1, -1], [1, 1, -1, -1]], m2.collect(:upper){|e| -e})
assert_raise(ArgumentError) {m1.collect(:test){|e| e + 7}}
assert_not_equal(m2, m2.collect {|e| e * 2 })
end
def test_minor
assert_equal(Matrix[[1, 2], [4, 5]], @m1.minor(0..1, 0..1))
assert_equal(Matrix[[2], [5]], @m1.minor(0..1, 1..1))
assert_equal(Matrix[[4, 5]], @m1.minor(1..1, 0..1))
assert_equal(Matrix[[1, 2], [4, 5]], @m1.minor(0, 2, 0, 2))
assert_equal(Matrix[[4, 5]], @m1.minor(1, 1, 0, 2))
assert_equal(Matrix[[2], [5]], @m1.minor(0, 2, 1, 1))
assert_raise(ArgumentError) { @m1.minor(0) }
end
def test_first_minor
assert_equal(Matrix.empty(0, 0), Matrix[[1]].first_minor(0, 0))
assert_equal(Matrix.empty(0, 2), Matrix[[1, 4, 2]].first_minor(0, 1))
assert_equal(Matrix[[1, 3]], @m1.first_minor(1, 1))
assert_equal(Matrix[[4, 6]], @m1.first_minor(0, 1))
assert_equal(Matrix[[1, 2]], @m1.first_minor(1, 2))
assert_raise(RuntimeError) { Matrix.empty(0, 0).first_minor(0, 0) }
assert_raise(ArgumentError) { @m1.first_minor(4, 0) }
assert_raise(ArgumentError) { @m1.first_minor(0, -1) }
assert_raise(ArgumentError) { @m1.first_minor(-1, 4) }
end
def test_cofactor
assert_equal(1, Matrix[[1]].cofactor(0, 0))
assert_equal(9, Matrix[[7,6],[3,9]].cofactor(0, 0))
assert_equal(0, Matrix[[0,0],[0,0]].cofactor(0, 0))
assert_equal(3, Matrix[[0,0,1],[0,7,6],[1,3,9]].cofactor(1, 0))
assert_equal(-21, Matrix[[7,0,1,0,12],[8,1,1,9,1],[4,0,0,-7,17],[-1,0,0,-4,8],[10,1,1,8,6]].cofactor(2, 3))
assert_raise(RuntimeError) { Matrix.empty(0, 0).cofactor(0, 0) }
assert_raise(ArgumentError) { Matrix[[0,0],[0,0]].cofactor(-1, 4) }
assert_raise(ExceptionForMatrix::ErrDimensionMismatch) { Matrix[[2,0,1],[0,-2,2]].cofactor(0, 0) }
end
def test_adjugate
assert_equal(Matrix.empty, Matrix.empty.adjugate)
assert_equal(Matrix[[1]], Matrix[[5]].adjugate)
assert_equal(Matrix[[9,-6],[-3,7]], Matrix[[7,6],[3,9]].adjugate)
assert_equal(Matrix[[45,3,-7],[6,-1,0],[-7,0,0]], Matrix[[0,0,1],[0,7,6],[1,3,9]].adjugate)
assert_equal(Matrix.identity(5), (@a5.adjugate * @a5) / @a5.det)
assert_equal(Matrix.I(3), Matrix.I(3).adjugate)
assert_equal((@a3 * @b3).adjugate, @b3.adjugate * @a3.adjugate)
assert_equal(4**(@a3.row_count-1) * @a3.adjugate, (4 * @a3).adjugate)
assert_raise(ExceptionForMatrix::ErrDimensionMismatch) { @m1.adjugate }
end
def test_laplace_expansion
assert_equal(1, Matrix[[1]].laplace_expansion(row: 0))
assert_equal(45, Matrix[[7,6], [3,9]].laplace_expansion(row: 1))
assert_equal(0, Matrix[[0,0],[0,0]].laplace_expansion(column: 0))
assert_equal(-7, Matrix[[0,0,1],[0,7,6],[1,3,9]].laplace_expansion(column: 2))
assert_equal(Vector[3, -2], Matrix[[Vector[1, 0], Vector[0, 1]], [2, 3]].laplace_expansion(row: 0))
assert_raise(ExceptionForMatrix::ErrDimensionMismatch) { @m1.laplace_expansion(row: 1) }
assert_raise(ArgumentError) { Matrix[[7,6], [3,9]].laplace_expansion() }
assert_raise(ArgumentError) { Matrix[[7,6], [3,9]].laplace_expansion(foo: 1) }
assert_raise(ArgumentError) { Matrix[[7,6], [3,9]].laplace_expansion(row: 1, column: 1) }
assert_raise(ArgumentError) { Matrix[[7,6], [3,9]].laplace_expansion(row: 2) }
assert_raise(ArgumentError) { Matrix[[0,0,1],[0,7,6],[1,3,9]].laplace_expansion(column: -1) }
assert_raise(RuntimeError) { Matrix.empty(0, 0).laplace_expansion(row: 0) }
end
def test_regular?
assert(Matrix[[1, 0], [0, 1]].regular?)
assert(Matrix[[1, 0, 0], [0, 1, 0], [0, 0, 1]].regular?)
assert(!Matrix[[1, 0, 0], [0, 0, 1], [0, 0, 1]].regular?)
end
def test_singular?
assert(!Matrix[[1, 0], [0, 1]].singular?)
assert(!Matrix[[1, 0, 0], [0, 1, 0], [0, 0, 1]].singular?)
assert(Matrix[[1, 0, 0], [0, 0, 1], [0, 0, 1]].singular?)
end
def test_square?
assert(Matrix[[1, 0], [0, 1]].square?)
assert(Matrix[[1, 0, 0], [0, 1, 0], [0, 0, 1]].square?)
assert(Matrix[[1, 0, 0], [0, 0, 1], [0, 0, 1]].square?)
assert(!Matrix[[1, 0, 0], [0, 1, 0]].square?)
end
def test_mul
assert_equal(Matrix[[2,4],[6,8]], Matrix[[2,4],[6,8]] * Matrix.I(2))
assert_equal(Matrix[[4,8],[12,16]], Matrix[[2,4],[6,8]] * 2)
assert_equal(Matrix[[4,8],[12,16]], 2 * Matrix[[2,4],[6,8]])
assert_equal(Matrix[[14,32],[32,77]], @m1 * @m1.transpose)
assert_equal(Matrix[[17,22,27],[22,29,36],[27,36,45]], @m1.transpose * @m1)
assert_equal(Vector[14,32], @m1 * Vector[1,2,3])
o = Object.new
def o.coerce(m)
[m, m.transpose]
end
assert_equal(Matrix[[14,32],[32,77]], @m1 * o)
end
def test_add
assert_equal(Matrix[[6,0],[-4,12]], Matrix.scalar(2,5) + Matrix[[1,0],[-4,7]])
assert_equal(Matrix[[3,5,7],[9,11,13]], @m1 + @n1)
assert_equal(Matrix[[3,5,7],[9,11,13]], @n1 + @m1)
assert_equal(Matrix[[2],[4],[6]], Matrix[[1],[2],[3]] + Vector[1,2,3])
assert_raise(Matrix::ErrOperationNotDefined) { @m1 + 1 }
o = Object.new
def o.coerce(m)
[m, m]
end
assert_equal(Matrix[[2,4,6],[8,10,12]], @m1 + o)
end
def test_sub
assert_equal(Matrix[[4,0],[4,-2]], Matrix.scalar(2,5) - Matrix[[1,0],[-4,7]])
assert_equal(Matrix[[-1,-1,-1],[-1,-1,-1]], @m1 - @n1)
assert_equal(Matrix[[1,1,1],[1,1,1]], @n1 - @m1)
assert_equal(Matrix[[0],[0],[0]], Matrix[[1],[2],[3]] - Vector[1,2,3])
assert_raise(Matrix::ErrOperationNotDefined) { @m1 - 1 }
o = Object.new
def o.coerce(m)
[m, m]
end
assert_equal(Matrix[[0,0,0],[0,0,0]], @m1 - o)
end
def test_div
assert_equal(Matrix[[0,1,1],[2,2,3]], @m1 / 2)
assert_equal(Matrix[[1,1],[1,1]], Matrix[[2,2],[2,2]] / Matrix.scalar(2,2))
o = Object.new
def o.coerce(m)
[m, Matrix.scalar(2,2)]
end
assert_equal(Matrix[[1,1],[1,1]], Matrix[[2,2],[2,2]] / o)
end
def test_hadamard_product
assert_equal(Matrix[[1,4], [9,16]], Matrix[[1,2], [3,4]].hadamard_product(Matrix[[1,2], [3,4]]))
assert_equal(Matrix[[2, 6, 12], [20, 30, 42]], @m1.hadamard_product(@n1))
o = Object.new
def o.to_matrix
Matrix[[1, 2, 3], [-1, 0, 1]]
end
assert_equal(Matrix[[1, 4, 9], [-4, 0, 6]], @m1.hadamard_product(o))
e = Matrix.empty(3, 0)
assert_equal(e, e.hadamard_product(e))
e = Matrix.empty(0, 3)
assert_equal(e, e.hadamard_product(e))
end
def test_exp
assert_equal(Matrix[[67,96],[48,99]], Matrix[[7,6],[3,9]] ** 2)
assert_equal(Matrix.I(5), Matrix.I(5) ** -1)
assert_raise(Matrix::ErrOperationNotDefined) { Matrix.I(5) ** Object.new }
end
def test_det
assert_equal(Matrix.instance_method(:determinant), Matrix.instance_method(:det))
end
def test_rank2
assert_equal(2, Matrix[[7,6],[3,9]].rank)
assert_equal(0, Matrix[[0,0],[0,0]].rank)
assert_equal(3, Matrix[[0,0,1],[0,7,6],[1,3,9]].rank)
assert_equal(1, Matrix[[0,1],[0,1],[0,1]].rank)
assert_equal(2, @m1.rank)
end
def test_trace
assert_equal(1+5+9, Matrix[[1,2,3],[4,5,6],[7,8,9]].trace)
end
def test_transpose
assert_equal(Matrix[[1,4],[2,5],[3,6]], @m1.transpose)
end
def test_conjugate
assert_equal(Matrix[[Complex(1,-2), Complex(0,-1), 0], [1, 2, 3]], @c1.conjugate)
end
def test_eigensystem
m = Matrix[[1, 2], [3, 4]]
v, d, v_inv = m.eigensystem
assert(d.diagonal?)
assert_equal(v.inv, v_inv)
assert_equal((v * d * v_inv).round(5), m)
end
def test_imaginary
assert_equal(Matrix[[2, 1, 0], [0, 0, 0]], @c1.imaginary)
end
def test_lup
m = Matrix[[1, 2], [3, 4]]
l, u, p = m.lup
assert(l.lower_triangular?)
assert(u.upper_triangular?)
assert(p.permutation?)
assert(l * u == p * m)
assert_equal(m.lup.solve([2, 5]), Vector[1, Rational(1,2)])
end
def test_real
assert_equal(Matrix[[1, 0, 0], [1, 2, 3]], @c1.real)
end
def test_rect
assert_equal([Matrix[[1, 0, 0], [1, 2, 3]], Matrix[[2, 1, 0], [0, 0, 0]]], @c1.rect)
end
def test_row_vectors
assert_equal([Vector[1,2,3], Vector[4,5,6]], @m1.row_vectors)
end
def test_column_vectors
assert_equal([Vector[1,4], Vector[2,5], Vector[3,6]], @m1.column_vectors)
end
def test_to_s
assert_equal("Matrix[[1, 2, 3], [4, 5, 6]]", @m1.to_s)
assert_equal("Matrix.empty(0, 0)", Matrix[].to_s)
assert_equal("Matrix.empty(1, 0)", Matrix[[]].to_s)
end
def test_inspect
assert_equal("Matrix[[1, 2, 3], [4, 5, 6]]", @m1.inspect)
assert_equal("Matrix.empty(0, 0)", Matrix[].inspect)
assert_equal("Matrix.empty(1, 0)", Matrix[[]].inspect)
end
def test_scalar_add
s1 = @m1.coerce(1).first
assert_equal(Matrix[[1]], (s1 + 0) * Matrix[[1]])
assert_raise(Matrix::ErrOperationNotDefined) { s1 + Vector[0] }
assert_raise(Matrix::ErrOperationNotDefined) { s1 + Matrix[[0]] }
o = Object.new
def o.coerce(x)
[1, 1]
end
assert_equal(2, s1 + o)
end
def test_scalar_sub
s1 = @m1.coerce(1).first
assert_equal(Matrix[[1]], (s1 - 0) * Matrix[[1]])
assert_raise(Matrix::ErrOperationNotDefined) { s1 - Vector[0] }
assert_raise(Matrix::ErrOperationNotDefined) { s1 - Matrix[[0]] }
o = Object.new
def o.coerce(x)
[1, 1]
end
assert_equal(0, s1 - o)
end
def test_scalar_mul
s1 = @m1.coerce(1).first
assert_equal(Matrix[[1]], (s1 * 1) * Matrix[[1]])
assert_equal(Vector[2], s1 * Vector[2])
assert_equal(Matrix[[2]], s1 * Matrix[[2]])
o = Object.new
def o.coerce(x)
[1, 1]
end
assert_equal(1, s1 * o)
end
def test_scalar_div
s1 = @m1.coerce(1).first
assert_equal(Matrix[[1]], (s1 / 1) * Matrix[[1]])
assert_raise(Matrix::ErrOperationNotDefined) { s1 / Vector[0] }
assert_equal(Matrix[[Rational(1,2)]], s1 / Matrix[[2]])
o = Object.new
def o.coerce(x)
[1, 1]
end
assert_equal(1, s1 / o)
end
def test_scalar_pow
s1 = @m1.coerce(1).first
assert_equal(Matrix[[1]], (s1 ** 1) * Matrix[[1]])
assert_raise(Matrix::ErrOperationNotDefined) { s1 ** Vector[0] }
assert_raise(Matrix::ErrOperationNotImplemented) { s1 ** Matrix[[1]] }
o = Object.new
def o.coerce(x)
[1, 1]
end
assert_equal(1, s1 ** o)
end
def test_abs
s1 = @a3.abs
assert_equal(s1, Matrix[[4, 1, 3], [0, 3, 7], [11, 4, 2]])
end
def test_hstack
assert_equal Matrix[[1,2,3,2,3,4,1,2,3], [4,5,6,5,6,7,4,5,6]],
@m1.hstack(@n1, @m1)
# Error checking:
assert_raise(TypeError) { @m1.hstack(42) }
assert_raise(TypeError) { Matrix.hstack(42, @m1) }
assert_raise(Matrix::ErrDimensionMismatch) { @m1.hstack(Matrix.identity(3)) }
assert_raise(Matrix::ErrDimensionMismatch) { @e1.hstack(@e2) }
# Corner cases:
assert_equal @m1, @m1.hstack
assert_equal @e1, @e1.hstack(@e1)
assert_equal Matrix.empty(0,6), @e2.hstack(@e2)
assert_equal SubMatrix, SubMatrix.hstack(@e1).class
# From Vectors:
assert_equal Matrix[[1, 3],[2, 4]], Matrix.hstack(Vector[1,2], Vector[3, 4])
end
def test_vstack
assert_equal Matrix[[1,2,3], [4,5,6], [2,3,4], [5,6,7], [1,2,3], [4,5,6]],
@m1.vstack(@n1, @m1)
# Error checking:
assert_raise(TypeError) { @m1.vstack(42) }
assert_raise(TypeError) { Matrix.vstack(42, @m1) }
assert_raise(Matrix::ErrDimensionMismatch) { @m1.vstack(Matrix.identity(2)) }
assert_raise(Matrix::ErrDimensionMismatch) { @e1.vstack(@e2) }
# Corner cases:
assert_equal @m1, @m1.vstack
assert_equal Matrix.empty(4,0), @e1.vstack(@e1)
assert_equal @e2, @e2.vstack(@e2)
assert_equal SubMatrix, SubMatrix.vstack(@e1).class
# From Vectors:
assert_equal Matrix[[1],[2],[3]], Matrix.vstack(Vector[1,2], Vector[3])
end
def test_combine
x = Matrix[[6, 6], [4, 4]]
y = Matrix[[1, 2], [3, 4]]
assert_equal Matrix[[5, 4], [1, 0]], Matrix.combine(x, y) {|a, b| a - b}
assert_equal Matrix[[5, 4], [1, 0]], x.combine(y) {|a, b| a - b}
# Without block
assert_equal Matrix[[5, 4], [1, 0]], Matrix.combine(x, y).each {|a, b| a - b}
# With vectors
assert_equal Matrix[[111], [222]], Matrix.combine(Matrix[[1], [2]], Vector[10,20], Vector[100,200], &:sum)
# Basic checks
assert_raise(Matrix::ErrDimensionMismatch) { @m1.combine(x) { raise } }
# Edge cases
assert_equal Matrix.empty, Matrix.combine{ raise }
assert_equal Matrix.empty(3,0), Matrix.combine(Matrix.empty(3,0), Matrix.empty(3,0)) { raise }
assert_equal Matrix.empty(0,3), Matrix.combine(Matrix.empty(0,3), Matrix.empty(0,3)) { raise }
end
def test_set_element
src = Matrix[
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
]
rows = {
range: [1..2, 1...3, 1..-1, -2..2, 1.., 1..., -2.., -2...],
int: [2, -1],
invalid: [-4, 4, -4..2, 2..-4, 0...0, 2..0, -4..],
}
columns = {
range: [2..3, 2...4, 2..-1, -2..3, 2.., 2..., -2..., -2..],
int: [3, -1],
invalid: [-5, 5, -5..2, 2..-5, 0...0, -5..],
}
values = {
element: 42,
matrix: Matrix[[20, 21], [22, 23]],
vector: Vector[30, 31],
row: Matrix[[60, 61]],
column: Matrix[[50], [51]],
mismatched_matrix: Matrix.identity(3),
mismatched_vector: Vector[0, 1, 2, 3],
}
solutions = {
[:int, :int] => {
element: Matrix[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 42]],
},
[:range , :int] => {
element: Matrix[[1, 2, 3, 4], [5, 6, 7, 42], [9, 10, 11, 42]],
column: Matrix[[1, 2, 3, 4], [5, 6, 7, 50], [9, 10, 11, 51]],
vector: Matrix[[1, 2, 3, 4], [5, 6, 7, 30], [9, 10, 11, 31]],
},
[:int, :range] => {
element: Matrix[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 42, 42]],
row: Matrix[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 60, 61]],
vector: Matrix[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 30, 31]],
},
[:range , :range] => {
element: Matrix[[1, 2, 3, 4], [5, 6, 42, 42], [9, 10, 42, 42]],
matrix: Matrix[[1, 2, 3, 4], [5, 6, 20, 21], [9, 10, 22, 23]],
},
}
solutions.default = Hash.new(IndexError)
rows.each do |row_style, row_arguments|
row_arguments.each do |row_argument|
columns.each do |column_style, column_arguments|
column_arguments.each do |column_argument|
values.each do |value_type, value|
expected = solutions[[row_style, column_style]][value_type] || Matrix::ErrDimensionMismatch
result = src.clone
begin
result[row_argument, column_argument] = value
assert_equal expected, result,
"m[#{row_argument.inspect}][#{column_argument.inspect}] = #{value.inspect} failed"
rescue Exception => e
raise if e.class != expected
end
end
end
end
end
end
end
def test_map!
m1 = Matrix.zero(2,2)
m2 = Matrix.build(3,4){|row, col| 1}
m3 = Matrix.zero(3,5).freeze
m4 = Matrix.empty.freeze
assert_equal Matrix[[5, 5, 5, 5], [5, 5, 5, 5], [5, 5, 5, 5]], m2.map!{|e| e * 5}
assert_equal Matrix[[7, 0],[0, 7]], m1.map!(:diagonal){|e| e + 7}
assert_equal Matrix[[7, 5],[5, 7]], m1.map!(:off_diagonal){|e| e + 5}
assert_equal Matrix[[12, 5, 5, 5], [12, 12, 5, 5], [12, 12, 12, 5]], m2.map!(:lower){|e| e + 7}
assert_equal Matrix[[12, 5, 5, 5], [0, 12, 5, 5], [0, 0, 12, 5]], m2.map!(:strict_lower){|e| e - 12}
assert_equal Matrix[[12, 25, 25, 25], [0, 12, 25, 25], [0, 0, 12, 25]], m2.map!(:strict_upper){|e| e ** 2}
assert_equal Matrix[[-12, -25, -25, -25], [0, -12, -25, -25], [0, 0, -12, -25]], m2.map!(:upper){|e| -e}
assert_equal m1, m1.map!{|e| e ** 2 }
assert_equal m2, m2.map!(:lower){ |e| e - 3 }
assert_raise(ArgumentError) {m1.map!(:test){|e| e + 7}}
assert_raise(FrozenError) { m3.map!{|e| e * 2} }
assert_raise(FrozenError) { m4.map!{} }
end
def test_freeze
m = Matrix[[1, 2, 3],[4, 5, 6]]
f = m.freeze
assert_equal true, f.frozen?
assert m.equal?(f)
assert m.equal?(f.freeze)
assert_raise(FrozenError){ m[0, 1] = 56 }
assert_equal m.dup, m
end
def test_clone
a = Matrix[[4]]
def a.foo
42
end
m = a.clone
m[0, 0] = 2
assert_equal a, m * 2
assert_equal 42, m.foo
a.freeze
m = a.clone
assert m.frozen?
assert_equal 42, m.foo
end
def test_dup
a = Matrix[[4]]
def a.foo
42
end
a.freeze
m = a.dup
m[0, 0] = 2
assert_equal a, m * 2
assert !m.respond_to?(:foo)
end
def test_eigenvalues_and_eigenvectors_symmetric
m = Matrix[
[8, 1],
[1, 8]
]
values = m.eigensystem.eigenvalues
assert_in_epsilon(7.0, values[0])
assert_in_epsilon(9.0, values[1])
vectors = m.eigensystem.eigenvectors
assert_in_epsilon(-vectors[0][0], vectors[0][1])
assert_in_epsilon(vectors[1][0], vectors[1][1])
end
def test_eigenvalues_and_eigenvectors_nonsymmetric
m = Matrix[
[8, 1],
[4, 5]
]
values = m.eigensystem.eigenvalues
assert_in_epsilon(9.0, values[0])
assert_in_epsilon(4.0, values[1])
vectors = m.eigensystem.eigenvectors
assert_in_epsilon(vectors[0][0], vectors[0][1])
assert_in_epsilon(-4 * vectors[1][0], vectors[1][1])
end
end
/*
* Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
//!
//! sampleMovieLens.cpp
//! This file contains the implementation of the MovieLens sample. It creates the network using
//! the MLP NCF Uff model.
//! It can be run with the following command line:
//! Command: ./sample_movielens [-h or --help] [-b NUM_USERS] [--useDLACore=<int>] [--verbose]
//!
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstring>
#include <ctime>
#include <cuda_profiler_api.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <map>
#include <sstream>
#include <vector>
#include "NvInfer.h"
#include "NvUffParser.h"
#include "argsParser.h"
#include "buffers.h"
#include "common.h"
#include "logger.h"
const std::string gSampleName = "TensorRT.sample_movielens";
// The OutputParams struct holds intermediate/final outputs generated by the MovieLens structure per user.
struct OutputParams
{
int32_t userId; // The user Id per batch.
int32_t expectedPredictedMaxRatingItem; // The Expected Max Rating Item per user (inference ground truth).
float expectedPredictedMaxRatingItemProb; // The Expected Max Rating Probability. (inference ground truth).
std::vector<int32_t> allItems; // All inferred items per user.
std::vector<std::pair<int32_t, float>> itemProbPairVec; // Expected topK items and prob per user.
}; // struct OutputParams
//!
//! \brief The SampleMovieLensParams structure groups the additional parameters required by
//! the MovieLens sample.
//!
struct SampleMovieLensParams : public samplesCommon::UffSampleParams
{
int32_t embeddingVecSize;
int32_t numUsers; // Total number of users. Should be equal to ratings file users count.
int32_t topKMovies; // TopK movies per user.
int32_t numMoviesPerUser; // The number of movies per user.
std::string ratingInputFile; // The input rating file.
bool strict; // Option to run with strict type requirements.
// The below structures are used to compare the predicted values to inference (ground truth)
std::map<int32_t, std::vector<int32_t>> userToItemsMap; // Lookup for inferred items for each user.
std::map<int32_t, std::vector<std::pair<int32_t, float>>>
userToExpectedItemProbMap; // Lookup for topK items and probs for each user.
std::vector<OutputParams> outParamsVec;
};
//!
//! \brief The SampleMovieLens class implements the MovieLens sample
//!
//! \details It creates the network using a uff model
//!
class SampleMovieLens
{
template <typename T>
using SampleUniquePtr = std::unique_ptr<T, samplesCommon::InferDeleter>;
public:
SampleMovieLens(const SampleMovieLensParams& params)
: mParams(params)
{
}
//!
//! \brief Builds the network engine
//!
bool build();
//!
//! \brief Runs the TensorRT inference engine for this sample
//!
bool infer();
//!
//! \brief Used to clean up any state created in the sample class
//!
bool teardown();
private:
//!
//! \brief Parses a Uff model for a MLP NCF model, creates a TensorRT network, and builds a TensorRT engine.
//!
void constructNetwork(SampleUniquePtr<nvinfer1::IBuilder>& builder,
SampleUniquePtr<nvinfer1::INetworkDefinition>& network, SampleUniquePtr<nvinfer1::IBuilderConfig>& config,
SampleUniquePtr<nvuffparser::IUffParser>& parser);
//!
//! \brief Copies a batch of input data from SampleMovieLensParams into managed input buffers
//!
bool processInput(const samplesCommon::BufferManager& buffers);
//!
//! \brief Helper function to read the next line of the MovieLens dataset
    //!        .csv file and return the contents of the line after the delimiter.
std::string readNextLine(std::ifstream& file, char delim);
//!
    //! \brief Extracts needed dataset values for a single user in the MovieLens
    //!        dataset .csv file, and populates the corresponding ground truth data struct
//!
void readInputSample(std::ifstream& file, OutputParams& outParams, std::string line);
//!
//! \brief Parses the MovieLens dataset and populates the SampleMovieLensParams data structure
//!
void parseMovieLensData();
//!
//! \brief Prints the expected recommendation results (ground truth)
//! from the MovieLens dataset for a given user
//!
void printOutputParams(OutputParams& outParams);
//!
//! \brief Verifies the inference output with ground truth and logs the results
//!
bool verifyOutput(
uint32_t* userInputPtr, uint32_t* /*itemInputPtr*/, uint32_t* topKItemNumberPtr, float* topKItemProbPtr);
SampleMovieLensParams mParams;
std::shared_ptr<nvinfer1::ICudaEngine> mEngine{nullptr}; //!< The TensorRT engine used to run the network
};
//!
//! \brief Creates the network, configures the builder and creates
//! the network engine
//!
//! \details This function creates the MLP NCF network by parsing the Uff model
//! and builds the engine that will be used to generate recommendations (mEngine)
//!
//! \return Returns true if the engine was created successfully and false
//! otherwise
//!
bool SampleMovieLens::build()
{
auto builder = SampleUniquePtr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(sample::gLogger.getTRTLogger()));
if (!builder)
{
return false;
}
auto network = SampleUniquePtr<nvinfer1::INetworkDefinition>(builder->createNetwork());
if (!network)
{
return false;
}
auto config = SampleUniquePtr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
if (!config)
{
return false;
}
auto parser = SampleUniquePtr<nvuffparser::IUffParser>(nvuffparser::createUffParser());
if (!parser)
{
return false;
}
builder->setMaxBatchSize(mParams.batchSize);
config->setMaxWorkspaceSize(1_GiB);
config->setFlag(BuilderFlag::kGPU_FALLBACK);
    if (mParams.strict)
    {
        config->setFlag(BuilderFlag::kSTRICT_TYPES);
    }
if (mParams.fp16)
{
config->setFlag(BuilderFlag::kFP16);
}
samplesCommon::enableDLA(builder.get(), config.get(), mParams.dlaCore);
constructNetwork(builder, network, config, parser);
if (!mEngine)
{
return false;
}
return true;
}
//!
//! \brief Parses a Uff model for a MLP NCF model, creates a TensorRT network, and builds a TensorRT engine.
//!
void SampleMovieLens::constructNetwork(SampleUniquePtr<nvinfer1::IBuilder>& builder,
SampleUniquePtr<nvinfer1::INetworkDefinition>& network, SampleUniquePtr<nvinfer1::IBuilderConfig>& config,
SampleUniquePtr<nvuffparser::IUffParser>& parser)
{
nvinfer1::Dims inputIndices;
inputIndices.nbDims = 3;
inputIndices.d[0] = mParams.numMoviesPerUser;
inputIndices.d[1] = 1;
inputIndices.d[2] = 1;
// There should be two input and three output tensors
assert(mParams.inputTensorNames.size() == 2);
assert(mParams.outputTensorNames.size() == 3);
parser->registerInput(mParams.inputTensorNames[0].c_str(), inputIndices, nvuffparser::UffInputOrder::kNCHW);
parser->registerInput(mParams.inputTensorNames[1].c_str(), inputIndices, nvuffparser::UffInputOrder::kNCHW);
parser->registerOutput(mParams.outputTensorNames[0].c_str());
auto dType = mParams.fp16 ? nvinfer1::DataType::kHALF : nvinfer1::DataType::kFLOAT;
sample::gLogInfo << "Begin parsing model..." << std::endl;
// Parse the uff model to populate the network
if (!parser->parse(mParams.uffFileName.c_str(), *network, dType))
{
sample::gLogError << "Failure while parsing UFF file" << std::endl;
return;
}
sample::gLogInfo << "End parsing model..." << std::endl;
// Add postprocessing i.e. topk layer to the UFF Network
// Retrieve last layer of UFF Network
auto uffLastLayer = network->getLayer(network->getNbLayers() - 1);
// Reshape output of fully connected layer numOfMovies x 1 x 1 x 1 to numOfMovies x 1 x 1.
    auto reshapeLayer = network->addShuffle(*uffLastLayer->getOutput(0));
    assert(reshapeLayer != nullptr);
    reshapeLayer->setReshapeDimensions(nvinfer1::Dims3(1, mParams.numMoviesPerUser, 1));
// Apply TopK layer to retrieve item probabilities and corresponding index number.
auto topK = network->addTopK(*reshapeLayer->getOutput(0), nvinfer1::TopKOperation::kMAX, mParams.topKMovies, 0x2);
assert(topK != nullptr);
// Mark outputs for index and probs. Also need to set the item layer type == kINT32.
topK->getOutput(0)->setName(mParams.outputTensorNames[1].c_str());
topK->getOutput(1)->setName(mParams.outputTensorNames[2].c_str());
// Specify topK tensors as outputs
network->markOutput(*topK->getOutput(0));
network->markOutput(*topK->getOutput(1));
// Set the topK indices tensor as INT32 type
topK->getOutput(1)->setType(nvinfer1::DataType::kINT32);
sample::gLogInfo << "Done constructing network..." << std::endl;
mEngine = std::shared_ptr<nvinfer1::ICudaEngine>(
builder->buildEngineWithConfig(*network, *config), samplesCommon::InferDeleter());
}
//!
//! \brief Runs the TensorRT inference engine for this sample
//!
//! \details This function is the main execution function of the sample. It
//! allocates the buffer, sets inputs, executes the engine, and verifies the output.
//!
bool SampleMovieLens::infer()
{
// Create RAII buffer manager object
samplesCommon::BufferManager buffers(mEngine, mParams.batchSize);
auto context = SampleUniquePtr<nvinfer1::IExecutionContext>(mEngine->createExecutionContext());
if (!context)
{
return false;
}
if (!processInput(buffers))
{
return false;
}
// Create CUDA stream for the execution of this inference.
cudaStream_t stream;
CHECK(cudaStreamCreate(&stream));
samplesCommon::GpuTimer timer{stream};
timer.start();
// Asynchronously copy data from host input buffers to device input buffers
buffers.copyInputToDeviceAsync(stream);
// Asynchronously enqueue the inference work
if (!context->enqueue(mParams.batchSize, buffers.getDeviceBindings().data(), stream, nullptr))
{
return false;
}
// Asynchronously copy data from device output buffers to host output buffers
buffers.copyOutputToHostAsync(stream);
// Wait for the work in the stream to complete
    CHECK(cudaStreamSynchronize(stream));
timer.stop();
sample::gLogInfo << "Done execution. Duration : " << timer.microseconds() << " microseconds." << std::endl;
// Release stream
cudaStreamDestroy(stream);
float* topKItemProb = static_cast<float*>(buffers.getHostBuffer(mParams.outputTensorNames[1]));
uint32_t* topKItemNumber = static_cast<uint32_t*>(buffers.getHostBuffer(mParams.outputTensorNames[2]));
uint32_t* userInput = static_cast<uint32_t*>(buffers.getHostBuffer(mParams.inputTensorNames[0]));
uint32_t* itemInput = static_cast<uint32_t*>(buffers.getHostBuffer(mParams.inputTensorNames[1]));
return SampleMovieLens::verifyOutput(userInput, itemInput, topKItemNumber, topKItemProb);
}
//!
//! \brief Copies a batch of input data from SampleMovieLensParams into managed input buffers
//!
bool SampleMovieLens::processInput(const samplesCommon::BufferManager& buffers)
{
// Parse ground truth data and inputs
SampleMovieLens::parseMovieLensData();
uint32_t* userInput = static_cast<uint32_t*>(buffers.getHostBuffer(mParams.inputTensorNames[0]));
uint32_t* itemInput = static_cast<uint32_t*>(buffers.getHostBuffer(mParams.inputTensorNames[1]));
// Copy batch of inputs to host buffers
for (int i = 0; i < mParams.batchSize; ++i)
{
for (int k = 0; k < mParams.numMoviesPerUser; ++k)
{
int idx = i * mParams.numMoviesPerUser + k;
userInput[idx] = mParams.outParamsVec[i].userId;
itemInput[idx] = mParams.outParamsVec[i].allItems.at(k);
}
}
return true;
}
//!
//! \brief Helper function to read the next line of the MovieLens dataset
//!        .csv file and return the contents of the line after the delimiter.
//!
//! \details This function is called from SampleMovieLens::readInputSample()
//! to extract the needed values per user.
std::string SampleMovieLens::readNextLine(std::ifstream& file, char delim)
{
std::string line;
std::getline(file, line);
auto pos = line.find(delim);
line = line.substr(pos + 1);
return line;
}
//!
//! \brief Extracts needed dataset values for a single user in the MovieLens
//!        dataset .csv file, and populates the corresponding ground truth data struct
//!
void SampleMovieLens::readInputSample(std::ifstream& file, OutputParams& outParams, std::string line)
{
// read user name
char delim = ':';
auto pos = line.find(delim);
line = line.substr(pos + 1);
outParams.userId = std::stoi(line);
// read items
std::string items = readNextLine(file, delim);
items = items.substr(2, items.size() - 2);
std::stringstream ss(items);
std::string i;
while (ss >> i)
{
if (ss.peek() == ',' || ss.peek() == ' ')
{
ss.ignore();
}
i = i.substr(0, i.size() - 1);
outParams.allItems.push_back(std::stoi(i));
}
// read expected predicted max rating item
outParams.expectedPredictedMaxRatingItem = std::stoi(readNextLine(file, delim));
// read expected predicted max rating prob
std::string prob = readNextLine(file, delim);
prob = prob.substr(2, prob.size() - 3);
outParams.expectedPredictedMaxRatingItemProb = std::stof(prob);
// skip line
std::getline(file, line);
std::getline(file, line);
// read all the top 10 prediction ratings
for (int i = 0; i < 10; ++i)
{
auto pos = line.find(delim);
int32_t item = std::stoi(line.substr(0, pos - 1));
float prob = std::stof(line.substr(pos + 2));
        outParams.itemProbPairVec.emplace_back(item, prob);
std::getline(file, line);
}
}
//!
//! \brief Parses the MovieLens dataset and populates the SampleMovieLensParams data structure
//!
void SampleMovieLens::parseMovieLensData()
{
std::ifstream file;
file.open(mParams.ratingInputFile, std::ios::binary);
std::string line;
int userIdx = 0;
while (std::getline(file, line) && userIdx < mParams.batchSize)
{
        OutputParams outParams;
        readInputSample(file, outParams, line);
        printOutputParams(outParams);
        // store the outParams in the class data structure.
        mParams.outParamsVec.push_back(outParams);
        mParams.userToItemsMap[userIdx] = std::move(outParams.allItems);
        mParams.userToExpectedItemProbMap[userIdx] = std::move(outParams.itemProbPairVec);
        userIdx++;
}
// number of users should be equal to number of users in rating file
assert(mParams.batchSize == userIdx);
}
bool SampleMovieLens::teardown()
{
nvuffparser::shutdownProtobufLibrary();
return true;
}
//!
//! \brief Prints the expected recommendation results (ground truth)
//! from the MovieLens dataset for a given user
//!
void SampleMovieLens::printOutputParams(OutputParams& outParams)
{
sample::gLogVerbose << "User Id : " << outParams.userId << std::endl;
sample::gLogVerbose << "Expected Predicted Max Rating Item : " << outParams.expectedPredictedMaxRatingItem
<< std::endl;
sample::gLogVerbose << "Expected Predicted Max Rating Prob : " << outParams.expectedPredictedMaxRatingItemProb
<< std::endl;
sample::gLogVerbose << "Total TopK Items : " << outParams.itemProbPairVec.size() << std::endl;
for (unsigned int i = 0; i < outParams.itemProbPairVec.size(); ++i)
{
sample::gLogVerbose << outParams.itemProbPairVec.at(i).first << " : " << outParams.itemProbPairVec.at(i).second
<< std::endl;
}
}
//!
//! \brief Compares the inference output with ground truth and logs the results
//!
bool SampleMovieLens::verifyOutput(
uint32_t* userInput, uint32_t* /*itemInput*/, uint32_t* topKItemNumber, float* topKItemProb)
{
bool pass{true};
sample::gLogInfo << "Num of users : " << mParams.batchSize << std::endl;
sample::gLogInfo << "Num of Movies : " << mParams.numMoviesPerUser << std::endl;
sample::gLogVerbose << "|-----------|------------|-----------------|-----------------|" << std::endl;
sample::gLogVerbose << "| User | Item | Expected Prob | Predicted Prob |" << std::endl;
sample::gLogVerbose << "|-----------|------------|-----------------|-----------------|" << std::endl;
for (int i = 0; i < mParams.batchSize; ++i)
{
int userIdx = userInput[i * mParams.numMoviesPerUser];
int maxPredictedIdx = topKItemNumber[i * mParams.topKMovies];
int maxExpectedItem = mParams.userToExpectedItemProbMap.at(userIdx).at(0).first;
int maxPredictedItem = mParams.userToItemsMap.at(userIdx).at(maxPredictedIdx);
pass &= maxExpectedItem == maxPredictedItem;
for (int k = 0; k < mParams.topKMovies; ++k)
{
int predictedIdx = topKItemNumber[i * mParams.topKMovies + k];
float predictedProb = topKItemProb[i * mParams.topKMovies + k];
float expectedProb = mParams.userToExpectedItemProbMap.at(userIdx).at(k).second;
int predictedItem = mParams.userToItemsMap.at(userIdx).at(predictedIdx);
sample::gLogVerbose << "|" << std::setw(10) << userIdx << " | " << std::setw(10) << predictedItem << " | "
<< std::setw(15) << expectedProb << " | " << std::setw(15) << predictedProb << " | "
<< std::endl;
}
}
for (int i = 0; i < mParams.batchSize; ++i)
{
int userIdx = userInput[i * mParams.numMoviesPerUser];
int maxPredictedIdx = topKItemNumber[i * mParams.topKMovies];
int maxExpectedItem = mParams.userToExpectedItemProbMap.at(userIdx).at(0).first;
int maxPredictedItem = mParams.userToItemsMap.at(userIdx).at(maxPredictedIdx);
sample::gLogInfo << "| User :" << std::setw(4) << userIdx << " | Expected Item :" << std::setw(5)
<< maxExpectedItem << " | Predicted Item :" << std::setw(5) << maxPredictedItem << " | "
<< std::endl;
}
return pass;
}
struct SampleMovieLensArgs
{
bool help{false};
int batchSize{32};
int dlaCore{-1};
bool fp16{false};
bool strict{false};
bool verbose{false};
};
//!
//! \brief Parses the command line arguments for the MovieLens sample, and returns failure
//! if arguments are incorrect
//!
bool parseSampleMovieLensArgs(SampleMovieLensArgs& args, int argc, char* argv[])
{
for (int i = 1; i < argc; ++i)
{
std::string argStr(argv[i]);
if (argStr == "-h" || argStr == "--help")
{
args.help = true;
return true;
}
if (argStr == "-b")
{
i++;
args.batchSize = std::atoi(argv[i]);
}
else if (argStr == "--fp16")
{
args.fp16 = true;
}
else if (argStr == "--strict")
{
args.strict = true;
}
else if (argStr == "--verbose")
{
args.verbose = true;
sample::setReportableSeverity(sample::Logger::Severity::kVERBOSE);
}
else if (argStr.substr(0, 13) == "--useDLACore=" && argStr.size() > 13)
{
args.dlaCore = std::stoi(argv[i] + 13);
}
else
{
return false;
}
}
return true;
}
//!
//! \brief Initializes members of the params struct using the
//! command line args
//!
SampleMovieLensParams initializeSampleParams(const SampleMovieLensArgs& args)
{
SampleMovieLensParams params;
params.dataDirs.push_back("data/movielens/");
params.dataDirs.push_back("data/samples/movielens/");
params.uffFileName = locateFile("sampleMovieLens.uff", params.dataDirs);
params.embeddingVecSize = 32;
params.topKMovies = 1;
params.numMoviesPerUser = 100;
params.ratingInputFile = locateFile("movielens_ratings.txt", params.dataDirs);
params.inputTensorNames.push_back("user_input");
params.inputTensorNames.push_back("item_input");
params.outputTensorNames.push_back("prediction/Sigmoid");
params.outputTensorNames.push_back("topk_values");
params.outputTensorNames.push_back("topk_items");
params.batchSize = args.batchSize;
params.dlaCore = args.dlaCore;
params.fp16 = args.fp16;
params.strict = args.strict;
return params;
}
//!
//! \brief Prints the help information for running this sample
//!
void printHelpInfo()
{
    std::cout << "Usage: ./sample_movielens [-h or --help] [-b NUM_USERS] [--useDLACore=<int>] [--fp16] [--strict] [--verbose]\n";
std::cout << "--help Display help information.\n";
std::cout << "--verbose Enable verbose prints.\n";
std::cout << "-b NUM_USERS Number of Users i.e. Batch Size (default numUsers==32).\n";
std::cout << "--useDLACore=N Specify a DLA engine for layers that support "
"DLA. Value can range from 0 to n-1, where n is the number of "
"DLA engines on the platform."
<< std::endl;
std::cout << "--fp16 Run in FP16 mode.\n";
std::cout << "--strict Run with strict type constraints." << std::endl;
}
int main(int argc, char** argv)
{
SampleMovieLensArgs args;
bool argsOK = parseSampleMovieLensArgs(args, argc, argv);
if (!argsOK)
{
sample::gLogError << "Invalid arguments" << std::endl;
printHelpInfo();
return EXIT_FAILURE;
}
if (args.help)
{
printHelpInfo();
return EXIT_SUCCESS;
}
auto sampleTest = sample::gLogger.defineTest(gSampleName, argc, argv);
sample::gLogger.reportTestStart(sampleTest);
SampleMovieLensParams params = initializeSampleParams(args);
SampleMovieLens sample(params);
sample::gLogInfo << "Building and running a GPU inference engine for MLP NCF model..." << std::endl;
if (!sample.build())
{
return sample::gLogger.reportFail(sampleTest);
}
if (!sample.infer())
{
return sample::gLogger.reportFail(sampleTest);
}
if (!sample.teardown())
{
return sample::gLogger.reportFail(sampleTest);
}
return sample::gLogger.reportPass(sampleTest);
}
| {
"pile_set_name": "Github"
} |
UC
==
<!-- markdownlint-disable line-length no-inline-html -->
<table>
<tr height=240>
<td>
<a href="https://github.com/alrra/browser-logos/tree/a94987f29719142668cdf960b3f624ce1a3c6aa8/src/uc">
<img width=230 src="https://raw.githubusercontent.com/alrra/browser-logos/a94987f29719142668cdf960b3f624ce1a3c6aa8/src/uc/uc_512x512.png" alt="UC browser logo">
</a>
</td>
</tr>
</table>
<!-- markdownlint-enable line-length no-inline-html -->
How to get the logo
-------------------
You can either:
* Install it using:
* [`npm`][npm]: `npm install --save-dev @browser-logos/uc`
* [`Yarn`][yarn]: `yarn add --dev @browser-logos/uc`
* Use [`cdnjs`][cdnjs].
<!-- Link labels: -->
[cdnjs]: https://cdnjs.com/libraries/browser-logos
[npm]: https://www.npmjs.com/
[yarn]: https://yarnpkg.com/
| {
"pile_set_name": "Github"
} |
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using SportsStore.Models;
using SportsStore.Models.ViewModels;
namespace SportsStore.Controllers {
public class CartController : Controller {
private IProductRepository repository;
private Cart cart;
public CartController(IProductRepository repo, Cart cartService) {
repository = repo;
cart = cartService;
}
public ViewResult Index(string returnUrl) {
return View(new CartIndexViewModel {
Cart = cart,
ReturnUrl = returnUrl
});
}
public RedirectToActionResult AddToCart(int productId, string returnUrl) {
Product product = repository.Products
.FirstOrDefault(p => p.ProductID == productId);
if (product != null) {
cart.AddItem(product, 1);
}
return RedirectToAction("Index", new { returnUrl });
}
public RedirectToActionResult RemoveFromCart(int productId,
string returnUrl) {
Product product = repository.Products
.FirstOrDefault(p => p.ProductID == productId);
if (product != null) {
cart.RemoveLine(product);
}
return RedirectToAction("Index", new { returnUrl });
}
}
}
 | {
"pile_set_name": "Github"
} |
def foo()
# ERROR:
foo(1, 2)
# ERROR:
foo(a_very_long_constant_name, 2)
# ERROR:
foo(unsafe(), # indeed
2)
# ERROR:
foo(bar(1, 3), 2)
foo(2, 1)
end
| {
"pile_set_name": "Github"
} |
<?php
declare(strict_types=1);
namespace OpenStack\Compute\v2;
/**
* Represents common constants.
*/
abstract class Enum
{
const REBOOT_SOFT = 'SOFT';
const REBOOT_HARD = 'HARD';
const CONSOLE_NOVNC = 'novnc';
const CONSOLE_XVPNC = 'xvpvnc';
const CONSOLE_RDP_HTML5 = 'rdp-html5';
const CONSOLE_SPICE_HTML5 = 'spice-html5';
const CONSOLE_SERIAL = 'serial';
}
| {
"pile_set_name": "Github"
} |
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_144) on Wed Sep 06 08:23:23 PDT 2017 -->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>NoProviderFoundException (Java(TM) EE 8 Specification APIs)</title>
<meta name="date" content="2017-09-06">
<link rel="stylesheet" type="text/css" href="../../stylesheet.css" title="Style">
<script type="text/javascript" src="../../script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="NoProviderFoundException (Java(TM) EE 8 Specification APIs)";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../../overview-summary.html">Overview</a></li>
<li><a href="package-summary.html">Package</a></li>
<li class="navBarCell1Rev">Class</li>
<li><a href="class-use/NoProviderFoundException.html">Use</a></li>
<li><a href="package-tree.html">Tree</a></li>
<li><a href="../../deprecated-list.html">Deprecated</a></li>
<li><a href="../../index-all.html">Index</a></li>
<li><a href="../../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="../../javax/validation/MessageInterpolator.Context.html" title="interface in javax.validation"><span class="typeNameLink">Prev Class</span></a></li>
<li><a href="../../javax/validation/OverridesAttribute.html" title="annotation in javax.validation"><span class="typeNameLink">Next Class</span></a></li>
</ul>
<ul class="navList">
<li><a href="../../index.html?javax/validation/NoProviderFoundException.html" target="_top">Frames</a></li>
<li><a href="NoProviderFoundException.html" target="_top">No Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="../../allclasses-noframe.html">All Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<div>
<ul class="subNavList">
<li>Summary: </li>
<li>Nested | </li>
<li>Field | </li>
<li><a href="#constructor.summary">Constr</a> | </li>
<li><a href="#methods.inherited.from.class.java.lang.Throwable">Method</a></li>
</ul>
<ul class="subNavList">
<li>Detail: </li>
<li>Field | </li>
<li><a href="#constructor.detail">Constr</a> | </li>
<li>Method</li>
</ul>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<!-- ======== START OF CLASS DATA ======== -->
<div class="header">
<div class="subTitle">javax.validation</div>
<h2 title="Class NoProviderFoundException" class="title">Class NoProviderFoundException</h2>
</div>
<div class="contentContainer">
<ul class="inheritance">
<li><a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true" title="class or interface in java.lang">java.lang.Object</a></li>
<li>
<ul class="inheritance">
<li><a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true" title="class or interface in java.lang">java.lang.Throwable</a></li>
<li>
<ul class="inheritance">
<li><a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Exception.html?is-external=true" title="class or interface in java.lang">java.lang.Exception</a></li>
<li>
<ul class="inheritance">
<li><a href="http://docs.oracle.com/javase/8/docs/api/java/lang/RuntimeException.html?is-external=true" title="class or interface in java.lang">java.lang.RuntimeException</a></li>
<li>
<ul class="inheritance">
<li><a href="../../javax/validation/ValidationException.html" title="class in javax.validation">javax.validation.ValidationException</a></li>
<li>
<ul class="inheritance">
<li>javax.validation.NoProviderFoundException</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<div class="description">
<ul class="blockList">
<li class="blockList">
<dl>
<dt>All Implemented Interfaces:</dt>
<dd><a href="http://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true" title="class or interface in java.io">Serializable</a></dd>
</dl>
<hr>
<br>
<pre>public class <span class="typeNameLabel">NoProviderFoundException</span>
extends <a href="../../javax/validation/ValidationException.html" title="class in javax.validation">ValidationException</a></pre>
<div class="block">Exception raised if no Bean Validation provider could be found.</div>
<dl>
<dt><span class="simpleTagLabel">Since:</span></dt>
<dd>2.0</dd>
<dt><span class="simpleTagLabel">Author:</span></dt>
<dd>Gunnar Morling</dd>
<dt><span class="seeLabel">See Also:</span></dt>
<dd><a href="../../serialized-form.html#javax.validation.NoProviderFoundException">Serialized Form</a></dd>
</dl>
</li>
</ul>
</div>
<div class="summary">
<ul class="blockList">
<li class="blockList">
<!-- ======== CONSTRUCTOR SUMMARY ======== -->
<ul class="blockList">
<li class="blockList"><a name="constructor.summary">
<!-- -->
</a>
<h3>Constructor Summary</h3>
<table class="memberSummary" border="0" cellpadding="3" cellspacing="0" summary="Constructor Summary table, listing constructors, and an explanation">
<caption><span>Constructors</span><span class="tabEnd"> </span></caption>
<tr>
<th class="colOne" scope="col">Constructor and Description</th>
</tr>
<tr class="altColor">
<td class="colOne"><code><span class="memberNameLink"><a href="../../javax/validation/NoProviderFoundException.html#NoProviderFoundException--">NoProviderFoundException</a></span>()</code> </td>
</tr>
<tr class="rowColor">
<td class="colOne"><code><span class="memberNameLink"><a href="../../javax/validation/NoProviderFoundException.html#NoProviderFoundException-java.lang.String-">NoProviderFoundException</a></span>(<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true" title="class or interface in java.lang">String</a> message)</code> </td>
</tr>
<tr class="altColor">
<td class="colOne"><code><span class="memberNameLink"><a href="../../javax/validation/NoProviderFoundException.html#NoProviderFoundException-java.lang.String-java.lang.Throwable-">NoProviderFoundException</a></span>(<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true" title="class or interface in java.lang">String</a> message,
<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true" title="class or interface in java.lang">Throwable</a> cause)</code> </td>
</tr>
<tr class="rowColor">
<td class="colOne"><code><span class="memberNameLink"><a href="../../javax/validation/NoProviderFoundException.html#NoProviderFoundException-java.lang.Throwable-">NoProviderFoundException</a></span>(<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true" title="class or interface in java.lang">Throwable</a> cause)</code> </td>
</tr>
</table>
</li>
</ul>
<!-- ========== METHOD SUMMARY =========== -->
<ul class="blockList">
<li class="blockList"><a name="method.summary">
<!-- -->
</a>
<h3>Method Summary</h3>
<ul class="blockList">
<li class="blockList"><a name="methods.inherited.from.class.java.lang.Throwable">
<!-- -->
</a>
<h3>Methods inherited from class java.lang.<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true" title="class or interface in java.lang">Throwable</a></h3>
<code><a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#addSuppressed-java.lang.Throwable-" title="class or interface in java.lang">addSuppressed</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#fillInStackTrace--" title="class or interface in java.lang">fillInStackTrace</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#getCause--" title="class or interface in java.lang">getCause</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#getLocalizedMessage--" title="class or interface in java.lang">getLocalizedMessage</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#getMessage--" title="class or interface in java.lang">getMessage</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#getStackTrace--" title="class or interface in java.lang">getStackTrace</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#getSuppressed--" title="class or interface in java.lang">getSuppressed</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#initCause-java.lang.Throwable-" title="class or interface in java.lang">initCause</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#printStackTrace--" title="class or interface in java.lang">printStackTrace</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#printStackTrace-java.io.PrintStream-" title="class or interface in java.lang">printStackTrace</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#printStackTrace-java.io.PrintWriter-" title="class or interface in java.lang">printStackTrace</a>, <a 
href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#setStackTrace-java.lang.StackTraceElement:A-" title="class or interface in java.lang">setStackTrace</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true#toString--" title="class or interface in java.lang">toString</a></code></li>
</ul>
<ul class="blockList">
<li class="blockList"><a name="methods.inherited.from.class.java.lang.Object">
<!-- -->
</a>
<h3>Methods inherited from class java.lang.<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true" title="class or interface in java.lang">Object</a></h3>
<code><a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#clone--" title="class or interface in java.lang">clone</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#equals-java.lang.Object-" title="class or interface in java.lang">equals</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#finalize--" title="class or interface in java.lang">finalize</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#getClass--" title="class or interface in java.lang">getClass</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#hashCode--" title="class or interface in java.lang">hashCode</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#notify--" title="class or interface in java.lang">notify</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#notifyAll--" title="class or interface in java.lang">notifyAll</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#wait--" title="class or interface in java.lang">wait</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#wait-long-" title="class or interface in java.lang">wait</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#wait-long-int-" title="class or interface in java.lang">wait</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
<div class="details">
<ul class="blockList">
<li class="blockList">
<!-- ========= CONSTRUCTOR DETAIL ======== -->
<ul class="blockList">
<li class="blockList"><a name="constructor.detail">
<!-- -->
</a>
<h3>Constructor Detail</h3>
<a name="NoProviderFoundException--">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>NoProviderFoundException</h4>
<pre>public NoProviderFoundException()</pre>
</li>
</ul>
<a name="NoProviderFoundException-java.lang.String-">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>NoProviderFoundException</h4>
<pre>public NoProviderFoundException(<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true" title="class or interface in java.lang">String</a> message)</pre>
</li>
</ul>
<a name="NoProviderFoundException-java.lang.Throwable-">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>NoProviderFoundException</h4>
<pre>public NoProviderFoundException(<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true" title="class or interface in java.lang">Throwable</a> cause)</pre>
</li>
</ul>
<a name="NoProviderFoundException-java.lang.String-java.lang.Throwable-">
<!-- -->
</a>
<ul class="blockListLast">
<li class="blockList">
<h4>NoProviderFoundException</h4>
<pre>public NoProviderFoundException(<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true" title="class or interface in java.lang">String</a> message,
<a href="http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true" title="class or interface in java.lang">Throwable</a> cause)</pre>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</div>
<!-- ========= END OF CLASS DATA ========= -->
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar.bottom">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.bottom.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../../overview-summary.html">Overview</a></li>
<li><a href="package-summary.html">Package</a></li>
<li class="navBarCell1Rev">Class</li>
<li><a href="class-use/NoProviderFoundException.html">Use</a></li>
<li><a href="package-tree.html">Tree</a></li>
<li><a href="../../deprecated-list.html">Deprecated</a></li>
<li><a href="../../index-all.html">Index</a></li>
<li><a href="../../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="../../javax/validation/MessageInterpolator.Context.html" title="interface in javax.validation"><span class="typeNameLink">Prev Class</span></a></li>
<li><a href="../../javax/validation/OverridesAttribute.html" title="annotation in javax.validation"><span class="typeNameLink">Next Class</span></a></li>
</ul>
<ul class="navList">
<li><a href="../../index.html?javax/validation/NoProviderFoundException.html" target="_top">Frames</a></li>
<li><a href="NoProviderFoundException.html" target="_top">No Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="../../allclasses-noframe.html">All Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<div>
<ul class="subNavList">
<li>Summary: </li>
<li>Nested | </li>
<li>Field | </li>
<li><a href="#constructor.summary">Constr</a> | </li>
<li><a href="#methods.inherited.from.class.java.lang.Throwable">Method</a></li>
</ul>
<ul class="subNavList">
<li>Detail: </li>
<li>Field | </li>
<li><a href="#constructor.detail">Constr</a> | </li>
<li>Method</li>
</ul>
</div>
<a name="skip.navbar.bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
<p class="legalCopy"><small>Copyright © 1996-2017, <a href="http://www.oracle.com">Oracle</a> and/or its affiliates. All Rights Reserved. Use is subject to <a href="../../doc-files/speclicense.html" target="_top">license terms</a>.</small></p>
</body>
</html>
| {
"pile_set_name": "Github"
} |
/****************************************************************************
Copyright (c) 2013-2015 Chukong Technologies Inc.
http://www.cocos2d-x.org
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
****************************************************************************/
#ifndef __CCCONSOLE_H__
#define __CCCONSOLE_H__
/// @cond DO_NOT_SHOW
#if defined(_MSC_VER) || defined(__MINGW32__)
#include <BaseTsd.h>
#include <WinSock2.h>
#ifndef __SSIZE_T
#define __SSIZE_T
typedef SSIZE_T ssize_t;
#endif // __SSIZE_T
#else
#include <sys/select.h>
#endif
#include <thread>
#include <vector>
#include <map>
#include <functional>
#include <string>
#include <mutex>
#include <stdarg.h>
#include "base/CCRef.h"
#include "base/ccMacros.h"
#include "platform/CCPlatformMacros.h"
NS_CC_BEGIN
/// The max length of CCLog message.
static const int MAX_LOG_LENGTH = 16*1024;
/**
@brief Output Debug message.
*/
void CC_DLL log(const char * format, ...) CC_FORMAT_PRINTF(1, 2);
/** Console is a helper class that lets the developer control the game from a TCP connection.
Console will spawn a new thread that will listen to a specified TCP port.
 Console has a basic token parser. Each token is associated with an std::function<void(int, const std::string&)>.
If the std::function<> needs to use the cocos2d API, it needs to call
```
scheduler->performFunctionInCocosThread( ... );
```
*/
class CC_DLL Console
: public Ref
{
public:
struct Command {
std::string name;
std::string help;
std::function<void(int, const std::string&)> callback;
};
/** Constructor */
Console();
/** Destructor */
virtual ~Console();
/** starts listening to specified TCP port */
bool listenOnTCP(int port);
/** starts listening to specified file descriptor */
bool listenOnFileDescriptor(int fd);
/** stops the Console. 'stop' will be called at destruction time as well */
void stop();
/** add custom command */
void addCommand(const Command& cmd);
/** log something in the console */
void log(const char *buf);
/**
 * Sets the bind address.
 *
 * @param address the address to bind to, e.g. "127.0.0.1"
 */
void setBindAddress(const std::string &address);
protected:
void loop();
ssize_t readline(int fd, char *buf, size_t maxlen);
ssize_t readBytes(int fd, char* buffer, size_t maxlen, bool* more);
bool parseCommand(int fd);
void addClient();
// Add commands here
void commandHelp(int fd, const std::string &args);
void commandExit(int fd, const std::string &args);
void commandSceneGraph(int fd, const std::string &args);
void commandFileUtils(int fd, const std::string &args);
void commandConfig(int fd, const std::string &args);
void commandTextures(int fd, const std::string &args);
void commandResolution(int fd, const std::string &args);
void commandProjection(int fd, const std::string &args);
void commandDirector(int fd, const std::string &args);
void commandTouch(int fd, const std::string &args);
void commandUpload(int fd);
void commandAllocator(int fd, const std::string &args);
// file descriptor: socket, console, etc.
int _listenfd;
int _maxfd;
std::vector<int> _fds;
std::thread _thread;
fd_set _read_set;
bool _running;
bool _endThread;
std::map<std::string, Command> _commands;
// strings generated by cocos2d sent to the remote console
bool _sendDebugStrings;
std::mutex _DebugStringsMutex;
std::vector<std::string> _DebugStrings;
intptr_t _touchId;
std::string _bindAddress;
private:
CC_DISALLOW_COPY_AND_ASSIGN(Console);
};
NS_CC_END
/// @endcond
#endif /* defined(__CCCONSOLE_H__) */
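/* Illustrative usage (not part of this header): registering a custom command
 * on the shared console. Director::getInstance()->getConsole() is assumed from
 * the cocos2d-x API; the command name and port below are made up.
 *
 *   cocos2d::Console *console = cocos2d::Director::getInstance()->getConsole();
 *   cocos2d::Console::Command cmd;
 *   cmd.name = "hello";
 *   cmd.help = "print a greeting";
 *   cmd.callback = [](int fd, const std::string &args) {
 *       // Runs on the console thread; use performFunctionInCocosThread()
 *       // before touching the scene graph or other cocos2d state.
 *       cocos2d::log("hello %s", args.c_str());
 *   };
 *   console->addCommand(cmd);
 *   console->listenOnTCP(5678);
 */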
| {
"pile_set_name": "Github"
} |
/**
* @file
* This is the IPv4 address tools implementation.
*
*/
/*
* Copyright (c) 2001-2004 Swedish Institute of Computer Science.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
* OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
* OF SUCH DAMAGE.
*
* This file is part of the lwIP TCP/IP stack.
*
* Author: Adam Dunkels <[email protected]>
*
*/
#include "lwip/opt.h"
#include "lwip/ip_addr.h"
#include "lwip/netif.h"
/* used by IP_ADDR_ANY and IP_ADDR_BROADCAST in ip_addr.h */
const ip_addr_t ip_addr_any ICACHE_RODATA_ATTR = { IPADDR_ANY };
const ip_addr_t ip_addr_broadcast ICACHE_RODATA_ATTR = { IPADDR_BROADCAST };
/**
* Determine if an address is a broadcast address on a network interface
*
* @param addr address to be checked
* @param netif the network interface against which the address is checked
* @return returns non-zero if the address is a broadcast address
*/
u8_t
ip4_addr_isbroadcast(u32_t addr, const struct netif *netif)
{
ip_addr_t ipaddr;
ip4_addr_set_u32(&ipaddr, addr);
/* all ones (broadcast) or all zeroes (old skool broadcast) */
if ((~addr == IPADDR_ANY) ||
(addr == IPADDR_ANY)) {
return 1;
/* no broadcast support on this network interface? */
} else if ((netif->flags & NETIF_FLAG_BROADCAST) == 0) {
/* the given address cannot be a broadcast address
* nor can we check against any broadcast addresses */
return 0;
/* address matches network interface address exactly? => no broadcast */
} else if (addr == ip4_addr_get_u32(&netif->ip_addr)) {
return 0;
/* on the same (sub) network... */
} else if (ip_addr_netcmp(&ipaddr, &(netif->ip_addr), &(netif->netmask))
/* ...and host identifier bits are all ones? =>... */
&& ((addr & ~ip4_addr_get_u32(&netif->netmask)) ==
(IPADDR_BROADCAST & ~ip4_addr_get_u32(&netif->netmask)))) {
/* => network broadcast address */
return 1;
} else {
return 0;
}
}
/** Checks if a netmask is valid (starting with ones, then only zeros)
*
* @param netmask the IPv4 netmask to check (in network byte order!)
* @return 1 if the netmask is valid, 0 if it is not
*/
u8_t
ip4_addr_netmask_valid(u32_t netmask)
{
u32_t mask;
u32_t nm_hostorder = lwip_htonl(netmask);
/* first, check for the first zero */
for (mask = 1U << 31 ; mask != 0; mask >>= 1) {
if ((nm_hostorder & mask) == 0) {
break;
}
}
/* then check that there is no one */
for (; mask != 0; mask >>= 1) {
if ((nm_hostorder & mask) != 0) {
/* there is a one after the first zero -> invalid */
return 0;
}
}
/* no one after the first zero -> valid */
return 1;
}
/* Here for now until needed in other places in lwIP */
#ifndef isprint
#define in_range(c, lo, up) ((u8_t)c >= lo && (u8_t)c <= up)
#define isprint(c) in_range(c, 0x20, 0x7f)
//#define isdigit(c) in_range(c, '0', '9')
//#define isxdigit(c) (isdigit(c) || in_range(c, 'a', 'f') || in_range(c, 'A', 'F'))
#define islower(c) in_range(c, 'a', 'z')
#define isspace(c) (c == ' ' || c == '\f' || c == '\n' || c == '\r' || c == '\t' || c == '\v')
#endif
/**
* Ascii internet address interpretation routine.
* The value returned is in network order.
*
 * @param cp IP address in ascii representation (e.g. "127.0.0.1")
* @return ip address in network order
*/
u32_t
ipaddr_addr(const char *cp)
{
ip_addr_t val;
if (ipaddr_aton(cp, &val)) {
return ip4_addr_get_u32(&val);
}
return (IPADDR_NONE);
}
/**
* Check whether "cp" is a valid ascii representation
* of an Internet address and convert to a binary address.
* Returns 1 if the address is valid, 0 if not.
* This replaces inet_addr, the return value from which
* cannot distinguish between failure and a local broadcast address.
*
 * @param cp IP address in ascii representation (e.g. "127.0.0.1")
* @param addr pointer to which to save the ip address in network order
* @return 1 if cp could be converted to addr, 0 on failure
*/
int
ipaddr_aton(const char *cp, ip_addr_t *addr)
{
u32_t val;
u8_t base;
char c;
char ch;
unsigned long cutoff;
int cutlim;
u32_t parts[4];
u32_t *pp = parts;
c = *cp;
for (;;) {
/*
* Collect number up to ``.''.
* Values are specified as for C:
* 0x=hex, 0=octal, 1-9=decimal.
*/
if (!isdigit(c))
return (0);
val = 0;
base = 10;
if (c == '0') {
c = *++cp;
if (c == 'x' || c == 'X') {
base = 16;
c = *++cp;
} else
base = 8;
}
cutoff =(unsigned long)0xffffffff / (unsigned long)base;
cutlim =(unsigned long)0xffffffff % (unsigned long)base;
for (;;) {
if (isdigit(c)) {
ch = (int)(c - '0');
if (val > cutoff || (val == cutoff && ch > cutlim))
return (0);
val = (val * base) + (int)(c - '0');
c = *++cp;
} else if (base == 16 && isxdigit(c)) {
ch = (int)(c + 10 - (islower(c) ? 'a' : 'A'));
if (val > cutoff || (val == cutoff && ch > cutlim))
return (0);
val = (val << 4) | (int)(c + 10 - (islower(c) ? 'a' : 'A'));
c = *++cp;
} else
break;
}
if (c == '.') {
/*
* Internet format:
* a.b.c.d
* a.b.c (with c treated as 16 bits)
* a.b (with b treated as 24 bits)
*/
if (pp >= parts + 3) {
return (0);
}
*pp++ = val;
c = *++cp;
} else
break;
}
/*
* Check for trailing characters.
*/
if (c != '\0' && !isspace(c)) {
return (0);
}
/*
* Concoct the address according to
* the number of parts specified.
*/
switch (pp - parts + 1) {
case 0:
return (0); /* initial nondigit */
case 1: /* a -- 32 bits */
break;
case 2: /* a.b -- 8.24 bits */
if ((val > 0xffffffUL) || (parts[0] > 0xff)) {
return (0);
}
val |= parts[0] << 24;
break;
case 3: /* a.b.c -- 8.8.16 bits */
if ((val > 0xffff) || (parts[0] > 0xff) || (parts[1] > 0xff)) {
return (0);
}
val |= (parts[0] << 24) | (parts[1] << 16);
break;
case 4: /* a.b.c.d -- 8.8.8.8 bits */
if ((val > 0xff) || (parts[0] > 0xff) || (parts[1] > 0xff) || (parts[2] > 0xff)) {
return (0);
}
val |= (parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8);
break;
default:
LWIP_ASSERT("unhandled", 0);
break;
}
if (addr) {
ip4_addr_set_u32(addr, htonl(val));
}
return (1);
}
/**
* Convert numeric IP address into decimal dotted ASCII representation.
* returns ptr to static buffer; not reentrant!
*
* @param addr ip address in network order to convert
* @return pointer to a global static (!) buffer that holds the ASCII
 * representation of addr
*/
char *
ipaddr_ntoa(const ip_addr_t *addr)
{
static char str[16];
return ipaddr_ntoa_r(addr, str, 16);
}
/**
* Same as ipaddr_ntoa, but reentrant since a user-supplied buffer is used.
*
* @param addr ip address in network order to convert
* @param buf target buffer where the string is stored
* @param buflen length of buf
* @return either pointer to buf which now holds the ASCII
* representation of addr or NULL if buf was too small
*/
char *ipaddr_ntoa_r(const ip_addr_t *addr, char *buf, int buflen)
{
u32_t s_addr;
char inv[3];
char *rp;
u8_t *ap;
u8_t rem;
u8_t n;
u8_t i;
int len = 0;
s_addr = ip4_addr_get_u32(addr);
rp = buf;
ap = (u8_t *)&s_addr;
for(n = 0; n < 4; n++) {
i = 0;
do {
rem = *ap % (u8_t)10;
*ap /= (u8_t)10;
inv[i++] = '0' + rem;
} while(*ap);
while(i--) {
if (len++ >= buflen) {
return NULL;
}
*rp++ = inv[i];
}
if (len++ >= buflen) {
return NULL;
}
*rp++ = '.';
ap++;
}
*--rp = 0;
return buf;
}
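/* Illustrative usage (not part of lwIP): round-tripping an address through the
 * ipaddr_aton()/ipaddr_ntoa_r() routines above. The address value is only an
 * example.
 *
 *   ip_addr_t addr;
 *   char buf[16];
 *   if (ipaddr_aton("192.168.1.10", &addr)) {
 *     if (ipaddr_ntoa_r(&addr, buf, sizeof(buf)) != NULL) {
 *       // buf now holds "192.168.1.10" (reentrant, unlike ipaddr_ntoa())
 *     }
 *   }
 */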
| {
"pile_set_name": "Github"
} |
<Mcml xmlns="http://schemas.microsoft.com/2008/mcml"
xmlns:cor="assembly://MSCorLib/System"
xmlns:me="Me">
<!-- Simple Radio Group sample. -->
<!-- This example implements a RadioGroup UI element. -->
<!-- This RadioGroup UI element takes a choice model item, and uses a -->
<!-- repeater to display all of the choices. -->
<!-- When the user selects one choice from the list, it will show the -->
<!-- currently selected choice. -->
<!-- The choices can only be selected using the mouse to keep the sample -->
<!-- small and specific to the radio group concept. -->
<!-- Initially, the 0th option within the Choice is chosen by default. -->
<!-- To specify a different initial value, set the DefaultIndex property -->
<!-- on Choice to the desired index. Then, using a conditionless Rule -->
<!-- with an Invoke action, call the DefaultValue method on Choice. This -->
<!-- will set the initial value when the UI is created. DefaultValue can -->
<!-- be used at any time to reset back to the default. -->
<UI Name="TestSimpleRadioGroup">
<Content>
<!-- Use the Radio Group UI element with some sample data. -->
<me:RadioGroup>
<Model>
<!-- Test model: an option with three values -->
<Choice Description="My Radio Group">
<Options>
<cor:String String="Choice 1" />
<cor:String String="Choice 2" />
<cor:String String="Choice 3" />
</Options>
</Choice>
</Model>
</me:RadioGroup>
</Content>
</UI>
<!-- This implementation of the RadioGroup -->
<UI Name="RadioGroup" >
<Properties>
<Choice Name="Model"
Choice="$Required" />
</Properties>
<Rules>
<!-- Bind the selected choice to the 'Selection' text element. -->
<Binding Source="[Model.Chosen!cor:String]"
Target="[Selection.Content]" />
<!-- Bind the sample data as a source to the repeater. -->
<Binding Source="[Model.Options]"
Target="[Repeater.Source]" />
</Rules>
<Content>
<Panel Name="RootPanel"
Layout="VerticalFlow"
MinimumSize="300,300">
<Children>
<!-- Text element to display the usage information. -->
<Text Content="Use the mouse to select a choice"
Color="LightGray"
Font="Arial,24"
Padding="0,0,0,20"/>
<Panel>
<Layout>
<FlowLayout Orientation="Horizontal"
Spacing="5,0" />
</Layout>
<Children>
<!-- Text elements to show the choice selection. -->
<Text Content="Your choice is: "
Color="LightGray"
Font="Arial,20"
Margins="0,0,0,30" />
<Text Name="Selection"
Color="Red"
Font="Arial,20"
Margins="0,0,0,30" />
</Children>
</Panel>
<Repeater Name="Repeater"
ContentName="RadioButton" >
<Layout>
<FlowLayout Orientation="Vertical"
Spacing="5,0"
ItemAlignment="Near"
AllowWrap="true"/>
</Layout>
</Repeater>
</Children>
</Panel>
</Content>
<Content Name="RadioButton">
<me:SimpleRadioButton Model="[Model]"
Option="[RepeatedItem!cor:String]" />
</Content>
</UI>
<!-- Implementation of a simple radio button. -->
<!-- This takes a Choice Model item and displays various choice options. -->
<UI Name="SimpleRadioButton">
<Properties>
<Choice Name="Model"
Choice="$Required"/>
<cor:String Name="Option"
String="$Required"/>
<Size Name="BoxSize"
Size="50,50"/>
</Properties>
<Locals>
<!-- React to "click" input. -->
<ClickHandler Name="Clicker"/>
</Locals>
<Rules>
<!-- The radio button has been clicked - update the model. -->
<Changed Source="[Clicker.Invoked]">
<Actions>
<Set Target="[Model.Chosen]"
Value="[Option]"/>
</Actions>
</Changed>
<!-- If the UI is losing the selection, show it as unselected. -->
<Rule>
<Conditions>
<Equality Source="[Model.Chosen]"
ConditionOp="NotEquals"
Value="[Option]"/>
</Conditions>
<Actions>
<Set Target="[Check.Visible]"
Value="false" />
</Actions>
</Rule>
<!-- If the UI is getting the selection, show it as selected. -->
<Rule>
<Conditions>
<Equality Source="[Model.Chosen]"
Value="[Option]"/>
</Conditions>
<Actions>
<Set Target="[Check.Visible]"
Value="true" />
</Actions>
</Rule>
<!-- Show the choice label. -->
<Binding Source="[Option]"
Target="[Label.Content]"/>
</Rules>
<Content>
<Panel>
<Layout>
<FlowLayout Orientation="Horizontal"
ItemAlignment="Center"/>
</Layout>
<Children>
<!-- The box around the selection. -->
<ColorFill Name="Box"
Content="Green"
Layout="Form"
Margins="0,0,5,0"
MaximumSize="[BoxSize]">
<Children>
<!-- Colorfill that becomes visible on selection. -->
<ColorFill Name="Check"
Content="yellow"
Visible="false">
<LayoutInput>
<FormLayoutInput Left="Parent,0.2"
Right="Parent,0.8"
Top="Parent,0.2"
Bottom="Parent,0.8"/>
</LayoutInput>
</ColorFill>
</Children>
</ColorFill>
<!-- The label to display choice. -->
<Text Name="Label"
Color="White"
Font="Arial, 25"/>
</Children>
</Panel>
</Content>
</UI>
</Mcml>
| {
"pile_set_name": "Github"
} |
# The purpose of this config is two-fold:
1. To provide flexibility for elastic/server admins to bring up additional elasticsearch container nodes on VIC dvSwitch mapped subnets.
2. This configuration will also provide an avenue for moving from non-containerized elasticsearch clusters to a fully containerized configuration.
**Please keep in mind that part of the 'magic' with this configuration is utilizing Named Volumes. Compose does not remove named volumes when performing `docker-compose down`. This is crucial to allowing the transition to container nodes.**
#### Misc Notes
- The hosts referred to as **'log'**, **'elk-node1'**, **'elk-node2'** in this compose file are current elasticsearch nodes being prepped for containerization; once the cluster has synchronized, it should be safe to begin removing the 'old' nodes from allocation and subsequently from use for indexing/storage.
- This configuration assumes that you are using static addressing and have already configured the necessary DNS entries for the elasticsearch nodes contained within this compose file.
- The network referred to as **'vic-elastic'** is mapped to external dvSwitch portgroup **'elasticsearch'**.
- I will also be uploading a customized config based on: [ELK-Stack Compose file](https://github.com/vmware/vic-product/tree/master/tutorials/elk) which will be customized to match this compose file, which would allow the complete transition from the 3-node elastic cluster that I uploaded, to the 3-node redundant ELK cluster linked in this paragraph.
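The named-volume pattern described above can be sketched roughly as follows (the service name, image tag, and volume name are assumptions — adjust them to match your existing cluster):

```yaml
version: '2'
services:
  es-node1:
    image: elasticsearch:5.6          # assumed image/tag
    container_name: es-node1
    networks:
      - vic-elastic
    volumes:
      # Named volume: survives `docker-compose down`, which is what allows
      # data to persist while transitioning to containerized nodes.
      - es-data1:/usr/share/elasticsearch/data

volumes:
  es-data1:

networks:
  vic-elastic:
    external: true                    # mapped to the 'elasticsearch' dvSwitch portgroup
```

Because Compose never deletes named volumes on `down`, the data directory persists across container replacement, so a node can be torn down and recreated without losing its shards.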
| {
"pile_set_name": "Github"
} |
/**
* OpenAL cross platform audio library
* Copyright (C) 2011 by authors.
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Library General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Library General Public License for more details.
*
* You should have received a copy of the GNU Library General Public
* License along with this library; if not, write to the
* Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
* Or go to http://www.gnu.org/copyleft/lgpl.html
*/
#include "config.h"
#define COBJMACROS
#include <stdlib.h>
#include <stdio.h>
#include <memory.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#include <cguid.h>
#include <devpropdef.h>
#include <mmreg.h>
#include <propsys.h>
#include <propkey.h>
#include <devpkey.h>
#ifndef _WAVEFORMATEXTENSIBLE_
#include <ks.h>
#include <ksmedia.h>
#endif
#include "alMain.h"
#include "alu.h"
#include "threads.h"
#include "compat.h"
#include "alstring.h"
#include "backends/base.h"
DEFINE_GUID(KSDATAFORMAT_SUBTYPE_PCM, 0x00000001, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71);
DEFINE_GUID(KSDATAFORMAT_SUBTYPE_IEEE_FLOAT, 0x00000003, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71);
DEFINE_DEVPROPKEY(DEVPKEY_Device_FriendlyName, 0xa45c254e, 0xdf1c, 0x4efd, 0x80,0x20, 0x67,0xd1,0x46,0xa8,0x50,0xe0, 14);
DEFINE_PROPERTYKEY(PKEY_AudioEndpoint_FormFactor, 0x1da5d803, 0xd492, 0x4edd, 0x8c,0x23, 0xe0,0xc0,0xff,0xee,0x7f,0x0e, 0);
#define MONO SPEAKER_FRONT_CENTER
#define STEREO (SPEAKER_FRONT_LEFT|SPEAKER_FRONT_RIGHT)
#define QUAD (SPEAKER_FRONT_LEFT|SPEAKER_FRONT_RIGHT|SPEAKER_BACK_LEFT|SPEAKER_BACK_RIGHT)
#define X5DOT1 (SPEAKER_FRONT_LEFT|SPEAKER_FRONT_RIGHT|SPEAKER_FRONT_CENTER|SPEAKER_LOW_FREQUENCY|SPEAKER_SIDE_LEFT|SPEAKER_SIDE_RIGHT)
#define X5DOT1REAR (SPEAKER_FRONT_LEFT|SPEAKER_FRONT_RIGHT|SPEAKER_FRONT_CENTER|SPEAKER_LOW_FREQUENCY|SPEAKER_BACK_LEFT|SPEAKER_BACK_RIGHT)
#define X6DOT1 (SPEAKER_FRONT_LEFT|SPEAKER_FRONT_RIGHT|SPEAKER_FRONT_CENTER|SPEAKER_LOW_FREQUENCY|SPEAKER_BACK_CENTER|SPEAKER_SIDE_LEFT|SPEAKER_SIDE_RIGHT)
#define X7DOT1 (SPEAKER_FRONT_LEFT|SPEAKER_FRONT_RIGHT|SPEAKER_FRONT_CENTER|SPEAKER_LOW_FREQUENCY|SPEAKER_BACK_LEFT|SPEAKER_BACK_RIGHT|SPEAKER_SIDE_LEFT|SPEAKER_SIDE_RIGHT)
#define X7DOT1_WIDE (SPEAKER_FRONT_LEFT|SPEAKER_FRONT_RIGHT|SPEAKER_FRONT_CENTER|SPEAKER_LOW_FREQUENCY|SPEAKER_BACK_LEFT|SPEAKER_BACK_RIGHT|SPEAKER_FRONT_LEFT_OF_CENTER|SPEAKER_FRONT_RIGHT_OF_CENTER)
#define DEVNAME_HEAD "OpenAL Soft on "
typedef struct {
al_string name;
WCHAR *devid;
} DevMap;
TYPEDEF_VECTOR(DevMap, vector_DevMap)
static void clear_devlist(vector_DevMap *list)
{
#define CLEAR_DEVMAP(i) do { \
AL_STRING_DEINIT((i)->name); \
free((i)->devid); \
(i)->devid = NULL; \
} while(0)
VECTOR_FOR_EACH(DevMap, *list, CLEAR_DEVMAP);
VECTOR_RESIZE(*list, 0);
#undef CLEAR_DEVMAP
}
static vector_DevMap PlaybackDevices;
static vector_DevMap CaptureDevices;
static HANDLE ThreadHdl;
static DWORD ThreadID;
typedef struct {
HANDLE FinishedEvt;
HRESULT result;
} ThreadRequest;
#define WM_USER_First (WM_USER+0)
#define WM_USER_OpenDevice (WM_USER+0)
#define WM_USER_ResetDevice (WM_USER+1)
#define WM_USER_StartDevice (WM_USER+2)
#define WM_USER_StopDevice (WM_USER+3)
#define WM_USER_CloseDevice (WM_USER+4)
#define WM_USER_Enumerate (WM_USER+5)
#define WM_USER_Last (WM_USER+5)
static inline void ReturnMsgResponse(ThreadRequest *req, HRESULT res)
{
req->result = res;
SetEvent(req->FinishedEvt);
}
static HRESULT WaitForResponse(ThreadRequest *req)
{
if(WaitForSingleObject(req->FinishedEvt, INFINITE) == WAIT_OBJECT_0)
return req->result;
ERR("Message response error: %lu\n", GetLastError());
return E_FAIL;
}
static void get_device_name(IMMDevice *device, al_string *name)
{
IPropertyStore *ps;
PROPVARIANT pvname;
HRESULT hr;
al_string_copy_cstr(name, DEVNAME_HEAD);
hr = IMMDevice_OpenPropertyStore(device, STGM_READ, &ps);
if(FAILED(hr))
{
WARN("OpenPropertyStore failed: 0x%08lx\n", hr);
al_string_append_cstr(name, "Unknown Device Name");
return;
}
PropVariantInit(&pvname);
hr = IPropertyStore_GetValue(ps, (const PROPERTYKEY*)&DEVPKEY_Device_FriendlyName, &pvname);
if(FAILED(hr))
{
WARN("GetValue Device_FriendlyName failed: 0x%08lx\n", hr);
al_string_append_cstr(name, "Unknown Device Name");
}
else if(pvname.vt == VT_LPWSTR)
al_string_append_wcstr(name, pvname.pwszVal);
else
{
WARN("Unexpected PROPVARIANT type: 0x%04x\n", pvname.vt);
al_string_append_cstr(name, "Unknown Device Name");
}
PropVariantClear(&pvname);
IPropertyStore_Release(ps);
}
static void get_device_formfactor(IMMDevice *device, EndpointFormFactor *formfactor)
{
IPropertyStore *ps;
PROPVARIANT pvform;
HRESULT hr;
hr = IMMDevice_OpenPropertyStore(device, STGM_READ, &ps);
if(FAILED(hr))
{
WARN("OpenPropertyStore failed: 0x%08lx\n", hr);
return;
}
PropVariantInit(&pvform);
hr = IPropertyStore_GetValue(ps, &PKEY_AudioEndpoint_FormFactor, &pvform);
if(FAILED(hr))
WARN("GetValue AudioEndpoint_FormFactor failed: 0x%08lx\n", hr);
else if(pvform.vt == VT_UI4)
*formfactor = pvform.ulVal;
else if(pvform.vt == VT_EMPTY)
*formfactor = UnknownFormFactor;
else
WARN("Unexpected PROPVARIANT type: 0x%04x\n", pvform.vt);
PropVariantClear(&pvform);
IPropertyStore_Release(ps);
}
static void add_device(IMMDevice *device, LPCWSTR devid, vector_DevMap *list)
{
int count = 0;
al_string tmpname;
DevMap entry;
AL_STRING_INIT(tmpname);
AL_STRING_INIT(entry.name);
entry.devid = strdupW(devid);
get_device_name(device, &tmpname);
while(1)
{
const DevMap *iter;
al_string_copy(&entry.name, tmpname);
if(count != 0)
{
char str[64];
snprintf(str, sizeof(str), " #%d", count+1);
al_string_append_cstr(&entry.name, str);
}
#define MATCH_ENTRY(i) (al_string_cmp(entry.name, (i)->name) == 0)
VECTOR_FIND_IF(iter, const DevMap, *list, MATCH_ENTRY);
if(iter == VECTOR_END(*list)) break;
#undef MATCH_ENTRY
count++;
}
TRACE("Got device \"%s\", \"%ls\"\n", al_string_get_cstr(entry.name), entry.devid);
VECTOR_PUSH_BACK(*list, entry);
AL_STRING_DEINIT(tmpname);
}
static LPWSTR get_device_id(IMMDevice *device)
{
LPWSTR devid;
HRESULT hr;
hr = IMMDevice_GetId(device, &devid);
if(FAILED(hr))
{
ERR("Failed to get device id: %lx\n", hr);
return NULL;
}
return devid;
}
static HRESULT probe_devices(IMMDeviceEnumerator *devenum, EDataFlow flowdir, vector_DevMap *list)
{
IMMDeviceCollection *coll;
IMMDevice *defdev = NULL;
LPWSTR defdevid = NULL;
HRESULT hr;
UINT count;
UINT i;
hr = IMMDeviceEnumerator_EnumAudioEndpoints(devenum, flowdir, DEVICE_STATE_ACTIVE, &coll);
if(FAILED(hr))
{
ERR("Failed to enumerate audio endpoints: 0x%08lx\n", hr);
return hr;
}
count = 0;
hr = IMMDeviceCollection_GetCount(coll, &count);
if(SUCCEEDED(hr) && count > 0)
{
clear_devlist(list);
if(!VECTOR_RESERVE(*list, count))
{
IMMDeviceCollection_Release(coll);
return E_OUTOFMEMORY;
}
hr = IMMDeviceEnumerator_GetDefaultAudioEndpoint(devenum, flowdir,
eMultimedia, &defdev);
}
if(SUCCEEDED(hr) && defdev != NULL)
{
defdevid = get_device_id(defdev);
if(defdevid)
add_device(defdev, defdevid, list);
}
for(i = 0;i < count;++i)
{
IMMDevice *device;
LPWSTR devid;
hr = IMMDeviceCollection_Item(coll, i, &device);
if(FAILED(hr)) continue;
devid = get_device_id(device);
if(devid)
{
/* Skip the default device; it was already added first. Guard against a
 * missing default (defdevid == NULL). */
if(!defdevid || wcscmp(devid, defdevid) != 0)
add_device(device, devid, list);
CoTaskMemFree(devid);
}
IMMDevice_Release(device);
}
if(defdev) IMMDevice_Release(defdev);
if(defdevid) CoTaskMemFree(defdevid);
IMMDeviceCollection_Release(coll);
return S_OK;
}
/* Proxy interface used by the message handler. */
struct ALCmmdevProxyVtable;
typedef struct ALCmmdevProxy {
const struct ALCmmdevProxyVtable *vtbl;
} ALCmmdevProxy;
struct ALCmmdevProxyVtable {
HRESULT (*const openProxy)(ALCmmdevProxy*);
void (*const closeProxy)(ALCmmdevProxy*);
HRESULT (*const resetProxy)(ALCmmdevProxy*);
HRESULT (*const startProxy)(ALCmmdevProxy*);
void (*const stopProxy)(ALCmmdevProxy*);
};
#define DEFINE_ALCMMDEVPROXY_VTABLE(T) \
DECLARE_THUNK(T, ALCmmdevProxy, HRESULT, openProxy) \
DECLARE_THUNK(T, ALCmmdevProxy, void, closeProxy) \
DECLARE_THUNK(T, ALCmmdevProxy, HRESULT, resetProxy) \
DECLARE_THUNK(T, ALCmmdevProxy, HRESULT, startProxy) \
DECLARE_THUNK(T, ALCmmdevProxy, void, stopProxy) \
\
static const struct ALCmmdevProxyVtable T##_ALCmmdevProxy_vtable = { \
T##_ALCmmdevProxy_openProxy, \
T##_ALCmmdevProxy_closeProxy, \
T##_ALCmmdevProxy_resetProxy, \
T##_ALCmmdevProxy_startProxy, \
T##_ALCmmdevProxy_stopProxy, \
}
static void ALCmmdevProxy_Construct(ALCmmdevProxy* UNUSED(self)) { }
static void ALCmmdevProxy_Destruct(ALCmmdevProxy* UNUSED(self)) { }
static DWORD CALLBACK ALCmmdevProxy_messageHandler(void *ptr)
{
ThreadRequest *req = ptr;
IMMDeviceEnumerator *Enumerator;
ALuint deviceCount = 0; /* open-device count; COM stays initialized while non-zero */
ALCmmdevProxy *proxy;
HRESULT hr, cohr;
MSG msg;
TRACE("Starting message thread\n");
cohr = CoInitialize(NULL);
if(FAILED(cohr))
{
WARN("Failed to initialize COM: 0x%08lx\n", cohr);
ReturnMsgResponse(req, cohr);
return 0;
}
/* Creating (and immediately releasing) an enumerator here only verifies
 * that COM and the MMDevice API are usable on this thread; the requests
 * handled in the message loop below recreate it on demand. */
hr = CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_INPROC_SERVER, &IID_IMMDeviceEnumerator, &ptr);
if(FAILED(hr))
{
WARN("Failed to create IMMDeviceEnumerator instance: 0x%08lx\n", hr);
CoUninitialize();
ReturnMsgResponse(req, hr);
return 0;
}
Enumerator = ptr;
IMMDeviceEnumerator_Release(Enumerator);
Enumerator = NULL;
CoUninitialize();
/* HACK: Force Windows to create a message queue for this thread before
* returning success, otherwise PostThreadMessage may fail if it gets
* called before GetMessage.
*/
PeekMessage(&msg, NULL, WM_USER, WM_USER, PM_NOREMOVE);
TRACE("Message thread initialization complete\n");
ReturnMsgResponse(req, S_OK);
TRACE("Starting message loop\n");
while(GetMessage(&msg, NULL, WM_USER_First, WM_USER_Last))
{
TRACE("Got message %u (lparam=%p, wparam=%p)\n", msg.message, (void*)msg.lParam, (void*)msg.wParam);
switch(msg.message)
{
case WM_USER_OpenDevice:
req = (ThreadRequest*)msg.wParam;
proxy = (ALCmmdevProxy*)msg.lParam;
hr = cohr = S_OK;
if(++deviceCount == 1)
hr = cohr = CoInitialize(NULL);
if(SUCCEEDED(hr))
hr = V0(proxy,openProxy)();
if(FAILED(hr))
{
if(--deviceCount == 0 && SUCCEEDED(cohr))
CoUninitialize();
}
ReturnMsgResponse(req, hr);
continue;
case WM_USER_ResetDevice:
req = (ThreadRequest*)msg.wParam;
proxy = (ALCmmdevProxy*)msg.lParam;
hr = V0(proxy,resetProxy)();
ReturnMsgResponse(req, hr);
continue;
case WM_USER_StartDevice:
req = (ThreadRequest*)msg.wParam;
proxy = (ALCmmdevProxy*)msg.lParam;
hr = V0(proxy,startProxy)();
ReturnMsgResponse(req, hr);
continue;
case WM_USER_StopDevice:
req = (ThreadRequest*)msg.wParam;
proxy = (ALCmmdevProxy*)msg.lParam;
V0(proxy,stopProxy)();
ReturnMsgResponse(req, S_OK);
continue;
case WM_USER_CloseDevice:
req = (ThreadRequest*)msg.wParam;
proxy = (ALCmmdevProxy*)msg.lParam;
V0(proxy,closeProxy)();
if(--deviceCount == 0)
CoUninitialize();
ReturnMsgResponse(req, S_OK);
continue;
case WM_USER_Enumerate:
req = (ThreadRequest*)msg.wParam;
hr = cohr = S_OK;
if(++deviceCount == 1)
hr = cohr = CoInitialize(NULL);
if(SUCCEEDED(hr))
hr = CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_INPROC_SERVER, &IID_IMMDeviceEnumerator, &ptr);
if(SUCCEEDED(hr))
{
Enumerator = ptr;
if(msg.lParam == ALL_DEVICE_PROBE)
hr = probe_devices(Enumerator, eRender, &PlaybackDevices);
else if(msg.lParam == CAPTURE_DEVICE_PROBE)
hr = probe_devices(Enumerator, eCapture, &CaptureDevices);
IMMDeviceEnumerator_Release(Enumerator);
Enumerator = NULL;
}
if(--deviceCount == 0 && SUCCEEDED(cohr))
CoUninitialize();
ReturnMsgResponse(req, hr);
continue;
default:
ERR("Unexpected message: %u\n", msg.message);
continue;
}
}
TRACE("Message loop finished\n");
return 0;
}
typedef struct ALCmmdevPlayback {
DERIVE_FROM_TYPE(ALCbackend);
DERIVE_FROM_TYPE(ALCmmdevProxy);
WCHAR *devid;
IMMDevice *mmdev;
IAudioClient *client;
IAudioRenderClient *render;
HANDLE NotifyEvent;
HANDLE MsgEvent;
volatile UINT32 Padding;
volatile int killNow;
althrd_t thread;
} ALCmmdevPlayback;
static int ALCmmdevPlayback_mixerProc(void *arg);
static void ALCmmdevPlayback_Construct(ALCmmdevPlayback *self, ALCdevice *device);
static void ALCmmdevPlayback_Destruct(ALCmmdevPlayback *self);
static ALCenum ALCmmdevPlayback_open(ALCmmdevPlayback *self, const ALCchar *name);
static HRESULT ALCmmdevPlayback_openProxy(ALCmmdevPlayback *self);
static void ALCmmdevPlayback_close(ALCmmdevPlayback *self);
static void ALCmmdevPlayback_closeProxy(ALCmmdevPlayback *self);
static ALCboolean ALCmmdevPlayback_reset(ALCmmdevPlayback *self);
static HRESULT ALCmmdevPlayback_resetProxy(ALCmmdevPlayback *self);
static ALCboolean ALCmmdevPlayback_start(ALCmmdevPlayback *self);
static HRESULT ALCmmdevPlayback_startProxy(ALCmmdevPlayback *self);
static void ALCmmdevPlayback_stop(ALCmmdevPlayback *self);
static void ALCmmdevPlayback_stopProxy(ALCmmdevPlayback *self);
static DECLARE_FORWARD2(ALCmmdevPlayback, ALCbackend, ALCenum, captureSamples, ALCvoid*, ALCuint)
static DECLARE_FORWARD(ALCmmdevPlayback, ALCbackend, ALCuint, availableSamples)
static ClockLatency ALCmmdevPlayback_getClockLatency(ALCmmdevPlayback *self);
static DECLARE_FORWARD(ALCmmdevPlayback, ALCbackend, void, lock)
static DECLARE_FORWARD(ALCmmdevPlayback, ALCbackend, void, unlock)
DECLARE_DEFAULT_ALLOCATORS(ALCmmdevPlayback)
DEFINE_ALCMMDEVPROXY_VTABLE(ALCmmdevPlayback);
DEFINE_ALCBACKEND_VTABLE(ALCmmdevPlayback);
static void ALCmmdevPlayback_Construct(ALCmmdevPlayback *self, ALCdevice *device)
{
SET_VTABLE2(ALCmmdevPlayback, ALCbackend, self);
SET_VTABLE2(ALCmmdevPlayback, ALCmmdevProxy, self);
ALCbackend_Construct(STATIC_CAST(ALCbackend, self), device);
ALCmmdevProxy_Construct(STATIC_CAST(ALCmmdevProxy, self));
self->devid = NULL;
self->mmdev = NULL;
self->client = NULL;
self->render = NULL;
self->NotifyEvent = NULL;
self->MsgEvent = NULL;
self->Padding = 0;
self->killNow = 0;
}
static void ALCmmdevPlayback_Destruct(ALCmmdevPlayback *self)
{
if(self->NotifyEvent != NULL)
CloseHandle(self->NotifyEvent);
self->NotifyEvent = NULL;
if(self->MsgEvent != NULL)
CloseHandle(self->MsgEvent);
self->MsgEvent = NULL;
free(self->devid);
self->devid = NULL;
ALCmmdevProxy_Destruct(STATIC_CAST(ALCmmdevProxy, self));
ALCbackend_Destruct(STATIC_CAST(ALCbackend, self));
}
FORCE_ALIGN static int ALCmmdevPlayback_mixerProc(void *arg)
{
ALCmmdevPlayback *self = arg;
ALCdevice *device = STATIC_CAST(ALCbackend, self)->mDevice;
UINT32 buffer_len, written;
ALuint update_size, len;
BYTE *buffer;
HRESULT hr;
hr = CoInitialize(NULL);
if(FAILED(hr))
{
ERR("CoInitialize(NULL) failed: 0x%08lx\n", hr);
V0(device->Backend,lock)();
aluHandleDisconnect(device);
V0(device->Backend,unlock)();
return 1;
}
SetRTPriority();
althrd_setname(althrd_current(), MIXER_THREAD_NAME);
update_size = device->UpdateSize;
buffer_len = update_size * device->NumUpdates;
while(!self->killNow)
{
hr = IAudioClient_GetCurrentPadding(self->client, &written);
if(FAILED(hr))
{
ERR("Failed to get padding: 0x%08lx\n", hr);
V0(device->Backend,lock)();
aluHandleDisconnect(device);
V0(device->Backend,unlock)();
break;
}
self->Padding = written;
len = buffer_len - written;
if(len < update_size)
{
DWORD res;
res = WaitForSingleObjectEx(self->NotifyEvent, 2000, FALSE);
if(res != WAIT_OBJECT_0)
ERR("WaitForSingleObjectEx error: 0x%lx\n", res);
continue;
}
len -= len%update_size;
hr = IAudioRenderClient_GetBuffer(self->render, len, &buffer);
if(SUCCEEDED(hr))
{
V0(device->Backend,lock)();
aluMixData(device, buffer, len);
self->Padding = written + len;
V0(device->Backend,unlock)();
hr = IAudioRenderClient_ReleaseBuffer(self->render, len, 0);
}
if(FAILED(hr))
{
ERR("Failed to buffer data: 0x%08lx\n", hr);
V0(device->Backend,lock)();
aluHandleDisconnect(device);
V0(device->Backend,unlock)();
break;
}
}
self->Padding = 0;
CoUninitialize();
return 0;
}
/* Convert a WAVEFORMATEX to a WAVEFORMATEXTENSIBLE, filling in a channel
 * mask and sub-format for plain PCM and IEEE-float inputs. */
static ALCboolean MakeExtensible(WAVEFORMATEXTENSIBLE *out, const WAVEFORMATEX *in)
{
memset(out, 0, sizeof(*out));
if(in->wFormatTag == WAVE_FORMAT_EXTENSIBLE)
*out = *(const WAVEFORMATEXTENSIBLE*)in;
else if(in->wFormatTag == WAVE_FORMAT_PCM)
{
out->Format = *in;
out->Format.wFormatTag = WAVE_FORMAT_EXTENSIBLE;
out->Format.cbSize = sizeof(*out) - sizeof(*in);
if(out->Format.nChannels == 1)
out->dwChannelMask = MONO;
else if(out->Format.nChannels == 2)
out->dwChannelMask = STEREO;
else
ERR("Unhandled PCM channel count: %d\n", out->Format.nChannels);
out->SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
}
else if(in->wFormatTag == WAVE_FORMAT_IEEE_FLOAT)
{
out->Format = *in;
out->Format.wFormatTag = WAVE_FORMAT_EXTENSIBLE;
out->Format.cbSize = sizeof(*out) - sizeof(*in);
if(out->Format.nChannels == 1)
out->dwChannelMask = MONO;
else if(out->Format.nChannels == 2)
out->dwChannelMask = STEREO;
else
ERR("Unhandled IEEE float channel count: %d\n", out->Format.nChannels);
out->SubFormat = KSDATAFORMAT_SUBTYPE_IEEE_FLOAT;
}
else
{
ERR("Unhandled format tag: 0x%04x\n", in->wFormatTag);
return ALC_FALSE;
}
return ALC_TRUE;
}
static ALCenum ALCmmdevPlayback_open(ALCmmdevPlayback *self, const ALCchar *deviceName)
{
HRESULT hr = S_OK;
self->NotifyEvent = CreateEventW(NULL, FALSE, FALSE, NULL);
self->MsgEvent = CreateEventW(NULL, FALSE, FALSE, NULL);
if(self->NotifyEvent == NULL || self->MsgEvent == NULL)
{
ERR("Failed to create message events: %lu\n", GetLastError());
hr = E_FAIL;
}
if(SUCCEEDED(hr))
{
if(deviceName)
{
const DevMap *iter;
if(VECTOR_SIZE(PlaybackDevices) == 0)
{
ThreadRequest req = { self->MsgEvent, 0 };
if(PostThreadMessage(ThreadID, WM_USER_Enumerate, (WPARAM)&req, ALL_DEVICE_PROBE))
(void)WaitForResponse(&req);
}
hr = E_FAIL;
#define MATCH_NAME(i) (al_string_cmp_cstr((i)->name, deviceName) == 0)
VECTOR_FIND_IF(iter, const DevMap, PlaybackDevices, MATCH_NAME);
if(iter == VECTOR_END(PlaybackDevices))
WARN("Failed to find device name matching \"%s\"\n", deviceName);
else
{
ALCdevice *device = STATIC_CAST(ALCbackend,self)->mDevice;
self->devid = strdupW(iter->devid);
al_string_copy(&device->DeviceName, iter->name);
hr = S_OK;
}
#undef MATCH_NAME
}
}
if(SUCCEEDED(hr))
{
ThreadRequest req = { self->MsgEvent, 0 };
hr = E_FAIL;
if(PostThreadMessage(ThreadID, WM_USER_OpenDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
hr = WaitForResponse(&req);
else
ERR("Failed to post thread message: %lu\n", GetLastError());
}
if(FAILED(hr))
{
if(self->NotifyEvent != NULL)
CloseHandle(self->NotifyEvent);
self->NotifyEvent = NULL;
if(self->MsgEvent != NULL)
CloseHandle(self->MsgEvent);
self->MsgEvent = NULL;
free(self->devid);
self->devid = NULL;
ERR("Device init failed: 0x%08lx\n", hr);
return ALC_INVALID_VALUE;
}
return ALC_NO_ERROR;
}
static HRESULT ALCmmdevPlayback_openProxy(ALCmmdevPlayback *self)
{
ALCdevice *device = STATIC_CAST(ALCbackend, self)->mDevice;
void *ptr;
HRESULT hr;
hr = CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_INPROC_SERVER, &IID_IMMDeviceEnumerator, &ptr);
if(SUCCEEDED(hr))
{
IMMDeviceEnumerator *Enumerator = ptr;
if(!self->devid)
hr = IMMDeviceEnumerator_GetDefaultAudioEndpoint(Enumerator, eRender, eMultimedia, &self->mmdev);
else
hr = IMMDeviceEnumerator_GetDevice(Enumerator, self->devid, &self->mmdev);
IMMDeviceEnumerator_Release(Enumerator);
Enumerator = NULL;
}
if(SUCCEEDED(hr))
hr = IMMDevice_Activate(self->mmdev, &IID_IAudioClient, CLSCTX_INPROC_SERVER, NULL, &ptr);
if(SUCCEEDED(hr))
{
self->client = ptr;
if(al_string_empty(device->DeviceName))
get_device_name(self->mmdev, &device->DeviceName);
}
if(FAILED(hr))
{
if(self->mmdev)
IMMDevice_Release(self->mmdev);
self->mmdev = NULL;
}
return hr;
}
static void ALCmmdevPlayback_close(ALCmmdevPlayback *self)
{
ThreadRequest req = { self->MsgEvent, 0 };
if(PostThreadMessage(ThreadID, WM_USER_CloseDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
(void)WaitForResponse(&req);
CloseHandle(self->MsgEvent);
self->MsgEvent = NULL;
CloseHandle(self->NotifyEvent);
self->NotifyEvent = NULL;
free(self->devid);
self->devid = NULL;
}
static void ALCmmdevPlayback_closeProxy(ALCmmdevPlayback *self)
{
if(self->client)
IAudioClient_Release(self->client);
self->client = NULL;
if(self->mmdev)
IMMDevice_Release(self->mmdev);
self->mmdev = NULL;
}
static ALCboolean ALCmmdevPlayback_reset(ALCmmdevPlayback *self)
{
ThreadRequest req = { self->MsgEvent, 0 };
HRESULT hr = E_FAIL;
if(PostThreadMessage(ThreadID, WM_USER_ResetDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
hr = WaitForResponse(&req);
return SUCCEEDED(hr) ? ALC_TRUE : ALC_FALSE;
}
static HRESULT ALCmmdevPlayback_resetProxy(ALCmmdevPlayback *self)
{
ALCdevice *device = STATIC_CAST(ALCbackend, self)->mDevice;
EndpointFormFactor formfactor = UnknownFormFactor;
WAVEFORMATEXTENSIBLE OutputType;
WAVEFORMATEX *wfx = NULL;
REFERENCE_TIME min_per, buf_time;
UINT32 buffer_len, min_len;
void *ptr = NULL;
HRESULT hr;
if(self->client)
IAudioClient_Release(self->client);
self->client = NULL;
hr = IMMDevice_Activate(self->mmdev, &IID_IAudioClient, CLSCTX_INPROC_SERVER, NULL, &ptr);
if(FAILED(hr))
{
ERR("Failed to reactivate audio client: 0x%08lx\n", hr);
return hr;
}
self->client = ptr;
hr = IAudioClient_GetMixFormat(self->client, &wfx);
if(FAILED(hr))
{
ERR("Failed to get mix format: 0x%08lx\n", hr);
return hr;
}
if(!MakeExtensible(&OutputType, wfx))
{
CoTaskMemFree(wfx);
return E_FAIL;
}
CoTaskMemFree(wfx);
wfx = NULL;
/* Convert the requested buffer length (in sample frames) to 100ns
 * REFERENCE_TIME units, rounding up. Computed before any frequency
 * adjustment so the requested duration is preserved. */
buf_time = ((REFERENCE_TIME)device->UpdateSize*device->NumUpdates*10000000 +
device->Frequency-1) / device->Frequency;
if(!(device->Flags&DEVICE_FREQUENCY_REQUEST))
device->Frequency = OutputType.Format.nSamplesPerSec;
if(!(device->Flags&DEVICE_CHANNELS_REQUEST))
{
if(OutputType.Format.nChannels == 1 && OutputType.dwChannelMask == MONO)
device->FmtChans = DevFmtMono;
else if(OutputType.Format.nChannels == 2 && OutputType.dwChannelMask == STEREO)
device->FmtChans = DevFmtStereo;
else if(OutputType.Format.nChannels == 4 && OutputType.dwChannelMask == QUAD)
device->FmtChans = DevFmtQuad;
else if(OutputType.Format.nChannels == 6 && OutputType.dwChannelMask == X5DOT1)
device->FmtChans = DevFmtX51;
else if(OutputType.Format.nChannels == 6 && OutputType.dwChannelMask == X5DOT1REAR)
device->FmtChans = DevFmtX51Rear;
else if(OutputType.Format.nChannels == 7 && OutputType.dwChannelMask == X6DOT1)
device->FmtChans = DevFmtX61;
else if(OutputType.Format.nChannels == 8 && (OutputType.dwChannelMask == X7DOT1 || OutputType.dwChannelMask == X7DOT1_WIDE))
device->FmtChans = DevFmtX71;
else
ERR("Unhandled channel config: %d -- 0x%08lx\n", OutputType.Format.nChannels, OutputType.dwChannelMask);
}
switch(device->FmtChans)
{
case DevFmtMono:
OutputType.Format.nChannels = 1;
OutputType.dwChannelMask = MONO;
break;
case DevFmtBFormat3D:
device->FmtChans = DevFmtStereo;
/*fall-through*/
case DevFmtStereo:
OutputType.Format.nChannels = 2;
OutputType.dwChannelMask = STEREO;
break;
case DevFmtQuad:
OutputType.Format.nChannels = 4;
OutputType.dwChannelMask = QUAD;
break;
case DevFmtX51:
OutputType.Format.nChannels = 6;
OutputType.dwChannelMask = X5DOT1;
break;
case DevFmtX51Rear:
OutputType.Format.nChannels = 6;
OutputType.dwChannelMask = X5DOT1REAR;
break;
case DevFmtX61:
OutputType.Format.nChannels = 7;
OutputType.dwChannelMask = X6DOT1;
break;
case DevFmtX71:
OutputType.Format.nChannels = 8;
OutputType.dwChannelMask = X7DOT1;
break;
}
switch(device->FmtType)
{
case DevFmtByte:
device->FmtType = DevFmtUByte;
/* fall-through */
case DevFmtUByte:
OutputType.Format.wBitsPerSample = 8;
OutputType.Samples.wValidBitsPerSample = 8;
OutputType.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
break;
case DevFmtUShort:
device->FmtType = DevFmtShort;
/* fall-through */
case DevFmtShort:
OutputType.Format.wBitsPerSample = 16;
OutputType.Samples.wValidBitsPerSample = 16;
OutputType.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
break;
case DevFmtUInt:
device->FmtType = DevFmtInt;
/* fall-through */
case DevFmtInt:
OutputType.Format.wBitsPerSample = 32;
OutputType.Samples.wValidBitsPerSample = 32;
OutputType.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
break;
case DevFmtFloat:
OutputType.Format.wBitsPerSample = 32;
OutputType.Samples.wValidBitsPerSample = 32;
OutputType.SubFormat = KSDATAFORMAT_SUBTYPE_IEEE_FLOAT;
break;
}
OutputType.Format.nSamplesPerSec = device->Frequency;
OutputType.Format.nBlockAlign = OutputType.Format.nChannels *
OutputType.Format.wBitsPerSample / 8;
OutputType.Format.nAvgBytesPerSec = OutputType.Format.nSamplesPerSec *
OutputType.Format.nBlockAlign;
hr = IAudioClient_IsFormatSupported(self->client, AUDCLNT_SHAREMODE_SHARED, &OutputType.Format, &wfx);
if(FAILED(hr))
{
ERR("Failed to check format support: 0x%08lx\n", hr);
hr = IAudioClient_GetMixFormat(self->client, &wfx);
}
if(FAILED(hr))
{
ERR("Failed to find a supported format: 0x%08lx\n", hr);
return hr;
}
if(wfx != NULL)
{
if(!MakeExtensible(&OutputType, wfx))
{
CoTaskMemFree(wfx);
return E_FAIL;
}
CoTaskMemFree(wfx);
wfx = NULL;
device->Frequency = OutputType.Format.nSamplesPerSec;
if(OutputType.Format.nChannels == 1 && OutputType.dwChannelMask == MONO)
device->FmtChans = DevFmtMono;
else if(OutputType.Format.nChannels == 2 && OutputType.dwChannelMask == STEREO)
device->FmtChans = DevFmtStereo;
else if(OutputType.Format.nChannels == 4 && OutputType.dwChannelMask == QUAD)
device->FmtChans = DevFmtQuad;
else if(OutputType.Format.nChannels == 6 && OutputType.dwChannelMask == X5DOT1)
device->FmtChans = DevFmtX51;
else if(OutputType.Format.nChannels == 6 && OutputType.dwChannelMask == X5DOT1REAR)
device->FmtChans = DevFmtX51Rear;
else if(OutputType.Format.nChannels == 7 && OutputType.dwChannelMask == X6DOT1)
device->FmtChans = DevFmtX61;
else if(OutputType.Format.nChannels == 8 && (OutputType.dwChannelMask == X7DOT1 || OutputType.dwChannelMask == X7DOT1_WIDE))
device->FmtChans = DevFmtX71;
else
{
ERR("Unhandled extensible channels: %d -- 0x%08lx\n", OutputType.Format.nChannels, OutputType.dwChannelMask);
device->FmtChans = DevFmtStereo;
OutputType.Format.nChannels = 2;
OutputType.dwChannelMask = STEREO;
}
if(IsEqualGUID(&OutputType.SubFormat, &KSDATAFORMAT_SUBTYPE_PCM))
{
if(OutputType.Format.wBitsPerSample == 8)
device->FmtType = DevFmtUByte;
else if(OutputType.Format.wBitsPerSample == 16)
device->FmtType = DevFmtShort;
else if(OutputType.Format.wBitsPerSample == 32)
device->FmtType = DevFmtInt;
else
{
device->FmtType = DevFmtShort;
OutputType.Format.wBitsPerSample = 16;
}
}
else if(IsEqualGUID(&OutputType.SubFormat, &KSDATAFORMAT_SUBTYPE_IEEE_FLOAT))
{
device->FmtType = DevFmtFloat;
OutputType.Format.wBitsPerSample = 32;
}
else
{
ERR("Unhandled format sub-type\n");
device->FmtType = DevFmtShort;
OutputType.Format.wBitsPerSample = 16;
OutputType.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
}
OutputType.Samples.wValidBitsPerSample = OutputType.Format.wBitsPerSample;
}
get_device_formfactor(self->mmdev, &formfactor);
device->IsHeadphones = (device->FmtChans == DevFmtStereo && formfactor == Headphones);
SetDefaultWFXChannelOrder(device);
hr = IAudioClient_Initialize(self->client, AUDCLNT_SHAREMODE_SHARED,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
buf_time, 0, &OutputType.Format, NULL);
if(FAILED(hr))
{
ERR("Failed to initialize audio client: 0x%08lx\n", hr);
return hr;
}
hr = IAudioClient_GetDevicePeriod(self->client, &min_per, NULL);
if(SUCCEEDED(hr))
{
min_len = (UINT32)((min_per*device->Frequency + 10000000-1) / 10000000);
/* Find the nearest multiple of the period size to the update size */
if(min_len < device->UpdateSize)
min_len *= (device->UpdateSize + min_len/2)/min_len;
hr = IAudioClient_GetBufferSize(self->client, &buffer_len);
}
if(FAILED(hr))
{
ERR("Failed to get audio buffer info: 0x%08lx\n", hr);
return hr;
}
device->UpdateSize = min_len;
device->NumUpdates = buffer_len / device->UpdateSize;
if(device->NumUpdates <= 1)
{
ERR("Audio client returned buffer_len < period*2; expect break up\n");
device->NumUpdates = 2;
device->UpdateSize = buffer_len / device->NumUpdates;
}
hr = IAudioClient_SetEventHandle(self->client, self->NotifyEvent);
if(FAILED(hr))
{
ERR("Failed to set event handle: 0x%08lx\n", hr);
return hr;
}
return hr;
}
static ALCboolean ALCmmdevPlayback_start(ALCmmdevPlayback *self)
{
ThreadRequest req = { self->MsgEvent, 0 };
HRESULT hr = E_FAIL;
if(PostThreadMessage(ThreadID, WM_USER_StartDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
hr = WaitForResponse(&req);
return SUCCEEDED(hr) ? ALC_TRUE : ALC_FALSE;
}
static HRESULT ALCmmdevPlayback_startProxy(ALCmmdevPlayback *self)
{
HRESULT hr;
void *ptr;
ResetEvent(self->NotifyEvent);
hr = IAudioClient_Start(self->client);
if(FAILED(hr))
ERR("Failed to start audio client: 0x%08lx\n", hr);
if(SUCCEEDED(hr))
hr = IAudioClient_GetService(self->client, &IID_IAudioRenderClient, &ptr);
if(SUCCEEDED(hr))
{
self->render = ptr;
self->killNow = 0;
if(althrd_create(&self->thread, ALCmmdevPlayback_mixerProc, self) != althrd_success)
{
if(self->render)
IAudioRenderClient_Release(self->render);
self->render = NULL;
IAudioClient_Stop(self->client);
ERR("Failed to start thread\n");
hr = E_FAIL;
}
}
return hr;
}
static void ALCmmdevPlayback_stop(ALCmmdevPlayback *self)
{
ThreadRequest req = { self->MsgEvent, 0 };
if(PostThreadMessage(ThreadID, WM_USER_StopDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
(void)WaitForResponse(&req);
}
static void ALCmmdevPlayback_stopProxy(ALCmmdevPlayback *self)
{
int res;
if(!self->render)
return;
self->killNow = 1;
althrd_join(self->thread, &res);
IAudioRenderClient_Release(self->render);
self->render = NULL;
IAudioClient_Stop(self->client);
}
static ClockLatency ALCmmdevPlayback_getClockLatency(ALCmmdevPlayback *self)
{
ALCdevice *device = STATIC_CAST(ALCbackend, self)->mDevice;
ClockLatency ret;
ALCmmdevPlayback_lock(self);
ret.ClockTime = GetDeviceClockTime(device);
ret.Latency = self->Padding * DEVICE_CLOCK_RES / device->Frequency;
ALCmmdevPlayback_unlock(self);
return ret;
}
typedef struct ALCmmdevCapture {
DERIVE_FROM_TYPE(ALCbackend);
DERIVE_FROM_TYPE(ALCmmdevProxy);
WCHAR *devid;
IMMDevice *mmdev;
IAudioClient *client;
IAudioCaptureClient *capture;
HANDLE NotifyEvent;
HANDLE MsgEvent;
ll_ringbuffer_t *Ring;
volatile int killNow;
althrd_t thread;
} ALCmmdevCapture;
static int ALCmmdevCapture_recordProc(void *arg);
static void ALCmmdevCapture_Construct(ALCmmdevCapture *self, ALCdevice *device);
static void ALCmmdevCapture_Destruct(ALCmmdevCapture *self);
static ALCenum ALCmmdevCapture_open(ALCmmdevCapture *self, const ALCchar *name);
static HRESULT ALCmmdevCapture_openProxy(ALCmmdevCapture *self);
static void ALCmmdevCapture_close(ALCmmdevCapture *self);
static void ALCmmdevCapture_closeProxy(ALCmmdevCapture *self);
static DECLARE_FORWARD(ALCmmdevCapture, ALCbackend, ALCboolean, reset)
static HRESULT ALCmmdevCapture_resetProxy(ALCmmdevCapture *self);
static ALCboolean ALCmmdevCapture_start(ALCmmdevCapture *self);
static HRESULT ALCmmdevCapture_startProxy(ALCmmdevCapture *self);
static void ALCmmdevCapture_stop(ALCmmdevCapture *self);
static void ALCmmdevCapture_stopProxy(ALCmmdevCapture *self);
static ALCenum ALCmmdevCapture_captureSamples(ALCmmdevCapture *self, ALCvoid *buffer, ALCuint samples);
static ALuint ALCmmdevCapture_availableSamples(ALCmmdevCapture *self);
static DECLARE_FORWARD(ALCmmdevCapture, ALCbackend, ClockLatency, getClockLatency)
static DECLARE_FORWARD(ALCmmdevCapture, ALCbackend, void, lock)
static DECLARE_FORWARD(ALCmmdevCapture, ALCbackend, void, unlock)
DECLARE_DEFAULT_ALLOCATORS(ALCmmdevCapture)
DEFINE_ALCMMDEVPROXY_VTABLE(ALCmmdevCapture);
DEFINE_ALCBACKEND_VTABLE(ALCmmdevCapture);
static void ALCmmdevCapture_Construct(ALCmmdevCapture *self, ALCdevice *device)
{
SET_VTABLE2(ALCmmdevCapture, ALCbackend, self);
SET_VTABLE2(ALCmmdevCapture, ALCmmdevProxy, self);
ALCbackend_Construct(STATIC_CAST(ALCbackend, self), device);
ALCmmdevProxy_Construct(STATIC_CAST(ALCmmdevProxy, self));
self->devid = NULL;
self->mmdev = NULL;
self->client = NULL;
self->capture = NULL;
self->NotifyEvent = NULL;
self->MsgEvent = NULL;
self->Ring = NULL;
self->killNow = 0;
}
static void ALCmmdevCapture_Destruct(ALCmmdevCapture *self)
{
ll_ringbuffer_free(self->Ring);
self->Ring = NULL;
if(self->NotifyEvent != NULL)
CloseHandle(self->NotifyEvent);
self->NotifyEvent = NULL;
if(self->MsgEvent != NULL)
CloseHandle(self->MsgEvent);
self->MsgEvent = NULL;
free(self->devid);
self->devid = NULL;
ALCmmdevProxy_Destruct(STATIC_CAST(ALCmmdevProxy, self));
ALCbackend_Destruct(STATIC_CAST(ALCbackend, self));
}
FORCE_ALIGN static int ALCmmdevCapture_recordProc(void *arg)
{
ALCmmdevCapture *self = arg;
ALCdevice *device = STATIC_CAST(ALCbackend, self)->mDevice;
HRESULT hr;
hr = CoInitialize(NULL);
if(FAILED(hr))
{
ERR("CoInitialize(NULL) failed: 0x%08lx\n", hr);
V0(device->Backend,lock)();
aluHandleDisconnect(device);
V0(device->Backend,unlock)();
return 1;
}
althrd_setname(althrd_current(), RECORD_THREAD_NAME);
while(!self->killNow)
{
UINT32 avail;
DWORD res;
hr = IAudioCaptureClient_GetNextPacketSize(self->capture, &avail);
if(FAILED(hr))
ERR("Failed to get next packet size: 0x%08lx\n", hr);
else while(avail > 0 && SUCCEEDED(hr))
{
UINT32 numsamples;
DWORD flags;
BYTE *data;
hr = IAudioCaptureClient_GetBuffer(self->capture,
&data, &numsamples, &flags, NULL, NULL
);
if(FAILED(hr))
{
ERR("Failed to get capture buffer: 0x%08lx\n", hr);
break;
}
ll_ringbuffer_write(self->Ring, (char*)data, numsamples);
hr = IAudioCaptureClient_ReleaseBuffer(self->capture, numsamples);
if(FAILED(hr))
{
ERR("Failed to release capture buffer: 0x%08lx\n", hr);
break;
}
hr = IAudioCaptureClient_GetNextPacketSize(self->capture, &avail);
if(FAILED(hr))
ERR("Failed to get next packet size: 0x%08lx\n", hr);
}
if(FAILED(hr))
{
V0(device->Backend,lock)();
aluHandleDisconnect(device);
V0(device->Backend,unlock)();
break;
}
res = WaitForSingleObjectEx(self->NotifyEvent, 2000, FALSE);
if(res != WAIT_OBJECT_0)
ERR("WaitForSingleObjectEx error: 0x%lx\n", res);
}
CoUninitialize();
return 0;
}
static ALCenum ALCmmdevCapture_open(ALCmmdevCapture *self, const ALCchar *deviceName)
{
HRESULT hr = S_OK;
self->NotifyEvent = CreateEventW(NULL, FALSE, FALSE, NULL);
self->MsgEvent = CreateEventW(NULL, FALSE, FALSE, NULL);
if(self->NotifyEvent == NULL || self->MsgEvent == NULL)
{
ERR("Failed to create message events: %lu\n", GetLastError());
hr = E_FAIL;
}
if(SUCCEEDED(hr))
{
if(deviceName)
{
const DevMap *iter;
if(VECTOR_SIZE(CaptureDevices) == 0)
{
ThreadRequest req = { self->MsgEvent, 0 };
if(PostThreadMessage(ThreadID, WM_USER_Enumerate, (WPARAM)&req, CAPTURE_DEVICE_PROBE))
(void)WaitForResponse(&req);
}
hr = E_FAIL;
#define MATCH_NAME(i) (al_string_cmp_cstr((i)->name, deviceName) == 0)
VECTOR_FIND_IF(iter, const DevMap, CaptureDevices, MATCH_NAME);
if(iter == VECTOR_END(CaptureDevices))
WARN("Failed to find device name matching \"%s\"\n", deviceName);
else
{
ALCdevice *device = STATIC_CAST(ALCbackend,self)->mDevice;
self->devid = strdupW(iter->devid);
al_string_copy(&device->DeviceName, iter->name);
hr = S_OK;
}
#undef MATCH_NAME
}
}
if(SUCCEEDED(hr))
{
ThreadRequest req = { self->MsgEvent, 0 };
hr = E_FAIL;
if(PostThreadMessage(ThreadID, WM_USER_OpenDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
hr = WaitForResponse(&req);
else
ERR("Failed to post thread message: %lu\n", GetLastError());
}
if(FAILED(hr))
{
if(self->NotifyEvent != NULL)
CloseHandle(self->NotifyEvent);
self->NotifyEvent = NULL;
if(self->MsgEvent != NULL)
CloseHandle(self->MsgEvent);
self->MsgEvent = NULL;
free(self->devid);
self->devid = NULL;
ERR("Device init failed: 0x%08lx\n", hr);
return ALC_INVALID_VALUE;
}
else
{
ThreadRequest req = { self->MsgEvent, 0 };
hr = E_FAIL;
if(PostThreadMessage(ThreadID, WM_USER_ResetDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
hr = WaitForResponse(&req);
else
ERR("Failed to post thread message: %lu\n", GetLastError());
if(FAILED(hr))
{
ALCmmdevCapture_close(self);
if(hr == E_OUTOFMEMORY)
return ALC_OUT_OF_MEMORY;
return ALC_INVALID_VALUE;
}
}
return ALC_NO_ERROR;
}
static HRESULT ALCmmdevCapture_openProxy(ALCmmdevCapture *self)
{
ALCdevice *device = STATIC_CAST(ALCbackend, self)->mDevice;
void *ptr;
HRESULT hr;
hr = CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_INPROC_SERVER, &IID_IMMDeviceEnumerator, &ptr);
if(SUCCEEDED(hr))
{
IMMDeviceEnumerator *Enumerator = ptr;
if(!self->devid)
hr = IMMDeviceEnumerator_GetDefaultAudioEndpoint(Enumerator, eCapture, eMultimedia, &self->mmdev);
else
hr = IMMDeviceEnumerator_GetDevice(Enumerator, self->devid, &self->mmdev);
IMMDeviceEnumerator_Release(Enumerator);
Enumerator = NULL;
}
if(SUCCEEDED(hr))
hr = IMMDevice_Activate(self->mmdev, &IID_IAudioClient, CLSCTX_INPROC_SERVER, NULL, &ptr);
if(SUCCEEDED(hr))
{
self->client = ptr;
if(al_string_empty(device->DeviceName))
get_device_name(self->mmdev, &device->DeviceName);
}
if(FAILED(hr))
{
if(self->mmdev)
IMMDevice_Release(self->mmdev);
self->mmdev = NULL;
}
return hr;
}
static void ALCmmdevCapture_close(ALCmmdevCapture *self)
{
ThreadRequest req = { self->MsgEvent, 0 };
if(PostThreadMessage(ThreadID, WM_USER_CloseDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
(void)WaitForResponse(&req);
ll_ringbuffer_free(self->Ring);
self->Ring = NULL;
CloseHandle(self->MsgEvent);
self->MsgEvent = NULL;
CloseHandle(self->NotifyEvent);
self->NotifyEvent = NULL;
free(self->devid);
self->devid = NULL;
}
static void ALCmmdevCapture_closeProxy(ALCmmdevCapture *self)
{
if(self->client)
IAudioClient_Release(self->client);
self->client = NULL;
if(self->mmdev)
IMMDevice_Release(self->mmdev);
self->mmdev = NULL;
}
static HRESULT ALCmmdevCapture_resetProxy(ALCmmdevCapture *self)
{
ALCdevice *device = STATIC_CAST(ALCbackend, self)->mDevice;
WAVEFORMATEXTENSIBLE OutputType;
WAVEFORMATEX *wfx = NULL;
REFERENCE_TIME buf_time;
UINT32 buffer_len;
void *ptr = NULL;
HRESULT hr;
if(self->client)
IAudioClient_Release(self->client);
self->client = NULL;
hr = IMMDevice_Activate(self->mmdev, &IID_IAudioClient, CLSCTX_INPROC_SERVER, NULL, &ptr);
if(FAILED(hr))
{
ERR("Failed to reactivate audio client: 0x%08lx\n", hr);
return hr;
}
self->client = ptr;
buf_time = ((REFERENCE_TIME)device->UpdateSize*device->NumUpdates*10000000 +
device->Frequency-1) / device->Frequency;
OutputType.Format.wFormatTag = WAVE_FORMAT_EXTENSIBLE;
switch(device->FmtChans)
{
case DevFmtMono:
OutputType.Format.nChannels = 1;
OutputType.dwChannelMask = MONO;
break;
case DevFmtStereo:
OutputType.Format.nChannels = 2;
OutputType.dwChannelMask = STEREO;
break;
case DevFmtQuad:
OutputType.Format.nChannels = 4;
OutputType.dwChannelMask = QUAD;
break;
case DevFmtX51:
OutputType.Format.nChannels = 6;
OutputType.dwChannelMask = X5DOT1;
break;
case DevFmtX51Rear:
OutputType.Format.nChannels = 6;
OutputType.dwChannelMask = X5DOT1REAR;
break;
case DevFmtX61:
OutputType.Format.nChannels = 7;
OutputType.dwChannelMask = X6DOT1;
break;
case DevFmtX71:
OutputType.Format.nChannels = 8;
OutputType.dwChannelMask = X7DOT1;
break;
case DevFmtBFormat3D:
return E_FAIL;
}
switch(device->FmtType)
{
case DevFmtUByte:
OutputType.Format.wBitsPerSample = 8;
OutputType.Samples.wValidBitsPerSample = 8;
OutputType.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
break;
case DevFmtShort:
OutputType.Format.wBitsPerSample = 16;
OutputType.Samples.wValidBitsPerSample = 16;
OutputType.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
break;
case DevFmtInt:
OutputType.Format.wBitsPerSample = 32;
OutputType.Samples.wValidBitsPerSample = 32;
OutputType.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
break;
case DevFmtFloat:
OutputType.Format.wBitsPerSample = 32;
OutputType.Samples.wValidBitsPerSample = 32;
OutputType.SubFormat = KSDATAFORMAT_SUBTYPE_IEEE_FLOAT;
break;
case DevFmtByte:
case DevFmtUShort:
case DevFmtUInt:
WARN("%s capture samples not supported\n", DevFmtTypeString(device->FmtType));
return E_FAIL;
}
OutputType.Format.nSamplesPerSec = device->Frequency;
OutputType.Format.nBlockAlign = OutputType.Format.nChannels *
OutputType.Format.wBitsPerSample / 8;
OutputType.Format.nAvgBytesPerSec = OutputType.Format.nSamplesPerSec *
OutputType.Format.nBlockAlign;
OutputType.Format.cbSize = sizeof(OutputType) - sizeof(OutputType.Format);
hr = IAudioClient_IsFormatSupported(self->client,
AUDCLNT_SHAREMODE_SHARED, &OutputType.Format, &wfx
);
if(FAILED(hr))
{
ERR("Failed to check format support: 0x%08lx\n", hr);
return hr;
}
/* FIXME: We should do conversion/resampling if we didn't get a matching format. */
if(wfx->nSamplesPerSec != OutputType.Format.nSamplesPerSec ||
wfx->wBitsPerSample != OutputType.Format.wBitsPerSample ||
wfx->nChannels != OutputType.Format.nChannels ||
wfx->nBlockAlign != OutputType.Format.nBlockAlign)
{
ERR("Failed to get matching format, wanted: %s %s %uhz, got: %d channel%s %d-bit %luhz\n",
DevFmtChannelsString(device->FmtChans), DevFmtTypeString(device->FmtType),
device->Frequency, wfx->nChannels, (wfx->nChannels==1)?"":"s", wfx->wBitsPerSample,
wfx->nSamplesPerSec);
CoTaskMemFree(wfx);
return E_FAIL;
}
if(!MakeExtensible(&OutputType, wfx))
{
CoTaskMemFree(wfx);
return E_FAIL;
}
CoTaskMemFree(wfx);
wfx = NULL;
hr = IAudioClient_Initialize(self->client,
AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
buf_time, 0, &OutputType.Format, NULL
);
if(FAILED(hr))
{
ERR("Failed to initialize audio client: 0x%08lx\n", hr);
return hr;
}
hr = IAudioClient_GetBufferSize(self->client, &buffer_len);
if(FAILED(hr))
{
ERR("Failed to get buffer size: 0x%08lx\n", hr);
return hr;
}
buffer_len = maxu(device->UpdateSize*device->NumUpdates + 1, buffer_len);
ll_ringbuffer_free(self->Ring);
self->Ring = ll_ringbuffer_create(buffer_len, OutputType.Format.nBlockAlign);
if(!self->Ring)
{
ERR("Failed to allocate capture ring buffer\n");
return E_OUTOFMEMORY;
}
hr = IAudioClient_SetEventHandle(self->client, self->NotifyEvent);
if(FAILED(hr))
{
ERR("Failed to set event handle: 0x%08lx\n", hr);
return hr;
}
return hr;
}
static ALCboolean ALCmmdevCapture_start(ALCmmdevCapture *self)
{
ThreadRequest req = { self->MsgEvent, 0 };
HRESULT hr = E_FAIL;
if(PostThreadMessage(ThreadID, WM_USER_StartDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
hr = WaitForResponse(&req);
return SUCCEEDED(hr) ? ALC_TRUE : ALC_FALSE;
}
static HRESULT ALCmmdevCapture_startProxy(ALCmmdevCapture *self)
{
HRESULT hr;
void *ptr;
ResetEvent(self->NotifyEvent);
hr = IAudioClient_Start(self->client);
if(FAILED(hr))
{
ERR("Failed to start audio client: 0x%08lx\n", hr);
return hr;
}
hr = IAudioClient_GetService(self->client, &IID_IAudioCaptureClient, &ptr);
if(SUCCEEDED(hr))
{
self->capture = ptr;
self->killNow = 0;
if(althrd_create(&self->thread, ALCmmdevCapture_recordProc, self) != althrd_success)
{
ERR("Failed to start thread\n");
IAudioCaptureClient_Release(self->capture);
self->capture = NULL;
hr = E_FAIL;
}
}
if(FAILED(hr))
{
IAudioClient_Stop(self->client);
IAudioClient_Reset(self->client);
}
return hr;
}
static void ALCmmdevCapture_stop(ALCmmdevCapture *self)
{
ThreadRequest req = { self->MsgEvent, 0 };
if(PostThreadMessage(ThreadID, WM_USER_StopDevice, (WPARAM)&req, (LPARAM)STATIC_CAST(ALCmmdevProxy, self)))
(void)WaitForResponse(&req);
}
static void ALCmmdevCapture_stopProxy(ALCmmdevCapture *self)
{
int res;
if(!self->capture)
return;
self->killNow = 1;
althrd_join(self->thread, &res);
IAudioCaptureClient_Release(self->capture);
self->capture = NULL;
IAudioClient_Stop(self->client);
IAudioClient_Reset(self->client);
}
ALuint ALCmmdevCapture_availableSamples(ALCmmdevCapture *self)
{
return (ALuint)ll_ringbuffer_read_space(self->Ring);
}
ALCenum ALCmmdevCapture_captureSamples(ALCmmdevCapture *self, ALCvoid *buffer, ALCuint samples)
{
if(ALCmmdevCapture_availableSamples(self) < samples)
return ALC_INVALID_VALUE;
ll_ringbuffer_read(self->Ring, buffer, samples);
return ALC_NO_ERROR;
}
static inline void AppendAllDevicesList2(const DevMap *entry)
{ AppendAllDevicesList(al_string_get_cstr(entry->name)); }
static inline void AppendCaptureDeviceList2(const DevMap *entry)
{ AppendCaptureDeviceList(al_string_get_cstr(entry->name)); }
typedef struct ALCmmdevBackendFactory {
DERIVE_FROM_TYPE(ALCbackendFactory);
} ALCmmdevBackendFactory;
#define ALCMMDEVBACKENDFACTORY_INITIALIZER { { GET_VTABLE2(ALCmmdevBackendFactory, ALCbackendFactory) } }
static ALCboolean ALCmmdevBackendFactory_init(ALCmmdevBackendFactory *self);
static void ALCmmdevBackendFactory_deinit(ALCmmdevBackendFactory *self);
static ALCboolean ALCmmdevBackendFactory_querySupport(ALCmmdevBackendFactory *self, ALCbackend_Type type);
static void ALCmmdevBackendFactory_probe(ALCmmdevBackendFactory *self, enum DevProbe type);
static ALCbackend* ALCmmdevBackendFactory_createBackend(ALCmmdevBackendFactory *self, ALCdevice *device, ALCbackend_Type type);
DEFINE_ALCBACKENDFACTORY_VTABLE(ALCmmdevBackendFactory);
static BOOL MMDevApiLoad(void)
{
static HRESULT InitResult;
if(!ThreadHdl)
{
ThreadRequest req;
InitResult = E_FAIL;
req.FinishedEvt = CreateEventW(NULL, FALSE, FALSE, NULL);
if(req.FinishedEvt == NULL)
ERR("Failed to create event: %lu\n", GetLastError());
else
{
ThreadHdl = CreateThread(NULL, 0, ALCmmdevProxy_messageHandler, &req, 0, &ThreadID);
if(ThreadHdl != NULL)
InitResult = WaitForResponse(&req);
CloseHandle(req.FinishedEvt);
}
}
return SUCCEEDED(InitResult);
}
static ALCboolean ALCmmdevBackendFactory_init(ALCmmdevBackendFactory* UNUSED(self))
{
VECTOR_INIT(PlaybackDevices);
VECTOR_INIT(CaptureDevices);
if(!MMDevApiLoad())
return ALC_FALSE;
return ALC_TRUE;
}
static void ALCmmdevBackendFactory_deinit(ALCmmdevBackendFactory* UNUSED(self))
{
clear_devlist(&PlaybackDevices);
VECTOR_DEINIT(PlaybackDevices);
clear_devlist(&CaptureDevices);
VECTOR_DEINIT(CaptureDevices);
if(ThreadHdl)
{
TRACE("Sending WM_QUIT to Thread %04lx\n", ThreadID);
PostThreadMessage(ThreadID, WM_QUIT, 0, 0);
CloseHandle(ThreadHdl);
ThreadHdl = NULL;
}
}
static ALCboolean ALCmmdevBackendFactory_querySupport(ALCmmdevBackendFactory* UNUSED(self), ALCbackend_Type type)
{
/* TODO: Disable capture with mmdevapi for now, since it doesn't do any
* rechanneling or resampling; if the device is configured for 48000hz
* stereo input, for example, and the app asks for 22050hz mono,
* initialization will fail.
*/
if(type == ALCbackend_Playback /*|| type == ALCbackend_Capture*/)
return ALC_TRUE;
return ALC_FALSE;
}
static void ALCmmdevBackendFactory_probe(ALCmmdevBackendFactory* UNUSED(self), enum DevProbe type)
{
ThreadRequest req = { NULL, 0 };
req.FinishedEvt = CreateEventW(NULL, FALSE, FALSE, NULL);
if(req.FinishedEvt == NULL)
ERR("Failed to create event: %lu\n", GetLastError());
else
{
HRESULT hr = E_FAIL;
if(PostThreadMessage(ThreadID, WM_USER_Enumerate, (WPARAM)&req, type))
hr = WaitForResponse(&req);
if(SUCCEEDED(hr)) switch(type)
{
case ALL_DEVICE_PROBE:
VECTOR_FOR_EACH(const DevMap, PlaybackDevices, AppendAllDevicesList2);
break;
case CAPTURE_DEVICE_PROBE:
VECTOR_FOR_EACH(const DevMap, CaptureDevices, AppendCaptureDeviceList2);
break;
}
CloseHandle(req.FinishedEvt);
req.FinishedEvt = NULL;
}
}
static ALCbackend* ALCmmdevBackendFactory_createBackend(ALCmmdevBackendFactory* UNUSED(self), ALCdevice *device, ALCbackend_Type type)
{
if(type == ALCbackend_Playback)
{
ALCmmdevPlayback *backend;
NEW_OBJ(backend, ALCmmdevPlayback)(device);
if(!backend) return NULL;
return STATIC_CAST(ALCbackend, backend);
}
if(type == ALCbackend_Capture)
{
ALCmmdevCapture *backend;
NEW_OBJ(backend, ALCmmdevCapture)(device);
if(!backend) return NULL;
return STATIC_CAST(ALCbackend, backend);
}
return NULL;
}
ALCbackendFactory *ALCmmdevBackendFactory_getFactory(void)
{
static ALCmmdevBackendFactory factory = ALCMMDEVBACKENDFACTORY_INITIALIZER;
return STATIC_CAST(ALCbackendFactory, &factory);
}
// SPDX-License-Identifier: GPL-2.0
/*
* xHCI host controller driver
*
* Copyright (C) 2013 Xenia Ragiadakou
*
* Author: Xenia Ragiadakou
* Email : [email protected]
*/
#define CREATE_TRACE_POINTS
#include "xhci-trace.h"
EXPORT_TRACEPOINT_SYMBOL_GPL(xhci_dbg_quirks);
{
"format_version": "1.13.0",
"minecraft:biome": {
"description": {
"identifier": "flower_forest"
},
"components": {
"minecraft:climate": {
"downfall": 0.80,
"snow_accumulation": [ 0.0, 0.125 ],
"temperature": 0.7
},
"minecraft:overworld_height": {
"noise_params": [ 0.1, 0.4 ]
},
"minecraft:surface_parameters": {
"sea_floor_depth": 7,
"sea_floor_material": "minecraft:gravel",
"foundation_material": "minecraft:stone",
"mid_material": "minecraft:dirt",
"top_material": "minecraft:grass",
"sea_material": "minecraft:water"
},
"flower_forest": {},
"monster": {},
"mutated": {},
"overworld": {},
"bee_habitat": {}
}
}
}
var searchData=
[
['wait_5ffor_5fcal_175',['wait_for_cal',['../class_a_d_c___module.html#a4fb69b5b2d07c3fc8f5f0bbbf05dfa2a',1,'ADC_Module']]],
['waituntilstable_176',['waitUntilStable',['../namespace_v_r_e_f.html#a108f7c1b5a2073bc092eafcae58575b0',1,'VREF']]],
['write_177',['write',['../class_ring_buffer.html#aa14c47f5bbd73b5f8d2b83a86f01ccf2',1,'RingBuffer::write()'],['../class_ring_buffer_d_m_a.html#ac64de99b9ef9b4c1b1ae36444ce37699',1,'RingBufferDMA::write()']]]
];
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>supershad</title>
<meta name="description" content="Solutions to the entrance exams of the School of Data Analysis (SHAD)">
<link rel="stylesheet" href="../main.css">
<link rel="canonical" href="https://efiminem.github.io/supershad">
<script type="text/javascript" async
src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-MML-AM_CHTML"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<style>
.solution {
cursor: pointer;
text-decoration: underline;
}
.solution-text {
display: none;
margin-top: 10px;
padding: 8px;
border-radius: 4px;
background-color: #f0f0f0;
}
li{
margin: 20px 0;
}
code {
border: 0px;
background-color: transparent;
}
</style>
<script type="text/javascript">
$(document).ready(function(){
$(".solution").click(function(){
var toid = $(this).attr('toid');
$("#" + toid).slideToggle();
});
});
</script>
</head>
<body>
<header class="site-header">
<div class="wrapper">
<a class="site-title" href="/supershad">Solutions to the SHAD entrance exams</a>
<nav class="site-nav">
<a href="#" class="menu-icon">
<svg viewBox="0 0 18 15">
<path fill="#424242" d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.031C17.335,0,18,0.665,18,1.484L18,1.484z"/>
<path fill="#424242" d="M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0c0-0.82,0.665-1.484,1.484-1.484 h15.031C17.335,6.031,18,6.696,18,7.516L18,7.516z"/>
<path fill="#424242" d="M18,13.516C18,14.335,17.335,15,16.516,15H1.484C0.665,15,0,14.335,0,13.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.031C17.335,12.031,18,12.696,18,13.516L18,13.516z"/>
</svg>
</a>
</nav>
</div>
</header>
<div class="wrapper">
<div class="page-content">
<div class="post">
<article class="post-content">
<h4><center>May 27, 2017</center></h4>
<ol><li>During his studies at SHAD, Mikhail solved <script type="math/tex">20</script> classification problems. In each problem he used an ensemble of five distinct
classifiers, and he never applied the same pair of classifiers more than once. What is the minimum possible number of classifiers Mikhail knows?<div class="solution" toid = "text1">Solution</div><div class="solution-text" id="text1">Let <script type="math/tex">k</script> be the number of classifiers. Then all uses of the classifiers can be represented as a <script type="math/tex">20\times k</script> Boolean matrix
in which the entry <script type="math/tex">c_{ij}</script> equals <script type="math/tex">1</script> if the <script type="math/tex">j</script>-th classifier is used to solve the <script type="math/tex">i</script>-th classification problem.
By the problem statement, every row of this matrix must contain exactly <script type="math/tex">5</script> ones, and any pair of ones <script type="math/tex">(j_1,j_2)</script> may occur in only one row.
It is easy to see that <script type="math/tex">k \geqslant 21</script>. Indeed, each row contains <script type="math/tex">\frac{5\cdot 4}{2} = 10</script> distinct pairs of classifiers.
Hence the number of distinct pairs of classifiers used to solve the problems is <script type="math/tex">10\cdot 20 = 200</script>. On the other hand, the total number of pairs of classifiers is <script type="math/tex">\frac{k(k-1)}{2}</script>.
Clearly the condition <script type="math/tex">200 \leqslant \frac{k(k-1)}{2}</script> must hold, so <script type="math/tex">k \geqslant 21</script>. Here is an example of a <script type="math/tex">21\times 21</script> matrix any of whose <script type="math/tex">20\times 21</script> submatrices satisfies the conditions of the problem:
<script type="math/tex; mode=display">
\begin{matrix}
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&1&1&1\\
0&0&0&0&0&0&0&0&0&0&0&0&1&1&1&1&0&0&0&0&1\\
0&0&0&0&0&0&0&0&1&1&1&1&0&0&0&0&0&0&0&0&1\\
0&0&0&0&1&1&1&1&0&0&0&0&0&0&0&0&0&0&0&0&1\\
1&1&1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1\\
0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&1&0\\
0&0&1&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&0&1&0\\
0&1&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&0&0&1&0\\
1&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&0&0&0&1&0\\
1&0&0&0&0&1&0&0&0&0&1&0&0&0&0&1&0&0&1&0&0\\
0&1&0&0&1&0&0&0&0&0&0&1&0&0&1&0&0&0&1&0&0\\
0&0&1&0&0&0&0&1&1&0&0&0&0&1&0&0&0&0&1&0&0\\
0&0&0&1&0&0&1&0&0&1&0&0&1&0&0&0&0&0&1&0&0\\
0&0&1&0&1&0&0&0&0&1&0&0&0&0&0&1&0&1&0&0&0\\
0&0&0&1&0&1&0&0&1&0&0&0&0&0&1&0&0&1&0&0&0\\
1&0&0&0&0&0&1&0&0&0&0&1&0&1&0&0&0&1&0&0&0\\
0&1&0&0&0&0&0&1&0&0&1&0&1&0&0&0&0&1&0&0&0\\
0&1&0&0&0&0&1&0&1&0&0&0&0&0&0&1&1&0&0&0&0\\
1&0&0&0&0&0&0&1&0&1&0&0&0&0&1&0&1&0&0&0&0\\
0&0&0&1&1&0&0&0&0&0&1&0&0&1&0&0&1&0&0&0&0\\
0&0&1&0&0&1&0&0&0&0&0&1&1&0&0&0&1&0&0&0&0
\end{matrix}
</script>
Constructing such a matrix is surprisingly easy. Start with all zeros, traverse the matrix from right to left and top to bottom, and place a <script type="math/tex">1</script> whenever possible (that is, whenever that pair does not occur yet; for the first one in a row, whenever it does not yet form every possible pair).</div><div class="solution" toid = "answer1">Answer</div><div class="solution-text" id="answer1"><script type="math/tex">21</script>.</div></li><li>Does there exist an inner product on the space of <script type="math/tex">n\times n\; (n>1)</script> matrices with respect to which the matrix of all
ones is orthogonal to every upper-triangular matrix?<div class="solution" toid = "text2">Solution</div><div class="solution-text" id="text2">The space has dimension <script type="math/tex">n^2</script> (roughly speaking, stack all the columns into a single one).
Choose a basis as follows:
<ul>
<li> <script type="math/tex">n^2-1</script> matrices with a <script type="math/tex">1</script> in position
<script type="math/tex">i,j</script> and <script type="math/tex">0</script> elsewhere, <script type="math/tex">i,j \in \overline{1,n}, (i,j)\neq(n,1)</script>;
<li> one more matrix: the matrix of all ones.
</ul>
Number the basis vectors as follows: the matrix of all ones gets
index <script type="math/tex">n^2-n</script>, and the remaining vectors get <script type="math/tex">(i-1)\cdot n+j</script>, where <script type="math/tex">i</script> and <script type="math/tex">j</script> are the position of the nonzero
entry.
In this basis, the coordinate with index <script type="math/tex">n^2-n</script> of any upper-triangular matrix is
always zero. From the inner product formula<script type="math/tex; mode=display">\langle a,b\rangle=\sum_{i=1}^{n^2}a_ib_i,</script>
where <script type="math/tex">a_i,b_i</script> are the coordinates in the chosen basis, it follows that the all-ones matrix
is orthogonal to every upper-triangular matrix.</div><div class="solution" toid = "answer2">Answer</div><div class="solution-text" id="answer2">It exists.</div></li><li>Find the sum <script type="math/tex">\sum\limits_{k=0}^\infty (-1)^k \frac{(k+1)^2}{k!}</script>.<div class="solution" toid = "text3">Solution</div><div class="solution-text" id="text3"><script type="math/tex; mode=display">\sum\limits_{k=0}^\infty (-1)^k \frac{(k+1)^2}{k!}=\sum\limits_{k=0}^\infty(-1)^k\frac{k^2}{k!}+\sum\limits_{k=0}^\infty (-1)^k\frac{2k}{k!}+\sum\limits_{k=0}^\infty(-1)^k \frac{1}{k!}=(*)</script>
<ol type="a">
<li>
<script type="math/tex; mode=display">\sum\limits_{k=0}^\infty(-1)^k\frac{k^2}{k!}=\sum\limits_{k=1}^\infty(-1)^k\frac{k}{(k-1)!}=\sum\limits_{k=1}^\infty(-1)^k\frac{k-1+1}{(k-1)!}=</script>
<script type="math/tex; mode=display">=\sum\limits_{k=2}^\infty(-1)^k\frac{1}{(k-2)!}+\sum\limits_{k=1}^\infty(-1)^k\frac{1}{(k-1)!}=</script>
<script type="math/tex; mode=display">(-1)^2\sum\limits_{l=0}^\infty(-1)^l\frac{1}{l!}+(-1)\sum\limits_{l=0}^\infty(-1)^l\frac{1}{l!}=0.</script>
<li>
<script type="math/tex; mode=display">\sum\limits_{k=0}^\infty (-1)^k\frac{2k}{k!}=2\sum\limits_{k=1}^\infty (-1)^k\frac{1}{(k-1)!}=-2\sum\limits_{l=0}^\infty (-1)^l\frac{1}{l!}=-\frac{2}{e}.</script>
</ol>
<script type="math/tex; mode=display">(*)=-\frac{2}{e}+\frac{1}{e}=-\frac{1}{e}.</script></div><div class="solution" toid = "answer3">Answer</div><div class="solution-text" id="answer3"><script type="math/tex">-\dfrac{1}{e}</script>.</div></li><li>Vasya set two neural networks training, each on its own GPU, and went to bed. The training times are independent and uniformly
distributed on the interval <script type="math/tex">[1;3]</script> (hours). At time <script type="math/tex">t</script> the server crashed, and it turned out that only one of the networks had finished training.
What is the probability that <script type="math/tex">t \leqslant \frac32</script>? Assume that the time of the server crash is also uniformly distributed on the interval <script type="math/tex">[1;3]</script>.</li><li>Prove that for an arbitrary <script type="math/tex">a_0 \in (0;2\pi)</script> the sequence defined by
<script type="math/tex; mode=display">a_{n+1} = \int\limits_0^{a_n} \left( 1 + \frac14 \cos^{2n+1} t \right) dt,</script>
has a limit, and find it.</li><li>Let <script type="math/tex">X</script> be a random variable taking values in the interval <script type="math/tex">[0;1]</script>.
Let also <script type="math/tex">m</script> be the median of <script type="math/tex">X</script>. Consider the <i>binarization</i> of this variable
<script type="math/tex; mode=display">\beta (X) = \begin{cases}1,& \textrm{if }X \geqslant m;\\0,& \text{otherwise.}\end{cases}</script>
Is it true that the variance of <script type="math/tex">\beta (X)</script> is at least the variance of <script type="math/tex">X</script>? What if <script type="math/tex">X</script> is continuous?<br>
Here the median means a number <script type="math/tex">m</script> for which <script type="math/tex">P\{X \leqslant m\} = P\{X \geqslant m\}</script>.</li><li>All numbers from <script type="math/tex">1</script> to <script type="math/tex">n=2^k - 1</script> are written, in a way unknown to us,
in a complete binary tree of height <script type="math/tex">k</script>. We say that a number <script type="math/tex">t</script> lies between numbers <script type="math/tex">i</script> and <script type="math/tex">j</script>
in this tree if, when <script type="math/tex">t</script> is removed from the tree, <script type="math/tex">i</script> and <script type="math/tex">j</script> end up in different components. Propose an algorithm that determines which number is at the root of the tree in <script type="math/tex">O(n \log n)</script> operations, using queries of the form "Does <script type="math/tex">t</script> lie between
<script type="math/tex">i</script> and <script type="math/tex">j</script>?".</li><li>The operator <script type="math/tex">y \frac{\partial}{\partial x} + x \frac{\partial}{\partial y}</script> acts on the space <script type="math/tex">\mathbb{R} [x,y]</script> of polynomials with real coefficients in the variables
<script type="math/tex">x</script> and <script type="math/tex">y</script>.<br>
(a) Prove that every integer is an eigenvalue of it.<br>
(b) Find all its eigenvalues. Is it diagonalizable?</li></ol>
</article>
</div>
</div>
</div>
<footer class="site-footer">
<div class="wrapper">
<div class="footer-col-wrapper">
<div class="footer-col footer-col-1">
<ul class="contact-list">
<li><a href="https://github.com/efiminem/supershad">supershad</a></li>
</ul>
</div>
<div class="footer-col footer-col-2">
<ul class="social-media-list">
<li>
<a href="https://github.com/efiminem">
<span class="icon icon--github">
<svg viewBox="0 0 16 16">
<path fill="#828282" d="M7.999,0.431c-4.285,0-7.76,3.474-7.76,7.761 c0,3.428,2.223,6.337,5.307,7.363c0.388,0.071,0.53-0.168,0.53-0.374c0-0.184-0.007-0.672-0.01-1.32 c-2.159,0.469-2.614-1.04-2.614-1.04c-0.353-0.896-0.862-1.135-0.862-1.135c-0.705-0.481,0.053-0.472,0.053-0.472 c0.779,0.055,1.189,0.8,1.189,0.8c0.692,1.186,1.816,0.843,2.258,0.645c0.071-0.502,0.271-0.843,0.493-1.037 C4.86,11.425,3.049,10.76,3.049,7.786c0-0.847,0.302-1.54,0.799-2.082C3.768,5.507,3.501,4.718,3.924,3.65 c0,0,0.652-0.209,2.134,0.796C6.677,4.273,7.34,4.187,8,4.184c0.659,0.003,1.323,0.089,1.943,0.261 c1.482-1.004,2.132-0.796,2.132-0.796c0.423,1.068,0.157,1.857,0.077,2.054c0.497,0.542,0.798,1.235,0.798,2.082 c0,2.981-1.814,3.637-3.543,3.829c0.279,0.24,0.527,0.713,0.527,1.437c0,1.037-0.01,1.874-0.01,2.129 c0,0.208,0.14,0.449,0.534,0.373c3.081-1.028,5.302-3.935,5.302-7.362C15.76,3.906,12.285,0.431,7.999,0.431z"/>
</svg>
</span>
<span class="username">efiminem</span>
</a>
</li>
</ul>
</div>
<div class="footer-col footer-col-3">
<p class="text">Solutions to the entrance exams of the School of Data Analysis (SHAD)</p>
</div>
</div>
</div>
</footer>
</body>
</html>
java110.mappingPath=classpath:mapper/*/*.xml
# Single file max size
java110.ftp.multipart.maxFileSize=100Mb
# All files max size
java110.ftp.multipart.maxRequestSize=100Mb
#ftp use
java110.ftpServer = 192.168.0.104
java110.ftpPort = 6069
java110.ftpUserName = uftp
java110.ftpUserPassword = 123456
java110.ftpPath = hc/
EXPORTS
VSTPluginMain
main=VSTPluginMain
; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> AXFR jewelry. @demand.gamma.aridns.net.au. +nocomments +nocmd +noquestion +nostats +time=15
;; global options: +cmd
; Transfer failed.
// AttributeDirectory.h
#ifndef NET_FS_ATTRIBUTE_DIRECTORY_H
#define NET_FS_ATTRIBUTE_DIRECTORY_H
#include <fs_attr.h>
#include "SLList.h"
class BNode;
// attribute directory status
enum {
ATTRIBUTE_DIRECTORY_NOT_LOADED,
ATTRIBUTE_DIRECTORY_VALID,
ATTRIBUTE_DIRECTORY_TOO_BIG,
};
// Attribute
class Attribute : public SLListLinkImpl<Attribute> {
Attribute(const char* name,
const attr_info& info, const void* data);
~Attribute();
public:
static status_t CreateAttribute(const char* name,
const attr_info& info, const void* data,
Attribute** attribute);
static void DeleteAttribute(Attribute* attribute);
const char* GetName() const;
void GetInfo(attr_info* info) const;
uint32 GetType() const;
off_t GetSize() const;
const void* GetData() const;
private:
attr_info fInfo;
char fDataAndName[1];
};
// AttributeDirectory
class AttributeDirectory {
public:
AttributeDirectory();
virtual ~AttributeDirectory();
uint32 GetAttrDirStatus() const;
bool IsAttrDirValid() const;
status_t LoadAttrDir();
void ClearAttrDir();
status_t AddAttribute(const char* name,
const attr_info& info, const void* data);
bool RemoveAttribute(const char* name);
void RemoveAttribute(Attribute* attribute);
status_t UpdateAttribute(const char* name, bool* removed,
attr_info* info, const void** data);
Attribute* GetAttribute(const char* name) const;
Attribute* GetFirstAttribute() const;
Attribute* GetNextAttribute(Attribute* attribute) const;
virtual status_t OpenNode(BNode& node) = 0;
private:
status_t _LoadAttribute(BNode& node, const char* name,
attr_info& info, void* data,
bool& dataLoaded);
private:
SLList<Attribute> fAttributes;
uint32 fStatus;
};
#endif // NET_FS_ATTRIBUTE_DIRECTORY_H
// ==========================================================================
// Project: SproutCore - JavaScript Application Framework
// Copyright: ©2006-2011 Strobe Inc. and contributors.
// portions copyright @2009 Apple Inc.
// License: Licensed under MIT license (see license.js)
// ==========================================================================
/*global module test htmlbody ok equals same stop start */
var pane, view, view1, view2 ;
suite("SC.ScrollerView",{
setup: function() {
SC.RunLoop.begin();
pane = SC.MainPane.create({
childViews: [
SC.ScrollerView.extend({
}),
SC.ScrollerView.extend({
minimum:10,
maximum:100,
isEnabled:false,
layoutDirection: SC.LAYOUT_HORIZONTAL
}),
SC.ScrollerView.extend({
layout:{ top: 0, bottom: 0, right: 0, width: 20 },
minimum:0,
maximum:100,
hasButtons: false
})
]
});
pane.append(); // make sure there is a layer...
SC.RunLoop.end();
view = pane.childViews[0];
view1= pane.childViews[1];
view2= pane.childViews[2];
},
teardown: function() {
pane.remove();
pane = view = null ;
}
});
test('value', function() {
equals(view1.get('maximum'), 100, 'precond - view has maximum of 100');
equals(view1.get('minimum'), 10, 'precond - view has a minimum of 10');
view1.set('value', 300);
equals(view1.get('value'), view1.get('maximum'), 'value is set to maximum if attempting to set higher than maximum');
view1.set('maximum', 50);
equals(view1.get('value'), view1.get('maximum'), 'value should change if maximum changes');
view1.set('value', 0);
equals(view1.get('value'), view1.get('minimum'), 'value is set to minimum if attempt to set lower than minimum');
view1.set('minimum', 15);
equals(view1.get('value'), view1.get('minimum'), 'value should change if minimum changes');
});
test('isEnabled', function() {
ok(!view.$().hasClass('disabled'), 'scrollers should be enabled by default');
view.set('isEnabled', false);
SC.RunLoop.begin().end();
ok(view.$().hasClass('disabled'), 'scrollers should have disabled class set if isEnabled is set to false');
var layer = view.$('.button-down'),
evt = SC.Event.simulateEvent(layer, 'mousedown'),
scrollerHeight = view.get('frame').height;
view.set('maximum', scrollerHeight+100);
view.set('value', 500);
SC.Event.trigger(layer, 'mousedown', [evt]);
equals(view.get('value'), 500, 'scrollers should not respond to mouse events if they are disabled');
view.set('isEnabled', true);
SC.Event.trigger(layer, 'mousedown', [evt]);
ok(view.get('value') < (scrollerHeight+100), 'scrollers should respond to mouse events if they are not disabled');
});
test('layoutDirection', function() {
equals(view.get('layoutDirection'), SC.LAYOUT_VERTICAL, 'scrollers should default to vertical direction');
ok(view.$().hasClass('sc-vertical'), 'scroller with vertical layoutDirection has sc-vertical class name');
equals(view1.get('layoutDirection'), SC.LAYOUT_HORIZONTAL, 'precond - view1 has horizontal direction');
ok(view1.$().hasClass('sc-horizontal'), 'scroller with horizontal layoutDirection has sc-horizontal class name');
});
test('hasButtons', function() {
equals(view.get('hasButtons'), true, 'scrollers should default to having buttons');
equals(view.$('.endcap').length, 0, 'scrollers with buttons should not have an endcap');
equals(view.$('.button-bottom, .button-top').length, 2, 'scrollers with buttons should have an up and a down button');
equals(view2.$('.endcap').length, 1, 'scrollers with buttons should have an endcap');
equals(view2.$('.button-bottom, .button-top').length, 0, 'scrollers with buttons should not have an up or a down button');
});
'use strict'
function formatHostname(hostname) {
// canonicalize the hostname, so that 'oogle.com' won't match 'google.com'
return hostname.replace(/^\.*/, '.').toLowerCase()
}
function parseNoProxyZone(zone) {
zone = zone.trim().toLowerCase()
var zoneParts = zone.split(':', 2)
, zoneHost = formatHostname(zoneParts[0])
, zonePort = zoneParts[1]
, hasPort = zone.indexOf(':') > -1
return {hostname: zoneHost, port: zonePort, hasPort: hasPort}
}
function uriInNoProxy(uri, noProxy) {
var port = uri.port || (uri.protocol === 'https:' ? '443' : '80')
, hostname = formatHostname(uri.hostname)
, noProxyList = noProxy.split(',')
// iterate through the noProxyList until it finds a match.
return noProxyList.map(parseNoProxyZone).some(function(noProxyZone) {
var isMatchedAt = hostname.indexOf(noProxyZone.hostname)
, hostnameMatched = (
isMatchedAt > -1 &&
(isMatchedAt === hostname.length - noProxyZone.hostname.length)
)
if (noProxyZone.hasPort) {
return (port === noProxyZone.port) && hostnameMatched
}
return hostnameMatched
})
}
function getProxyFromURI(uri) {
// Decide the proper request proxy to use based on the request URI object and the
// environmental variables (NO_PROXY, HTTP_PROXY, etc.)
// respect NO_PROXY environment variables (see: http://lynx.isc.org/current/breakout/lynx_help/keystrokes/environments.html)
var noProxy = process.env.NO_PROXY || process.env.no_proxy || ''
// if the noProxy is a wildcard then return null
if (noProxy === '*') {
return null
}
// if the noProxy is not empty and the uri is found return null
if (noProxy !== '' && uriInNoProxy(uri, noProxy)) {
return null
}
// Check for HTTP or HTTPS Proxy in environment Else default to null
if (uri.protocol === 'http:') {
return process.env.HTTP_PROXY ||
process.env.http_proxy || null
}
if (uri.protocol === 'https:') {
return process.env.HTTPS_PROXY ||
process.env.https_proxy ||
process.env.HTTP_PROXY ||
process.env.http_proxy || null
}
// if none of that works, return null
// (What uri protocol are you using then?)
return null
}
module.exports = getProxyFromURI
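For reference, a standalone sketch of the NO_PROXY zone matching implemented above. The two helpers are inlined copies of the logic in this file so the snippet runs on its own; the sample URI objects are illustrative, not taken from any test suite.

```javascript
// Inlined copy of the zone-matching logic above, for standalone illustration.
function formatHostname(hostname) {
  // canonicalize the hostname, so that 'oogle.com' won't match 'google.com'
  return hostname.replace(/^\.*/, '.').toLowerCase()
}

function uriInNoProxy(uri, noProxy) {
  var port = uri.port || (uri.protocol === 'https:' ? '443' : '80')
  var hostname = formatHostname(uri.hostname)
  return noProxy.split(',').some(function (zone) {
    var parts = zone.trim().toLowerCase().split(':', 2)
    var zoneHost = formatHostname(parts[0])
    var at = hostname.indexOf(zoneHost)
    var hostMatched = at > -1 && at === hostname.length - zoneHost.length
    // a port-qualified zone only matches when the ports agree too
    return parts[1] ? (port === parts[1] && hostMatched) : hostMatched
  })
}

var uri = { protocol: 'http:', hostname: 'www.google.com', port: '' }
console.log(uriInNoProxy(uri, 'google.com'))      // true  (domain suffix match)
console.log(uriInNoProxy(uri, 'oogle.com'))       // false (not a real suffix)
console.log(uriInNoProxy(uri, 'google.com:8080')) // false (port 80 != 8080)
```

Note the leading-dot trick in formatHostname: both the hostname and each zone are prefixed with '.', so a zone can only match at a label boundary, never in the middle of a label.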
{% extends 'manage/manage_base.html' %}
{% set page='tweets' -%}
{% block manage_title %}
All Event Tweets
{% endblock %}
{% block site_js %}
{{ super() }}
<script src="{{ static('angular/angular.min.js') }}"></script>
<script src="{{ static('angular/angular-moment.min.js') }}"></script>
{% javascript 'tweetmanager' %}
{% endblock %}
{% block site_css %}
{{ super() }}
<link rel="stylesheet" href="{{ static('manage/css/all-event-tweets.css') }}" />
{% endblock %}
{% block manage_content %}
<div ng-app="tweetmanagerApp" ng-controller="TweetManagerController">
<p ng-if="loading" class="loading">
<img src="{{ static('img/spinner.gif') }}">
<span class="blinking">Loading all tags...</span>
</p>
{% raw %}
<table class="table table-striped table-bordered" ng-class="{hidden: loading}" ng-cloak>
<thead>
<tr>
<th><a href="#" ng-click="sortBy('event.title')" ng-class="{active: sort_by=='event.title'}">Event</a></th>
<th>Text</th>
<th><a href="#" ng-click="sortBy('send_date', true)" ng-class="{active: sort_by=='send'}">Sent/Sending</a></th>
<th>Creator</th>
<th>
{{ filtered_items.length }} tweets found
</th>
</tr>
<tr>
<td>
<input type="search" class="form-control" ng-model="search_event" placeholder="Search by event">
</td>
<td>
<input type="search" class="form-control" ng-model="search_text" placeholder="Search by text">
</td>
<td>
<select name="status" ng-model="search_status">
<option value="">All</option>
<option value="future">Future</option>
<option value="past">Past</option>
<option value="failed">Failed</option>
</select>
</td>
<td>
</td>
<td>
<a href="#" class="btn btn-default btn-xs btn-primary"
ng-if="hasFilter()"
ng-click="clearFilter()">Clear filter</a>
</td>
</tr>
</thead>
<tbody>
<tr ng-repeat="tweet in filtered_items = (tweets | filter:filterBySearch | filter:filterByFuture) | orderBy:sort_by:sort_by_desc | startFrom:currentPage*pageSize | limitTo:pageSize">
<td >
<a class="truncate-title" href="{{ url('manage:event_edit', tweet.event.pk) }}" title="{{ tweet.event.title }}">{{ tweet.event.title }}</a>
</td>
<td class="text">{{ tweet.text | limitTo : 100 }}</td>
<td>
<a ng-if="tweet.tweet_id" href="{{ tweet.full_tweet_url }}">
<i class="glyphicon glyphicon-ok icon-positive"></i>
<time title="{{ formatDate(tweet.sent_date) }}" am-time-ago="tweet.sent_date"></time>
</a>
<span ng-if="!tweet.tweet_id && tweet.sent_date">
<i class="glyphicon glyphicon-warning-sign icon-negative"></i>
Failed to send {{ tweet.sent_date }}
</span>
<span ng-if="!tweet.tweet_id && !tweet.sent_date">
<span ng-if="!tweet.event._is_scheduled">Needs to be scheduled first</span>
<span ng-if="!tweet.event._is_scheduled && tweet.event._needs_approval">Needs to be approved first</span>
<span ng-if="tweet.event._is_scheduled && !tweet.event._needs_approval">
Not yet sent
<i class="glyphicon glyphicon-time"></i>
<time title="{{ formatDate(tweet.send_date) }}" am-time-ago="tweet.send_date"></time>
</span>
</span>
</td>
<td>
<span ng-if="tweet.creator">
{{ tweet.creator.email }}
</span>
<span ng-if="!tweet.creator">
<i>n/a</i>
</span>
</td>
<td>
<a class="btn btn-default btn-sm" href="{{ url('manage:event_tweets', tweet.event.pk) }}">Manage tweets</a>
</td>
</tr>
</tbody>
</table>
{% endraw %}
{% include "manage/_angular_pagesize.html" %}
{% include "manage/_angular_paginate.html" %}
</div>
{% endblock %}
| {
"pile_set_name": "Github"
} |
# $FreeBSD$
PORTREVISION= 0
CATEGORIES= x11
PKGNAMESUFFIX= 1-plugins-ximagesrc
COMMENT= GStreamer X source plugin
GST_PLUGIN= x
DIST= good
MASTERDIR= ${.CURDIR}/../../multimedia/gstreamer1-plugins
.include "${MASTERDIR}/Makefile"
| {
"pile_set_name": "Github"
} |
#include <cstdio>
int main() {
int n, m, t, d, a[2];
scanf("%d%d", &n, &m);
a[0] = a[1] = 0;
for (int i = 0; i < n; ++i) {
scanf("%d%d", &t, &d);
--t;
if (d == -1) {
a[t] += 1;
} else if (d <= m) {
a[t] += 2;
} else {
a[t] += 3;
}
}
printf("%d:%d\n", a[0], a[1]);
return 0;
}
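In case the intent of the branches above is unclear: each of the `n` entries names a team `t` (1 or 2) and a value `d`, and the team earns 1 point when `d == -1`, 2 points when `d <= m`, and 3 points otherwise. A quick sketch of the same rule (Python, with hypothetical names; the original problem statement isn't shown here):

```python
def score(problems, m):
    # problems: list of (team, d) pairs with team in {1, 2}.
    # d == -1 is worth 1 point, d <= m is worth 2, anything harder 3.
    totals = [0, 0]
    for team, d in problems:
        if d == -1:
            totals[team - 1] += 1
        elif d <= m:
            totals[team - 1] += 2
        else:
            totals[team - 1] += 3
    return "%d:%d" % tuple(totals)
```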
| {
"pile_set_name": "Github"
} |
/* This is a modified version of va-arg-9.c to test va_copy. */
#include <stdarg.h>
#include <stdlib.h> /* for abort () and exit () */
#ifndef va_copy
#define va_copy __va_copy
#endif
extern __SIZE_TYPE__ strlen (const char *);
int
to_hex (unsigned int a)
{
static char hex[] = "0123456789abcdef";
if (a > 15)
abort ();
return hex[a];
}
void
fap (int i, char* format, va_list ap)
{
va_list apc;
char *formatc;
va_copy (apc, ap);
formatc = format;
if (strlen (format) != 16 - i)
abort ();
while (*format)
if (*format++ != to_hex (va_arg (ap, int)))
abort ();
while (*formatc)
if (*formatc++ != to_hex (va_arg (apc, int)))
abort ();
}
void
f0 (char* format, ...)
{
va_list ap;
va_start (ap, format);
fap(0, format, ap);
va_end(ap);
}
void
f1 (int a1, char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(1, format, ap);
va_end(ap);
}
void
f2 (int a1, int a2, char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(2, format, ap);
va_end(ap);
}
void
f3 (int a1, int a2, int a3, char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(3, format, ap);
va_end(ap);
}
void
f4 (int a1, int a2, int a3, int a4, char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(4, format, ap);
va_end(ap);
}
void
f5 (int a1, int a2, int a3, int a4, int a5,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(5, format, ap);
va_end(ap);
}
void
f6 (int a1, int a2, int a3, int a4, int a5,
int a6,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(6, format, ap);
va_end(ap);
}
void
f7 (int a1, int a2, int a3, int a4, int a5,
int a6, int a7,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(7, format, ap);
va_end(ap);
}
void
f8 (int a1, int a2, int a3, int a4, int a5,
int a6, int a7, int a8,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(8, format, ap);
va_end(ap);
}
void
f9 (int a1, int a2, int a3, int a4, int a5,
int a6, int a7, int a8, int a9,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(9, format, ap);
va_end(ap);
}
void
f10 (int a1, int a2, int a3, int a4, int a5,
int a6, int a7, int a8, int a9, int a10,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(10, format, ap);
va_end(ap);
}
void
f11 (int a1, int a2, int a3, int a4, int a5,
int a6, int a7, int a8, int a9, int a10,
int a11,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(11, format, ap);
va_end(ap);
}
void
f12 (int a1, int a2, int a3, int a4, int a5,
int a6, int a7, int a8, int a9, int a10,
int a11, int a12,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(12, format, ap);
va_end(ap);
}
void
f13 (int a1, int a2, int a3, int a4, int a5,
int a6, int a7, int a8, int a9, int a10,
int a11, int a12, int a13,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(13, format, ap);
va_end(ap);
}
void
f14 (int a1, int a2, int a3, int a4, int a5,
int a6, int a7, int a8, int a9, int a10,
int a11, int a12, int a13, int a14,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(14, format, ap);
va_end(ap);
}
void
f15 (int a1, int a2, int a3, int a4, int a5,
int a6, int a7, int a8, int a9, int a10,
int a11, int a12, int a13, int a14, int a15,
char* format, ...)
{
va_list ap;
va_start(ap, format);
fap(15, format, ap);
va_end(ap);
}
int main ()
{
char *f = "0123456789abcdef";
f0 (f+0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
f1 (0, f+1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
f2 (0, 1, f+2, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
f3 (0, 1, 2, f+3, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
f4 (0, 1, 2, 3, f+4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
f5 (0, 1, 2, 3, 4, f+5, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
f6 (0, 1, 2, 3, 4, 5, f+6, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
f7 (0, 1, 2, 3, 4, 5, 6, f+7, 7, 8, 9, 10, 11, 12, 13, 14, 15);
f8 (0, 1, 2, 3, 4, 5, 6, 7, f+8, 8, 9, 10, 11, 12, 13, 14, 15);
f9 (0, 1, 2, 3, 4, 5, 6, 7, 8, f+9, 9, 10, 11, 12, 13, 14, 15);
f10 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, f+10, 10, 11, 12, 13, 14, 15);
f11 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, f+11, 11, 12, 13, 14, 15);
f12 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, f+12, 12, 13, 14, 15);
f13 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, f+13, 13, 14, 15);
f14 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, f+14, 14, 15);
f15 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, f+15, 15);
exit (0);
}
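The check driving every `f0`..`f15` variant above is the same: each remaining variadic argument must equal the hex digit at its position in the format string, and the format's length must be 16 minus the number of fixed arguments. A Python sketch of that invariant (names are mine):

```python
def check_format(offset, fmt, args):
    # fmt holds the hex digits expected for the remaining arguments;
    # its length must be 16 minus the number of fixed arguments consumed.
    hex_digits = "0123456789abcdef"
    if len(fmt) != 16 - offset:
        raise AssertionError("format length mismatch")
    for expected, value in zip(fmt, args):
        if expected != hex_digits[value]:
            raise AssertionError("argument mismatch")
    return True
```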
| {
"pile_set_name": "Github"
} |
.. include:: ../README.rst
.. include:: ../CREDITS.txt
.. include:: ../CHANGES.txt
.. include:: ../FAQ.rst
| {
"pile_set_name": "Github"
} |
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER = a4
BUILDDIR = _build
STATICDIR = _static
DOWNLOADDIR = _download
GHPAGESDIR = ../docs
NAME = WebAssembly
# Hack until we have moved to more recent Sphinx.
OLDMATHJAX = https://cdn.mathjax.org/mathjax/latest/MathJax.js
NEWMATHJAX = https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: usage
usage:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " pdf to make standalone PDF file"
@echo " all to make both"
@echo " publish to make all and push to gh-pages"
@echo " help to see more options"
.PHONY: help
help:
@echo "Usage: \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " pdf to make standalone PDF file"
@echo " all to make both"
@echo " publish to make all and push to gh-pages"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " epub3 to make an epub3"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
@echo " dummy to check syntax errors of document sources"
.PHONY: publish
publish: all deploy
.PHONY: deploy
deploy:
GIT_DEPLOY_DIR=$(BUILDDIR)/html sh util/deploy.sh
.PHONY: all
all: pdf html
.PHONY: pdf
pdf: latexpdf
mkdir -p $(BUILDDIR)/html/$(DOWNLOADDIR)
ln -f $(BUILDDIR)/latex/$(NAME).pdf $(BUILDDIR)/html/$(DOWNLOADDIR)/$(NAME).pdf
.PHONY: clean
clean:
rm -rf $(BUILDDIR)
rm -rf $(STATICDIR)
.PHONY: html
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
for file in `ls $(BUILDDIR)/html/*.html`; \
do \
sed s:BASEDIR:.:g <$$file >$$file.out; \
mv -f $$file.out $$file; \
sed s~$(OLDMATHJAX)~$(NEWMATHJAX)~g <$$file >$$file.out; \
mv -f $$file.out $$file; \
done
for file in `ls $(BUILDDIR)/html/*/*.html`; \
do \
sed s:BASEDIR:..:g <$$file >$$file.out; \
sed 's; <body; <script type="text/javascript">MathJax.Hub.Config({TeX: {MAXBUFFER: 30*1024}})</script><body;' \
<$$file.out >$$file; \
rm -f $$file.out; \
sed s~$(OLDMATHJAX)~$(NEWMATHJAX)~g <$$file >$$file.out; \
mv -f $$file.out $$file; \
done
@echo
@echo "Build finished. The HTML pages are in `pwd`/$(BUILDDIR)/html."
.PHONY: dirhtml
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
.PHONY: singlehtml
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
.PHONY: pickle
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
.PHONY: json
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
.PHONY: htmlhelp
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
.PHONY: qthelp
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/WebAssembly.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/WebAssembly.qhc"
.PHONY: applehelp
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
.PHONY: devhelp
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/WebAssembly"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/WebAssembly"
@echo "# devhelp"
.PHONY: epub
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
.PHONY: epub3
epub3:
$(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3
@echo
@echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3."
.PHONY: latex
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
.PHONY: latexpdf
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: latexpdfja
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: text
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
.PHONY: man
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
.PHONY: texinfo
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
.PHONY: info
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
.PHONY: gettext
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
.PHONY: changes
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
.PHONY: linkcheck
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
.PHONY: doctest
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
.PHONY: coverage
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
.PHONY: xml
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
.PHONY: pseudoxml
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
.PHONY: dummy
dummy:
$(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy
@echo
@echo "Build finished. Dummy builder generates no files."
| {
"pile_set_name": "Github"
} |
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<meta content="text/html" charset="UTF-8" http-equiv="Content-Type">
<meta name="GENERATOR" content="MSHTML 9.00.8112.16476"></meta>
<base href="v8config://a8dd2e89-c043-4f61-ab58-dafb034cd4c1/mdobject/id58570c20-db1f-43f1-af51-47fbbb1a4807/038b5c85-fb1c-4082-9c4c-e69f8928bf3a"></base>
</head>
<body>
<h1>1C:GitConverter</h1>
<p><a href="https://its.1c.ru/db/metod8dev#content:5937:hdoc">Key features</a></p>
<h2>Initial setup</h2>
<h3>Configuring the GitConverter infobase</h3>
<ol>
<li>Deploy the GitConverter infobase on a 1C server. File mode should be used for demonstration purposes only.</li>
<li style="box-sizing: border-box; margin-top: 0.25em;">Fill in the <span style="box-sizing: border-box; font-weight: 600;">"Path to platform versions on the server"</span> constant, pointing to where the Designer files 1cv8(.exe) are located, in the format: <code style="box-sizing: border-box; font-family: SFMono-Regular, Consolas, "Liberation Mono", Menlo, Courier, monospace; font-size: 13.600000381469727px; padding: 0.2em 0.4em; margin: 0px; background-color: rgba(27, 31, 35, 0.0470588); border-top-left-radius: 3px; border-top-right-radius: 3px; border-bottom-right-radius: 3px; border-bottom-left-radius: 3px;">C:\Program files (x86)\1cv8\%ВерсияПлатформы%\bin</code> where the %ВерсияПлатформы% (platform version) parameter will be replaced with the current repository version from the settings.</li>
<li style="box-sizing: border-box; margin-top: 0.25em;">To cap resource usage, you can enable the <span style="box-sizing: border-box; font-weight: 600;">"Use execution queues"</span> constant: the number of queues lets you balance the load on the server.</li>
</ol>
<h3>Configuring the 1C server</h3>
<p>For the GitConverter infobase on the 1C server, it is recommended to configure deletion or archiving of the event log, or to disable it entirely, because event intensity in this infobase can be very high while the value of historical event log data is low.</p>
<p>To make deleting and archiving the event log easy, you can switch it to the legacy format. To do so, copy an empty file named <b>1Cv8.lgf</b> into the infobase's event log directory. For large projects this setup (deleting/backing up the event log files) is recommended.</p>
<p>It is also recommended to restrict event logging to errors only. To do so, choose <b>Designer - Administration - Event log settings... = Log errors</b> and, in the dialog that opens, set the minimum level of detail you need.</p>
<p>Settings of the configuration repository and of the parameters for synchronization with the git repository.</p>
<h2>Configuring conversion of a 1C configuration repository</h2>
<p>Using the 1C configuration repository server is recommended.</p>
<p>For optimal repository server performance, set the <b>Global cache size</b> in "Administration" to 1.5-2 times the number of parallel version-fetching threads (if "repository copies" are used) <b>* the size of one version, in MB.</b></p>
<h3>Conversion parameters</h3>
<ul>
<li>Specify the repository address. When using a repository server, it is recommended to set the "Global configuration version cache" parameter in repository administration so that the number of cached versions exceeds the number of versions fetched in parallel.<br></li>
<li>Specify the platform version; 8.3.9.1818 or later is recommended.<br></li>
<li>Specify the starting version in the configuration repository if the current repository has been truncated and its first version is greater than 1.<br></li>
<li>If an end version is specified, new versions will not be requested.<br></li><li>Specify the run schedule.<br></li>
<li>Specify the <b>Version export directory</b>, where temporary directories with version numbers and exported data will be created.<br></li>
<li>It is advisable to limit the number of versions being prepared (exported); choose a value based on <b>the infobase size per version + the size of the XML export</b> and the capacity of the hard disk.<br></li>
<li>Specify the <b>Minimum metadata count</b>, the number of files and directories in the XML export, used to verify that all files were exported. Setting it to 90-95% of the current version is recommended, to allow for metadata deletion (i.e. a reduced file count).<br></li>
<li>Do not enable <b>Delete vendor configurations</b> if you plan to load the configuration from files and update vendor configurations. This option keeps the Git repository small by not storing large *.cf files.<br></li>
<li>Enable <b>Delete temporary version data after commit</b>; recommended for real projects.<br></li>
<li><b>Export changes only</b> allows exporting only the changes on Platform 8.3.10 and later. Available when <b>"Queues"</b> are used.<br></li>
<li>It is recommended to place the <b>Local Git directory</b> on the same logical disk as the <b>Version export directory</b>; versions will then be moved using OS facilities, otherwise they will be copied.<br></li>
<li><b>Export directory in the repository</b> is the relative path to the export directory inside the repository. Specify the project name for future compatibility with a 1C:EDT workspace, or leave it empty.<br></li>
<li>Set the <b>Perform commits</b> flag to actually make commits. Disabling it can be useful to temporarily pause the converter.<br></li>
<li>Set the <b>Process all queues</b> flag if queues are used in the infobase.<br></li>
</ul>
<p>Click the "Create git repository" button before converting: this command initializes the repository and performs initial setup specific to 1C configurations.</p>
</body>
</html> | {
"pile_set_name": "Github"
} |
using System;
namespace SharpRepository.Tests.Integration.Data
{
public class CouchDbDatabaseNameFactory
{
private static int _num = 1;
public static string Build(string type)
{
var databaseName = String.Concat(type.ToLower(), _num);
_num++; // since it goes through and calls this for each test before running them, we need a different database for each test
return databaseName;
}
}
} | {
"pile_set_name": "Github"
} |
<?php
/* Copyright (C) 2010-2018 Regis Houssin <[email protected]>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <https://www.gnu.org/licenses/>.
*/
// Protection to avoid direct call of template
if (empty($conf) || !is_object($conf))
{
print "Error, template page can't be called as URL";
exit;
}
$object = $GLOBALS['object'];
$statutarray = array('1' => $langs->trans("OnSell"), '0' => $langs->trans("NotOnSell"));
?>
<!-- BEGIN PHP TEMPLATE EDIT.TPL -->
<?php
$head = product_prepare_head($object);
$titre = $langs->trans("CardProduct".$object->type);
dol_fiche_head($head, 'card', $titre, 0, 'service');
dol_htmloutput_errors($object->error, $object->errors);
?>
<form action="<?php echo $_SERVER["PHP_SELF"]; ?>" method="post">
<input type="hidden" name="token" value="<?php echo $_SESSION['newtoken']; ?>">
<input type="hidden" name="action" value="update">
<input type="hidden" name="id" value="<?php echo $object->id; ?>">
<input type="hidden" name="canvas" value="<?php echo $object->canvas; ?>">
<table class="border allwidth">
<tr>
<td class="fieldrequired" width="20%"><?php echo $langs->trans("Ref"); ?></td>
<td><input name="ref" size="40" maxlength="32" value="<?php echo $object->ref; ?>">
</td></tr>
<tr>
<td class="fieldrequired"><?php echo $langs->trans("Label"); ?></td>
<td><input name="label" size="40" value="<?php echo $object->label; ?>"></td>
</tr>
<tr>
<td class="fieldrequired"><?php echo $langs->trans("Status").' ('.$langs->trans("Sell").')'; ?></td>
<td><?php echo $form->selectarray('statut', $statutarray, $object->status); ?></td>
</tr>
<tr>
<td class="fieldrequired"><?php echo $langs->trans("Status").' ('.$langs->trans("Buy").')'; ?></td>
<td><?php echo $form->selectarray('statut_buy', $statutarray, $object->status_buy); ?></td>
</tr>
<tr><td><?php echo $langs->trans("Duration"); ?></td>
<td><input name="duration_value" size="6" maxlength="5" value="<?php echo $object->duration_value; ?>">
<?php echo $object->duration_unit; ?>
</td></tr>
<tr><td class="tdtop"><?php echo $langs->trans("NoteNotVisibleOnBill"); ?></td><td>
<?php echo $object->textarea_note; ?>
</td></tr>
</table>
<br>
<div align="center"><input type="submit" class="button" value="<?php echo $langs->trans("Save"); ?>">
<input type="submit" class="button" name="cancel" value="<?php echo $langs->trans("Cancel"); ?>"></div>
</form>
<!-- END PHP TEMPLATE -->
| {
"pile_set_name": "Github"
} |
--TEST--
Check for Asf_Application::init/getInstance
--SKIPIF--
<?php if (!extension_loaded("asf")) print "skip"; ?>
--INI--
asf.ctype_id=0
asf.use_namespace=0
--FILE--
<?php
ini_set('asf.ctype_id', 0);
ini_set('asf.use_namespace', 0);
$configs = array(
'asf' => array(
'root_path' => realpath(dirname(__FILE__)),
)
);
class IndexService extends Asf_AbstractService
{
public function indexAction()
{
var_dump('hi, asf');
}
}
$obj = Asf_Application::init($configs)->run();
Asf_Application::getInstance();
/* Error */
try {
$obj->task();
} catch (Error $e) {
var_dump($e->getMessage());
}
/* Error */
try {
$obj->run();
} catch (Error $e) {
var_dump($e->getMessage());
}
/* Error */
try {
$app = new Asf_Application($configs);
$app->init($configs);
} catch (Exception $e) {
var_dump($e->getMessage());
}
/* Error */
try {
$app = new Asf_Application($configs);
} catch (Exception $e) {
var_dump($e->getMessage());
}
?>
--EXPECTF--
string(%d) "%s"
string(%d) "%s"
string(%d) "%s"
string(%d) "%s"
string(%d) "%s"
| {
"pile_set_name": "Github"
} |
#
# This file is part of LUNA.
#
# Copyright (c) 2020 Great Scott Gadgets <[email protected]>
# SPDX-License-Identifier: BSD-3-Clause
import usb.core
from .jtag import JTAGChain
from .flash import ConfigurationFlash
from .spi import DebugSPIConnection
from .ila import ApolloILAFrontend
from .ecp5 import ECP5_JTAGProgrammer
from .intel import IntelJTAGProgrammer
from .onboard_jtag import *
class DebuggerNotFound(IOError):
pass
def create_ila_frontend(ila, *, use_cs_multiplexing=False):
""" Convenience method that instantiates an Apollo debug session and creates an ILA frontend from it.
    Parameters:
        ila                 -- The SyncSerialILA object we'll be connecting to.
        use_cs_multiplexing -- If True, the connection uses an inverted chip select.
"""
debugger = ApolloDebugger()
return ApolloILAFrontend(debugger, ila=ila, use_inverted_cs=use_cs_multiplexing)
class ApolloDebugger:
""" Class representing a link to an Apollo Debug Module. """
# This VID/PID pair is unique to development LUNA boards.
# TODO: potentially change this to an OpenMoko VID, like other LUNA boards.
USB_IDS = [(0x1d50, 0x615c), (0x16d0, 0x05a5)]
REQUEST_SET_LED_PATTERN = 0xa1
REQUEST_RECONFIGURE = 0xc0
LED_PATTERN_IDLE = 500
LED_PATTERN_UPLOAD = 50
# External boards (non-LUNA boards) are indicated with a Major revision of 0xFF.
# Their minor revision then encodes the board type.
EXTERNAL_BOARD_MAJOR = 0xFF
EXTERNAL_BOARD_NAMES = {
0: "Daisho [rev 31-Oct-2014]",
1: "Xil.se Pergola FPGA"
}
EXTERNAL_BOARD_PROGRAMMERS = {
0: IntelJTAGProgrammer
}
# LUNA subdevices (specialized variants of the LUNA board) use a major of 0xFE.
SUBDEVICE_MAJORS = {
0xFE: "Amalthea"
}
def __init__(self):
""" Sets up a connection to the debugger. """
# Try to create a connection to our Apollo debug firmware.
for vid, pid in self.USB_IDS:
device = usb.core.find(idVendor=vid, idProduct=pid)
if device is not None:
break
# If we couldn't find an Apollo device, bail out.
if device is None:
raise DebuggerNotFound()
self.device = device
self.major, self.minor = self.get_hardware_revision()
# Create our basic interfaces, for debugging convenience.
self.jtag = JTAGChain(self)
self.spi = DebugSPIConnection(self)
self.flash = ConfigurationFlash(self)
def detect_connected_version(self):
""" Attempts to determine the revision of the connected hardware.
Returns the relevant hardware's revision number, as (major, minor).
"""
# Extract the major and minor from the device's USB descriptor.
minor = self.device.bcdDevice & 0xFF
major = self.device.bcdDevice >> 8
return major, minor
def get_fpga_type(self):
""" Returns a string indicating the type of FPGA populated on the connected LUNA board.
The returned format is the same as used in a nMigen platform file; and can be used to override
a platform's device type.
"""
with self.jtag as jtag:
# First, we'll detect all devices on our JTAG chain.
jtag_devices = jtag.enumerate()
if not jtag_devices:
raise IOError("Could not detect an FPGA via JTAG!")
# ... and grab its device identifier.
first_device = jtag_devices[0]
if not hasattr(first_device, 'DEVICE'):
raise IOError("First JTAG device in chain does not provide an FPGA type. Is this a proper board?")
return first_device.DEVICE
@property
def serial_number(self):
""" Returns the device's serial number, as a string. """
return self.device.serial_number
def get_hardware_revision(self):
""" Returns the (major, minor) of the attached hardware revision. """
minor = self.device.bcdDevice & 0xFF
major = self.device.bcdDevice >> 8
return major, minor
def get_hardware_name(self):
""" Returns a string describing this piece of hardware. """
# If this is a non-LUNA board, we'll look up its name in our table.
if self.major == self.EXTERNAL_BOARD_MAJOR:
return self.EXTERNAL_BOARD_NAMES[self.minor]
        # If this is a LUNA subdevice, we'll look up its product name in our table.
if self.major in self.SUBDEVICE_MAJORS:
product_name = self.SUBDEVICE_MAJORS[self.major]
major = 0 # For now?
else:
product_name = "LUNA"
major = self.major
# Otherwise, identify it by its revision number.
return f"{product_name} r{major}.{self.minor}"
def get_compatibility_string(self):
""" Returns 'LUNA' for a LUNA board; or 'LUNA-compatible' for supported external board."""
if self.major == self.EXTERNAL_BOARD_MAJOR:
return 'LUNA-compatible'
elif self.major in self.SUBDEVICE_MAJORS:
return self.SUBDEVICE_MAJORS[self.major]
return 'LUNA'
def create_jtag_programmer(self, jtag_chain):
""" Returns the JTAG programmer for the given device. """
# If this is an external programmer, return its programmer type.
if self.major == self.EXTERNAL_BOARD_MAJOR:
programmer = self.EXTERNAL_BOARD_PROGRAMMERS[self.minor]
# Otherwise, it should be an ECP5.
else:
programmer = ECP5_JTAGProgrammer
return programmer(jtag_chain)
def out_request(self, number, value=0, index=0, data=None, timeout=5000):
""" Helper that issues an OUT control request to the debugger. """
request_type = usb.ENDPOINT_OUT | usb.RECIP_DEVICE | usb.TYPE_VENDOR
return self.device.ctrl_transfer(request_type, number, value, index, data, timeout=timeout)
def in_request(self, number, value=0, index=0, length=0, timeout=500):
""" Helper that issues an IN control request to the debugger. """
request_type = usb.ENDPOINT_IN | usb.RECIP_DEVICE | usb.TYPE_VENDOR
result = self.device.ctrl_transfer(request_type, number, value, index, length, timeout=timeout)
return bytes(result)
def set_led_pattern(self, number):
self.out_request(self.REQUEST_SET_LED_PATTERN, number)
def soft_reset(self):
""" Resets the target (FPGA/etc) connected to the debug controller. """
try:
self.out_request(self.REQUEST_RECONFIGURE)
except usb.core.USBError:
pass
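As a side note, the `bcdDevice` decoding used by `detect_connected_version` and `get_hardware_revision` is just a byte split. A standalone sketch (function names are mine, not part of Apollo's API):

```python
EXTERNAL_BOARD_MAJOR = 0xFF  # non-LUNA boards report this major revision

def decode_revision(bcd_device):
    # bcdDevice packs the revision as 0xMMmm: the high byte is the major
    # revision and the low byte the minor, same shift/mask as the class above.
    minor = bcd_device & 0xFF
    major = bcd_device >> 8
    return major, minor

def is_external_board(bcd_device):
    # External (non-LUNA) boards are flagged by a major revision of 0xFF.
    major, _minor = decode_revision(bcd_device)
    return major == EXTERNAL_BOARD_MAJOR
```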
| {
"pile_set_name": "Github"
} |