hunk (dict) | file (string, 0-11.8M chars) | file_path (string, 2-234 chars) | label (int64, 0-1) | commit_url (string, 74-103 chars) | dependency_score (sequence of length 5)
---|---|---|---|---|---
{
"id": 5,
"code_window": [
"the \"name\" parameter. For example, if this backend is mounted at \"aws\",\n",
"then \"aws/deploy\" would generate access keys for the \"deploy\" policy.\n",
"\n",
"The access keys will have a lease associated with them. The access keys\n",
"can be revoked by using the Vault ID.\n",
"```\n",
"\n",
"Within a path, we're given the parameters that this path requires.\n",
"Some parameters come from the route itself. In this case, the \"name\"\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"can be revoked by using the lease ID.\n"
],
"file_path": "website/source/intro/getting-started/help.html.md",
"type": "replace",
"edit_start_line_idx": 96
} | ---
layout: "intro"
page_title: "Built-in Help"
sidebar_current: "gettingstarted-help"
description: |-
Vault has a built-in help system to learn about the available paths in Vault and how to use them.
---
# Built-in Help
You've now worked with `vault write` and `vault read` for multiple paths:
the generic secret backend with `secret/` and dynamic AWS credentials with the
AWS backend provider at `aws/`. In both cases, the usage of read/write and
the paths to use differed. AWS in particular had special paths like
`aws/config`.
Instead of having to memorize or reference documentation constantly
to determine what paths to use, we built a help system directly into
Vault. This help system can be accessed via the API or the command-line and
generates human-readable help for any mounted backend.
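For the API side, the same help text can be requested over HTTP by adding
`help=1` to any path's URL. Here is a minimal sketch, assuming a Vault server
listening on the default local address and a valid token exported as `VAULT_TOKEN`:
```
# Ask the "aws" mount for its help output; the response is JSON
# carrying the same text the CLI examples below will show.
$ curl \
    -H "X-Vault-Token: $VAULT_TOKEN" \
    http://127.0.0.1:8200/v1/aws?help=1
```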
On this page, we'll learn how to use this help system. It is an invaluable
tool as you continue to work with Vault.
## Backend Overview
For this, we'll assume you have the AWS backend mounted. If not, mount
it with `vault mount aws`. Even if you don't have an AWS account, you
can still mount the AWS backend.
With the backend mounted, let's learn about it with `vault help`:
```
$ vault help aws
## DESCRIPTION
The AWS backend dynamically generates AWS access keys for a set of
IAM policies. The AWS access keys have a configurable lease set and
are automatically revoked at the end of the lease.
After mounting this backend, credentials to generate IAM keys must
be configured with the "root" path and policies must be written using
the "policy/" endpoints before any access keys can be generated.
## PATHS
The following paths are supported by this backend. To view help for
any of the paths below, use the help command with any route matching
the path pattern. Note that depending on the policy of your auth token,
you may or may not be able to access certain paths.
^(?P<name>\w+)$
Generate an access key pair for a specific policy.
^policy/(?P<name>\w+)$
Read and write IAM policies that access keys can be made for.
^root$
Configure the root credentials that are used to manage IAM.
```
The `vault help` command takes a path. By specifying the root path for
a mount, it will give us the overview of that mount. Notice how the help
not only contains a description, but also the exact regular expressions
used to match routes for this backend along with a brief description
of what the route is for.
## Path Help
After seeing the overview, we can continue to dive deeper by getting
help for an individual path. For this, just use `vault help` with a path
that would match the regular expression for that path. Note that the path
doesn't need to actually _work_. For example, we'll get the help below
for accessing `aws/operator`, even though we never wrote the `operator`
policy:
```
$ vault help aws/operator
Request: operator
Matching Route: ^(?P<name>\w+)$
Generate an access key pair for a specific policy.
## PARAMETERS
name (string)
Name of the policy
## DESCRIPTION
This path will generate a new, never before used key pair for
accessing AWS. The IAM policy used to back this key pair will be
the "name" parameter. For example, if this backend is mounted at "aws",
then "aws/deploy" would generate access keys for the "deploy" policy.
The access keys will have a lease associated with them. The access keys
can be revoked by using the Vault ID.
```
Within a path, we're given the parameters that this path requires.
Some parameters come from the route itself. In this case, the "name"
parameter is a named capture from the route regular expression.
There is also a description of what that path does.
Go ahead and explore more paths! Mount other backends, traverse their
help systems and learn about what they do. For example, learn about the
generic `secret/` path.
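As a rough sketch of that exploration (assuming the default `secret/` mount is
still in place), the same two levels of help apply:
```
$ vault help secret
$ vault help secret/hello
```
The first command prints the backend overview, and the second prints path-level
help for an arbitrary key, which, as noted above, does not need to exist.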
## Next
The help system may not be the most exciting feature of Vault, but it
is indispensable in day-to-day usage of Vault. The help system lets you
learn about how to use any backend within Vault without leaving the command
line.
Next, we'll learn about
[authentication](/intro/getting-started/authentication.html).
| website/source/intro/getting-started/help.html.md | 1 | https://github.com/hashicorp/vault/commit/c30d877fa422c9425c5e81bd904f81642b9fae87 | [
0.9925606846809387,
0.08754432201385498,
0.0003651974257081747,
0.0020204682368785143,
0.2729808986186981
] |
{
"id": 5,
"code_window": [
"the \"name\" parameter. For example, if this backend is mounted at \"aws\",\n",
"then \"aws/deploy\" would generate access keys for the \"deploy\" policy.\n",
"\n",
"The access keys will have a lease associated with them. The access keys\n",
"can be revoked by using the Vault ID.\n",
"```\n",
"\n",
"Within a path, we're given the parameters that this path requires.\n",
"Some parameters come from the route itself. In this case, the \"name\"\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"can be revoked by using the lease ID.\n"
],
"file_path": "website/source/intro/getting-started/help.html.md",
"type": "replace",
"edit_start_line_idx": 96
} | <!-- TODO Precompile ember templates -->
<script type="text/x-handlebars" data-template-name="application">
{{outlet}}
</script>
<script type="text/x-handlebars" data-template-name="demo">
<div class="terminal">
<span class="close-terminal" {{action "close"}}>X</span>
{{outlet}}
{{#if isLoading}}
<div class="loading-bar"></div>
{{/if}}
</div>
</script>
<script type="text/x-handlebars" data-template-name="demo/crud">
{{#if notCleared}}
<div class="welcome">
Any Vault command you run passes through remotely to
the real Vault interface, so feel free to explore, but
be careful of the values you set.
</div>
{{/if}}
<div class="log">
{{#each line in currentLog}}
{{logPrefix}}{{line}}
{{/each}}
</div>
<form {{action "submitText" on="submit"}}>
{{logPrefix}} {{input value=currentText class="shell" spellcheck="false"}}
</form>
</script>
| website/source/_ember_templates.html.erb | 0 | https://github.com/hashicorp/vault/commit/c30d877fa422c9425c5e81bd904f81642b9fae87 | [
0.00024333444889634848,
0.0002007649018196389,
0.00016845522623043507,
0.00019563495879992843,
0.00003292153269285336
] |
{
"id": 5,
"code_window": [
"the \"name\" parameter. For example, if this backend is mounted at \"aws\",\n",
"then \"aws/deploy\" would generate access keys for the \"deploy\" policy.\n",
"\n",
"The access keys will have a lease associated with them. The access keys\n",
"can be revoked by using the Vault ID.\n",
"```\n",
"\n",
"Within a path, we're given the parameters that this path requires.\n",
"Some parameters come from the route itself. In this case, the \"name\"\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"can be revoked by using the lease ID.\n"
],
"file_path": "website/source/intro/getting-started/help.html.md",
"type": "replace",
"edit_start_line_idx": 96
} | package vault
import (
"fmt"
"log"
"strings"
"testing"
"github.com/hashicorp/vault/logical"
)
type NoopBackend struct {
Root []string
Login []string
Paths []string
Requests []*logical.Request
Response *logical.Response
Logger *log.Logger
}
func (n *NoopBackend) HandleRequest(req *logical.Request) (*logical.Response, error) {
requestCopy := *req
n.Paths = append(n.Paths, req.Path)
n.Requests = append(n.Requests, &requestCopy)
if req.Storage == nil {
return nil, fmt.Errorf("missing view")
}
return n.Response, nil
}
func (n *NoopBackend) SpecialPaths() *logical.Paths {
return &logical.Paths{
Root: n.Root,
Unauthenticated: n.Login,
}
}
func (n *NoopBackend) SetLogger(l *log.Logger) {
n.Logger = l
}
func TestRouter_Mount(t *testing.T) {
r := NewRouter()
_, barrier, _ := mockBarrier(t)
view := NewBarrierView(barrier, "logical/")
n := &NoopBackend{}
err := r.Mount(n, "prod/aws/", generateUUID(), view)
if err != nil {
t.Fatalf("err: %v", err)
}
err = r.Mount(n, "prod/aws/", generateUUID(), view)
if !strings.Contains(err.Error(), "cannot mount under existing mount") {
t.Fatalf("err: %v", err)
}
if path := r.MatchingMount("prod/aws/foo"); path != "prod/aws/" {
t.Fatalf("bad: %s", path)
}
if v := r.MatchingView("prod/aws/foo"); v != view {
t.Fatalf("bad: %s", v)
}
if path := r.MatchingMount("stage/aws/foo"); path != "" {
t.Fatalf("bad: %s", path)
}
if v := r.MatchingView("stage/aws/foo"); v != nil {
t.Fatalf("bad: %s", v)
}
req := &logical.Request{
Path: "prod/aws/foo",
}
resp, err := r.Route(req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp != nil {
t.Fatalf("bad: %v", resp)
}
// Verify the path
if len(n.Paths) != 1 || n.Paths[0] != "foo" {
t.Fatalf("bad: %v", n.Paths)
}
}
func TestRouter_Unmount(t *testing.T) {
r := NewRouter()
_, barrier, _ := mockBarrier(t)
view := NewBarrierView(barrier, "logical/")
n := &NoopBackend{}
err := r.Mount(n, "prod/aws/", generateUUID(), view)
if err != nil {
t.Fatalf("err: %v", err)
}
err = r.Unmount("prod/aws/")
if err != nil {
t.Fatalf("err: %v", err)
}
req := &logical.Request{
Path: "prod/aws/foo",
}
_, err = r.Route(req)
if !strings.Contains(err.Error(), "no handler for route") {
t.Fatalf("err: %v", err)
}
}
func TestRouter_Remount(t *testing.T) {
r := NewRouter()
_, barrier, _ := mockBarrier(t)
view := NewBarrierView(barrier, "logical/")
n := &NoopBackend{}
err := r.Mount(n, "prod/aws/", generateUUID(), view)
if err != nil {
t.Fatalf("err: %v", err)
}
err = r.Remount("prod/aws/", "stage/aws/")
if err != nil {
t.Fatalf("err: %v", err)
}
err = r.Remount("prod/aws/", "stage/aws/")
if !strings.Contains(err.Error(), "no mount at") {
t.Fatalf("err: %v", err)
}
req := &logical.Request{
Path: "prod/aws/foo",
}
_, err = r.Route(req)
if !strings.Contains(err.Error(), "no handler for route") {
t.Fatalf("err: %v", err)
}
req = &logical.Request{
Path: "stage/aws/foo",
}
_, err = r.Route(req)
if err != nil {
t.Fatalf("err: %v", err)
}
// Verify the path
if len(n.Paths) != 1 || n.Paths[0] != "foo" {
t.Fatalf("bad: %v", n.Paths)
}
}
func TestRouter_RootPath(t *testing.T) {
r := NewRouter()
_, barrier, _ := mockBarrier(t)
view := NewBarrierView(barrier, "logical/")
n := &NoopBackend{
Root: []string{
"root",
"policy/*",
},
}
err := r.Mount(n, "prod/aws/", generateUUID(), view)
if err != nil {
t.Fatalf("err: %v", err)
}
type tcase struct {
path string
expect bool
}
tcases := []tcase{
{"random", false},
{"prod/aws/foo", false},
{"prod/aws/root", true},
{"prod/aws/root-more", false},
{"prod/aws/policy", false},
{"prod/aws/policy/", true},
{"prod/aws/policy/ops", true},
}
for _, tc := range tcases {
out := r.RootPath(tc.path)
if out != tc.expect {
t.Fatalf("bad: path: %s expect: %v got %v", tc.path, tc.expect, out)
}
}
}
func TestRouter_LoginPath(t *testing.T) {
r := NewRouter()
_, barrier, _ := mockBarrier(t)
view := NewBarrierView(barrier, "auth/")
n := &NoopBackend{
Login: []string{
"login",
"oauth/*",
},
}
err := r.Mount(n, "auth/foo/", generateUUID(), view)
if err != nil {
t.Fatalf("err: %v", err)
}
type tcase struct {
path string
expect bool
}
tcases := []tcase{
{"random", false},
{"auth/foo/bar", false},
{"auth/foo/login", true},
{"auth/foo/oauth", false},
{"auth/foo/oauth/redirect", true},
}
for _, tc := range tcases {
out := r.LoginPath(tc.path)
if out != tc.expect {
t.Fatalf("bad: path: %s expect: %v got %v", tc.path, tc.expect, out)
}
}
}
func TestRouter_Taint(t *testing.T) {
r := NewRouter()
_, barrier, _ := mockBarrier(t)
view := NewBarrierView(barrier, "logical/")
n := &NoopBackend{}
err := r.Mount(n, "prod/aws/", generateUUID(), view)
if err != nil {
t.Fatalf("err: %v", err)
}
err = r.Taint("prod/aws/")
if err != nil {
t.Fatalf("err: %v", err)
}
req := &logical.Request{
Operation: logical.ReadOperation,
Path: "prod/aws/foo",
}
_, err = r.Route(req)
if err.Error() != "no handler for route 'prod/aws/foo'" {
t.Fatalf("err: %v", err)
}
// Rollback and Revoke should work
req.Operation = logical.RollbackOperation
_, err = r.Route(req)
if err != nil {
t.Fatalf("err: %v", err)
}
req.Operation = logical.RevokeOperation
_, err = r.Route(req)
if err != nil {
t.Fatalf("err: %v", err)
}
}
func TestRouter_Untaint(t *testing.T) {
r := NewRouter()
_, barrier, _ := mockBarrier(t)
view := NewBarrierView(barrier, "logical/")
n := &NoopBackend{}
err := r.Mount(n, "prod/aws/", generateUUID(), view)
if err != nil {
t.Fatalf("err: %v", err)
}
err = r.Taint("prod/aws/")
if err != nil {
t.Fatalf("err: %v", err)
}
err = r.Untaint("prod/aws/")
if err != nil {
t.Fatalf("err: %v", err)
}
req := &logical.Request{
Operation: logical.ReadOperation,
Path: "prod/aws/foo",
}
_, err = r.Route(req)
if err != nil {
t.Fatalf("err: %v", err)
}
}
func TestPathsToRadix(t *testing.T) {
// Provide real paths
paths := []string{
"foo",
"foo/*",
"sub/bar*",
}
r := pathsToRadix(paths)
raw, ok := r.Get("foo")
if !ok || raw.(bool) != false {
t.Fatalf("bad: %v (foo)", raw)
}
raw, ok = r.Get("foo/")
if !ok || raw.(bool) != true {
t.Fatalf("bad: %v (foo/)", raw)
}
raw, ok = r.Get("sub/bar")
if !ok || raw.(bool) != true {
t.Fatalf("bad: %v (sub/bar)", raw)
}
}
| vault/router_test.go | 0 | https://github.com/hashicorp/vault/commit/c30d877fa422c9425c5e81bd904f81642b9fae87 | [
0.013605206273496151,
0.0009430561913177371,
0.000165523451869376,
0.0003503532789181918,
0.0023693619295954704
] |
{
"id": 5,
"code_window": [
"the \"name\" parameter. For example, if this backend is mounted at \"aws\",\n",
"then \"aws/deploy\" would generate access keys for the \"deploy\" policy.\n",
"\n",
"The access keys will have a lease associated with them. The access keys\n",
"can be revoked by using the Vault ID.\n",
"```\n",
"\n",
"Within a path, we're given the parameters that this path requires.\n",
"Some parameters come from the route itself. In this case, the \"name\"\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"can be revoked by using the lease ID.\n"
],
"file_path": "website/source/intro/getting-started/help.html.md",
"type": "replace",
"edit_start_line_idx": 96
} | (function(){
Sidebar = Base.extend({
$body: null,
$overlay: null,
$sidebar: null,
$sidebarHeader: null,
$sidebarImg: null,
$toggleButton: null,
constructor: function(){
this.$body = $('body');
this.$overlay = $('.sidebar-overlay');
this.$sidebar = $('#sidebar');
this.$sidebarHeader = $('#sidebar .sidebar-header');
this.$toggleButton = $('.navbar-toggle');
this.sidebarImg = this.$sidebarHeader.css('background-image');
this.addEventListeners();
},
addEventListeners: function(){
var _this = this;
_this.$toggleButton.on('click', function() {
_this.$sidebar.toggleClass('open');
if ((_this.$sidebar.hasClass('sidebar-fixed-left') || _this.$sidebar.hasClass('sidebar-fixed-right')) && _this.$sidebar.hasClass('open')) {
_this.$overlay.addClass('active');
_this.$body.css('overflow', 'hidden');
} else {
_this.$overlay.removeClass('active');
_this.$body.css('overflow', 'auto');
}
return false;
});
_this.$overlay.on('click', function() {
$(this).removeClass('active');
_this.$body.css('overflow', 'auto');
_this.$sidebar.removeClass('open');
});
}
});
window.Sidebar = Sidebar;
})();
| website/source/assets/javascripts/app/Sidebar.js | 0 | https://github.com/hashicorp/vault/commit/c30d877fa422c9425c5e81bd904f81642b9fae87 | [
0.00017274063429795206,
0.0001684391318121925,
0.00016508370754308999,
0.00016709842020645738,
0.000002776130713755265
] |
{
"id": 2,
"code_window": [
"\twarning := \"Warning 1452 Cannot add or update a child row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `fk_1` FOREIGN KEY (`i`) REFERENCES `t1` (`i`))\"\n",
"\ttk.MustQuery(\"show warnings;\").Check(testkit.Rows(warning, warning))\n",
"\ttk.MustQuery(\"select * from t2\").Check(testkit.Rows(\"1\", \"3\"))\n",
"}\n",
"\n",
"func TestForeignKeyOnInsertOnDuplicateParentTableCheck(t *testing.T) {\n"
],
"labels": [
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\ttk.MustQuery(\"select * from t2 order by i\").Check(testkit.Rows(\"<nil>\", \"1\", \"1\", \"3\"))\n",
"\t// Test for foreign key index is non-unique key.\n",
"\ttk.MustExec(\"drop table t1,t2\")\n",
"\ttk.MustExec(\"CREATE TABLE t1 (i INT, index(i));\")\n",
"\ttk.MustExec(\"CREATE TABLE t2 (i INT, FOREIGN KEY (i) REFERENCES t1 (i));\")\n",
"\ttk.MustExec(\"INSERT INTO t1 VALUES (1),(3);\")\n",
"\ttk.MustExec(\"INSERT IGNORE INTO t2 VALUES (1), (null), (1), (2), (3), (2);\")\n",
"\ttk.MustQuery(\"show warnings;\").Check(testkit.Rows(warning, warning))\n",
"\ttk.MustQuery(\"select * from t2 order by i\").Check(testkit.Rows(\"<nil>\", \"1\", \"1\", \"3\"))\n"
],
"file_path": "executor/fktest/foreign_key_test.go",
"type": "replace",
"edit_start_line_idx": 495
} | // Copyright 2022 PingCAP, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package executor
import (
"bytes"
"context"
"sync/atomic"
"github.com/pingcap/errors"
"github.com/pingcap/tidb/kv"
"github.com/pingcap/tidb/parser/model"
"github.com/pingcap/tidb/planner"
plannercore "github.com/pingcap/tidb/planner/core"
"github.com/pingcap/tidb/sessionctx"
"github.com/pingcap/tidb/sessionctx/stmtctx"
"github.com/pingcap/tidb/table"
"github.com/pingcap/tidb/tablecodec"
"github.com/pingcap/tidb/types"
"github.com/pingcap/tidb/util/codec"
"github.com/pingcap/tidb/util/set"
"github.com/pingcap/tidb/util/sqlexec"
"github.com/tikv/client-go/v2/txnkv/txnsnapshot"
)
// WithForeignKeyTrigger indicates the executor has foreign key check or cascade.
type WithForeignKeyTrigger interface {
GetFKChecks() []*FKCheckExec
GetFKCascades() []*FKCascadeExec
HasFKCascades() bool
}
// FKCheckExec uses to check foreign key constraint.
// When insert/update child table, need to check the row has related row exists in refer table.
// When insert/update parent table, need to check the row doesn't have related row exists in refer table.
type FKCheckExec struct {
*plannercore.FKCheck
*fkValueHelper
ctx sessionctx.Context
toBeCheckedKeys []kv.Key
toBeCheckedPrefixKeys []kv.Key
toBeLockedKeys []kv.Key
checkRowsCache map[string]bool
stats *FKCheckRuntimeStats
}
// FKCheckRuntimeStats contains the FKCheckExec runtime stats.
type FKCheckRuntimeStats struct {
Keys int
}
// FKCascadeExec uses to execute foreign key cascade behaviour.
type FKCascadeExec struct {
*fkValueHelper
b *executorBuilder
tp plannercore.FKCascadeType
referredFK *model.ReferredFKInfo
childTable *model.TableInfo
fk *model.FKInfo
// On delete statement, fkValues stores the delete foreign key values.
// On update statement and the foreign key cascade is `SET NULL`, fkValues stores the old foreign key values.
fkValues [][]types.Datum
// new-value-key => UpdatedValuesCouple
fkUpdatedValuesMap map[string]*UpdatedValuesCouple
}
// UpdatedValuesCouple contains the updated new row the old rows, exporting for test.
type UpdatedValuesCouple struct {
NewValues []types.Datum
OldValuesList [][]types.Datum
}
func buildTblID2FKCheckExecs(sctx sessionctx.Context, tblID2Table map[int64]table.Table, tblID2FKChecks map[int64][]*plannercore.FKCheck) (map[int64][]*FKCheckExec, error) {
fkChecksMap := make(map[int64][]*FKCheckExec)
for tid, tbl := range tblID2Table {
fkChecks, err := buildFKCheckExecs(sctx, tbl, tblID2FKChecks[tid])
if err != nil {
return nil, err
}
if len(fkChecks) > 0 {
fkChecksMap[tid] = fkChecks
}
}
return fkChecksMap, nil
}
func buildFKCheckExecs(sctx sessionctx.Context, tbl table.Table, fkChecks []*plannercore.FKCheck) ([]*FKCheckExec, error) {
fkCheckExecs := make([]*FKCheckExec, 0, len(fkChecks))
for _, fkCheck := range fkChecks {
fkCheckExec, err := buildFKCheckExec(sctx, tbl, fkCheck)
if err != nil {
return nil, err
}
if fkCheckExec != nil {
fkCheckExecs = append(fkCheckExecs, fkCheckExec)
}
}
return fkCheckExecs, nil
}
func buildFKCheckExec(sctx sessionctx.Context, tbl table.Table, fkCheck *plannercore.FKCheck) (*FKCheckExec, error) {
var cols []model.CIStr
if fkCheck.FK != nil {
cols = fkCheck.FK.Cols
} else if fkCheck.ReferredFK != nil {
cols = fkCheck.ReferredFK.Cols
}
colsOffsets, err := getFKColumnsOffsets(tbl.Meta(), cols)
if err != nil {
return nil, err
}
helper := &fkValueHelper{
colsOffsets: colsOffsets,
fkValuesSet: set.NewStringSet(),
}
return &FKCheckExec{
ctx: sctx,
FKCheck: fkCheck,
fkValueHelper: helper,
}, nil
}
func (fkc *FKCheckExec) insertRowNeedToCheck(sc *stmtctx.StatementContext, row []types.Datum) error {
return fkc.addRowNeedToCheck(sc, row)
}
func (fkc *FKCheckExec) updateRowNeedToCheck(sc *stmtctx.StatementContext, oldRow, newRow []types.Datum) error {
if fkc.FK != nil {
return fkc.addRowNeedToCheck(sc, newRow)
} else if fkc.ReferredFK != nil {
return fkc.addRowNeedToCheck(sc, oldRow)
}
return nil
}
func (fkc *FKCheckExec) deleteRowNeedToCheck(sc *stmtctx.StatementContext, row []types.Datum) error {
return fkc.addRowNeedToCheck(sc, row)
}
func (fkc *FKCheckExec) addRowNeedToCheck(sc *stmtctx.StatementContext, row []types.Datum) error {
vals, err := fkc.fetchFKValuesWithCheck(sc, row)
if err != nil || len(vals) == 0 {
return err
}
key, isPrefix, err := fkc.buildCheckKeyFromFKValue(sc, vals)
if err != nil {
return err
}
if isPrefix {
fkc.toBeCheckedPrefixKeys = append(fkc.toBeCheckedPrefixKeys, key)
} else {
fkc.toBeCheckedKeys = append(fkc.toBeCheckedKeys, key)
}
return nil
}
func (fkc *FKCheckExec) doCheck(ctx context.Context) error {
txn, err := fkc.ctx.Txn(false)
if err != nil {
return err
}
err = fkc.checkKeys(ctx, txn)
if err != nil {
return err
}
err = fkc.checkIndexKeys(ctx, txn)
if err != nil {
return err
}
if len(fkc.toBeLockedKeys) == 0 {
return nil
}
sessVars := fkc.ctx.GetSessionVars()
lockCtx, err := newLockCtx(fkc.ctx, sessVars.LockWaitTimeout, len(fkc.toBeLockedKeys))
if err != nil {
return err
}
// WARN: Since tidb current doesn't support `LOCK IN SHARE MODE`, therefore, performance will be very poor in concurrency cases.
// TODO(crazycs520):After TiDB support `LOCK IN SHARE MODE`, use `LOCK IN SHARE MODE` here.
forUpdate := atomic.LoadUint32(&sessVars.TxnCtx.ForUpdate)
err = doLockKeys(ctx, fkc.ctx, lockCtx, fkc.toBeLockedKeys...)
// doLockKeys may set TxnCtx.ForUpdate to 1, then if the lock meet write conflict, TiDB can't retry for update.
// So reset TxnCtx.ForUpdate to 0 then can be retry if meet write conflict.
atomic.StoreUint32(&sessVars.TxnCtx.ForUpdate, forUpdate)
return err
}
func (fkc *FKCheckExec) buildCheckKeyFromFKValue(sc *stmtctx.StatementContext, vals []types.Datum) (key kv.Key, isPrefix bool, err error) {
if fkc.IdxIsPrimaryKey {
handleKey, err := fkc.buildHandleFromFKValues(sc, vals)
if err != nil {
return nil, false, err
}
key := tablecodec.EncodeRecordKey(fkc.Tbl.RecordPrefix(), handleKey)
if fkc.IdxIsExclusive {
return key, false, nil
}
return key, true, nil
}
key, distinct, err := fkc.Idx.GenIndexKey(sc, vals, nil, nil)
if err != nil {
return nil, false, err
}
if distinct && fkc.IdxIsExclusive {
return key, false, nil
}
return key, true, nil
}
func (fkc *FKCheckExec) buildHandleFromFKValues(sc *stmtctx.StatementContext, vals []types.Datum) (kv.Handle, error) {
if len(vals) == 1 && fkc.Idx == nil {
return kv.IntHandle(vals[0].GetInt64()), nil
}
handleBytes, err := codec.EncodeKey(sc, nil, vals...)
if err != nil {
return nil, err
}
return kv.NewCommonHandle(handleBytes)
}
func (fkc *FKCheckExec) checkKeys(ctx context.Context, txn kv.Transaction) error {
if len(fkc.toBeCheckedKeys) == 0 {
return nil
}
err := fkc.prefetchKeys(ctx, txn, fkc.toBeCheckedKeys)
if err != nil {
return err
}
for _, k := range fkc.toBeCheckedKeys {
err = fkc.checkKey(ctx, txn, k)
if err != nil {
return err
}
}
return nil
}
func (fkc *FKCheckExec) prefetchKeys(ctx context.Context, txn kv.Transaction, keys []kv.Key) error {
// Fill cache using BatchGet
_, err := txn.BatchGet(ctx, keys)
if err != nil {
return err
}
return nil
}
func (fkc *FKCheckExec) checkKey(ctx context.Context, txn kv.Transaction, k kv.Key) error {
if fkc.CheckExist {
return fkc.checkKeyExist(ctx, txn, k)
}
return fkc.checkKeyNotExist(ctx, txn, k)
}
func (fkc *FKCheckExec) checkKeyExist(ctx context.Context, txn kv.Transaction, k kv.Key) error {
_, err := txn.Get(ctx, k)
if err == nil {
fkc.toBeLockedKeys = append(fkc.toBeLockedKeys, k)
return nil
}
if kv.IsErrNotFound(err) {
return fkc.FailedErr
}
return err
}
func (fkc *FKCheckExec) checkKeyNotExist(ctx context.Context, txn kv.Transaction, k kv.Key) error {
_, err := txn.Get(ctx, k)
if err == nil {
return fkc.FailedErr
}
if kv.IsErrNotFound(err) {
return nil
}
return err
}
func (fkc *FKCheckExec) checkIndexKeys(ctx context.Context, txn kv.Transaction) error {
if len(fkc.toBeCheckedPrefixKeys) == 0 {
return nil
}
memBuffer := txn.GetMemBuffer()
snap := txn.GetSnapshot()
snap.SetOption(kv.ScanBatchSize, 2)
defer func() {
snap.SetOption(kv.ScanBatchSize, txnsnapshot.DefaultScanBatchSize)
}()
for _, key := range fkc.toBeCheckedPrefixKeys {
err := fkc.checkPrefixKey(ctx, memBuffer, snap, key)
if err != nil {
return err
}
}
return nil
}
func (fkc *FKCheckExec) checkPrefixKey(ctx context.Context, memBuffer kv.MemBuffer, snap kv.Snapshot, key kv.Key) error {
key, value, err := fkc.getIndexKeyValueInTable(ctx, memBuffer, snap, key)
if err != nil {
return err
}
if fkc.CheckExist {
return fkc.checkPrefixKeyExist(key, value)
}
if len(value) > 0 {
// If check not exist, but the key is exist, return failedErr.
return fkc.FailedErr
}
return nil
}
func (fkc *FKCheckExec) checkPrefixKeyExist(key kv.Key, value []byte) error {
exist := len(value) > 0
if !exist {
return fkc.FailedErr
}
if fkc.Idx != nil && fkc.Idx.Meta().Primary && fkc.Tbl.Meta().IsCommonHandle {
fkc.toBeLockedKeys = append(fkc.toBeLockedKeys, key)
} else {
handle, err := tablecodec.DecodeIndexHandle(key, value, len(fkc.Idx.Meta().Columns))
if err != nil {
return err
}
handleKey := tablecodec.EncodeRecordKey(fkc.Tbl.RecordPrefix(), handle)
fkc.toBeLockedKeys = append(fkc.toBeLockedKeys, handleKey)
}
return nil
}
func (fkc *FKCheckExec) getIndexKeyValueInTable(ctx context.Context, memBuffer kv.MemBuffer, snap kv.Snapshot, key kv.Key) (k []byte, v []byte, _ error) {
select {
case <-ctx.Done():
return nil, nil, ctx.Err()
default:
}
memIter, err := memBuffer.Iter(key, key.PrefixNext())
if err != nil {
return nil, nil, err
}
deletedKeys := set.NewStringSet()
defer memIter.Close()
for ; memIter.Valid(); err = memIter.Next() {
if err != nil {
return nil, nil, err
}
k := memIter.Key()
if !k.HasPrefix(key) {
break
}
// check whether the key was been deleted.
if len(memIter.Value()) > 0 {
return k, memIter.Value(), nil
}
deletedKeys.Insert(string(k))
}
it, err := snap.Iter(key, key.PrefixNext())
if err != nil {
return nil, nil, err
}
defer it.Close()
for ; it.Valid(); err = it.Next() {
if err != nil {
return nil, nil, err
}
k := it.Key()
if !k.HasPrefix(key) {
break
}
if !deletedKeys.Exist(string(k)) {
return k, it.Value(), nil
}
}
return nil, nil, nil
}
type fkValueHelper struct {
colsOffsets []int
fkValuesSet set.StringSet
}
func (h *fkValueHelper) fetchFKValuesWithCheck(sc *stmtctx.StatementContext, row []types.Datum) ([]types.Datum, error) {
vals, err := h.fetchFKValues(row)
if err != nil || h.hasNullValue(vals) {
return nil, err
}
keyBuf, err := codec.EncodeKey(sc, nil, vals...)
if err != nil {
return nil, err
}
key := string(keyBuf)
if h.fkValuesSet.Exist(key) {
return nil, nil
}
h.fkValuesSet.Insert(key)
return vals, nil
}
func (h *fkValueHelper) fetchFKValues(row []types.Datum) ([]types.Datum, error) {
vals := make([]types.Datum, len(h.colsOffsets))
for i, offset := range h.colsOffsets {
if offset >= len(row) {
return nil, table.ErrIndexOutBound.GenWithStackByArgs("", offset, row)
}
vals[i] = row[offset]
}
return vals, nil
}
func (h *fkValueHelper) hasNullValue(vals []types.Datum) bool {
// If any foreign key column value is null, no need to check this row.
// test case:
// create table t1 (id int key,a int, b int, index(a, b));
// create table t2 (id int key,a int, b int, foreign key fk(a, b) references t1(a, b) ON DELETE CASCADE);
// > insert into t2 values (2, null, 1);
// Query OK, 1 row affected
// > insert into t2 values (3, 1, null);
// Query OK, 1 row affected
// > insert into t2 values (4, null, null);
// Query OK, 1 row affected
// > select * from t2;
// +----+--------+--------+
// | id | a | b |
// +----+--------+--------+
// | 4 | <null> | <null> |
// | 2 | <null> | 1 |
// | 3 | 1 | <null> |
// +----+--------+--------+
for _, val := range vals {
if val.IsNull() {
return true
}
}
return false
}
func getFKColumnsOffsets(tbInfo *model.TableInfo, cols []model.CIStr) ([]int, error) {
colsOffsets := make([]int, len(cols))
for i, col := range cols {
offset := -1
for i := range tbInfo.Columns {
if tbInfo.Columns[i].Name.L == col.L {
offset = tbInfo.Columns[i].Offset
break
}
}
if offset < 0 {
return nil, table.ErrUnknownColumn.GenWithStackByArgs(col.L)
}
colsOffsets[i] = offset
}
return colsOffsets, nil
}
type fkCheckKey struct {
k kv.Key
isPrefix bool
}
func (fkc FKCheckExec) checkRows(ctx context.Context, sc *stmtctx.StatementContext, txn kv.Transaction, rows []toBeCheckedRow) error {
if len(rows) == 0 {
return nil
}
if fkc.checkRowsCache == nil {
fkc.checkRowsCache = map[string]bool{}
}
fkCheckKeys := make([]*fkCheckKey, len(rows))
prefetchKeys := make([]kv.Key, 0, len(rows))
for i, r := range rows {
if r.ignored {
continue
}
vals, err := fkc.fetchFKValues(r.row)
if err != nil {
return err
}
if fkc.hasNullValue(vals) {
continue
}
key, isPrefix, err := fkc.buildCheckKeyFromFKValue(sc, vals)
if err != nil {
return err
}
fkCheckKeys[i] = &fkCheckKey{key, isPrefix}
if !isPrefix {
prefetchKeys = append(prefetchKeys, key)
}
}
if len(prefetchKeys) > 0 {
err := fkc.prefetchKeys(ctx, txn, prefetchKeys)
if err != nil {
return err
}
}
memBuffer := txn.GetMemBuffer()
snap := txn.GetSnapshot()
snap.SetOption(kv.ScanBatchSize, 2)
defer func() {
snap.SetOption(kv.ScanBatchSize, 256)
}()
for i, fkCheckKey := range fkCheckKeys {
if fkCheckKey == nil {
continue
}
k := fkCheckKey.k
if ignore, ok := fkc.checkRowsCache[string(k)]; ok {
if ignore {
rows[i].ignored = true
sc.AppendWarning(fkc.FailedErr)
}
continue
}
var err error
if fkCheckKey.isPrefix {
err = fkc.checkPrefixKey(ctx, memBuffer, snap, k)
} else {
err = fkc.checkKey(ctx, txn, k)
}
if err != nil {
rows[i].ignored = true
sc.AppendWarning(fkc.FailedErr)
fkc.checkRowsCache[string(k)] = true
}
fkc.checkRowsCache[string(k)] = false
if fkc.stats != nil {
fkc.stats.Keys++
}
}
return nil
}
func (b *executorBuilder) buildTblID2FKCascadeExecs(tblID2Table map[int64]table.Table, tblID2FKCascades map[int64][]*plannercore.FKCascade) (map[int64][]*FKCascadeExec, error) {
fkCascadesMap := make(map[int64][]*FKCascadeExec)
for tid, tbl := range tblID2Table {
fkCascades, err := b.buildFKCascadeExecs(tbl, tblID2FKCascades[tid])
if err != nil {
return nil, err
}
if len(fkCascades) > 0 {
fkCascadesMap[tid] = fkCascades
}
}
return fkCascadesMap, nil
}
func (b *executorBuilder) buildFKCascadeExecs(tbl table.Table, fkCascades []*plannercore.FKCascade) ([]*FKCascadeExec, error) {
fkCascadeExecs := make([]*FKCascadeExec, 0, len(fkCascades))
for _, fkCascade := range fkCascades {
fkCascadeExec, err := b.buildFKCascadeExec(tbl, fkCascade)
if err != nil {
return nil, err
}
if fkCascadeExec != nil {
fkCascadeExecs = append(fkCascadeExecs, fkCascadeExec)
}
}
return fkCascadeExecs, nil
}
func (b *executorBuilder) buildFKCascadeExec(tbl table.Table, fkCascade *plannercore.FKCascade) (*FKCascadeExec, error) {
colsOffsets, err := getFKColumnsOffsets(tbl.Meta(), fkCascade.ReferredFK.Cols)
if err != nil {
return nil, err
}
helper := &fkValueHelper{
colsOffsets: colsOffsets,
fkValuesSet: set.NewStringSet(),
}
return &FKCascadeExec{
b: b,
fkValueHelper: helper,
tp: fkCascade.Tp,
referredFK: fkCascade.ReferredFK,
childTable: fkCascade.ChildTable.Meta(),
fk: fkCascade.FK,
fkUpdatedValuesMap: make(map[string]*UpdatedValuesCouple),
}, nil
}
func (fkc *FKCascadeExec) onDeleteRow(sc *stmtctx.StatementContext, row []types.Datum) error {
vals, err := fkc.fetchFKValuesWithCheck(sc, row)
if err != nil || len(vals) == 0 {
return err
}
fkc.fkValues = append(fkc.fkValues, vals)
return nil
}
func (fkc *FKCascadeExec) onUpdateRow(sc *stmtctx.StatementContext, oldRow, newRow []types.Datum) error {
oldVals, err := fkc.fetchFKValuesWithCheck(sc, oldRow)
if err != nil || len(oldVals) == 0 {
return err
}
if model.ReferOptionType(fkc.fk.OnUpdate) == model.ReferOptionSetNull {
fkc.fkValues = append(fkc.fkValues, oldVals)
return nil
}
newVals, err := fkc.fetchFKValues(newRow)
if err != nil {
return err
}
newValsKey, err := codec.EncodeKey(sc, nil, newVals...)
if err != nil {
return err
}
couple := fkc.fkUpdatedValuesMap[string(newValsKey)]
if couple == nil {
couple = &UpdatedValuesCouple{
NewValues: newVals,
}
}
couple.OldValuesList = append(couple.OldValuesList, oldVals)
fkc.fkUpdatedValuesMap[string(newValsKey)] = couple
return nil
}
func (fkc *FKCascadeExec) buildExecutor(ctx context.Context) (Executor, error) {
p, err := fkc.buildFKCascadePlan(ctx)
if err != nil || p == nil {
return nil, err
}
e := fkc.b.build(p)
return e, fkc.b.err
}
var maxHandleFKValueInOneCascade = 1024
func (fkc *FKCascadeExec) buildFKCascadePlan(ctx context.Context) (plannercore.Plan, error) {
if len(fkc.fkValues) == 0 && len(fkc.fkUpdatedValuesMap) == 0 {
return nil, nil
}
var indexName model.CIStr
indexForFK := model.FindIndexByColumns(fkc.childTable, fkc.fk.Cols...)
if indexForFK != nil {
indexName = indexForFK.Name
}
var sqlStr string
var err error
switch fkc.tp {
case plannercore.FKCascadeOnDelete:
fkValues := fkc.fetchOnDeleteOrUpdateFKValues()
switch model.ReferOptionType(fkc.fk.OnDelete) {
case model.ReferOptionCascade:
sqlStr, err = GenCascadeDeleteSQL(fkc.referredFK.ChildSchema, fkc.childTable.Name, indexName, fkc.fk, fkValues)
case model.ReferOptionSetNull:
sqlStr, err = GenCascadeSetNullSQL(fkc.referredFK.ChildSchema, fkc.childTable.Name, indexName, fkc.fk, fkValues)
}
case plannercore.FKCascadeOnUpdate:
switch model.ReferOptionType(fkc.fk.OnUpdate) {
case model.ReferOptionCascade:
couple := fkc.fetchUpdatedValuesCouple()
if couple != nil && len(couple.NewValues) != 0 {
sqlStr, err = GenCascadeUpdateSQL(fkc.referredFK.ChildSchema, fkc.childTable.Name, indexName, fkc.fk, couple)
}
case model.ReferOptionSetNull:
fkValues := fkc.fetchOnDeleteOrUpdateFKValues()
sqlStr, err = GenCascadeSetNullSQL(fkc.referredFK.ChildSchema, fkc.childTable.Name, indexName, fkc.fk, fkValues)
}
}
if err != nil {
return nil, err
}
if sqlStr == "" {
return nil, errors.Errorf("generate foreign key cascade sql failed, %v", fkc.tp)
}
sctx := fkc.b.ctx
exec, ok := sctx.(sqlexec.RestrictedSQLExecutor)
if !ok {
return nil, nil
}
stmtNode, err := exec.ParseWithParams(ctx, sqlStr)
if err != nil {
return nil, err
}
ret := &plannercore.PreprocessorReturn{}
err = plannercore.Preprocess(ctx, sctx, stmtNode, plannercore.WithPreprocessorReturn(ret), plannercore.InitTxnContextProvider)
if err != nil {
return nil, err
}
finalPlan, _, err := planner.Optimize(ctx, sctx, stmtNode, fkc.b.is)
if err != nil {
return nil, err
}
return finalPlan, err
}
func (fkc *FKCascadeExec) fetchOnDeleteOrUpdateFKValues() [][]types.Datum {
var fkValues [][]types.Datum
if len(fkc.fkValues) <= maxHandleFKValueInOneCascade {
fkValues = fkc.fkValues
fkc.fkValues = nil
} else {
fkValues = fkc.fkValues[:maxHandleFKValueInOneCascade]
fkc.fkValues = fkc.fkValues[maxHandleFKValueInOneCascade:]
}
return fkValues
}
func (fkc *FKCascadeExec) fetchUpdatedValuesCouple() *UpdatedValuesCouple {
for k, couple := range fkc.fkUpdatedValuesMap {
if len(couple.OldValuesList) <= maxHandleFKValueInOneCascade {
delete(fkc.fkUpdatedValuesMap, k)
return couple
}
result := &UpdatedValuesCouple{
NewValues: couple.NewValues,
OldValuesList: couple.OldValuesList[:maxHandleFKValueInOneCascade],
}
couple.OldValuesList = couple.OldValuesList[maxHandleFKValueInOneCascade:]
return result
}
return nil
}
// GenCascadeDeleteSQL uses to generate cascade delete SQL, export for test.
func GenCascadeDeleteSQL(schema, table, idx model.CIStr, fk *model.FKInfo, fkValues [][]types.Datum) (string, error) {
buf := bytes.NewBuffer(make([]byte, 0, 48+8*len(fkValues)))
buf.WriteString("DELETE FROM `")
buf.WriteString(schema.L)
buf.WriteString("`.`")
buf.WriteString(table.L)
buf.WriteString("`")
if idx.L != "" {
// Add use index to make sure the optimizer will use index instead of full table scan.
buf.WriteString(" USE INDEX(`")
buf.WriteString(idx.L)
buf.WriteString("`)")
}
err := genCascadeSQLWhereCondition(buf, fk, fkValues)
if err != nil {
return "", err
}
return buf.String(), nil
}
// GenCascadeSetNullSQL uses to generate foreign key `SET NULL` SQL, export for test.
func GenCascadeSetNullSQL(schema, table, idx model.CIStr, fk *model.FKInfo, fkValues [][]types.Datum) (string, error) {
newValues := make([]types.Datum, len(fk.Cols))
for i := range fk.Cols {
newValues[i] = types.NewDatum(nil)
}
couple := &UpdatedValuesCouple{
NewValues: newValues,
OldValuesList: fkValues,
}
return GenCascadeUpdateSQL(schema, table, idx, fk, couple)
}
// GenCascadeUpdateSQL uses to generate cascade update SQL, export for test.
func GenCascadeUpdateSQL(schema, table, idx model.CIStr, fk *model.FKInfo, couple *UpdatedValuesCouple) (string, error) {
buf := bytes.NewBuffer(nil)
buf.WriteString("UPDATE `")
buf.WriteString(schema.L)
buf.WriteString("`.`")
buf.WriteString(table.L)
buf.WriteString("`")
if idx.L != "" {
// Add use index to make sure the optimizer will use index instead of full table scan.
buf.WriteString(" USE INDEX(`")
buf.WriteString(idx.L)
buf.WriteString("`)")
}
buf.WriteString(" SET ")
for i, col := range fk.Cols {
if i > 0 {
buf.WriteString(", ")
}
buf.WriteString("`" + col.L)
buf.WriteString("` = ")
val, err := genFKValueString(couple.NewValues[i])
if err != nil {
return "", err
}
buf.WriteString(val)
}
err := genCascadeSQLWhereCondition(buf, fk, couple.OldValuesList)
if err != nil {
return "", err
}
return buf.String(), nil
}
func genCascadeSQLWhereCondition(buf *bytes.Buffer, fk *model.FKInfo, fkValues [][]types.Datum) error {
buf.WriteString(" WHERE (")
for i, col := range fk.Cols {
if i > 0 {
buf.WriteString(", ")
}
buf.WriteString("`" + col.L + "`")
}
buf.WriteString(") IN (")
for i, vs := range fkValues {
if i > 0 {
buf.WriteString(", (")
} else {
buf.WriteString("(")
}
for i := range vs {
val, err := genFKValueString(vs[i])
if err != nil {
return err
}
if i > 0 {
buf.WriteString(",")
}
buf.WriteString(val)
}
buf.WriteString(")")
}
buf.WriteString(")")
return nil
}
func genFKValueString(v types.Datum) (string, error) {
switch v.Kind() {
case types.KindNull:
return "NULL", nil
case types.KindMysqlBit:
return v.GetBinaryLiteral().ToBitLiteralString(true), nil
}
val, err := v.ToString()
if err != nil {
return "", err
}
switch v.Kind() {
case types.KindInt64, types.KindUint64, types.KindFloat32, types.KindFloat64, types.KindMysqlDecimal:
return val, nil
default:
return "'" + val + "'", nil
}
}
| executor/foreign_key.go | 1 | https://github.com/pingcap/tidb/commit/e6f020a26efc60480e0a0690cdca87f0990d4ceb | [
0.0018099043518304825,
0.00025074282893911004,
0.00016100573702715337,
0.0001731180673232302,
0.0002836274798028171
] |
{
"id": 2,
"code_window": [
"\twarning := \"Warning 1452 Cannot add or update a child row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `fk_1` FOREIGN KEY (`i`) REFERENCES `t1` (`i`))\"\n",
"\ttk.MustQuery(\"show warnings;\").Check(testkit.Rows(warning, warning))\n",
"\ttk.MustQuery(\"select * from t2\").Check(testkit.Rows(\"1\", \"3\"))\n",
"}\n",
"\n",
"func TestForeignKeyOnInsertOnDuplicateParentTableCheck(t *testing.T) {\n"
],
"labels": [
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\ttk.MustQuery(\"select * from t2 order by i\").Check(testkit.Rows(\"<nil>\", \"1\", \"1\", \"3\"))\n",
"\t// Test for foreign key index is non-unique key.\n",
"\ttk.MustExec(\"drop table t1,t2\")\n",
"\ttk.MustExec(\"CREATE TABLE t1 (i INT, index(i));\")\n",
"\ttk.MustExec(\"CREATE TABLE t2 (i INT, FOREIGN KEY (i) REFERENCES t1 (i));\")\n",
"\ttk.MustExec(\"INSERT INTO t1 VALUES (1),(3);\")\n",
"\ttk.MustExec(\"INSERT IGNORE INTO t2 VALUES (1), (null), (1), (2), (3), (2);\")\n",
"\ttk.MustQuery(\"show warnings;\").Check(testkit.Rows(warning, warning))\n",
"\ttk.MustQuery(\"select * from t2 order by i\").Check(testkit.Rows(\"<nil>\", \"1\", \"1\", \"3\"))\n"
],
"file_path": "executor/fktest/foreign_key_test.go",
"type": "replace",
"edit_start_line_idx": 495
} | // Copyright 2019 PingCAP, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bindinfo_test
import (
"context"
"fmt"
"strconv"
"testing"
"github.com/pingcap/tidb/bindinfo"
"github.com/pingcap/tidb/config"
"github.com/pingcap/tidb/domain"
"github.com/pingcap/tidb/parser"
"github.com/pingcap/tidb/parser/auth"
"github.com/pingcap/tidb/parser/model"
"github.com/pingcap/tidb/parser/terror"
"github.com/pingcap/tidb/testkit"
"github.com/pingcap/tidb/util"
"github.com/stretchr/testify/require"
)
func TestPrepareCacheWithBinding(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec(`set tidb_enable_prepared_plan_cache=1`)
tk.MustExec("use test")
tk.MustExec("drop table if exists t1, t2")
tk.MustExec("create table t1(a int, b int, c int, key idx_b(b), key idx_c(c))")
tk.MustExec("create table t2(a int, b int, c int, key idx_b(b), key idx_c(c))")
// TestDMLSQLBind
tk.MustExec("prepare stmt1 from 'delete from t1 where b = 1 and c > 1';")
tk.MustExec("execute stmt1;")
require.Equal(t, "t1:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess := tk.Session().ShowProcess()
ps := []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res := tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "idx_b(b)"), res.Rows())
tk.MustExec("execute stmt1;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("create global binding for delete from t1 where b = 1 and c > 1 using delete /*+ use_index(t1,idx_c) */ from t1 where b = 1 and c > 1")
tk.MustExec("execute stmt1;")
require.Equal(t, "t1:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "idx_c(c)"), res.Rows())
tk.MustExec("prepare stmt2 from 'delete t1, t2 from t1 inner join t2 on t1.b = t2.b';")
tk.MustExec("execute stmt2;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "HashJoin"), res.Rows())
tk.MustExec("execute stmt2;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("create global binding for delete t1, t2 from t1 inner join t2 on t1.b = t2.b using delete /*+ inl_join(t1) */ t1, t2 from t1 inner join t2 on t1.b = t2.b")
tk.MustExec("execute stmt2;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "IndexJoin"), res.Rows())
tk.MustExec("prepare stmt3 from 'update t1 set a = 1 where b = 1 and c > 1';")
tk.MustExec("execute stmt3;")
require.Equal(t, "t1:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "idx_b(b)"), res.Rows())
tk.MustExec("execute stmt3;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("create global binding for update t1 set a = 1 where b = 1 and c > 1 using update /*+ use_index(t1,idx_c) */ t1 set a = 1 where b = 1 and c > 1")
tk.MustExec("execute stmt3;")
require.Equal(t, "t1:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "idx_c(c)"), res.Rows())
tk.MustExec("prepare stmt4 from 'update t1, t2 set t1.a = 1 where t1.b = t2.b';")
tk.MustExec("execute stmt4;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "HashJoin"), res.Rows())
tk.MustExec("execute stmt4;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("create global binding for update t1, t2 set t1.a = 1 where t1.b = t2.b using update /*+ inl_join(t1) */ t1, t2 set t1.a = 1 where t1.b = t2.b")
tk.MustExec("execute stmt4;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "IndexJoin"), res.Rows())
tk.MustExec("prepare stmt5 from 'insert into t1 select * from t2 where t2.b = 2 and t2.c > 2';")
tk.MustExec("execute stmt5;")
require.Equal(t, "t2:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "idx_b(b)"), res.Rows())
tk.MustExec("execute stmt5;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("create global binding for insert into t1 select * from t2 where t2.b = 1 and t2.c > 1 using insert /*+ use_index(t2,idx_c) */ into t1 select * from t2 where t2.b = 1 and t2.c > 1")
tk.MustExec("execute stmt5;")
require.Equal(t, "t2:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "idx_b(b)"), res.Rows())
tk.MustExec("drop global binding for insert into t1 select * from t2 where t2.b = 1 and t2.c > 1")
tk.MustExec("create global binding for insert into t1 select * from t2 where t2.b = 1 and t2.c > 1 using insert into t1 select /*+ use_index(t2,idx_c) */ * from t2 where t2.b = 1 and t2.c > 1")
tk.MustExec("execute stmt5;")
require.Equal(t, "t2:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "idx_c(c)"), res.Rows())
tk.MustExec("prepare stmt6 from 'replace into t1 select * from t2 where t2.b = 2 and t2.c > 2';")
tk.MustExec("execute stmt6;")
require.Equal(t, "t2:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "idx_b(b)"), res.Rows())
tk.MustExec("execute stmt6;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("create global binding for replace into t1 select * from t2 where t2.b = 1 and t2.c > 1 using replace into t1 select /*+ use_index(t2,idx_c) */ * from t2 where t2.b = 1 and t2.c > 1")
tk.MustExec("execute stmt6;")
require.Equal(t, "t2:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "idx_c(c)"), res.Rows())
// TestExplain
tk.MustExec("drop table if exists t1")
tk.MustExec("drop table if exists t2")
tk.MustExec("create table t1(id int)")
tk.MustExec("create table t2(id int)")
tk.MustExec("prepare stmt1 from 'SELECT * from t1,t2 where t1.id = t2.id';")
tk.MustExec("execute stmt1;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "HashJoin"))
tk.MustExec("execute stmt1;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("prepare stmt2 from 'SELECT /*+ TIDB_SMJ(t1, t2) */ * from t1,t2 where t1.id = t2.id';")
tk.MustExec("execute stmt2;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "MergeJoin"))
tk.MustExec("execute stmt2;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("create global binding for SELECT * from t1,t2 where t1.id = t2.id using SELECT /*+ TIDB_SMJ(t1, t2) */ * from t1,t2 where t1.id = t2.id")
tk.MustExec("execute stmt1;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "MergeJoin"))
tk.MustExec("drop global binding for SELECT * from t1,t2 where t1.id = t2.id")
tk.MustExec("create index index_id on t1(id)")
tk.MustExec("prepare stmt1 from 'SELECT * from t1 use index(index_id)';")
tk.MustExec("execute stmt1;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "IndexReader"))
tk.MustExec("execute stmt1;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("create global binding for SELECT * from t1 using SELECT * from t1 ignore index(index_id)")
tk.MustExec("execute stmt1;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.False(t, tk.HasPlan4ExplainFor(res, "IndexReader"))
tk.MustExec("execute stmt1;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
// Add test for SetOprStmt
tk.MustExec("prepare stmt1 from 'SELECT * from t1 union SELECT * from t1';")
tk.MustExec("execute stmt1;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.False(t, tk.HasPlan4ExplainFor(res, "IndexReader"))
tk.MustExec("execute stmt1;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("prepare stmt2 from 'SELECT * from t1 use index(index_id) union SELECT * from t1';")
tk.MustExec("execute stmt2;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "IndexReader"))
tk.MustExec("execute stmt2;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec("create global binding for SELECT * from t1 union SELECT * from t1 using SELECT * from t1 use index(index_id) union SELECT * from t1")
tk.MustExec("execute stmt1;")
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.HasPlan4ExplainFor(res, "IndexReader"))
tk.MustExec("drop global binding for SELECT * from t1 union SELECT * from t1")
// TestBindingSymbolList
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, INDEX ia (a), INDEX ib (b));")
tk.MustExec("insert into t value(1, 1);")
tk.MustExec("prepare stmt1 from 'select a, b from t where a = 3 limit 1, 100';")
tk.MustExec("execute stmt1;")
require.Equal(t, "t:ia", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "ia(a)"), res.Rows())
tk.MustExec("execute stmt1;")
tk.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("1"))
tk.MustExec(`create global binding for select a, b from t where a = 1 limit 0, 1 using select a, b from t use index (ib) where a = 1 limit 0, 1`)
// after binding
tk.MustExec("execute stmt1;")
require.Equal(t, "t:ib", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tkProcess = tk.Session().ShowProcess()
ps = []*util.ProcessInfo{tkProcess}
tk.Session().SetSessionManager(&testkit.MockSessionManager{PS: ps})
res = tk.MustQuery("explain for connection " + strconv.FormatUint(tkProcess.ID, 10))
require.True(t, tk.MustUseIndex4ExplainFor(res, "ib(b)"), res.Rows())
}
func TestExplain(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t1")
tk.MustExec("drop table if exists t2")
tk.MustExec("create table t1(id int)")
tk.MustExec("create table t2(id int)")
require.True(t, tk.HasPlan("SELECT * from t1,t2 where t1.id = t2.id", "HashJoin"))
require.True(t, tk.HasPlan("SELECT /*+ TIDB_SMJ(t1, t2) */ * from t1,t2 where t1.id = t2.id", "MergeJoin"))
tk.MustExec("create global binding for SELECT * from t1,t2 where t1.id = t2.id using SELECT /*+ TIDB_SMJ(t1, t2) */ * from t1,t2 where t1.id = t2.id")
require.True(t, tk.HasPlan("SELECT * from t1,t2 where t1.id = t2.id", "MergeJoin"))
tk.MustExec("drop global binding for SELECT * from t1,t2 where t1.id = t2.id")
// Add test for SetOprStmt
tk.MustExec("create index index_id on t1(id)")
require.False(t, tk.HasPlan("SELECT * from t1 union SELECT * from t1", "IndexReader"))
require.True(t, tk.HasPlan("SELECT * from t1 use index(index_id) union SELECT * from t1", "IndexReader"))
tk.MustExec("create global binding for SELECT * from t1 union SELECT * from t1 using SELECT * from t1 use index(index_id) union SELECT * from t1")
require.True(t, tk.HasPlan("SELECT * from t1 union SELECT * from t1", "IndexReader"))
tk.MustExec("drop global binding for SELECT * from t1 union SELECT * from t1")
}
func TestBindSemiJoinRewrite(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t1")
tk.MustExec("drop table if exists t2")
tk.MustExec("create table t1(id int)")
tk.MustExec("create table t2(id int)")
require.True(t, tk.HasKeywordInOperatorInfo("select * from t1 where exists(select 1 from t2 where t1.id=t2.id)", "semi join"))
require.True(t, tk.NotHasKeywordInOperatorInfo("select * from t1 where exists(select /*+ SEMI_JOIN_REWRITE() */ 1 from t2 where t1.id=t2.id)", "semi join"))
tk.MustExec(`
create global binding for
select * from t1 where exists(select 1 from t2 where t1.id=t2.id)
using
select * from t1 where exists(select /*+ SEMI_JOIN_REWRITE() */ 1 from t2 where t1.id=t2.id)
`)
require.True(t, tk.NotHasKeywordInOperatorInfo("select * from t1 where exists(select 1 from t2 where t1.id=t2.id)", "semi join"))
}
func TestBindCTEMerge(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t1")
tk.MustExec("create table t1(id int)")
require.True(t, tk.HasPlan("with cte as (select * from t1) select * from cte a, cte b", "CTEFullScan"))
require.False(t, tk.HasPlan("with cte as (select /*+ MERGE() */ * from t1) select * from cte a, cte b", "CTEFullScan"))
tk.MustExec(`
create global binding for
with cte as (select * from t1) select * from cte
using
with cte as (select /*+ MERGE() */ * from t1) select * from cte
`)
require.False(t, tk.HasPlan("with cte as (select * from t1) select * from cte", "CTEFullScan"))
}
func TestBindNoDecorrelate(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t1")
tk.MustExec("drop table if exists t2")
tk.MustExec("create table t1(a int, b int)")
tk.MustExec("create table t2(a int, b int)")
require.False(t, tk.HasPlan("select exists (select t2.b from t2 where t2.a = t1.b limit 2) from t1", "Apply"))
require.True(t, tk.HasPlan("select exists (select /*+ no_decorrelate() */ t2.b from t2 where t2.a = t1.b limit 2) from t1", "Apply"))
tk.MustExec(`
create global binding for
select exists (select t2.b from t2 where t2.a = t1.b limit 2) from t1
using
select exists (select /*+ no_decorrelate() */ t2.b from t2 where t2.a = t1.b limit 2) from t1
`)
require.True(t, tk.HasPlan("select exists (select t2.b from t2 where t2.a = t1.b limit 2) from t1", "Apply"))
}
// TestBindingSymbolList tests SQL with "?, ?, ?, ?", fixes #13871
func TestBindingSymbolList(t *testing.T) {
store, dom := testkit.CreateMockStoreAndDomain(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, INDEX ia (a), INDEX ib (b));")
tk.MustExec("insert into t value(1, 1);")
// before binding
tk.MustQuery("select a, b from t where a = 3 limit 1, 100")
require.Equal(t, "t:ia", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("select a, b from t where a = 3 limit 1, 100", "ia(a)"))
tk.MustExec(`create global binding for select a, b from t where a = 1 limit 0, 1 using select a, b from t use index (ib) where a = 1 limit 0, 1`)
// after binding
tk.MustQuery("select a, b from t where a = 3 limit 1, 100")
require.Equal(t, "t:ib", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("select a, b from t where a = 3 limit 1, 100", "ib(b)"))
// Normalize
sql, hash := parser.NormalizeDigest("select a, b from test . t where a = 1 limit 0, 1")
bindData := dom.BindHandle().GetBindRecord(hash.String(), sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select `a` , `b` from `test` . `t` where `a` = ? limit ...", bindData.OriginalSQL)
bind := bindData.Bindings[0]
require.Equal(t, "SELECT `a`,`b` FROM `test`.`t` USE INDEX (`ib`) WHERE `a` = 1 LIMIT 0,1", bind.BindSQL)
require.Equal(t, "test", bindData.Db)
require.Equal(t, bindinfo.Enabled, bind.Status)
require.NotNil(t, bind.Charset)
require.NotNil(t, bind.Collation)
require.NotNil(t, bind.CreateTime)
require.NotNil(t, bind.UpdateTime)
}
func TestDMLSQLBind(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t1, t2")
tk.MustExec("create table t1(a int, b int, c int, key idx_b(b), key idx_c(c))")
tk.MustExec("create table t2(a int, b int, c int, key idx_b(b), key idx_c(c))")
tk.MustExec("delete from t1 where b = 1 and c > 1")
require.Equal(t, "t1:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("delete from t1 where b = 1 and c > 1", "idx_b(b)"))
tk.MustExec("create global binding for delete from t1 where b = 1 and c > 1 using delete /*+ use_index(t1,idx_c) */ from t1 where b = 1 and c > 1")
tk.MustExec("delete from t1 where b = 1 and c > 1")
require.Equal(t, "t1:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("delete from t1 where b = 1 and c > 1", "idx_c(c)"))
require.True(t, tk.HasPlan("delete t1, t2 from t1 inner join t2 on t1.b = t2.b", "HashJoin"))
tk.MustExec("create global binding for delete t1, t2 from t1 inner join t2 on t1.b = t2.b using delete /*+ inl_join(t1) */ t1, t2 from t1 inner join t2 on t1.b = t2.b")
require.True(t, tk.HasPlan("delete t1, t2 from t1 inner join t2 on t1.b = t2.b", "IndexJoin"))
tk.MustExec("update t1 set a = 1 where b = 1 and c > 1")
require.Equal(t, "t1:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("update t1 set a = 1 where b = 1 and c > 1", "idx_b(b)"))
tk.MustExec("create global binding for update t1 set a = 1 where b = 1 and c > 1 using update /*+ use_index(t1,idx_c) */ t1 set a = 1 where b = 1 and c > 1")
tk.MustExec("delete from t1 where b = 1 and c > 1")
require.Equal(t, "t1:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("update t1 set a = 1 where b = 1 and c > 1", "idx_c(c)"))
require.True(t, tk.HasPlan("update t1, t2 set t1.a = 1 where t1.b = t2.b", "HashJoin"))
tk.MustExec("create global binding for update t1, t2 set t1.a = 1 where t1.b = t2.b using update /*+ inl_join(t1) */ t1, t2 set t1.a = 1 where t1.b = t2.b")
require.True(t, tk.HasPlan("update t1, t2 set t1.a = 1 where t1.b = t2.b", "IndexJoin"))
tk.MustExec("insert into t1 select * from t2 where t2.b = 2 and t2.c > 2")
require.Equal(t, "t2:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("insert into t1 select * from t2 where t2.b = 2 and t2.c > 2", "idx_b(b)"))
tk.MustExec("create global binding for insert into t1 select * from t2 where t2.b = 1 and t2.c > 1 using insert /*+ use_index(t2,idx_c) */ into t1 select * from t2 where t2.b = 1 and t2.c > 1")
tk.MustExec("insert into t1 select * from t2 where t2.b = 2 and t2.c > 2")
require.Equal(t, "t2:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("insert into t1 select * from t2 where t2.b = 2 and t2.c > 2", "idx_b(b)"))
tk.MustExec("drop global binding for insert into t1 select * from t2 where t2.b = 1 and t2.c > 1")
tk.MustExec("create global binding for insert into t1 select * from t2 where t2.b = 1 and t2.c > 1 using insert into t1 select /*+ use_index(t2,idx_c) */ * from t2 where t2.b = 1 and t2.c > 1")
tk.MustExec("insert into t1 select * from t2 where t2.b = 2 and t2.c > 2")
require.Equal(t, "t2:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("insert into t1 select * from t2 where t2.b = 2 and t2.c > 2", "idx_c(c)"))
tk.MustExec("replace into t1 select * from t2 where t2.b = 2 and t2.c > 2")
require.Equal(t, "t2:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("replace into t1 select * from t2 where t2.b = 2 and t2.c > 2", "idx_b(b)"))
tk.MustExec("create global binding for replace into t1 select * from t2 where t2.b = 1 and t2.c > 1 using replace into t1 select /*+ use_index(t2,idx_c) */ * from t2 where t2.b = 1 and t2.c > 1")
tk.MustExec("replace into t1 select * from t2 where t2.b = 2 and t2.c > 2")
require.Equal(t, "t2:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("replace into t1 select * from t2 where t2.b = 2 and t2.c > 2", "idx_c(c)"))
}
func TestBestPlanInBaselines(t *testing.T) {
store, dom := testkit.CreateMockStoreAndDomain(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, INDEX ia (a), INDEX ib (b));")
tk.MustExec("insert into t value(1, 1);")
// before binding
tk.MustQuery("select a, b from t where a = 3 limit 1, 100")
require.Equal(t, "t:ia", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("select a, b from t where a = 3 limit 1, 100", "ia(a)"))
tk.MustQuery("select a, b from t where b = 3 limit 1, 100")
require.Equal(t, "t:ib", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("select a, b from t where b = 3 limit 1, 100", "ib(b)"))
tk.MustExec(`create global binding for select a, b from t where a = 1 limit 0, 1 using select /*+ use_index(@sel_1 test.t ia) */ a, b from t where a = 1 limit 0, 1`)
tk.MustExec(`create global binding for select a, b from t where b = 1 limit 0, 1 using select /*+ use_index(@sel_1 test.t ib) */ a, b from t where b = 1 limit 0, 1`)
sql, hash := utilNormalizeWithDefaultDB(t, "select a, b from t where a = 1 limit 0, 1", "test")
bindData := dom.BindHandle().GetBindRecord(hash, sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select `a` , `b` from `test` . `t` where `a` = ? limit ...", bindData.OriginalSQL)
bind := bindData.Bindings[0]
require.Equal(t, "SELECT /*+ use_index(@`sel_1` `test`.`t` `ia`)*/ `a`,`b` FROM `test`.`t` WHERE `a` = 1 LIMIT 0,1", bind.BindSQL)
require.Equal(t, "test", bindData.Db)
require.Equal(t, bindinfo.Enabled, bind.Status)
tk.MustQuery("select a, b from t where a = 3 limit 1, 10")
require.Equal(t, "t:ia", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("select a, b from t where a = 3 limit 1, 100", "ia(a)"))
tk.MustQuery("select a, b from t where b = 3 limit 1, 100")
require.Equal(t, "t:ib", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("select a, b from t where b = 3 limit 1, 100", "ib(b)"))
}
func TestErrorBind(t *testing.T) {
store, dom := testkit.CreateMockStoreAndDomain(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustGetErrMsg("create global binding for select * from t using select * from t", "[schema:1146]Table 'test.t' doesn't exist")
tk.MustExec("drop table if exists t")
tk.MustExec("drop table if exists t1")
tk.MustExec("create table t(i int, s varchar(20))")
tk.MustExec("create table t1(i int, s varchar(20))")
tk.MustExec("create index index_t on t(i,s)")
_, err := tk.Exec("create global binding for select * from t where i>100 using select * from t use index(index_t) where i>100")
require.NoError(t, err, "err %v", err)
sql, hash := parser.NormalizeDigest("select * from test . t where i > ?")
bindData := dom.BindHandle().GetBindRecord(hash.String(), sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select * from `test` . `t` where `i` > ?", bindData.OriginalSQL)
bind := bindData.Bindings[0]
require.Equal(t, "SELECT * FROM `test`.`t` USE INDEX (`index_t`) WHERE `i` > 100", bind.BindSQL)
require.Equal(t, "test", bindData.Db)
require.Equal(t, bindinfo.Enabled, bind.Status)
require.NotNil(t, bind.Charset)
require.NotNil(t, bind.Collation)
require.NotNil(t, bind.CreateTime)
require.NotNil(t, bind.UpdateTime)
tk.MustExec("drop index index_t on t")
rs, err := tk.Exec("select * from t where i > 10")
require.NoError(t, err)
rs.Close()
dom.BindHandle().DropInvalidBindRecord()
rs, err = tk.Exec("show global bindings")
require.NoError(t, err)
chk := rs.NewChunk(nil)
err = rs.Next(context.TODO(), chk)
require.NoError(t, err)
require.Equal(t, 0, chk.NumRows())
}
func TestDMLEvolveBaselines(t *testing.T) {
originalVal := config.CheckTableBeforeDrop
config.CheckTableBeforeDrop = true
defer func() {
config.CheckTableBeforeDrop = originalVal
}()
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, c int, index idx_b(b), index idx_c(c))")
tk.MustExec("insert into t values (1,1,1), (2,2,2), (3,3,3), (4,4,4), (5,5,5)")
tk.MustExec("analyze table t")
tk.MustExec("set @@tidb_evolve_plan_baselines=1")
tk.MustExec("create global binding for delete from t where b = 1 and c > 1 using delete /*+ use_index(t,idx_c) */ from t where b = 1 and c > 1")
rows := tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
tk.MustExec("delete /*+ use_index(t,idx_b) */ from t where b = 2 and c > 1")
require.Equal(t, "t:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tk.MustExec("admin flush bindings")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
tk.MustExec("admin evolve bindings")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
tk.MustExec("create global binding for update t set a = 1 where b = 1 and c > 1 using update /*+ use_index(t,idx_c) */ t set a = 1 where b = 1 and c > 1")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 2)
tk.MustExec("update /*+ use_index(t,idx_b) */ t set a = 2 where b = 2 and c > 1")
require.Equal(t, "t:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tk.MustExec("admin flush bindings")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 2)
tk.MustExec("admin evolve bindings")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 2)
tk.MustExec("create table t1 like t")
tk.MustExec("create global binding for insert into t1 select * from t where t.b = 1 and t.c > 1 using insert into t1 select /*+ use_index(t,idx_c) */ * from t where t.b = 1 and t.c > 1")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 3)
tk.MustExec("insert into t1 select /*+ use_index(t,idx_b) */ * from t where t.b = 2 and t.c > 2")
require.Equal(t, "t:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tk.MustExec("admin flush bindings")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 3)
tk.MustExec("admin evolve bindings")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 3)
tk.MustExec("create global binding for replace into t1 select * from t where t.b = 1 and t.c > 1 using replace into t1 select /*+ use_index(t,idx_c) */ * from t where t.b = 1 and t.c > 1")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 4)
tk.MustExec("replace into t1 select /*+ use_index(t,idx_b) */ * from t where t.b = 2 and t.c > 2")
require.Equal(t, "t:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tk.MustExec("admin flush bindings")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 4)
tk.MustExec("admin evolve bindings")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 4)
}
func TestAddEvolveTasks(t *testing.T) {
originalVal := config.CheckTableBeforeDrop
config.CheckTableBeforeDrop = true
defer func() {
config.CheckTableBeforeDrop = originalVal
}()
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, c int, index idx_a(a), index idx_b(b), index idx_c(c))")
tk.MustExec("insert into t values (1,1,1), (2,2,2), (3,3,3), (4,4,4), (5,5,5)")
tk.MustExec("analyze table t")
tk.MustExec("create global binding for select * from t where a >= 1 and b >= 1 and c = 0 using select * from t use index(idx_a) where a >= 1 and b >= 1 and c = 0")
tk.MustExec("set @@tidb_evolve_plan_baselines=1")
// It cannot choose the table path although it has the lowest cost.
tk.MustQuery("select * from t where a >= 4 and b >= 1 and c = 0")
require.Equal(t, "t:idx_a", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tk.MustExec("admin flush bindings")
rows := tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 2)
require.Equal(t, "SELECT /*+ use_index(@`sel_1` `test`.`t` )*/ * FROM `test`.`t` WHERE `a` >= 4 AND `b` >= 1 AND `c` = 0", rows[0][1])
require.Equal(t, "pending verify", rows[0][3])
tk.MustExec("admin evolve bindings")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 2)
require.Equal(t, "SELECT /*+ use_index(@`sel_1` `test`.`t` )*/ * FROM `test`.`t` WHERE `a` >= 4 AND `b` >= 1 AND `c` = 0", rows[0][1])
status := rows[0][3].(string)
require.True(t, status == bindinfo.Enabled || status == bindinfo.Rejected)
}
func TestRuntimeHintsInEvolveTasks(t *testing.T) {
originalVal := config.CheckTableBeforeDrop
config.CheckTableBeforeDrop = true
defer func() {
config.CheckTableBeforeDrop = originalVal
}()
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("set @@tidb_evolve_plan_baselines=1")
tk.MustExec("create table t(a int, b int, c int, index idx_a(a), index idx_b(b), index idx_c(c))")
tk.MustExec("create global binding for select * from t where a >= 1 and b >= 1 and c = 0 using select * from t use index(idx_a) where a >= 1 and b >= 1 and c = 0")
tk.MustQuery("select /*+ MAX_EXECUTION_TIME(5000) */ * from t where a >= 4 and b >= 1 and c = 0")
tk.MustExec("admin flush bindings")
rows := tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 2)
require.Equal(t, "SELECT /*+ use_index(@`sel_1` `test`.`t` `idx_c`), max_execution_time(5000)*/ * FROM `test`.`t` WHERE `a` >= 4 AND `b` >= 1 AND `c` = 0", rows[0][1])
}
func TestDefaultSessionVars(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustQuery(`show variables like "%baselines%"`).Sort().Check(testkit.Rows(
"tidb_capture_plan_baselines OFF",
"tidb_evolve_plan_baselines OFF",
"tidb_use_plan_baselines ON"))
tk.MustQuery(`show global variables like "%baselines%"`).Sort().Check(testkit.Rows(
"tidb_capture_plan_baselines OFF",
"tidb_evolve_plan_baselines OFF",
"tidb_use_plan_baselines ON"))
}
func TestCaptureBaselinesScope(t *testing.T) {
store, dom := testkit.CreateMockStoreAndDomain(t)
tk1 := testkit.NewTestKit(t, store)
tk2 := testkit.NewTestKit(t, store)
utilCleanBindingEnv(tk1, dom)
tk1.MustQuery(`show session variables like "tidb_capture_plan_baselines"`).Check(testkit.Rows(
"tidb_capture_plan_baselines OFF",
))
tk1.MustQuery(`show global variables like "tidb_capture_plan_baselines"`).Check(testkit.Rows(
"tidb_capture_plan_baselines OFF",
))
tk1.MustQuery(`select @@global.tidb_capture_plan_baselines`).Check(testkit.Rows(
"0",
))
tk1.MustExec("SET GLOBAL tidb_capture_plan_baselines = on")
defer func() {
tk1.MustExec(" set GLOBAL tidb_capture_plan_baselines = off")
}()
tk1.MustQuery(`show variables like "tidb_capture_plan_baselines"`).Check(testkit.Rows(
"tidb_capture_plan_baselines ON",
))
tk1.MustQuery(`show global variables like "tidb_capture_plan_baselines"`).Check(testkit.Rows(
"tidb_capture_plan_baselines ON",
))
tk2.MustQuery(`show global variables like "tidb_capture_plan_baselines"`).Check(testkit.Rows(
"tidb_capture_plan_baselines ON",
))
tk2.MustQuery(`select @@global.tidb_capture_plan_baselines`).Check(testkit.Rows(
"1",
))
}
func TestStmtHints(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, index idx(a))")
tk.MustExec("create global binding for select * from t using select /*+ MAX_EXECUTION_TIME(100), MEMORY_QUOTA(2 GB) */ * from t use index(idx)")
tk.MustQuery("select * from t")
require.Equal(t, int64(2147483648), tk.Session().GetSessionVars().MemTracker.GetBytesLimit())
require.Equal(t, uint64(100), tk.Session().GetSessionVars().StmtCtx.MaxExecutionTime)
tk.MustQuery("select a, b from t")
require.Equal(t, int64(1073741824), tk.Session().GetSessionVars().MemTracker.GetBytesLimit())
require.Equal(t, uint64(0), tk.Session().GetSessionVars().StmtCtx.MaxExecutionTime)
}
func TestPrivileges(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, index idx(a))")
tk.MustExec("create global binding for select * from t using select * from t use index(idx)")
require.NoError(t, tk.Session().Auth(&auth.UserIdentity{Username: "root", Hostname: "%"}, nil, nil))
rows := tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
tk.MustExec("create user test@'%'")
require.NoError(t, tk.Session().Auth(&auth.UserIdentity{Username: "test", Hostname: "%"}, nil, nil))
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 0)
}
func TestHintsSetEvolveTask(t *testing.T) {
originalVal := config.CheckTableBeforeDrop
config.CheckTableBeforeDrop = true
defer func() {
config.CheckTableBeforeDrop = originalVal
}()
store, dom := testkit.CreateMockStoreAndDomain(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, index idx_a(a))")
tk.MustExec("create global binding for select * from t where a > 10 using select * from t ignore index(idx_a) where a > 10")
tk.MustExec("set @@tidb_evolve_plan_baselines=1")
tk.MustQuery("select * from t use index(idx_a) where a > 0")
bindHandle := dom.BindHandle()
bindHandle.SaveEvolveTasksToStore()
// Verify that the added Binding for evolution contains a valid ID and Hint; otherwise, a panic may happen.
sql, hash := utilNormalizeWithDefaultDB(t, "select * from t where a > ?", "test")
bindData := bindHandle.GetBindRecord(hash, sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select * from `test` . `t` where `a` > ?", bindData.OriginalSQL)
require.Len(t, bindData.Bindings, 2)
bind := bindData.Bindings[1]
require.Equal(t, bindinfo.PendingVerify, bind.Status)
require.NotEqual(t, "", bind.ID)
require.NotNil(t, bind.Hint)
}
func TestHintsSetID(t *testing.T) {
store, dom := testkit.CreateMockStoreAndDomain(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, index idx_a(a))")
tk.MustExec("create global binding for select * from t where a > 10 using select /*+ use_index(test.t, idx_a) */ * from t where a > 10")
bindHandle := dom.BindHandle()
// Verify that the added Binding contains an ID with the restored query block.
sql, hash := utilNormalizeWithDefaultDB(t, "select * from t where a > ?", "test")
bindData := bindHandle.GetBindRecord(hash, sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select * from `test` . `t` where `a` > ?", bindData.OriginalSQL)
require.Len(t, bindData.Bindings, 1)
bind := bindData.Bindings[0]
require.Equal(t, "use_index(@`sel_1` `test`.`t` `idx_a`)", bind.ID)
utilCleanBindingEnv(tk, dom)
tk.MustExec("create global binding for select * from t where a > 10 using select /*+ use_index(t, idx_a) */ * from t where a > 10")
bindData = bindHandle.GetBindRecord(hash, sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select * from `test` . `t` where `a` > ?", bindData.OriginalSQL)
require.Len(t, bindData.Bindings, 1)
bind = bindData.Bindings[0]
require.Equal(t, "use_index(@`sel_1` `test`.`t` `idx_a`)", bind.ID)
utilCleanBindingEnv(tk, dom)
tk.MustExec("create global binding for select * from t where a > 10 using select /*+ use_index(@sel_1 t, idx_a) */ * from t where a > 10")
bindData = bindHandle.GetBindRecord(hash, sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select * from `test` . `t` where `a` > ?", bindData.OriginalSQL)
require.Len(t, bindData.Bindings, 1)
bind = bindData.Bindings[0]
require.Equal(t, "use_index(@`sel_1` `test`.`t` `idx_a`)", bind.ID)
utilCleanBindingEnv(tk, dom)
tk.MustExec("create global binding for select * from t where a > 10 using select /*+ use_index(@qb1 t, idx_a) qb_name(qb1) */ * from t where a > 10")
bindData = bindHandle.GetBindRecord(hash, sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select * from `test` . `t` where `a` > ?", bindData.OriginalSQL)
require.Len(t, bindData.Bindings, 1)
bind = bindData.Bindings[0]
require.Equal(t, "use_index(@`sel_1` `test`.`t` `idx_a`)", bind.ID)
utilCleanBindingEnv(tk, dom)
tk.MustExec("create global binding for select * from t where a > 10 using select /*+ use_index(T, IDX_A) */ * from t where a > 10")
bindData = bindHandle.GetBindRecord(hash, sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select * from `test` . `t` where `a` > ?", bindData.OriginalSQL)
require.Len(t, bindData.Bindings, 1)
bind = bindData.Bindings[0]
require.Equal(t, "use_index(@`sel_1` `test`.`t` `idx_a`)", bind.ID)
utilCleanBindingEnv(tk, dom)
err := tk.ExecToErr("create global binding for select * from t using select /*+ non_exist_hint() */ * from t")
require.True(t, terror.ErrorEqual(err, parser.ErrParse))
tk.MustExec("create global binding for select * from t where a > 10 using select * from t where a > 10")
bindData = bindHandle.GetBindRecord(hash, sql, "test")
require.NotNil(t, bindData)
require.Equal(t, "select * from `test` . `t` where `a` > ?", bindData.OriginalSQL)
require.Len(t, bindData.Bindings, 1)
bind = bindData.Bindings[0]
require.Equal(t, "", bind.ID)
}
func TestNotEvolvePlanForReadStorageHint(t *testing.T) {
originalVal := config.CheckTableBeforeDrop
config.CheckTableBeforeDrop = true
defer func() {
config.CheckTableBeforeDrop = originalVal
}()
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, index idx_a(a), index idx_b(b))")
tk.MustExec("insert into t values (1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (10,10)")
tk.MustExec("analyze table t")
// Create virtual tiflash replica info.
dom := domain.GetDomain(tk.Session())
is := dom.InfoSchema()
db, exists := is.SchemaByName(model.NewCIStr("test"))
require.True(t, exists)
for _, tblInfo := range db.Tables {
if tblInfo.Name.L == "t" {
tblInfo.TiFlashReplica = &model.TiFlashReplicaInfo{
Count: 1,
Available: true,
}
}
}
// Make sure the best plan of the SQL uses the TiKV index.
tk.MustExec("set @@session.tidb_executor_concurrency = 4; set @@tidb_allow_mpp=0;")
rows := tk.MustQuery("explain select * from t where a >= 11 and b >= 11").Rows()
require.Equal(t, "cop[tikv]", fmt.Sprintf("%v", rows[len(rows)-1][2]))
tk.MustExec("set @@tidb_allow_mpp=1")
tk.MustExec("create global binding for select * from t where a >= 1 and b >= 1 using select /*+ read_from_storage(tiflash[t]) */ * from t where a >= 1 and b >= 1")
tk.MustExec("set @@tidb_evolve_plan_baselines=1")
// Even if the TiKV index has a lower cost, it chooses TiFlash.
rows = tk.MustQuery("explain select * from t where a >= 11 and b >= 11").Rows()
require.Equal(t, "mpp[tiflash]", fmt.Sprintf("%v", rows[len(rows)-1][2]))
tk.MustExec("admin flush bindings")
rows = tk.MustQuery("show global bindings").Rows()
// No evolve task is generated, because the origin binding is a read_from_storage binding.
require.Len(t, rows, 1)
require.Equal(t, "SELECT /*+ read_from_storage(tiflash[`t`])*/ * FROM `test`.`t` WHERE `a` >= 1 AND `b` >= 1", rows[0][1])
require.Equal(t, bindinfo.Enabled, rows[0][3])
}
func TestBindingWithIsolationRead(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, index idx_a(a), index idx_b(b))")
tk.MustExec("insert into t values (1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (10,10)")
tk.MustExec("analyze table t")
// Create virtual tiflash replica info.
dom := domain.GetDomain(tk.Session())
is := dom.InfoSchema()
db, exists := is.SchemaByName(model.NewCIStr("test"))
require.True(t, exists)
for _, tblInfo := range db.Tables {
if tblInfo.Name.L == "t" {
tblInfo.TiFlashReplica = &model.TiFlashReplicaInfo{
Count: 1,
Available: true,
}
}
}
tk.MustExec("create global binding for select * from t where a >= 1 and b >= 1 using select * from t use index(idx_a) where a >= 1 and b >= 1")
tk.MustExec("set @@tidb_use_plan_baselines = 1")
rows := tk.MustQuery("explain select * from t where a >= 11 and b >= 11").Rows()
require.Equal(t, "cop[tikv]", rows[len(rows)-1][2])
// Even though we build a binding that uses an index for the SQL, after setting the isolation read to TiFlash, it chooses TiFlash instead of the TiKV index.
tk.MustExec("set @@tidb_isolation_read_engines = \"tiflash\"")
rows = tk.MustQuery("explain select * from t where a >= 11 and b >= 11").Rows()
require.Equal(t, "mpp[tiflash]", rows[len(rows)-1][2])
}
func TestReCreateBindAfterEvolvePlan(t *testing.T) {
originalVal := config.CheckTableBeforeDrop
config.CheckTableBeforeDrop = true
defer func() {
config.CheckTableBeforeDrop = originalVal
}()
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, c int, index idx_a(a), index idx_b(b), index idx_c(c))")
tk.MustExec("insert into t values (1,1,1), (2,2,2), (3,3,3), (4,4,4), (5,5,5)")
tk.MustExec("analyze table t")
tk.MustExec("create global binding for select * from t where a >= 1 and b >= 1 using select * from t use index(idx_a) where a >= 1 and b >= 1")
tk.MustExec("set @@tidb_evolve_plan_baselines=1")
// It cannot choose the table path although it has the lowest cost.
tk.MustQuery("select * from t where a >= 0 and b >= 0")
require.Equal(t, "t:idx_a", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
tk.MustExec("admin flush bindings")
rows := tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 2)
require.Equal(t, "SELECT /*+ use_index(@`sel_1` `test`.`t` )*/ * FROM `test`.`t` WHERE `a` >= 0 AND `b` >= 0", rows[0][1])
require.Equal(t, "pending verify", rows[0][3])
tk.MustExec("create global binding for select * from t where a >= 1 and b >= 1 using select * from t use index(idx_b) where a >= 1 and b >= 1")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
tk.MustQuery("select * from t where a >= 4 and b >= 1")
require.Equal(t, "t:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
}
func TestInvisibleIndex(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, unique idx_a(a), index idx_b(b) invisible)")
tk.MustGetErrMsg(
"create global binding for select * from t using select * from t use index(idx_b) ",
"[planner:1176]Key 'idx_b' doesn't exist in table 't'")
// Create bind using index
tk.MustExec("create global binding for select * from t using select * from t use index(idx_a) ")
tk.MustQuery("select * from t")
require.Equal(t, "t:idx_a", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("select * from t", "idx_a(a)"))
tk.MustExec(`prepare stmt1 from 'select * from t'`)
tk.MustExec("execute stmt1")
require.Len(t, tk.Session().GetSessionVars().StmtCtx.IndexNames, 1)
require.Equal(t, "t:idx_a", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
// And then make this index invisible
tk.MustExec("alter table t alter index idx_a invisible")
tk.MustQuery("select * from t")
require.Len(t, tk.Session().GetSessionVars().StmtCtx.IndexNames, 0)
tk.MustExec("execute stmt1")
require.Len(t, tk.Session().GetSessionVars().StmtCtx.IndexNames, 0)
tk.MustExec("drop binding for select * from t")
}
func TestSPMHitInfo(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t1")
tk.MustExec("drop table if exists t2")
tk.MustExec("create table t1(id int)")
tk.MustExec("create table t2(id int)")
require.True(t, tk.HasPlan("SELECT * from t1,t2 where t1.id = t2.id", "HashJoin"))
require.True(t, tk.HasPlan("SELECT /*+ TIDB_SMJ(t1, t2) */ * from t1,t2 where t1.id = t2.id", "MergeJoin"))
tk.MustExec("SELECT * from t1,t2 where t1.id = t2.id")
tk.MustQuery(`select @@last_plan_from_binding;`).Check(testkit.Rows("0"))
tk.MustExec("create global binding for SELECT * from t1,t2 where t1.id = t2.id using SELECT /*+ TIDB_SMJ(t1, t2) */ * from t1,t2 where t1.id = t2.id")
require.True(t, tk.HasPlan("SELECT * from t1,t2 where t1.id = t2.id", "MergeJoin"))
tk.MustExec("SELECT * from t1,t2 where t1.id = t2.id")
tk.MustQuery(`select @@last_plan_from_binding;`).Check(testkit.Rows("1"))
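// Disabling the binding should stop it from being used, so last_plan_from_binding goes back to 0.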
tk.MustExec("set binding disabled for SELECT * from t1,t2 where t1.id = t2.id")
tk.MustExec("SELECT * from t1,t2 where t1.id = t2.id")
tk.MustQuery(`select @@last_plan_from_binding;`).Check(testkit.Rows("0"))
tk.MustExec("drop global binding for SELECT * from t1,t2 where t1.id = t2.id")
}
func TestReCreateBind(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, index idx(a))")
tk.MustQuery("select * from mysql.bind_info where source != 'builtin'").Check(testkit.Rows())
tk.MustQuery("show global bindings").Check(testkit.Rows())
tk.MustExec("create global binding for select * from t using select * from t")
tk.MustQuery("select original_sql, status from mysql.bind_info where source != 'builtin';").Check(testkit.Rows(
"select * from `test` . `t` enabled",
))
rows := tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
require.Equal(t, "select * from `test` . `t`", rows[0][0])
require.Equal(t, bindinfo.Enabled, rows[0][3])
tk.MustExec("create global binding for select * from t using select * from t")
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
require.Equal(t, "select * from `test` . `t`", rows[0][0])
require.Equal(t, bindinfo.Enabled, rows[0][3])
rows = tk.MustQuery("select original_sql, status from mysql.bind_info where source != 'builtin';").Rows()
require.Len(t, rows, 2)
require.Equal(t, "deleted", rows[0][1])
require.Equal(t, bindinfo.Enabled, rows[1][1])
}
func TestExplainShowBindSQL(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, key(a))")
tk.MustExec("create global binding for select * from t using select * from t use index(a)")
tk.MustQuery("select original_sql, bind_sql from mysql.bind_info where default_db != 'mysql'").Check(testkit.Rows(
"select * from `test` . `t` SELECT * FROM `test`.`t` USE INDEX (`a`)",
))
tk.MustExec("explain format = 'verbose' select * from t")
tk.MustQuery("show warnings").Check(testkit.Rows("Note 1105 Using the bindSQL: SELECT * FROM `test`.`t` USE INDEX (`a`)"))
// explain analyze does not support verbose yet.
}
func TestDMLIndexHintBind(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("create table t(a int, b int, c int, key idx_b(b), key idx_c(c))")
tk.MustExec("delete from t where b = 1 and c > 1")
require.Equal(t, "t:idx_b", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("delete from t where b = 1 and c > 1", "idx_b(b)"))
tk.MustExec("create global binding for delete from t where b = 1 and c > 1 using delete from t use index(idx_c) where b = 1 and c > 1")
tk.MustExec("delete from t where b = 1 and c > 1")
require.Equal(t, "t:idx_c", tk.Session().GetSessionVars().StmtCtx.IndexNames[0])
require.True(t, tk.MustUseIndex("delete from t where b = 1 and c > 1", "idx_c(c)"))
}
func TestForbidEvolvePlanBaseLinesBeforeGA(t *testing.T) {
originalVal := config.CheckTableBeforeDrop
config.CheckTableBeforeDrop = false
defer func() {
config.CheckTableBeforeDrop = originalVal
}()
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
err := tk.ExecToErr("set @@tidb_evolve_plan_baselines=0")
require.Equal(t, nil, err)
err = tk.ExecToErr("set @@TiDB_Evolve_pLan_baselines=1")
require.EqualError(t, err, "Cannot enable baseline evolution feature, it is not generally available now")
err = tk.ExecToErr("set @@TiDB_Evolve_pLan_baselines=oN")
require.EqualError(t, err, "Cannot enable baseline evolution feature, it is not generally available now")
err = tk.ExecToErr("admin evolve bindings")
require.EqualError(t, err, "Cannot enable baseline evolution feature, it is not generally available now")
}
func TestExplainTableStmts(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(id int, value decimal(5,2))")
tk.MustExec("table t")
tk.MustExec("explain table t")
tk.MustExec("desc table t")
}
func TestSPMWithoutUseDatabase(t *testing.T) {
store, dom := testkit.CreateMockStoreAndDomain(t)
tk := testkit.NewTestKit(t, store)
tk1 := testkit.NewTestKit(t, store)
utilCleanBindingEnv(tk, dom)
utilCleanBindingEnv(tk1, dom)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, key(a))")
tk.MustExec("create global binding for select * from t using select * from t force index(a)")
err := tk1.ExecToErr("select * from t")
require.Error(t, err)
require.Regexp(t, "No database selected$", err)
tk1.MustQuery(`select @@last_plan_from_binding;`).Check(testkit.Rows("0"))
require.True(t, tk1.MustUseIndex("select * from test.t", "a"))
tk1.MustExec("select * from test.t")
tk1.MustQuery(`select @@last_plan_from_binding;`).Check(testkit.Rows("1"))
tk1.MustExec("set binding disabled for select * from test.t")
tk1.MustExec("select * from test.t")
tk1.MustQuery(`select @@last_plan_from_binding;`).Check(testkit.Rows("0"))
}
func TestBindingWithoutCharset(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t (a varchar(10) CHARACTER SET utf8)")
tk.MustExec("create global binding for select * from t where a = 'aa' using select * from t where a = 'aa'")
rows := tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
require.Equal(t, "select * from `test` . `t` where `a` = ?", rows[0][0])
require.Equal(t, "SELECT * FROM `test`.`t` WHERE `a` = 'aa'", rows[0][1])
}
func TestBindingWithMultiParenthesis(t *testing.T) {
store := testkit.CreateMockStore(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t (a int)")
tk.MustExec("create global binding for select * from (select * from t where a = 1) tt using select * from (select * from t where a = 1) tt")
tk.MustExec("create global binding for select * from ((select * from t where a = 1)) tt using select * from (select * from t where a = 1) tt")
rows := tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
require.Equal(t, "select * from ( select * from `test` . `t` where `a` = ? ) as `tt`", rows[0][0])
require.Equal(t, "SELECT * FROM (SELECT * FROM `test`.`t` WHERE `a` = 1) AS `tt`", rows[0][1])
}
func TestGCBindRecord(t *testing.T) {
// set lease for gc tests
originLease := bindinfo.Lease
bindinfo.Lease = 0
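// With a zero lease, GCBindRecord is expected to reclaim deleted bind records immediately (checked at the end of this test).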
defer func() {
bindinfo.Lease = originLease
}()
store, dom := testkit.CreateMockStoreAndDomain(t)
tk := testkit.NewTestKit(t, store)
tk.MustExec("use test")
tk.MustExec("drop table if exists t")
tk.MustExec("create table t(a int, b int, key(a))")
tk.MustExec("create global binding for select * from t where a = 1 using select * from t use index(a) where a = 1")
rows := tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
require.Equal(t, "select * from `test` . `t` where `a` = ?", rows[0][0])
require.Equal(t, bindinfo.Enabled, rows[0][3])
tk.MustQuery("select status from mysql.bind_info where original_sql = 'select * from `test` . `t` where `a` = ?'").Check(testkit.Rows(
bindinfo.Enabled,
))
h := dom.BindHandle()
// bindinfo.Lease is set to 0 at the start of this test.
require.NoError(t, h.GCBindRecord())
rows = tk.MustQuery("show global bindings").Rows()
require.Len(t, rows, 1)
require.Equal(t, "select * from `test` . `t` where `a` = ?", rows[0][0])
require.Equal(t, bindinfo.Enabled, rows[0][3])
tk.MustQuery("select status from mysql.bind_info where original_sql = 'select * from `test` . `t` where `a` = ?'").Check(testkit.Rows(
bindinfo.Enabled,
))
tk.MustExec("drop global binding for select * from t where a = 1")
tk.MustQuery("show global bindings").Check(testkit.Rows())
tk.MustQuery("select status from mysql.bind_info where original_sql = 'select * from `test` . `t` where `a` = ?'").Check(testkit.Rows(
"deleted",
))
require.NoError(t, h.GCBindRecord())
tk.MustQuery("show global bindings").Check(testkit.Rows())
tk.MustQuery("select status from mysql.bind_info where original_sql = 'select * from `test` . `t` where `a` = ?'").Check(testkit.Rows())
}
| bindinfo/bind_test.go | 0 | https://github.com/pingcap/tidb/commit/e6f020a26efc60480e0a0690cdca87f0990d4ceb | [
0.004312381613999605,
0.0008235048153437674,
0.00016055928426794708,
0.00017218844732269645,
0.0010058643529191613
] |
{
"id": 2,
"code_window": [
"\twarning := \"Warning 1452 Cannot add or update a child row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `fk_1` FOREIGN KEY (`i`) REFERENCES `t1` (`i`))\"\n",
"\ttk.MustQuery(\"show warnings;\").Check(testkit.Rows(warning, warning))\n",
"\ttk.MustQuery(\"select * from t2\").Check(testkit.Rows(\"1\", \"3\"))\n",
"}\n",
"\n",
"func TestForeignKeyOnInsertOnDuplicateParentTableCheck(t *testing.T) {\n"
],
"labels": [
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\ttk.MustQuery(\"select * from t2 order by i\").Check(testkit.Rows(\"<nil>\", \"1\", \"1\", \"3\"))\n",
"\t// Test for foreign key index is non-unique key.\n",
"\ttk.MustExec(\"drop table t1,t2\")\n",
"\ttk.MustExec(\"CREATE TABLE t1 (i INT, index(i));\")\n",
"\ttk.MustExec(\"CREATE TABLE t2 (i INT, FOREIGN KEY (i) REFERENCES t1 (i));\")\n",
"\ttk.MustExec(\"INSERT INTO t1 VALUES (1),(3);\")\n",
"\ttk.MustExec(\"INSERT IGNORE INTO t2 VALUES (1), (null), (1), (2), (3), (2);\")\n",
"\ttk.MustQuery(\"show warnings;\").Check(testkit.Rows(warning, warning))\n",
"\ttk.MustQuery(\"select * from t2 order by i\").Check(testkit.Rows(\"<nil>\", \"1\", \"1\", \"3\"))\n"
],
"file_path": "executor/fktest/foreign_key_test.go",
"type": "replace",
"edit_start_line_idx": 495
} | create database seconddb;
| br/tests/lightning_black-white-list/data/seconddb-schema-create.sql | 0 | https://github.com/pingcap/tidb/commit/e6f020a26efc60480e0a0690cdca87f0990d4ceb | [
0.00016102676454465836,
0.00016102676454465836,
0.00016102676454465836,
0.00016102676454465836,
0
] |
{
"id": 2,
"code_window": [
"\twarning := \"Warning 1452 Cannot add or update a child row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `fk_1` FOREIGN KEY (`i`) REFERENCES `t1` (`i`))\"\n",
"\ttk.MustQuery(\"show warnings;\").Check(testkit.Rows(warning, warning))\n",
"\ttk.MustQuery(\"select * from t2\").Check(testkit.Rows(\"1\", \"3\"))\n",
"}\n",
"\n",
"func TestForeignKeyOnInsertOnDuplicateParentTableCheck(t *testing.T) {\n"
],
"labels": [
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\ttk.MustQuery(\"select * from t2 order by i\").Check(testkit.Rows(\"<nil>\", \"1\", \"1\", \"3\"))\n",
"\t// Test for foreign key index is non-unique key.\n",
"\ttk.MustExec(\"drop table t1,t2\")\n",
"\ttk.MustExec(\"CREATE TABLE t1 (i INT, index(i));\")\n",
"\ttk.MustExec(\"CREATE TABLE t2 (i INT, FOREIGN KEY (i) REFERENCES t1 (i));\")\n",
"\ttk.MustExec(\"INSERT INTO t1 VALUES (1),(3);\")\n",
"\ttk.MustExec(\"INSERT IGNORE INTO t2 VALUES (1), (null), (1), (2), (3), (2);\")\n",
"\ttk.MustQuery(\"show warnings;\").Check(testkit.Rows(warning, warning))\n",
"\ttk.MustQuery(\"select * from t2 order by i\").Check(testkit.Rows(\"<nil>\", \"1\", \"1\", \"3\"))\n"
],
"file_path": "executor/fktest/foreign_key_test.go",
"type": "replace",
"edit_start_line_idx": 495
} | set tidb_cost_model_version=1;
-- http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-h_v2.17.1.pdf
CREATE DATABASE IF NOT EXISTS TPCH;
USE TPCH;
CREATE TABLE IF NOT EXISTS nation ( N_NATIONKEY INTEGER NOT NULL,
N_NAME CHAR(25) NOT NULL,
N_REGIONKEY INTEGER NOT NULL,
N_COMMENT VARCHAR(152),
PRIMARY KEY (N_NATIONKEY));
CREATE TABLE IF NOT EXISTS region ( R_REGIONKEY INTEGER NOT NULL,
R_NAME CHAR(25) NOT NULL,
R_COMMENT VARCHAR(152),
PRIMARY KEY (R_REGIONKEY));
CREATE TABLE IF NOT EXISTS part ( P_PARTKEY INTEGER NOT NULL,
P_NAME VARCHAR(55) NOT NULL,
P_MFGR CHAR(25) NOT NULL,
P_BRAND CHAR(10) NOT NULL,
P_TYPE VARCHAR(25) NOT NULL,
P_SIZE INTEGER NOT NULL,
P_CONTAINER CHAR(10) NOT NULL,
P_RETAILPRICE DECIMAL(15,2) NOT NULL,
P_COMMENT VARCHAR(23) NOT NULL,
PRIMARY KEY (P_PARTKEY));
CREATE TABLE IF NOT EXISTS supplier ( S_SUPPKEY INTEGER NOT NULL,
S_NAME CHAR(25) NOT NULL,
S_ADDRESS VARCHAR(40) NOT NULL,
S_NATIONKEY INTEGER NOT NULL,
S_PHONE CHAR(15) NOT NULL,
S_ACCTBAL DECIMAL(15,2) NOT NULL,
S_COMMENT VARCHAR(101) NOT NULL,
PRIMARY KEY (S_SUPPKEY),
CONSTRAINT FOREIGN KEY SUPPLIER_FK1 (S_NATIONKEY) references nation(N_NATIONKEY));
CREATE TABLE IF NOT EXISTS partsupp ( PS_PARTKEY INTEGER NOT NULL,
PS_SUPPKEY INTEGER NOT NULL,
PS_AVAILQTY INTEGER NOT NULL,
PS_SUPPLYCOST DECIMAL(15,2) NOT NULL,
PS_COMMENT VARCHAR(199) NOT NULL,
PRIMARY KEY (PS_PARTKEY,PS_SUPPKEY),
CONSTRAINT FOREIGN KEY PARTSUPP_FK1 (PS_SUPPKEY) references supplier(S_SUPPKEY),
CONSTRAINT FOREIGN KEY PARTSUPP_FK2 (PS_PARTKEY) references part(P_PARTKEY));
CREATE TABLE IF NOT EXISTS customer ( C_CUSTKEY INTEGER NOT NULL,
C_NAME VARCHAR(25) NOT NULL,
C_ADDRESS VARCHAR(40) NOT NULL,
C_NATIONKEY INTEGER NOT NULL,
C_PHONE CHAR(15) NOT NULL,
C_ACCTBAL DECIMAL(15,2) NOT NULL,
C_MKTSEGMENT CHAR(10) NOT NULL,
C_COMMENT VARCHAR(117) NOT NULL,
PRIMARY KEY (C_CUSTKEY),
CONSTRAINT FOREIGN KEY CUSTOMER_FK1 (C_NATIONKEY) references nation(N_NATIONKEY));
CREATE TABLE IF NOT EXISTS orders ( O_ORDERKEY INTEGER NOT NULL,
O_CUSTKEY INTEGER NOT NULL,
O_ORDERSTATUS CHAR(1) NOT NULL,
O_TOTALPRICE DECIMAL(15,2) NOT NULL,
O_ORDERDATE DATE NOT NULL,
O_ORDERPRIORITY CHAR(15) NOT NULL,
O_CLERK CHAR(15) NOT NULL,
O_SHIPPRIORITY INTEGER NOT NULL,
O_COMMENT VARCHAR(79) NOT NULL,
PRIMARY KEY (O_ORDERKEY),
CONSTRAINT FOREIGN KEY ORDERS_FK1 (O_CUSTKEY) references customer(C_CUSTKEY));
CREATE TABLE IF NOT EXISTS lineitem ( L_ORDERKEY INTEGER NOT NULL,
L_PARTKEY INTEGER NOT NULL,
L_SUPPKEY INTEGER NOT NULL,
L_LINENUMBER INTEGER NOT NULL,
L_QUANTITY DECIMAL(15,2) NOT NULL,
L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL,
L_DISCOUNT DECIMAL(15,2) NOT NULL,
L_TAX DECIMAL(15,2) NOT NULL,
L_RETURNFLAG CHAR(1) NOT NULL,
L_LINESTATUS CHAR(1) NOT NULL,
L_SHIPDATE DATE NOT NULL,
L_COMMITDATE DATE NOT NULL,
L_RECEIPTDATE DATE NOT NULL,
L_SHIPINSTRUCT CHAR(25) NOT NULL,
L_SHIPMODE CHAR(10) NOT NULL,
L_COMMENT VARCHAR(44) NOT NULL,
PRIMARY KEY (L_ORDERKEY,L_LINENUMBER),
CONSTRAINT FOREIGN KEY LINEITEM_FK1 (L_ORDERKEY) references orders(O_ORDERKEY),
CONSTRAINT FOREIGN KEY LINEITEM_FK2 (L_PARTKEY,L_SUPPKEY) references partsupp(PS_PARTKEY, PS_SUPPKEY));
-- load stats.
load stats 's/tpch_stats/nation.json';
load stats 's/tpch_stats/region.json';
load stats 's/tpch_stats/part.json';
load stats 's/tpch_stats/supplier.json';
load stats 's/tpch_stats/partsupp.json';
load stats 's/tpch_stats/customer.json';
load stats 's/tpch_stats/orders.json';
load stats 's/tpch_stats/lineitem.json';
set @@session.tidb_opt_agg_push_down = 0;
/*
Q1 Pricing Summary Report
This query reports the amount of business that was billed, shipped, and returned.
The Pricing Summary Report Query provides a summary pricing report for all lineitems shipped as of a given date.
The date is within 60 - 120 days of the greatest ship date contained in the database. The query lists totals for
extended price, discounted extended price, discounted extended price plus tax, average quantity, average extended
price, and average discount. These aggregates are grouped by RETURNFLAG and LINESTATUS, and listed in
ascending order of RETURNFLAG and LINESTATUS. A count of the number of lineitems in each group is
included.
Planner enhancement: none.
*/
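-- The query below uses DELTA = 108 days, which falls inside the 60-120 day window described above.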
explain format = 'brief'
select
l_returnflag,
l_linestatus,
sum(l_quantity) as sum_qty,
sum(l_extendedprice) as sum_base_price,
sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
avg(l_quantity) as avg_qty,
avg(l_extendedprice) as avg_price,
avg(l_discount) as avg_disc,
count(*) as count_order
from
lineitem
where
l_shipdate <= date_sub('1998-12-01', interval 108 day)
group by
l_returnflag,
l_linestatus
order by
l_returnflag,
l_linestatus;
/*
Q2 Minimum Cost Supplier Query
This query finds which supplier should be selected to place an order for a given part in a given region.
The Minimum Cost Supplier Query finds, in a given region, for each part of a certain type and size, the supplier who
can supply it at minimum cost. If several suppliers in that region offer the desired part type and size at the same
(minimum) cost, the query lists the parts from suppliers with the 100 highest account balances. For each supplier,
the query lists the supplier's account balance, name and nation; the part's number and manufacturer; the supplier's
address, phone number and comment information.
Planner enhancement: join reorder.
*/
explain format = 'brief'
select
s_acctbal,
s_name,
n_name,
p_partkey,
p_mfgr,
s_address,
s_phone,
s_comment
from
part,
supplier,
partsupp,
nation,
region
where
p_partkey = ps_partkey
and s_suppkey = ps_suppkey
and p_size = 30
and p_type like '%STEEL'
and s_nationkey = n_nationkey
and n_regionkey = r_regionkey
and r_name = 'ASIA'
and ps_supplycost = (
select
min(ps_supplycost)
from
partsupp,
supplier,
nation,
region
where
p_partkey = ps_partkey
and s_suppkey = ps_suppkey
and s_nationkey = n_nationkey
and n_regionkey = r_regionkey
and r_name = 'ASIA'
)
order by
s_acctbal desc,
n_name,
s_name,
p_partkey
limit 100;
/*
Q3 Shipping Priority Query
This query retrieves the 10 unshipped orders with the highest value.
The Shipping Priority Query retrieves the shipping priority and potential revenue, defined as the sum of
l_extendedprice * (1-l_discount), of the orders having the largest revenue among those that had not been shipped as
of a given date. Orders are listed in decreasing order of revenue. If more than 10 unshipped orders exist, only the 10
orders with the largest revenue are listed.
Planner enhancement: if the group-by items include a primary key, the non-primary-key columns are redundant.
*/
explain format = 'brief'
select
l_orderkey,
sum(l_extendedprice * (1 - l_discount)) as revenue,
o_orderdate,
o_shippriority
from
customer,
orders,
lineitem
where
c_mktsegment = 'AUTOMOBILE'
and c_custkey = o_custkey
and l_orderkey = o_orderkey
and o_orderdate < '1995-03-13'
and l_shipdate > '1995-03-13'
group by
l_orderkey,
o_orderdate,
o_shippriority
order by
revenue desc,
o_orderdate
limit 10;
/*
Q4 Order Priority Checking Query
This query determines how well the order priority system is working and gives an assessment of customer satisfaction.
The Order Priority Checking Query counts the number of orders ordered in a given quarter of a given year in which
at least one lineitem was received by the customer later than its committed date. The query lists the count of such
orders for each order priority sorted in ascending priority order.
*/
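-- The EXISTS subquery keeps an order only if at least one of its lineitems was received later than its committed date.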
explain format = 'brief'
select
o_orderpriority,
count(*) as order_count
from
orders
where
o_orderdate >= '1995-01-01'
and o_orderdate < date_add('1995-01-01', interval '3' month)
and exists (
select
*
from
lineitem
where
l_orderkey = o_orderkey
and l_commitdate < l_receiptdate
)
group by
o_orderpriority
order by
o_orderpriority;
/*
Q5 Local Supplier Volume Query
This query lists the revenue volume done through local suppliers.
The Local Supplier Volume Query lists for each nation in a region the revenue volume that resulted from lineitem
transactions in which the customer ordering parts and the supplier filling them were both within that nation. The
query is run in order to determine whether to institute local distribution centers in a given region. The query considers
only parts ordered in a given year. The query displays the nations and revenue volume in descending order by
revenue. Revenue volume for all qualifying lineitems in a particular nation is defined as sum(l_extendedprice * (1 -
l_discount)).
Planner enhancement: join reorder.
*/
explain format = 'brief'
select
n_name,
sum(l_extendedprice * (1 - l_discount)) as revenue
from
customer,
orders,
lineitem,
supplier,
nation,
region
where
c_custkey = o_custkey
and l_orderkey = o_orderkey
and l_suppkey = s_suppkey
and c_nationkey = s_nationkey
and s_nationkey = n_nationkey
and n_regionkey = r_regionkey
and r_name = 'MIDDLE EAST'
and o_orderdate >= '1994-01-01'
and o_orderdate < date_add('1994-01-01', interval '1' year)
group by
n_name
order by
revenue desc;
/*
Q6 Forecasting Revenue Change Query
This query quantifies the amount of revenue increase that would have resulted from eliminating certain companywide
discounts in a given percentage range in a given year. Asking this type of "what if" query can be used to look
for ways to increase revenues.
The Forecasting Revenue Change Query considers all the lineitems shipped in a given year with discounts between
DISCOUNT-0.01 and DISCOUNT+0.01. The query lists the amount by which the total revenue would have
increased if these discounts had been eliminated for lineitems with l_quantity less than quantity. Note that the
potential revenue increase is equal to the sum of [l_extendedprice * l_discount] for all lineitems with discounts and
quantities in the qualifying range.
*/
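-- Here DISCOUNT = 0.06 and quantity = 24, so the predicate below checks l_discount between 0.05 and 0.07 and l_quantity < 24.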
explain format = 'brief'
select
sum(l_extendedprice * l_discount) as revenue
from
lineitem
where
l_shipdate >= '1994-01-01'
and l_shipdate < date_add('1994-01-01', interval '1' year)
and l_discount between 0.06 - 0.01 and 0.06 + 0.01
and l_quantity < 24;
/*
Q7 Volume Shipping Query
This query determines the value of goods shipped between certain nations to help in the re-negotiation of shipping
contracts.
The Volume Shipping Query finds, for two given nations, the gross discounted revenues derived from lineitems in
which parts were shipped from a supplier in either nation to a customer in the other nation during 1995 and 1996.
The query lists the supplier nation, the customer nation, the year, and the revenue from shipments that took place in
that year. The query orders the answer by Supplier nation, Customer nation, and year (all ascending).
Planner enhancement: join reorder.
*/
explain format = 'brief'
select
supp_nation,
cust_nation,
l_year,
sum(volume) as revenue
from
(
select
n1.n_name as supp_nation,
n2.n_name as cust_nation,
extract(year from l_shipdate) as l_year,
l_extendedprice * (1 - l_discount) as volume
from
supplier,
lineitem,
orders,
customer,
nation n1,
nation n2
where
s_suppkey = l_suppkey
and o_orderkey = l_orderkey
and c_custkey = o_custkey
and s_nationkey = n1.n_nationkey
and c_nationkey = n2.n_nationkey
and (
(n1.n_name = 'JAPAN' and n2.n_name = 'INDIA')
or (n1.n_name = 'INDIA' and n2.n_name = 'JAPAN')
)
and l_shipdate between '1995-01-01' and '1996-12-31'
) as shipping
group by
supp_nation,
cust_nation,
l_year
order by
supp_nation,
cust_nation,
l_year;
/*
Q8 National Market Share Query
This query determines how the market share of a given nation within a given region has changed over two years for
a given part type.
The market share for a given nation within a given region is defined as the fraction of the revenue, the sum of
[l_extendedprice * (1-l_discount)], from the products of a specified type in that region that was supplied by suppliers
from the given nation. The query determines this for the years 1995 and 1996 presented in this order.
Planner enhancement: join reorder.
*/
explain format = 'brief'
select
o_year,
sum(case
when nation = 'INDIA' then volume
else 0
end) / sum(volume) as mkt_share
from
(
select
extract(year from o_orderdate) as o_year,
l_extendedprice * (1 - l_discount) as volume,
n2.n_name as nation
from
part,
supplier,
lineitem,
orders,
customer,
nation n1,
nation n2,
region
where
p_partkey = l_partkey
and s_suppkey = l_suppkey
and l_orderkey = o_orderkey
and o_custkey = c_custkey
and c_nationkey = n1.n_nationkey
and n1.n_regionkey = r_regionkey
and r_name = 'ASIA'
and s_nationkey = n2.n_nationkey
and o_orderdate between '1995-01-01' and '1996-12-31'
and p_type = 'SMALL PLATED COPPER'
) as all_nations
group by
o_year
order by
o_year;
/*
Q9 Product Type Profit Measure Query
This query determines how much profit is made on a given line of parts, broken out by supplier nation and year.
The Product Type Profit Measure Query finds, for each nation and each year, the profit for all parts ordered in that
year that contain a specified substring in their names and that were filled by a supplier in that nation. The profit is
defined as the sum of [(l_extendedprice*(1-l_discount)) - (ps_supplycost * l_quantity)] for all lineitems describing
parts in the specified line. The query lists the nations in ascending alphabetical order and, for each nation, the year
and profit in descending order by year (most recent first).
Planner enhancement: join reorder.
*/
explain format = 'brief'
select
nation,
o_year,
sum(amount) as sum_profit
from
(
select
n_name as nation,
extract(year from o_orderdate) as o_year,
l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount
from
part,
supplier,
lineitem,
partsupp,
orders,
nation
where
s_suppkey = l_suppkey
and ps_suppkey = l_suppkey
and ps_partkey = l_partkey
and p_partkey = l_partkey
and o_orderkey = l_orderkey
and s_nationkey = n_nationkey
and p_name like '%dim%'
) as profit
group by
nation,
o_year
order by
nation,
o_year desc;
/*
Q10 Returned Item Reporting Query
The query identifies customers who might be having problems with the parts that are shipped to them.
The Returned Item Reporting Query finds the top 20 customers, in terms of their effect on lost revenue for a given
quarter, who have returned parts. The query considers only parts that were ordered in the specified quarter. The
query lists the customer's name, address, nation, phone number, account balance, comment information and revenue
lost. The customers are listed in descending order of lost revenue. Revenue lost is defined as
sum(l_extendedprice*(1-l_discount)) for all qualifying lineitems.
Planner enhancement: join reorder; if the group-by items include a primary key, the remaining non-primary-key columns are redundant.
*/
explain format = 'brief'
select
c_custkey,
c_name,
sum(l_extendedprice * (1 - l_discount)) as revenue,
c_acctbal,
n_name,
c_address,
c_phone,
c_comment
from
customer,
orders,
lineitem,
nation
where
c_custkey = o_custkey
and l_orderkey = o_orderkey
and o_orderdate >= '1993-08-01'
and o_orderdate < date_add('1993-08-01', interval '3' month)
and l_returnflag = 'R'
and c_nationkey = n_nationkey
group by
c_custkey,
c_name,
c_acctbal,
c_phone,
n_name,
c_address,
c_comment
order by
revenue desc
limit 20;
/*
Q11 Important Stock Identification Query
This query finds the most important subset of suppliers' stock in a given nation.
The Important Stock Identification Query finds, from scanning the available stock of suppliers in a given nation, all
the parts that represent a significant percentage of the total value of all available parts. The query displays the part
number and the value of those parts in descending order of value.
*/
explain format = 'brief'
select
ps_partkey,
sum(ps_supplycost * ps_availqty) as value
from
partsupp,
supplier,
nation
where
ps_suppkey = s_suppkey
and s_nationkey = n_nationkey
and n_name = 'MOZAMBIQUE'
group by
ps_partkey having
sum(ps_supplycost * ps_availqty) > (
select
sum(ps_supplycost * ps_availqty) * 0.0001000000
from
partsupp,
supplier,
nation
where
ps_suppkey = s_suppkey
and s_nationkey = n_nationkey
and n_name = 'MOZAMBIQUE'
)
order by
value desc;
/*
Q12 Shipping Modes and Order Priority Query
This query determines whether selecting less expensive modes of shipping is negatively affecting the critical-priority
orders by causing more parts to be received by customers after the committed date.
The Shipping Modes and Order Priority Query counts, by ship mode, for lineitems actually received by customers in
a given year, the number of lineitems belonging to orders for which the l_receiptdate exceeds the l_commitdate for
two different specified ship modes. Only lineitems that were actually shipped before the l_commitdate are considered.
The late lineitems are partitioned into two groups, those with priority URGENT or HIGH, and those with a
priority other than URGENT or HIGH.
*/
explain format = 'brief'
select
l_shipmode,
sum(case
when o_orderpriority = '1-URGENT'
or o_orderpriority = '2-HIGH'
then 1
else 0
end) as high_line_count,
sum(case
when o_orderpriority <> '1-URGENT'
and o_orderpriority <> '2-HIGH'
then 1
else 0
end) as low_line_count
from
orders,
lineitem
where
o_orderkey = l_orderkey
and l_shipmode in ('RAIL', 'FOB')
and l_commitdate < l_receiptdate
and l_shipdate < l_commitdate
and l_receiptdate >= '1997-01-01'
and l_receiptdate < date_add('1997-01-01', interval '1' year)
group by
l_shipmode
order by
l_shipmode;
/*
Q13 Customer Distribution Query
This query seeks relationships between customers and the size of their orders.
This query determines the distribution of customers by the number of orders they have made, including customers
who have no record of orders, past or present. It counts and reports how many customers have no orders, how many
have 1, 2, 3, etc. A check is made to ensure that the orders counted do not fall into one of several special categories
of orders. Special categories are identified in the order comment column by looking for a particular pattern.
*/
explain format = 'brief'
select
c_count,
count(*) as custdist
from
(
select
c_custkey,
count(o_orderkey) as c_count
from
customer left outer join orders on
c_custkey = o_custkey
and o_comment not like '%pending%deposits%'
group by
c_custkey
) c_orders
group by
c_count
order by
custdist desc,
c_count desc;
/*
Q14 Promotion Effect Query
This query monitors the market response to a promotion such as TV advertisements or a special campaign.
The Promotion Effect Query determines what percentage of the revenue in a given year and month was derived from
promotional parts. The query considers only parts actually shipped in that month and gives the percentage. Revenue
is defined as (l_extendedprice * (1-l_discount)).
*/
explain format = 'brief'
select
100.00 * sum(case
when p_type like 'PROMO%'
then l_extendedprice * (1 - l_discount)
else 0
end) / sum(l_extendedprice * (1 - l_discount)) as promo_revenue
from
lineitem,
part
where
l_partkey = p_partkey
and l_shipdate >= '1996-12-01'
and l_shipdate < date_add('1996-12-01', interval '1' month);
/*
Q15 Top Supplier Query
This query determines the top supplier so it can be rewarded, given more business, or identified for special recognition.
The Top Supplier Query finds the supplier who contributed the most to the overall revenue for parts shipped during
a given quarter of a given year. In case of a tie, the query lists all suppliers whose contribution was equal to the
maximum, presented in supplier number order.
Planner enhancement: support view.
create view revenue0 (supplier_no, total_revenue) as
select
l_suppkey,
sum(l_extendedprice * (1 - l_discount))
from
lineitem
where
l_shipdate >= '1997-07-01'
and l_shipdate < date_add('1997-07-01', interval '3' month)
group by
l_suppkey
select
s_suppkey,
s_name,
s_address,
s_phone,
total_revenue
from
supplier,
revenue0
where
s_suppkey = supplier_no
and total_revenue = (
select
max(total_revenue)
from
revenue0
)
order by
s_suppkey
drop view revenue0
*/
/*
Q16 Parts/Supplier Relationship Query
This query finds out how many suppliers can supply parts with given attributes. It might be used, for example, to
determine whether there is a sufficient number of suppliers for heavily ordered parts.
The Parts/Supplier Relationship Query counts the number of suppliers who can supply parts that satisfy a particular
customer's requirements. The customer is interested in parts of eight different sizes as long as they are not of a given
type, not of a given brand, and not from a supplier who has had complaints registered at the Better Business Bureau.
Results must be presented in descending count and ascending brand, type, and size.
*/
explain format = 'brief'
select
p_brand,
p_type,
p_size,
count(distinct ps_suppkey) as supplier_cnt
from
partsupp,
part
where
p_partkey = ps_partkey
and p_brand <> 'Brand#34'
and p_type not like 'LARGE BRUSHED%'
and p_size in (48, 19, 12, 4, 41, 7, 21, 39)
and ps_suppkey not in (
select
s_suppkey
from
supplier
where
s_comment like '%Customer%Complaints%'
)
group by
p_brand,
p_type,
p_size
order by
supplier_cnt desc,
p_brand,
p_type,
p_size;
/*
Q17 Small-Quantity-Order Revenue Query
This query determines how much average yearly revenue would be lost if orders were no longer filled for small
quantities of certain parts. This may reduce overhead expenses by concentrating sales on larger shipments.
The Small-Quantity-Order Revenue Query considers parts of a given brand and with a given container type and
determines the average lineitem quantity of such parts ordered for all orders (past and pending) in the 7-year database.
What would be the average yearly gross (undiscounted) loss in revenue if orders for these parts with a quantity
of less than 20% of this average were no longer taken?
Planner enhancement: aggregation pull-up through join.
*/
explain format = 'brief'
select
sum(l_extendedprice) / 7.0 as avg_yearly
from
lineitem,
part
where
p_partkey = l_partkey
and p_brand = 'Brand#44'
and p_container = 'WRAP PKG'
and l_quantity < (
select
0.2 * avg(l_quantity)
from
lineitem
where
l_partkey = p_partkey
);
/*
Q18 Large Volume Customer Query
The Large Volume Customer Query ranks customers based on their having placed a large quantity order. Large
quantity orders are defined as those orders whose total quantity is above a certain level.
The Large Volume Customer Query finds a list of the top 100 customers who have ever placed large quantity orders.
The query lists the customer name, customer key, the order key, date and total price and the quantity for the order.
Planner enhancement: cost estimation is inaccurate here; join reorder. The inner subquery's result is only 300+ rows.
*/
explain format = 'brief'
select
c_name,
c_custkey,
o_orderkey,
o_orderdate,
o_totalprice,
sum(l_quantity)
from
customer,
orders,
lineitem
where
o_orderkey in (
select
l_orderkey
from
lineitem
group by
l_orderkey having
sum(l_quantity) > 314
)
and c_custkey = o_custkey
and o_orderkey = l_orderkey
group by
c_name,
c_custkey,
o_orderkey,
o_orderdate,
o_totalprice
order by
o_totalprice desc,
o_orderdate
limit 100;
/*
Q19 Discounted Revenue Query
The Discounted Revenue Query reports the gross discounted revenue attributed to the sale of selected parts handled
in a particular manner. This query is an example of code such as might be produced programmatically by a data
mining tool.
The Discounted Revenue query finds the gross discounted revenue for all orders for three different types of parts
that were shipped by air and delivered in person. Parts are selected based on the combination of specific brands, a
list of containers, and a range of sizes.
*/
explain format = 'brief'
select
sum(l_extendedprice* (1 - l_discount)) as revenue
from
lineitem,
part
where
(
p_partkey = l_partkey
and p_brand = 'Brand#52'
and p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
and l_quantity >= 4 and l_quantity <= 4 + 10
and p_size between 1 and 5
and l_shipmode in ('AIR', 'AIR REG')
and l_shipinstruct = 'DELIVER IN PERSON'
)
or
(
p_partkey = l_partkey
and p_brand = 'Brand#11'
and p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
and l_quantity >= 18 and l_quantity <= 18 + 10
and p_size between 1 and 10
and l_shipmode in ('AIR', 'AIR REG')
and l_shipinstruct = 'DELIVER IN PERSON'
)
or
(
p_partkey = l_partkey
and p_brand = 'Brand#51'
and p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
and l_quantity >= 29 and l_quantity <= 29 + 10
and p_size between 1 and 15
and l_shipmode in ('AIR', 'AIR REG')
and l_shipinstruct = 'DELIVER IN PERSON'
);
/*
Q20 Potential Part Promotion Query
The Potential Part Promotion Query identifies suppliers in a particular nation having selected parts that may be candidates
for a promotional offer.
The Potential Part Promotion query identifies suppliers who have an excess of a given part available; an excess is
defined to be more than 50% of the parts like the given part that the supplier shipped in a given year for a given
nation. Only parts whose names share a certain naming convention are considered.
*/
explain format = 'brief'
select
s_name,
s_address
from
supplier,
nation
where
s_suppkey in (
select
ps_suppkey
from
partsupp
where
ps_partkey in (
select
p_partkey
from
part
where
p_name like 'green%'
)
and ps_availqty > (
select
0.5 * sum(l_quantity)
from
lineitem
where
l_partkey = ps_partkey
and l_suppkey = ps_suppkey
and l_shipdate >= '1993-01-01'
and l_shipdate < date_add('1993-01-01', interval '1' year)
)
)
and s_nationkey = n_nationkey
and n_name = 'ALGERIA'
order by
s_name;
/*
Q21 Suppliers Who Kept Orders Waiting Query
This query identifies certain suppliers who were not able to ship required parts in a timely manner.
The Suppliers Who Kept Orders Waiting query identifies suppliers, for a given nation, whose product was part of a
multi-supplier order (with current status of 'F') where they were the only supplier who failed to meet the committed
delivery date.
*/
explain format = 'brief'
select
s_name,
count(*) as numwait
from
supplier,
lineitem l1,
orders,
nation
where
s_suppkey = l1.l_suppkey
and o_orderkey = l1.l_orderkey
and o_orderstatus = 'F'
and l1.l_receiptdate > l1.l_commitdate
and exists (
select
*
from
lineitem l2
where
l2.l_orderkey = l1.l_orderkey
and l2.l_suppkey <> l1.l_suppkey
)
and not exists (
select
*
from
lineitem l3
where
l3.l_orderkey = l1.l_orderkey
and l3.l_suppkey <> l1.l_suppkey
and l3.l_receiptdate > l3.l_commitdate
)
and s_nationkey = n_nationkey
and n_name = 'EGYPT'
group by
s_name
order by
numwait desc,
s_name
limit 100;
/*
Q22 Global Sales Opportunity Query
The Global Sales Opportunity Query identifies geographies where there are customers who may be likely to make a
purchase.
This query counts how many customers within a specific range of country codes have not placed orders for 7 years
but who have a greater than average “positive” account balance. It also reflects the magnitude of that balance.
Country code is defined as the first two characters of c_phone.
*/
explain format = 'brief'
select
cntrycode,
count(*) as numcust,
sum(c_acctbal) as totacctbal
from
(
select
substring(c_phone from 1 for 2) as cntrycode,
c_acctbal
from
customer
where
substring(c_phone from 1 for 2) in
('20', '40', '22', '30', '39', '42', '21')
and c_acctbal > (
select
avg(c_acctbal)
from
customer
where
c_acctbal > 0.00
and substring(c_phone from 1 for 2) in
('20', '40', '22', '30', '39', '42', '21')
)
and not exists (
select
*
from
orders
where
o_custkey = c_custkey
)
) as custsale
group by
cntrycode
order by
cntrycode;
| cmd/explaintest/t/tpch.test | 0 | https://github.com/pingcap/tidb/commit/e6f020a26efc60480e0a0690cdca87f0990d4ceb | [
0.11400894075632095,
0.0014390755677595735,
0.0001612979976925999,
0.00017517707601655275,
0.011286860331892967
] |
{
"id": 0,
"code_window": [
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n",
"\t\t\t\"tags\": tagsSchema(),\n",
"\t\t},\n",
"\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\t\"primary_access_key\": &schema.Schema{\n",
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n",
"\t\t\t\"secondary_access_key\": &schema.Schema{\n",
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n"
],
"file_path": "builtin/providers/azurerm/resource_arm_storage_account.go",
"type": "add",
"edit_start_line_idx": 92
} | package azurerm
import (
"fmt"
"net/http"
"regexp"
"strings"
"github.com/Azure/azure-sdk-for-go/arm/storage"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceArmStorageAccount() *schema.Resource {
return &schema.Resource{
Create: resourceArmStorageAccountCreate,
Read: resourceArmStorageAccountRead,
Update: resourceArmStorageAccountUpdate,
Delete: resourceArmStorageAccountDelete,
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validateArmStorageAccountName,
},
"resource_group_name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"location": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
StateFunc: azureRMNormalizeLocation,
},
"account_type": &schema.Schema{
Type: schema.TypeString,
Required: true,
ValidateFunc: validateArmStorageAccountType,
},
"primary_location": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"secondary_location": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"primary_blob_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"secondary_blob_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"primary_queue_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"secondary_queue_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"primary_table_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"secondary_table_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
// NOTE: The API does not appear to expose a secondary file endpoint
"primary_file_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"tags": tagsSchema(),
},
}
}
func resourceArmStorageAccountCreate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient).storageServiceClient
resourceGroupName := d.Get("resource_group_name").(string)
storageAccountName := d.Get("name").(string)
accountType := d.Get("account_type").(string)
location := d.Get("location").(string)
tags := d.Get("tags").(map[string]interface{})
opts := storage.AccountCreateParameters{
Location: &location,
Properties: &storage.AccountPropertiesCreateParameters{
AccountType: storage.AccountType(accountType),
},
Tags: expandTags(tags),
}
accResp, err := client.Create(resourceGroupName, storageAccountName, opts)
if err != nil {
return fmt.Errorf("Error creating Azure Storage Account '%s': %s", storageAccountName, err)
}
_, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response.Response, http.StatusOK)
if err != nil {
return fmt.Errorf("Error creating Azure Storage Account %q: %s", storageAccountName, err)
}
// The only way to get the ID back apparently is to read the resource again
account, err := client.GetProperties(resourceGroupName, storageAccountName)
if err != nil {
return fmt.Errorf("Error retrieving Azure Storage Account %q: %s", storageAccountName, err)
}
d.SetId(*account.ID)
return resourceArmStorageAccountRead(d, meta)
}
// resourceArmStorageAccountUpdate is unusual in the ARM API where most resources have a combined
// and idempotent operation for CreateOrUpdate. In particular updating all of the parameters
// available requires a call to Update per parameter...
func resourceArmStorageAccountUpdate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient).storageServiceClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
storageAccountName := id.Path["storageAccounts"]
resourceGroupName := id.ResourceGroup
d.Partial(true)
if d.HasChange("account_type") {
accountType := d.Get("account_type").(string)
opts := storage.AccountUpdateParameters{
Properties: &storage.AccountPropertiesUpdateParameters{
AccountType: storage.AccountType(accountType),
},
}
accResp, err := client.Update(resourceGroupName, storageAccountName, opts)
if err != nil {
return fmt.Errorf("Error updating Azure Storage Account type %q: %s", storageAccountName, err)
}
_, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response.Response, http.StatusOK)
if err != nil {
return fmt.Errorf("Error updating Azure Storage Account type %q: %s", storageAccountName, err)
}
d.SetPartial("account_type")
}
if d.HasChange("tags") {
tags := d.Get("tags").(map[string]interface{})
opts := storage.AccountUpdateParameters{
Tags: expandTags(tags),
}
accResp, err := client.Update(resourceGroupName, storageAccountName, opts)
if err != nil {
return fmt.Errorf("Error updating Azure Storage Account tags %q: %s", storageAccountName, err)
}
_, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response.Response, http.StatusOK)
if err != nil {
return fmt.Errorf("Error updating Azure Storage Account tags %q: %s", storageAccountName, err)
}
d.SetPartial("tags")
}
d.Partial(false)
return nil
}
func resourceArmStorageAccountRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient).storageServiceClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
name := id.Path["storageAccounts"]
resGroup := id.ResourceGroup
resp, err := client.GetProperties(resGroup, name)
if err != nil {
if resp.StatusCode == http.StatusNotFound {
d.SetId("")
return nil
}
return fmt.Errorf("Error reading the state of AzureRM Storage Account %q: %s", name, err)
}
d.Set("location", resp.Location)
d.Set("account_type", resp.Properties.AccountType)
d.Set("primary_location", resp.Properties.PrimaryLocation)
d.Set("secondary_location", resp.Properties.SecondaryLocation)
if resp.Properties.PrimaryEndpoints != nil {
d.Set("primary_blob_endpoint", resp.Properties.PrimaryEndpoints.Blob)
d.Set("primary_queue_endpoint", resp.Properties.PrimaryEndpoints.Queue)
d.Set("primary_table_endpoint", resp.Properties.PrimaryEndpoints.Table)
d.Set("primary_file_endpoint", resp.Properties.PrimaryEndpoints.File)
}
if resp.Properties.SecondaryEndpoints != nil {
if resp.Properties.SecondaryEndpoints.Blob != nil {
d.Set("secondary_blob_endpoint", resp.Properties.SecondaryEndpoints.Blob)
} else {
d.Set("secondary_blob_endpoint", "")
}
if resp.Properties.SecondaryEndpoints.Queue != nil {
d.Set("secondary_queue_endpoint", resp.Properties.SecondaryEndpoints.Queue)
} else {
d.Set("secondary_queue_endpoint", "")
}
if resp.Properties.SecondaryEndpoints.Table != nil {
d.Set("secondary_table_endpoint", resp.Properties.SecondaryEndpoints.Table)
} else {
d.Set("secondary_table_endpoint", "")
}
}
flattenAndSetTags(d, resp.Tags)
return nil
}
func resourceArmStorageAccountDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient).storageServiceClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
name := id.Path["storageAccounts"]
resGroup := id.ResourceGroup
accResp, err := client.Delete(resGroup, name)
if err != nil {
return fmt.Errorf("Error issuing AzureRM delete request for storage account %q: %s", name, err)
}
_, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response, http.StatusNotFound)
if err != nil {
return fmt.Errorf("Error polling for AzureRM delete request for storage account %q: %s", name, err)
}
return nil
}
func validateArmStorageAccountName(v interface{}, k string) (ws []string, es []error) {
input := v.(string)
if !regexp.MustCompile(`\A([a-z0-9]{3,24})\z`).MatchString(input) {
es = append(es, fmt.Errorf("name can only consist of lowercase letters and numbers, and must be between 3 and 24 characters long"))
}
return
}
func validateArmStorageAccountType(v interface{}, k string) (ws []string, es []error) {
validAccountTypes := []string{"standard_lrs", "standard_zrs",
"standard_grs", "standard_ragrs", "premium_lrs"}
input := strings.ToLower(v.(string))
for _, valid := range validAccountTypes {
if valid == input {
return
}
}
es = append(es, fmt.Errorf("Invalid storage account type %q", input))
return
}
| builtin/providers/azurerm/resource_arm_storage_account.go | 1 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.000828307238407433,
0.0002443291887175292,
0.00016447251255158335,
0.00017373744049109519,
0.0001565548882354051
] |
{
"id": 0,
"code_window": [
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n",
"\t\t\t\"tags\": tagsSchema(),\n",
"\t\t},\n",
"\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\t\"primary_access_key\": &schema.Schema{\n",
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n",
"\t\t\t\"secondary_access_key\": &schema.Schema{\n",
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n"
],
"file_path": "builtin/providers/azurerm/resource_arm_storage_account.go",
"type": "add",
"edit_start_line_idx": 92
} | package cloudstack
import (
"fmt"
"testing"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"github.com/xanzy/go-cloudstack/cloudstack"
)
func TestAccCloudStackTemplate_basic(t *testing.T) {
var template cloudstack.Template
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckCloudStackTemplateDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccCloudStackTemplate_basic,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackTemplateExists("cloudstack_template.foo", &template),
testAccCheckCloudStackTemplateBasicAttributes(&template),
resource.TestCheckResourceAttr(
"cloudstack_template.foo", "display_text", "terraform-test"),
),
},
},
})
}
func TestAccCloudStackTemplate_update(t *testing.T) {
var template cloudstack.Template
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckCloudStackTemplateDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccCloudStackTemplate_basic,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackTemplateExists("cloudstack_template.foo", &template),
testAccCheckCloudStackTemplateBasicAttributes(&template),
),
},
resource.TestStep{
Config: testAccCloudStackTemplate_update,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackTemplateExists(
"cloudstack_template.foo", &template),
testAccCheckCloudStackTemplateUpdatedAttributes(&template),
resource.TestCheckResourceAttr(
"cloudstack_template.foo", "display_text", "terraform-updated"),
),
},
},
})
}
func testAccCheckCloudStackTemplateExists(
n string, template *cloudstack.Template) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No template ID is set")
}
cs := testAccProvider.Meta().(*cloudstack.CloudStackClient)
tmpl, _, err := cs.Template.GetTemplateByID(rs.Primary.ID, "executable")
if err != nil {
return err
}
if tmpl.Id != rs.Primary.ID {
return fmt.Errorf("Template not found")
}
*template = *tmpl
return nil
}
}
func testAccCheckCloudStackTemplateBasicAttributes(
template *cloudstack.Template) resource.TestCheckFunc {
return func(s *terraform.State) error {
if template.Name != "terraform-test" {
return fmt.Errorf("Bad name: %s", template.Name)
}
if template.Format != CLOUDSTACK_TEMPLATE_FORMAT {
return fmt.Errorf("Bad format: %s", template.Format)
}
if template.Hypervisor != CLOUDSTACK_HYPERVISOR {
return fmt.Errorf("Bad hypervisor: %s", template.Hypervisor)
}
if template.Ostypename != CLOUDSTACK_TEMPLATE_OS_TYPE {
return fmt.Errorf("Bad os type: %s", template.Ostypename)
}
if template.Zonename != CLOUDSTACK_ZONE {
return fmt.Errorf("Bad zone: %s", template.Zonename)
}
return nil
}
}
func testAccCheckCloudStackTemplateUpdatedAttributes(
template *cloudstack.Template) resource.TestCheckFunc {
return func(s *terraform.State) error {
if template.Displaytext != "terraform-updated" {
return fmt.Errorf("Bad name: %s", template.Displaytext)
}
if !template.Isdynamicallyscalable {
return fmt.Errorf("Bad is_dynamically_scalable: %t", template.Isdynamicallyscalable)
}
if !template.Passwordenabled {
return fmt.Errorf("Bad password_enabled: %t", template.Passwordenabled)
}
return nil
}
}
func testAccCheckCloudStackTemplateDestroy(s *terraform.State) error {
cs := testAccProvider.Meta().(*cloudstack.CloudStackClient)
for _, rs := range s.RootModule().Resources {
if rs.Type != "cloudstack_template" {
continue
}
if rs.Primary.ID == "" {
return fmt.Errorf("No template ID is set")
}
_, _, err := cs.Template.GetTemplateByID(rs.Primary.ID, "executable")
if err == nil {
return fmt.Errorf("Template %s still exists", rs.Primary.ID)
}
}
return nil
}
var testAccCloudStackTemplate_basic = fmt.Sprintf(`
resource "cloudstack_template" "foo" {
name = "terraform-test"
format = "%s"
hypervisor = "%s"
os_type = "%s"
url = "%s"
zone = "%s"
}
`,
CLOUDSTACK_TEMPLATE_FORMAT,
CLOUDSTACK_HYPERVISOR,
CLOUDSTACK_TEMPLATE_OS_TYPE,
CLOUDSTACK_TEMPLATE_URL,
CLOUDSTACK_ZONE)
var testAccCloudStackTemplate_update = fmt.Sprintf(`
resource "cloudstack_template" "foo" {
name = "terraform-test"
display_text = "terraform-updated"
format = "%s"
hypervisor = "%s"
os_type = "%s"
url = "%s"
zone = "%s"
is_dynamically_scalable = true
password_enabled = true
}
`,
CLOUDSTACK_TEMPLATE_FORMAT,
CLOUDSTACK_HYPERVISOR,
CLOUDSTACK_TEMPLATE_OS_TYPE,
CLOUDSTACK_TEMPLATE_URL,
CLOUDSTACK_ZONE)
| builtin/providers/cloudstack/resource_cloudstack_template_test.go | 0 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.00023721061006654054,
0.0001766708301147446,
0.00016523110389243811,
0.00017105891311075538,
0.00001728676215861924
] |
{
"id": 0,
"code_window": [
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n",
"\t\t\t\"tags\": tagsSchema(),\n",
"\t\t},\n",
"\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\t\"primary_access_key\": &schema.Schema{\n",
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n",
"\t\t\t\"secondary_access_key\": &schema.Schema{\n",
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n"
],
"file_path": "builtin/providers/azurerm/resource_arm_storage_account.go",
"type": "add",
"edit_start_line_idx": 92
} | // Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build arm,netbsd
package unix
func Getpagesize() int { return 4096 }
func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) }
func NsecToTimespec(nsec int64) (ts Timespec) {
ts.Sec = int64(nsec / 1e9)
ts.Nsec = int32(nsec % 1e9)
return
}
func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 }
func NsecToTimeval(nsec int64) (tv Timeval) {
nsec += 999 // round up to microsecond
tv.Usec = int32(nsec % 1e9 / 1e3)
tv.Sec = int64(nsec / 1e9)
return
}
func SetKevent(k *Kevent_t, fd, mode, flags int) {
k.Ident = uint32(fd)
k.Filter = uint32(mode)
k.Flags = uint32(flags)
}
func (iov *Iovec) SetLen(length int) {
iov.Len = uint32(length)
}
func (msghdr *Msghdr) SetControllen(length int) {
msghdr.Controllen = uint32(length)
}
func (cmsg *Cmsghdr) SetLen(length int) {
cmsg.Len = uint32(length)
}
| vendor/golang.org/x/sys/unix/syscall_netbsd_arm.go | 0 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.00017474099877290428,
0.0001706671464489773,
0.0001676778483670205,
0.0001696448161965236,
0.0000025905649181368062
] |
{
"id": 0,
"code_window": [
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n",
"\t\t\t\"tags\": tagsSchema(),\n",
"\t\t},\n",
"\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\t\"primary_access_key\": &schema.Schema{\n",
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n",
"\t\t\t\"secondary_access_key\": &schema.Schema{\n",
"\t\t\t\tType: schema.TypeString,\n",
"\t\t\t\tComputed: true,\n",
"\t\t\t},\n",
"\n"
],
"file_path": "builtin/providers/azurerm/resource_arm_storage_account.go",
"type": "add",
"edit_start_line_idx": 92
} | package azure
import (
"crypto/rand"
"crypto/rsa"
"crypto/sha1"
"crypto/x509"
"encoding/base64"
"fmt"
"net/http"
"net/url"
"strconv"
"time"
"github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go-autorest/autorest"
"github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/dgrijalva/jwt-go"
)
const (
defaultRefresh = 5 * time.Minute
oauthURL = "https://login.microsoftonline.com/{tenantID}/oauth2/{requestType}?api-version=1.0"
tokenBaseDate = "1970-01-01T00:00:00Z"
jwtAudienceTemplate = "https://login.microsoftonline.com/%s/oauth2/token"
// AzureResourceManagerScope is the OAuth scope for the Azure Resource Manager.
AzureResourceManagerScope = "https://management.azure.com/"
)
var expirationBase time.Time
func init() {
expirationBase, _ = time.Parse(time.RFC3339, tokenBaseDate)
}
// Token encapsulates the access token used to authorize Azure requests.
type Token struct {
AccessToken string `json:"access_token"`
ExpiresIn string `json:"expires_in"`
ExpiresOn string `json:"expires_on"`
NotBefore string `json:"not_before"`
Resource string `json:"resource"`
Type string `json:"token_type"`
}
// Expires returns the time.Time when the Token expires.
func (t Token) Expires() time.Time {
s, err := strconv.Atoi(t.ExpiresOn)
if err != nil {
s = -3600
}
return expirationBase.Add(time.Duration(s) * time.Second).UTC()
}
// IsExpired returns true if the Token is expired, false otherwise.
func (t Token) IsExpired() bool {
return t.WillExpireIn(0)
}
// WillExpireIn returns true if the Token will expire after the passed time.Duration interval
// from now, false otherwise.
func (t Token) WillExpireIn(d time.Duration) bool {
return !t.Expires().After(time.Now().Add(d))
}
// WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose
// value is "Bearer " followed by the AccessToken of the Token.
func (t *Token) WithAuthorization() autorest.PrepareDecorator {
return func(p autorest.Preparer) autorest.Preparer {
return autorest.PreparerFunc(func(r *http.Request) (*http.Request, error) {
return (autorest.WithBearerAuthorization(t.AccessToken)(p)).Prepare(r)
})
}
}
// ServicePrincipalSecret is an interface that allows various secret mechanism to fill the form
// that is submitted when acquiring an oAuth token.
type ServicePrincipalSecret interface {
SetAuthenticationValues(spt *ServicePrincipalToken, values *url.Values) error
}
// ServicePrincipalTokenSecret implements ServicePrincipalSecret for client_secret type authorization.
type ServicePrincipalTokenSecret struct {
ClientSecret string
}
// SetAuthenticationValues is a method of the interface ServicePrincipalTokenSecret.
// It will populate the form submitted during oAuth Token Acquisition using the client_secret.
func (tokenSecret *ServicePrincipalTokenSecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error {
v.Set("client_secret", tokenSecret.ClientSecret)
return nil
}
// ServicePrincipalCertificateSecret implements ServicePrincipalSecret for generic RSA cert auth with signed JWTs.
type ServicePrincipalCertificateSecret struct {
Certificate *x509.Certificate
PrivateKey *rsa.PrivateKey
}
// SignJwt returns the JWT signed with the certificate's private key.
func (secret *ServicePrincipalCertificateSecret) SignJwt(spt *ServicePrincipalToken) (string, error) {
hasher := sha1.New()
_, err := hasher.Write(secret.Certificate.Raw)
if err != nil {
return "", err
}
thumbprint := base64.URLEncoding.EncodeToString(hasher.Sum(nil))
// The jti (JWT ID) claim provides a unique identifier for the JWT.
jti := make([]byte, 20)
_, err = rand.Read(jti)
if err != nil {
return "", err
}
token := jwt.New(jwt.SigningMethodRS256)
token.Header["x5t"] = thumbprint
token.Claims = map[string]interface{}{
"aud": fmt.Sprintf(jwtAudienceTemplate, spt.tenantID),
"iss": spt.clientID,
"sub": spt.clientID,
"jti": base64.URLEncoding.EncodeToString(jti),
"nbf": time.Now().Unix(),
"exp": time.Now().Add(time.Hour * 24).Unix(),
}
signedString, err := token.SignedString(secret.PrivateKey)
return signedString, nil
}
// SetAuthenticationValues is a method of the interface ServicePrincipalTokenSecret.
// It will populate the form submitted during oAuth Token Acquisition using a JWT signed with a certificate.
func (secret *ServicePrincipalCertificateSecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error {
jwt, err := secret.SignJwt(spt)
if err != nil {
return err
}
v.Set("client_assertion", jwt)
v.Set("client_assertion_type", "urn:ietf:params:oauth:client-assertion-type:jwt-bearer")
return nil
}
// ServicePrincipalToken encapsulates a Token created for a Service Principal.
type ServicePrincipalToken struct {
Token
secret ServicePrincipalSecret
clientID string
tenantID string
resource string
autoRefresh bool
refreshWithin time.Duration
sender autorest.Sender
}
// NewServicePrincipalTokenWithSecret create a ServicePrincipalToken using the supplied ServicePrincipalSecret implementation.
func NewServicePrincipalTokenWithSecret(id string, tenantID string, resource string, secret ServicePrincipalSecret) (*ServicePrincipalToken, error) {
spt := &ServicePrincipalToken{
secret: secret,
clientID: id,
resource: resource,
tenantID: tenantID,
autoRefresh: true,
refreshWithin: defaultRefresh,
sender: &http.Client{},
}
return spt, nil
}
// NewServicePrincipalToken creates a ServicePrincipalToken from the supplied Service Principal
// credentials scoped to the named resource.
func NewServicePrincipalToken(id string, secret string, tenantID string, resource string) (*ServicePrincipalToken, error) {
return NewServicePrincipalTokenWithSecret(
id,
tenantID,
resource,
&ServicePrincipalTokenSecret{
ClientSecret: secret,
},
)
}
// NewServicePrincipalTokenFromCertificate create a ServicePrincipalToken from the supplied pkcs12 bytes.
func NewServicePrincipalTokenFromCertificate(id string, certificate *x509.Certificate, privateKey *rsa.PrivateKey, tenantID string, resource string) (*ServicePrincipalToken, error) {
return NewServicePrincipalTokenWithSecret(
id,
tenantID,
resource,
&ServicePrincipalCertificateSecret{
PrivateKey: privateKey,
Certificate: certificate,
},
)
}
// EnsureFresh will refresh the token if it will expire within the refresh window (as set by
// RefreshWithin).
func (spt *ServicePrincipalToken) EnsureFresh() error {
if spt.WillExpireIn(spt.refreshWithin) {
return spt.Refresh()
}
return nil
}
// Refresh obtains a fresh token for the Service Principal.
func (spt *ServicePrincipalToken) Refresh() error {
p := map[string]interface{}{
"tenantID": spt.tenantID,
"requestType": "token",
}
v := url.Values{}
v.Set("client_id", spt.clientID)
v.Set("grant_type", "client_credentials")
v.Set("resource", spt.resource)
err := spt.secret.SetAuthenticationValues(spt, &v)
if err != nil {
return err
}
req, err := autorest.Prepare(&http.Request{},
autorest.AsPost(),
autorest.AsFormURLEncoded(),
autorest.WithBaseURL(oauthURL),
autorest.WithPathParameters(p),
autorest.WithFormData(v))
if err != nil {
return err
}
resp, err := autorest.SendWithSender(spt.sender, req)
if err != nil {
return autorest.NewErrorWithError(err,
"azure.ServicePrincipalToken", "Refresh", resp.StatusCode, "Failure sending request for Service Principal %s",
spt.clientID)
}
var newToken Token
err = autorest.Respond(resp,
autorest.WithErrorUnlessOK(),
autorest.ByUnmarshallingJSON(&newToken),
autorest.ByClosing())
if err != nil {
return autorest.NewErrorWithError(err,
"azure.ServicePrincipalToken", "Refresh", resp.StatusCode, "Failure handling response to Service Principal %s request",
spt.clientID)
}
spt.Token = newToken
return nil
}
// SetAutoRefresh enables or disables automatic refreshing of stale tokens.
func (spt *ServicePrincipalToken) SetAutoRefresh(autoRefresh bool) {
spt.autoRefresh = autoRefresh
}
// SetRefreshWithin sets the interval within which if the token will expire, EnsureFresh will
// refresh the token.
func (spt *ServicePrincipalToken) SetRefreshWithin(d time.Duration) {
spt.refreshWithin = d
return
}
// SetSender sets the autorest.Sender used when obtaining the Service Principal token. An
// undecorated http.Client is used by default.
func (spt *ServicePrincipalToken) SetSender(s autorest.Sender) {
spt.sender = s
}
// WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose
// value is "Bearer " followed by the AccessToken of the ServicePrincipalToken.
//
// By default, the token will automatically refresh if nearly expired (as determined by the
// RefreshWithin interval). Use the AutoRefresh method to enable or disable automatically refreshing
// tokens.
func (spt *ServicePrincipalToken) WithAuthorization() autorest.PrepareDecorator {
return func(p autorest.Preparer) autorest.Preparer {
return autorest.PreparerFunc(func(r *http.Request) (*http.Request, error) {
if spt.autoRefresh {
err := spt.EnsureFresh()
if err != nil {
return r, autorest.NewErrorWithError(err,
"azure.ServicePrincipalToken", "WithAuthorization", autorest.UndefinedStatusCode, "Failed to refresh Service Principal Token for request to %s",
r.URL)
}
}
return (autorest.WithBearerAuthorization(spt.AccessToken)(p)).Prepare(r)
})
}
}
| vendor/github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go-autorest/autorest/azure/token.go | 0 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.0005978676490485668,
0.00019940909987781197,
0.00016428051458206028,
0.00017002227832563221,
0.00010293563536833972
] |
{
"id": 1,
"code_window": [
"\n",
"\t\treturn fmt.Errorf(\"Error reading the state of AzureRM Storage Account %q: %s\", name, err)\n",
"\t}\n",
"\n",
"\td.Set(\"location\", resp.Location)\n",
"\td.Set(\"account_type\", resp.Properties.AccountType)\n",
"\td.Set(\"primary_location\", resp.Properties.PrimaryLocation)\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tkeys, err := client.ListKeys(resGroup, name)\n",
"\tif err != nil {\n",
"\t\treturn err\n",
"\t}\n",
"\n",
"\td.Set(\"primary_access_key\", keys.Key1)\n",
"\td.Set(\"secondary_access_key\", keys.Key2)\n"
],
"file_path": "builtin/providers/azurerm/resource_arm_storage_account.go",
"type": "add",
"edit_start_line_idx": 210
} | ---
layout: "azurerm"
page_title: "Azure Resource Manager: azurerm_storage_account"
sidebar_current: "docs-azurerm-resource-storage-account"
description: |-
Create a Azure Storage Account.
---
# azurerm\_storage\_account
Create an Azure Storage Account.
## Example Usage
```
resource "azurerm_resource_group" "testrg" {
name = "resourceGroupName"
location = "westus"
}
resource "azurerm_storage_account" "testsa" {
name = "storageaccountname"
resource_group_name = "${azurerm_resource_group.testrg.name}"
location = "westus"
account_type = "Standard_GRS"
tags {
environment = "staging"
}
}
```
## Argument Reference
The following arguments are supported:
* `name` - (Required) Specifies the name of the storage account. Changing this forces a
new resource to be created. This must be unique across the entire Azure service,
not just within the resource group.
* `resource_group_name` - (Required) The name of the resource group in which to
create the storage account. Changing this forces a new resource to be created.
* `location` - (Required) Specifies the supported Azure location where the
resource exists. Changing this forces a new resource to be created.
* `account_type` - (Required) Defines the type of storage account to be
created. Valid options are `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`,
`Standard_RAGRS`, `Premium_LRS`. Changing this is sometimes valid - see the Azure
documentation for more information on which types of accounts can be converted
into other types.
* `tags` - (Optional) A mapping of tags to assign to the resource.
Note that although the Azure API supports setting custom domain names for
storage accounts, this is not currently supported.
## Attributes Reference
The following attributes are exported in addition to the arguments listed above:
* `id` - The storage account Resource ID.
* `primary_location` - The primary location of the storage account.
* `secondary_location` - The secondary location of the storage account.
* `primary_blob_endpoint` - The endpoint URL for blob storage in the primary location.
* `secondary_blob_endpoint` - The endpoint URL for blob storage in the secondary location.
* `primary_queue_endpoint` - The endpoint URL for queue storage in the primary location.
* `secondary_queue_endpoint` - The endpoint URL for queue storage in the secondary location.
* `primary_table_endpoint` - The endpoint URL for table storage in the primary location.
* `secondary_table_endpoint` - The endpoint URL for table storage in the secondary location.
* `primary_file_endpoint` - The endpoint URL for file storage in the primary location.
| website/source/docs/providers/azurerm/r/storage_account.html.markdown | 1 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.0012782569974660873,
0.0005842820391990244,
0.00016482779756188393,
0.000539424130693078,
0.0004021601052954793
] |
{
"id": 1,
"code_window": [
"\n",
"\t\treturn fmt.Errorf(\"Error reading the state of AzureRM Storage Account %q: %s\", name, err)\n",
"\t}\n",
"\n",
"\td.Set(\"location\", resp.Location)\n",
"\td.Set(\"account_type\", resp.Properties.AccountType)\n",
"\td.Set(\"primary_location\", resp.Properties.PrimaryLocation)\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tkeys, err := client.ListKeys(resGroup, name)\n",
"\tif err != nil {\n",
"\t\treturn err\n",
"\t}\n",
"\n",
"\td.Set(\"primary_access_key\", keys.Key1)\n",
"\td.Set(\"secondary_access_key\", keys.Key2)\n"
],
"file_path": "builtin/providers/azurerm/resource_arm_storage_account.go",
"type": "add",
"edit_start_line_idx": 210
} | package aws
import (
"fmt"
"log"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/rds"
"github.com/hashicorp/terraform/helper/schema"
)
// setTags is a helper to set the tags for a resource. It expects the
// tags field to be named "tags"
func setTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error {
if d.HasChange("tags") {
oraw, nraw := d.GetChange("tags")
o := oraw.(map[string]interface{})
n := nraw.(map[string]interface{})
create, remove := diffTagsRDS(tagsFromMapRDS(o), tagsFromMapRDS(n))
// Set tags
if len(remove) > 0 {
log.Printf("[DEBUG] Removing tags: %s", remove)
k := make([]*string, len(remove), len(remove))
for i, t := range remove {
k[i] = t.Key
}
_, err := conn.RemoveTagsFromResource(&rds.RemoveTagsFromResourceInput{
ResourceName: aws.String(arn),
TagKeys: k,
})
if err != nil {
return err
}
}
if len(create) > 0 {
log.Printf("[DEBUG] Creating tags: %s", create)
_, err := conn.AddTagsToResource(&rds.AddTagsToResourceInput{
ResourceName: aws.String(arn),
Tags: create,
})
if err != nil {
return err
}
}
}
return nil
}
// diffTags takes our tags locally and the ones remotely and returns
// the set of tags that must be created, and the set of tags that must
// be destroyed.
func diffTagsRDS(oldTags, newTags []*rds.Tag) ([]*rds.Tag, []*rds.Tag) {
// First, we're creating everything we have
create := make(map[string]interface{})
for _, t := range newTags {
create[*t.Key] = *t.Value
}
// Build the list of what to remove
var remove []*rds.Tag
for _, t := range oldTags {
old, ok := create[*t.Key]
if !ok || old != *t.Value {
// Delete it!
remove = append(remove, t)
}
}
return tagsFromMapRDS(create), remove
}
// tagsFromMap returns the tags for the given map of data.
func tagsFromMapRDS(m map[string]interface{}) []*rds.Tag {
result := make([]*rds.Tag, 0, len(m))
for k, v := range m {
result = append(result, &rds.Tag{
Key: aws.String(k),
Value: aws.String(v.(string)),
})
}
return result
}
// tagsToMap turns the list of tags into a map.
func tagsToMapRDS(ts []*rds.Tag) map[string]string {
result := make(map[string]string)
for _, t := range ts {
result[*t.Key] = *t.Value
}
return result
}
func saveTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error {
resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{
ResourceName: aws.String(arn),
})
if err != nil {
return fmt.Errorf("[DEBUG] Error retreiving tags for ARN: %s", arn)
}
var dt []*rds.Tag
if len(resp.TagList) > 0 {
dt = resp.TagList
}
return d.Set("tags", tagsToMapRDS(dt))
}
| builtin/providers/aws/tagsRDS.go | 0 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.0022617413196712732,
0.00042368462891317904,
0.00016241721459664404,
0.00017162281437776983,
0.0005911960615776479
] |
{
"id": 1,
"code_window": [
"\n",
"\t\treturn fmt.Errorf(\"Error reading the state of AzureRM Storage Account %q: %s\", name, err)\n",
"\t}\n",
"\n",
"\td.Set(\"location\", resp.Location)\n",
"\td.Set(\"account_type\", resp.Properties.AccountType)\n",
"\td.Set(\"primary_location\", resp.Properties.PrimaryLocation)\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tkeys, err := client.ListKeys(resGroup, name)\n",
"\tif err != nil {\n",
"\t\treturn err\n",
"\t}\n",
"\n",
"\td.Set(\"primary_access_key\", keys.Key1)\n",
"\td.Set(\"secondary_access_key\", keys.Key2)\n"
],
"file_path": "builtin/providers/azurerm/resource_arm_storage_account.go",
"type": "add",
"edit_start_line_idx": 210
} | package artifactory
import (
"encoding/json"
)
type LicenseInformation struct {
LicenseType string `json:"type"`
ValidThrough string `json:"validThrough"`
LicensedTo string `json:"licensedTo"`
}
func (c *ArtifactoryClient) GetLicenseInformation() (LicenseInformation, error) {
o := make(map[string]string, 0)
var l LicenseInformation
d, e := c.Get("/api/system/license", o)
if e != nil {
return l, e
} else {
err := json.Unmarshal(d, &l)
return l, err
}
}
| vendor/github.com/lusis/go-artifactory/src/artifactory.v401/license.go | 0 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.0021536548156291246,
0.0008317541214637458,
0.00016553260502405465,
0.00017607500194571912,
0.0009347348241135478
] |
{
"id": 1,
"code_window": [
"\n",
"\t\treturn fmt.Errorf(\"Error reading the state of AzureRM Storage Account %q: %s\", name, err)\n",
"\t}\n",
"\n",
"\td.Set(\"location\", resp.Location)\n",
"\td.Set(\"account_type\", resp.Properties.AccountType)\n",
"\td.Set(\"primary_location\", resp.Properties.PrimaryLocation)\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tkeys, err := client.ListKeys(resGroup, name)\n",
"\tif err != nil {\n",
"\t\treturn err\n",
"\t}\n",
"\n",
"\td.Set(\"primary_access_key\", keys.Key1)\n",
"\td.Set(\"secondary_access_key\", keys.Key2)\n"
],
"file_path": "builtin/providers/azurerm/resource_arm_storage_account.go",
"type": "add",
"edit_start_line_idx": 210
} | package command
import (
"fmt"
"log"
"strings"
"github.com/hashicorp/terraform/terraform"
)
// TaintCommand is a cli.Command implementation that manually taints
// a resource, marking it for recreation.
type TaintCommand struct {
Meta
}
func (c *TaintCommand) Run(args []string) int {
args = c.Meta.process(args, false)
var allowMissing bool
var module string
cmdFlags := c.Meta.flagSet("taint")
cmdFlags.BoolVar(&allowMissing, "allow-missing", false, "module")
cmdFlags.StringVar(&module, "module", "", "module")
cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path")
cmdFlags.StringVar(&c.Meta.stateOutPath, "state-out", "", "path")
cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path")
cmdFlags.Usage = func() { c.Ui.Error(c.Help()) }
if err := cmdFlags.Parse(args); err != nil {
return 1
}
// Require the one argument for the resource to taint
args = cmdFlags.Args()
if len(args) != 1 {
c.Ui.Error("The taint command expects exactly one argument.")
cmdFlags.Usage()
return 1
}
name := args[0]
if module == "" {
module = "root"
} else {
module = "root." + module
}
rsk, err := terraform.ParseResourceStateKey(name)
if err != nil {
c.Ui.Error(fmt.Sprintf("Failed to parse resource name: %s", err))
return 1
}
if !rsk.Mode.Taintable() {
c.Ui.Error(fmt.Sprintf("Resource '%s' cannot be tainted", name))
return 1
}
// Get the state that we'll be modifying
state, err := c.State()
if err != nil {
c.Ui.Error(fmt.Sprintf("Failed to load state: %s", err))
return 1
}
// Get the actual state structure
s := state.State()
if s.Empty() {
if allowMissing {
return c.allowMissingExit(name, module)
}
c.Ui.Error(fmt.Sprintf(
"The state is empty. The most common reason for this is that\n" +
"an invalid state file path was given or Terraform has never\n " +
"been run for this infrastructure. Infrastructure must exist\n" +
"for it to be tainted."))
return 1
}
// Get the proper module we want to taint
modPath := strings.Split(module, ".")
mod := s.ModuleByPath(modPath)
if mod == nil {
if allowMissing {
return c.allowMissingExit(name, module)
}
c.Ui.Error(fmt.Sprintf(
"The module %s could not be found. There is nothing to taint.",
module))
return 1
}
// If there are no resources in this module, it is an error
if len(mod.Resources) == 0 {
if allowMissing {
return c.allowMissingExit(name, module)
}
c.Ui.Error(fmt.Sprintf(
"The module %s has no resources. There is nothing to taint.",
module))
return 1
}
// Get the resource we're looking for
rs, ok := mod.Resources[name]
if !ok {
if allowMissing {
return c.allowMissingExit(name, module)
}
c.Ui.Error(fmt.Sprintf(
"The resource %s couldn't be found in the module %s.",
name,
module))
return 1
}
// Taint the resource
rs.Taint()
log.Printf("[INFO] Writing state output to: %s", c.Meta.StateOutPath())
if err := c.Meta.PersistState(s); err != nil {
c.Ui.Error(fmt.Sprintf("Error writing state file: %s", err))
return 1
}
c.Ui.Output(fmt.Sprintf(
"The resource %s in the module %s has been marked as tainted!",
name, module))
return 0
}
func (c *TaintCommand) Help() string {
helpText := `
Usage: terraform taint [options] name
Manually mark a resource as tainted, forcing a destroy and recreate
on the next plan/apply.
This will not modify your infrastructure. This command changes your
state to mark a resource as tainted so that during the next plan or
apply, that resource will be destroyed and recreated. This command on
its own will not modify infrastructure. This command can be undone by
reverting the state backup file that is created.
Options:
-allow-missing If specified, the command will succeed (exit code 0)
even if the resource is missing.
-backup=path Path to backup the existing state file before
modifying. Defaults to the "-state-out" path with
".backup" extension. Set to "-" to disable backup.
-module=path The module path where the resource lives. By
default this will be root. Child modules can be specified
by names. Ex. "consul" or "consul.vpc" (nested modules).
-no-color If specified, output won't contain any color.
-state=path Path to read and save state (unless state-out
is specified). Defaults to "terraform.tfstate".
-state-out=path Path to write updated state file. By default, the
"-state" path will be used.
`
return strings.TrimSpace(helpText)
}
func (c *TaintCommand) Synopsis() string {
return "Manually mark a resource for recreation"
}
func (c *TaintCommand) allowMissingExit(name, module string) int {
c.Ui.Output(fmt.Sprintf(
"The resource %s in the module %s was not found, but\n"+
"-allow-missing is set, so we're exiting successfully.",
name, module))
return 0
}
| command/taint.go | 0 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.0007299810531549156,
0.00020692039106506854,
0.0001635261287447065,
0.00016929465346038342,
0.00012862346193287522
] |
{
"id": 2,
"code_window": [
"* `secondary_queue_endpoint` - The endpoint URL for queue storage in the secondary location.\n",
"* `primary_table_endpoint` - The endpoint URL for table storage in the primary location.\n",
"* `secondary_table_endpoint` - The endpoint URL for table storage in the secondary location.\n",
"* `primary_file_endpoint` - The endpoint URL for file storage in the primary location.\n"
],
"labels": [
"keep",
"keep",
"keep",
"add"
],
"after_edit": [
"* `primary_access_key` - The primary access key for the storage account\n",
"* `secondary_access_key` - The secondary access key for the storage account"
],
"file_path": "website/source/docs/providers/azurerm/r/storage_account.html.markdown",
"type": "add",
"edit_start_line_idx": 72
} | ---
layout: "azurerm"
page_title: "Azure Resource Manager: azurerm_storage_account"
sidebar_current: "docs-azurerm-resource-storage-account"
description: |-
Create an Azure Storage Account.
---
# azurerm\_storage\_account
Create an Azure Storage Account.
## Example Usage
```
resource "azurerm_resource_group" "testrg" {
name = "resourceGroupName"
location = "westus"
}
resource "azurerm_storage_account" "testsa" {
name = "storageaccountname"
resource_group_name = "${azurerm_resource_group.testrg.name}"
location = "westus"
account_type = "Standard_GRS"
tags {
environment = "staging"
}
}
```
## Argument Reference
The following arguments are supported:
* `name` - (Required) Specifies the name of the storage account. Changing this forces a
new resource to be created. This must be unique across the entire Azure service,
not just within the resource group.
* `resource_group_name` - (Required) The name of the resource group in which to
create the storage account. Changing this forces a new resource to be created.
* `location` - (Required) Specifies the supported Azure location where the
resource exists. Changing this forces a new resource to be created.
* `account_type` - (Required) Defines the type of storage account to be
created. Valid options are `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`,
`Standard_RAGRS`, `Premium_LRS`. Changing this is sometimes valid - see the Azure
documentation for more information on which types of accounts can be converted
into other types.
* `tags` - (Optional) A mapping of tags to assign to the resource.
Note that although the Azure API supports setting custom domain names for
storage accounts, this resource does not currently support them.
## Attributes Reference
The following attributes are exported in addition to the arguments listed above:
* `id` - The storage account Resource ID.
* `primary_location` - The primary location of the storage account.
* `secondary_location` - The secondary location of the storage account.
* `primary_blob_endpoint` - The endpoint URL for blob storage in the primary location.
* `secondary_blob_endpoint` - The endpoint URL for blob storage in the secondary location.
* `primary_queue_endpoint` - The endpoint URL for queue storage in the primary location.
* `secondary_queue_endpoint` - The endpoint URL for queue storage in the secondary location.
* `primary_table_endpoint` - The endpoint URL for table storage in the primary location.
* `secondary_table_endpoint` - The endpoint URL for table storage in the secondary location.
* `primary_file_endpoint` - The endpoint URL for file storage in the primary location.
| website/source/docs/providers/azurerm/r/storage_account.html.markdown | 1 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.8327224850654602,
0.12572145462036133,
0.00017556000966578722,
0.00023695995332673192,
0.2730388045310974
] |
{
"id": 2,
"code_window": [
"* `secondary_queue_endpoint` - The endpoint URL for queue storage in the secondary location.\n",
"* `primary_table_endpoint` - The endpoint URL for table storage in the primary location.\n",
"* `secondary_table_endpoint` - The endpoint URL for table storage in the secondary location.\n",
"* `primary_file_endpoint` - The endpoint URL for file storage in the primary location.\n"
],
"labels": [
"keep",
"keep",
"keep",
"add"
],
"after_edit": [
"* `primary_access_key` - The primary access key for the storage account\n",
"* `secondary_access_key` - The secondary access key for the storage account"
],
"file_path": "website/source/docs/providers/azurerm/r/storage_account.html.markdown",
"type": "add",
"edit_start_line_idx": 72
} | // THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT.
package rds
import (
"github.com/aws/aws-sdk-go/private/waiter"
)
func (c *RDS) WaitUntilDBInstanceAvailable(input *DescribeDBInstancesInput) error {
waiterCfg := waiter.Config{
Operation: "DescribeDBInstances",
Delay: 30,
MaxAttempts: 60,
Acceptors: []waiter.WaitAcceptor{
{
State: "success",
Matcher: "pathAll",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "available",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "deleted",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "deleting",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "failed",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "incompatible-restore",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "incompatible-parameters",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "incompatible-parameters",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "incompatible-restore",
},
},
}
w := waiter.Waiter{
Client: c,
Input: input,
Config: waiterCfg,
}
return w.Wait()
}
func (c *RDS) WaitUntilDBInstanceDeleted(input *DescribeDBInstancesInput) error {
waiterCfg := waiter.Config{
Operation: "DescribeDBInstances",
Delay: 30,
MaxAttempts: 60,
Acceptors: []waiter.WaitAcceptor{
{
State: "success",
Matcher: "pathAll",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "deleted",
},
{
State: "success",
Matcher: "error",
Argument: "",
Expected: "DBInstanceNotFound",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "creating",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "modifying",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "rebooting",
},
{
State: "failure",
Matcher: "pathAny",
Argument: "DBInstances[].DBInstanceStatus",
Expected: "resetting-master-credentials",
},
},
}
w := waiter.Waiter{
Client: c,
Input: input,
Config: waiterCfg,
}
return w.Wait()
}
| vendor/github.com/aws/aws-sdk-go/service/rds/waiters.go | 0 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.0002773222513496876,
0.00022970259306021035,
0.00018350142636336386,
0.00022394763072952628,
0.000026652820452000014
] |
{
"id": 2,
"code_window": [
"* `secondary_queue_endpoint` - The endpoint URL for queue storage in the secondary location.\n",
"* `primary_table_endpoint` - The endpoint URL for table storage in the primary location.\n",
"* `secondary_table_endpoint` - The endpoint URL for table storage in the secondary location.\n",
"* `primary_file_endpoint` - The endpoint URL for file storage in the primary location.\n"
],
"labels": [
"keep",
"keep",
"keep",
"add"
],
"after_edit": [
"* `primary_access_key` - The primary access key for the storage account\n",
"* `secondary_access_key` - The secondary access key for the storage account"
],
"file_path": "website/source/docs/providers/azurerm/r/storage_account.html.markdown",
"type": "add",
"edit_start_line_idx": 72
} | package terraform
import (
"testing"
)
func TestNullGraphWalker_impl(t *testing.T) {
var _ GraphWalker = NullGraphWalker{}
}
| terraform/graph_walk_test.go | 0 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.0002238963934360072,
0.0002238963934360072,
0.0002238963934360072,
0.0002238963934360072,
0
] |
{
"id": 2,
"code_window": [
"* `secondary_queue_endpoint` - The endpoint URL for queue storage in the secondary location.\n",
"* `primary_table_endpoint` - The endpoint URL for table storage in the primary location.\n",
"* `secondary_table_endpoint` - The endpoint URL for table storage in the secondary location.\n",
"* `primary_file_endpoint` - The endpoint URL for file storage in the primary location.\n"
],
"labels": [
"keep",
"keep",
"keep",
"add"
],
"after_edit": [
"* `primary_access_key` - The primary access key for the storage account\n",
"* `secondary_access_key` - The secondary access key for the storage account"
],
"file_path": "website/source/docs/providers/azurerm/r/storage_account.html.markdown",
"type": "add",
"edit_start_line_idx": 72
} | // Created by cgo -godefs - DO NOT EDIT
// cgo -godefs types_openbsd.go
// +build amd64,openbsd
package unix
const (
sizeofPtr = 0x8
sizeofShort = 0x2
sizeofInt = 0x4
sizeofLong = 0x8
sizeofLongLong = 0x8
)
type (
_C_short int16
_C_int int32
_C_long int64
_C_long_long int64
)
type Timespec struct {
Sec int64
Nsec int64
}
type Timeval struct {
Sec int64
Usec int64
}
type Rusage struct {
Utime Timeval
Stime Timeval
Maxrss int64
Ixrss int64
Idrss int64
Isrss int64
Minflt int64
Majflt int64
Nswap int64
Inblock int64
Oublock int64
Msgsnd int64
Msgrcv int64
Nsignals int64
Nvcsw int64
Nivcsw int64
}
type Rlimit struct {
Cur uint64
Max uint64
}
type _Gid_t uint32
const (
S_IFMT = 0xf000
S_IFIFO = 0x1000
S_IFCHR = 0x2000
S_IFDIR = 0x4000
S_IFBLK = 0x6000
S_IFREG = 0x8000
S_IFLNK = 0xa000
S_IFSOCK = 0xc000
S_ISUID = 0x800
S_ISGID = 0x400
S_ISVTX = 0x200
S_IRUSR = 0x100
S_IWUSR = 0x80
S_IXUSR = 0x40
)
type Stat_t struct {
Mode uint32
Dev int32
Ino uint64
Nlink uint32
Uid uint32
Gid uint32
Rdev int32
Atim Timespec
Mtim Timespec
Ctim Timespec
Size int64
Blocks int64
Blksize uint32
Flags uint32
Gen uint32
Pad_cgo_0 [4]byte
X__st_birthtim Timespec
}
type Statfs_t struct {
F_flags uint32
F_bsize uint32
F_iosize uint32
Pad_cgo_0 [4]byte
F_blocks uint64
F_bfree uint64
F_bavail int64
F_files uint64
F_ffree uint64
F_favail int64
F_syncwrites uint64
F_syncreads uint64
F_asyncwrites uint64
F_asyncreads uint64
F_fsid Fsid
F_namemax uint32
F_owner uint32
F_ctime uint64
F_fstypename [16]int8
F_mntonname [90]int8
F_mntfromname [90]int8
F_mntfromspec [90]int8
Pad_cgo_1 [2]byte
Mount_info [160]byte
}
type Flock_t struct {
Start int64
Len int64
Pid int32
Type int16
Whence int16
}
type Dirent struct {
Fileno uint64
Off int64
Reclen uint16
Type uint8
Namlen uint8
X__d_padding [4]uint8
Name [256]int8
}
type Fsid struct {
Val [2]int32
}
type RawSockaddrInet4 struct {
Len uint8
Family uint8
Port uint16
Addr [4]byte /* in_addr */
Zero [8]int8
}
type RawSockaddrInet6 struct {
Len uint8
Family uint8
Port uint16
Flowinfo uint32
Addr [16]byte /* in6_addr */
Scope_id uint32
}
type RawSockaddrUnix struct {
Len uint8
Family uint8
Path [104]int8
}
type RawSockaddrDatalink struct {
Len uint8
Family uint8
Index uint16
Type uint8
Nlen uint8
Alen uint8
Slen uint8
Data [24]int8
}
type RawSockaddr struct {
Len uint8
Family uint8
Data [14]int8
}
type RawSockaddrAny struct {
Addr RawSockaddr
Pad [92]int8
}
type _Socklen uint32
type Linger struct {
Onoff int32
Linger int32
}
type Iovec struct {
Base *byte
Len uint64
}
type IPMreq struct {
Multiaddr [4]byte /* in_addr */
Interface [4]byte /* in_addr */
}
type IPv6Mreq struct {
Multiaddr [16]byte /* in6_addr */
Interface uint32
}
type Msghdr struct {
Name *byte
Namelen uint32
Pad_cgo_0 [4]byte
Iov *Iovec
Iovlen uint32
Pad_cgo_1 [4]byte
Control *byte
Controllen uint32
Flags int32
}
type Cmsghdr struct {
Len uint32
Level int32
Type int32
}
type Inet6Pktinfo struct {
Addr [16]byte /* in6_addr */
Ifindex uint32
}
type IPv6MTUInfo struct {
Addr RawSockaddrInet6
Mtu uint32
}
type ICMPv6Filter struct {
Filt [8]uint32
}
const (
SizeofSockaddrInet4 = 0x10
SizeofSockaddrInet6 = 0x1c
SizeofSockaddrAny = 0x6c
SizeofSockaddrUnix = 0x6a
SizeofSockaddrDatalink = 0x20
SizeofLinger = 0x8
SizeofIPMreq = 0x8
SizeofIPv6Mreq = 0x14
SizeofMsghdr = 0x30
SizeofCmsghdr = 0xc
SizeofInet6Pktinfo = 0x14
SizeofIPv6MTUInfo = 0x20
SizeofICMPv6Filter = 0x20
)
const (
PTRACE_TRACEME = 0x0
PTRACE_CONT = 0x7
PTRACE_KILL = 0x8
)
type Kevent_t struct {
Ident uint64
Filter int16
Flags uint16
Fflags uint32
Data int64
Udata *byte
}
type FdSet struct {
Bits [32]uint32
}
const (
SizeofIfMsghdr = 0xf8
SizeofIfData = 0xe0
SizeofIfaMsghdr = 0x18
SizeofIfAnnounceMsghdr = 0x1a
SizeofRtMsghdr = 0x60
SizeofRtMetrics = 0x38
)
type IfMsghdr struct {
Msglen uint16
Version uint8
Type uint8
Hdrlen uint16
Index uint16
Tableid uint16
Pad1 uint8
Pad2 uint8
Addrs int32
Flags int32
Xflags int32
Data IfData
}
type IfData struct {
Type uint8
Addrlen uint8
Hdrlen uint8
Link_state uint8
Mtu uint32
Metric uint32
Pad uint32
Baudrate uint64
Ipackets uint64
Ierrors uint64
Opackets uint64
Oerrors uint64
Collisions uint64
Ibytes uint64
Obytes uint64
Imcasts uint64
Omcasts uint64
Iqdrops uint64
Noproto uint64
Capabilities uint32
Pad_cgo_0 [4]byte
Lastchange Timeval
Mclpool [7]Mclpool
Pad_cgo_1 [4]byte
}
type IfaMsghdr struct {
Msglen uint16
Version uint8
Type uint8
Hdrlen uint16
Index uint16
Tableid uint16
Pad1 uint8
Pad2 uint8
Addrs int32
Flags int32
Metric int32
}
type IfAnnounceMsghdr struct {
Msglen uint16
Version uint8
Type uint8
Hdrlen uint16
Index uint16
What uint16
Name [16]int8
}
type RtMsghdr struct {
Msglen uint16
Version uint8
Type uint8
Hdrlen uint16
Index uint16
Tableid uint16
Priority uint8
Mpls uint8
Addrs int32
Flags int32
Fmask int32
Pid int32
Seq int32
Errno int32
Inits uint32
Rmx RtMetrics
}
type RtMetrics struct {
Pksent uint64
Expire int64
Locks uint32
Mtu uint32
Refcnt uint32
Hopcount uint32
Recvpipe uint32
Sendpipe uint32
Ssthresh uint32
Rtt uint32
Rttvar uint32
Pad uint32
}
type Mclpool struct {
Grown int32
Alive uint16
Hwm uint16
Cwm uint16
Lwm uint16
}
const (
SizeofBpfVersion = 0x4
SizeofBpfStat = 0x8
SizeofBpfProgram = 0x10
SizeofBpfInsn = 0x8
SizeofBpfHdr = 0x14
)
type BpfVersion struct {
Major uint16
Minor uint16
}
type BpfStat struct {
Recv uint32
Drop uint32
}
type BpfProgram struct {
Len uint32
Pad_cgo_0 [4]byte
Insns *BpfInsn
}
type BpfInsn struct {
Code uint16
Jt uint8
Jf uint8
K uint32
}
type BpfHdr struct {
Tstamp BpfTimeval
Caplen uint32
Datalen uint32
Hdrlen uint16
Pad_cgo_0 [2]byte
}
type BpfTimeval struct {
Sec uint32
Usec uint32
}
type Termios struct {
Iflag uint32
Oflag uint32
Cflag uint32
Lflag uint32
Cc [20]uint8
Ispeed int32
Ospeed int32
}
| vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go | 0 | https://github.com/hashicorp/terraform/commit/be0db001db4b5ce8b1b493cb26a6f56573128836 | [
0.0009128784877248108,
0.000307469250401482,
0.00018081434245686978,
0.0002634004922583699,
0.00015251086733769625
] |
{
"id": 0,
"code_window": [
"func (n *Node) stopIPC() {\n",
"\tif n.ipcListener != nil {\n",
"\t\tn.ipcListener.Close()\n",
"\t\tn.ipcListener = nil\n",
"\n",
"\t\tn.log.Info(\"IPC endpoint closed\", \"endpoint\", n.ipcEndpoint)\n",
"\t}\n",
"\tif n.ipcHandler != nil {\n",
"\t\tn.ipcHandler.Stop()\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tn.log.Info(\"IPC endpoint closed\", \"url\", n.ipcEndpoint)\n"
],
"file_path": "node/node.go",
"type": "replace",
"edit_start_line_idx": 324
} | // Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package rpc
import (
"context"
"net"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/netutil"
)
// ServeListener accepts connections on l, serving JSON-RPC on them.
func (srv *Server) ServeListener(l net.Listener) error {
for {
conn, err := l.Accept()
if netutil.IsTemporaryError(err) {
log.Warn("RPC accept error", "err", err)
continue
} else if err != nil {
return err
}
log.Trace("Accepted connection", "addr", conn.RemoteAddr())
go srv.ServeCodec(NewJSONCodec(conn), OptionMethodInvocation|OptionSubscriptions)
}
}
// DialIPC creates a new IPC client that connects to the given endpoint. On Unix it assumes
// the endpoint is the full path to a unix socket, and on Windows the endpoint is an
// identifier for a named pipe.
//
// The context is used for the initial connection establishment. It does not
// affect subsequent interactions with the client.
func DialIPC(ctx context.Context, endpoint string) (*Client, error) {
return newClient(ctx, func(ctx context.Context) (net.Conn, error) {
return newIPCConnection(ctx, endpoint)
})
}
| rpc/ipc.go | 1 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.00809362344443798,
0.0015018401900306344,
0.00017045506683643907,
0.00017845164984464645,
0.002947969827800989
] |
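The DialIPC comment above describes how a client connects to a node's IPC socket. A minimal caller might look like the sketch below; the socket path and the `web3_clientVersion` call are assumptions for illustration, not something this file prescribes.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Bound only the connection setup; later calls manage their own contexts.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client, err := rpc.DialIPC(ctx, "/tmp/geth.ipc") // hypothetical socket path
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer client.Close()

	var version string
	if err := client.CallContext(context.Background(), &version, "web3_clientVersion"); err != nil {
		fmt.Println("call failed:", err)
		return
	}
	fmt.Println("node version:", version)
}
```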
{
"id": 0,
"code_window": [
"func (n *Node) stopIPC() {\n",
"\tif n.ipcListener != nil {\n",
"\t\tn.ipcListener.Close()\n",
"\t\tn.ipcListener = nil\n",
"\n",
"\t\tn.log.Info(\"IPC endpoint closed\", \"endpoint\", n.ipcEndpoint)\n",
"\t}\n",
"\tif n.ipcHandler != nil {\n",
"\t\tn.ipcHandler.Stop()\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tn.log.Info(\"IPC endpoint closed\", \"url\", n.ipcEndpoint)\n"
],
"file_path": "node/node.go",
"type": "replace",
"edit_start_line_idx": 324
} | // Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package les
import (
"sync"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
)
type ltrInfo struct {
tx *types.Transaction
sentTo map[*peer]struct{}
}
type LesTxRelay struct {
txSent map[common.Hash]*ltrInfo
txPending map[common.Hash]struct{}
ps *peerSet
peerList []*peer
peerStartPos int
lock sync.RWMutex
reqDist *requestDistributor
}
func NewLesTxRelay(ps *peerSet, reqDist *requestDistributor) *LesTxRelay {
r := &LesTxRelay{
txSent: make(map[common.Hash]*ltrInfo),
txPending: make(map[common.Hash]struct{}),
ps: ps,
reqDist: reqDist,
}
ps.notify(r)
return r
}
func (self *LesTxRelay) registerPeer(p *peer) {
self.lock.Lock()
defer self.lock.Unlock()
self.peerList = self.ps.AllPeers()
}
func (self *LesTxRelay) unregisterPeer(p *peer) {
self.lock.Lock()
defer self.lock.Unlock()
self.peerList = self.ps.AllPeers()
}
// send sends a list of transactions to at most a given number of peers at
// once, never resending any particular transaction to the same peer twice
func (self *LesTxRelay) send(txs types.Transactions, count int) {
sendTo := make(map[*peer]types.Transactions)
self.peerStartPos++ // rotate the starting position of the peer list
if self.peerStartPos >= len(self.peerList) {
self.peerStartPos = 0
}
for _, tx := range txs {
hash := tx.Hash()
ltr, ok := self.txSent[hash]
if !ok {
ltr = &ltrInfo{
tx: tx,
sentTo: make(map[*peer]struct{}),
}
self.txSent[hash] = ltr
self.txPending[hash] = struct{}{}
}
if len(self.peerList) > 0 {
cnt := count
pos := self.peerStartPos
for {
peer := self.peerList[pos]
if _, ok := ltr.sentTo[peer]; !ok {
sendTo[peer] = append(sendTo[peer], tx)
ltr.sentTo[peer] = struct{}{}
cnt--
}
if cnt == 0 {
break // sent it to the desired number of peers
}
pos++
if pos == len(self.peerList) {
pos = 0
}
if pos == self.peerStartPos {
break // tried all available peers
}
}
}
}
for p, list := range sendTo {
pp := p
ll := list
reqID := genReqID()
rq := &distReq{
getCost: func(dp distPeer) uint64 {
peer := dp.(*peer)
return peer.GetRequestCost(SendTxMsg, len(ll))
},
canSend: func(dp distPeer) bool {
return dp.(*peer) == pp
},
request: func(dp distPeer) func() {
peer := dp.(*peer)
cost := peer.GetRequestCost(SendTxMsg, len(ll))
peer.fcServer.QueueRequest(reqID, cost)
return func() { peer.SendTxs(reqID, cost, ll) }
},
}
self.reqDist.queue(rq)
}
}
func (self *LesTxRelay) Send(txs types.Transactions) {
self.lock.Lock()
defer self.lock.Unlock()
self.send(txs, 3)
}
func (self *LesTxRelay) NewHead(head common.Hash, mined []common.Hash, rollback []common.Hash) {
self.lock.Lock()
defer self.lock.Unlock()
for _, hash := range mined {
delete(self.txPending, hash)
}
for _, hash := range rollback {
self.txPending[hash] = struct{}{}
}
if len(self.txPending) > 0 {
txs := make(types.Transactions, len(self.txPending))
i := 0
for hash := range self.txPending {
txs[i] = self.txSent[hash].tx
i++
}
self.send(txs, 1)
}
}
func (self *LesTxRelay) Discard(hashes []common.Hash) {
self.lock.Lock()
defer self.lock.Unlock()
for _, hash := range hashes {
delete(self.txSent, hash)
delete(self.txPending, hash)
}
}
| les/txrelay.go | 0 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.0001778063306119293,
0.00016991703887470067,
0.00016331326332874596,
0.00016969666467048228,
0.000004256948614056455
] |
{
"id": 0,
"code_window": [
"func (n *Node) stopIPC() {\n",
"\tif n.ipcListener != nil {\n",
"\t\tn.ipcListener.Close()\n",
"\t\tn.ipcListener = nil\n",
"\n",
"\t\tn.log.Info(\"IPC endpoint closed\", \"endpoint\", n.ipcEndpoint)\n",
"\t}\n",
"\tif n.ipcHandler != nil {\n",
"\t\tn.ipcHandler.Stop()\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tn.log.Info(\"IPC endpoint closed\", \"url\", n.ipcEndpoint)\n"
],
"file_path": "node/node.go",
"type": "replace",
"edit_start_line_idx": 324
} | // Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package jsre provides an execution environment for JavaScript.
package jsre
import (
crand "crypto/rand"
"encoding/binary"
"fmt"
"io"
"io/ioutil"
"math/rand"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/internal/jsre/deps"
"github.com/robertkrimen/otto"
)
var (
BigNumber_JS = deps.MustAsset("bignumber.js")
Web3_JS = deps.MustAsset("web3.js")
)
/*
JSRE is a generic JS runtime environment embedding the otto JS interpreter.
It provides some helper functions to
- load code from files
- run code snippets
- require libraries
- bind native go objects
*/
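// Illustrative usage sketch (not part of the original source): create a JSRE,
// evaluate a snippet on its event loop, then stop it without waiting for timers.
// The output writer and snippet below are assumptions for illustration only.
//
//	re := New("", os.Stdout) // empty asset path; write output to stdout
//	defer re.Stop(false)     // don't wait for pending JS timers
//	if v, err := re.Run(`2 + 3`); err == nil {
//		fmt.Println(v.String()) // "5"
//	}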
type JSRE struct {
assetPath string
output io.Writer
evalQueue chan *evalReq
stopEventLoop chan bool
closed chan struct{}
}
// jsTimer is a single timer instance with a callback function
type jsTimer struct {
timer *time.Timer
duration time.Duration
interval bool
call otto.FunctionCall
}
// evalReq is a serialized vm execution request processed by runEventLoop.
type evalReq struct {
fn func(vm *otto.Otto)
done chan bool
}
// runtime must be stopped with Stop() after use and cannot be used after stopping
func New(assetPath string, output io.Writer) *JSRE {
re := &JSRE{
assetPath: assetPath,
output: output,
closed: make(chan struct{}),
evalQueue: make(chan *evalReq),
stopEventLoop: make(chan bool),
}
go re.runEventLoop()
re.Set("loadScript", re.loadScript)
re.Set("inspect", re.prettyPrintJS)
return re
}
// randomSource returns a pseudo random value generator.
func randomSource() *rand.Rand {
bytes := make([]byte, 8)
seed := time.Now().UnixNano()
if _, err := crand.Read(bytes); err == nil {
seed = int64(binary.LittleEndian.Uint64(bytes))
}
src := rand.NewSource(seed)
return rand.New(src)
}
// This function runs the main event loop from a goroutine that is started
// when JSRE is created. Use Stop() before exiting to properly stop it.
// The event loop processes vm access requests from the evalQueue in a
// serialized way and calls timer callback functions at the appropriate time.
// Exported functions always access the vm through the event queue. You can
// call the functions of the otto vm directly to circumvent the queue. These
// functions should be used if and only if running a routine that was already
// called from JS through an RPC call.
func (re *JSRE) runEventLoop() {
defer close(re.closed)
vm := otto.New()
r := randomSource()
vm.SetRandomSource(r.Float64)
registry := map[*jsTimer]*jsTimer{}
ready := make(chan *jsTimer)
newTimer := func(call otto.FunctionCall, interval bool) (*jsTimer, otto.Value) {
delay, _ := call.Argument(1).ToInteger()
if 0 >= delay {
delay = 1
}
timer := &jsTimer{
duration: time.Duration(delay) * time.Millisecond,
call: call,
interval: interval,
}
registry[timer] = timer
timer.timer = time.AfterFunc(timer.duration, func() {
ready <- timer
})
value, err := call.Otto.ToValue(timer)
if err != nil {
panic(err)
}
return timer, value
}
setTimeout := func(call otto.FunctionCall) otto.Value {
_, value := newTimer(call, false)
return value
}
setInterval := func(call otto.FunctionCall) otto.Value {
_, value := newTimer(call, true)
return value
}
clearTimeout := func(call otto.FunctionCall) otto.Value {
timer, _ := call.Argument(0).Export()
if timer, ok := timer.(*jsTimer); ok {
timer.timer.Stop()
delete(registry, timer)
}
return otto.UndefinedValue()
}
vm.Set("_setTimeout", setTimeout)
vm.Set("_setInterval", setInterval)
vm.Run(`var setTimeout = function(args) {
if (arguments.length < 1) {
throw TypeError("Failed to execute 'setTimeout': 1 argument required, but only 0 present.");
}
return _setTimeout.apply(this, arguments);
}`)
vm.Run(`var setInterval = function(args) {
if (arguments.length < 1) {
throw TypeError("Failed to execute 'setInterval': 1 argument required, but only 0 present.");
}
return _setInterval.apply(this, arguments);
}`)
vm.Set("clearTimeout", clearTimeout)
vm.Set("clearInterval", clearTimeout)
var waitForCallbacks bool
loop:
for {
select {
case timer := <-ready:
// execute callback, remove/reschedule the timer
var arguments []interface{}
if len(timer.call.ArgumentList) > 2 {
tmp := timer.call.ArgumentList[2:]
arguments = make([]interface{}, 2+len(tmp))
for i, value := range tmp {
arguments[i+2] = value
}
} else {
arguments = make([]interface{}, 1)
}
arguments[0] = timer.call.ArgumentList[0]
_, err := vm.Call(`Function.call.call`, nil, arguments...)
if err != nil {
fmt.Println("js error:", err, arguments)
}
_, inreg := registry[timer] // when clearInterval is called from within the callback don't reset it
if timer.interval && inreg {
timer.timer.Reset(timer.duration)
} else {
delete(registry, timer)
if waitForCallbacks && (len(registry) == 0) {
break loop
}
}
case req := <-re.evalQueue:
// run the code, send the result back
req.fn(vm)
close(req.done)
if waitForCallbacks && (len(registry) == 0) {
break loop
}
case waitForCallbacks = <-re.stopEventLoop:
if !waitForCallbacks || (len(registry) == 0) {
break loop
}
}
}
for _, timer := range registry {
timer.timer.Stop()
delete(registry, timer)
}
}
// Do executes the given function on the JS event loop.
func (re *JSRE) Do(fn func(*otto.Otto)) {
done := make(chan bool)
req := &evalReq{fn, done}
re.evalQueue <- req
<-done
}
// stops the event loop before exit, optionally waits for all timers to expire
func (re *JSRE) Stop(waitForCallbacks bool) {
select {
case <-re.closed:
case re.stopEventLoop <- waitForCallbacks:
<-re.closed
}
}
// Exec(file) loads and runs the contents of a file
// if a relative path is given, the jsre's assetPath is used
func (re *JSRE) Exec(file string) error {
code, err := ioutil.ReadFile(common.AbsolutePath(re.assetPath, file))
if err != nil {
return err
}
var script *otto.Script
re.Do(func(vm *otto.Otto) {
script, err = vm.Compile(file, code)
if err != nil {
return
}
_, err = vm.Run(script)
})
return err
}
// Bind assigns value v to a variable in the JS environment
// This method is deprecated, use Set.
func (re *JSRE) Bind(name string, v interface{}) error {
return re.Set(name, v)
}
// Run runs a piece of JS code.
func (re *JSRE) Run(code string) (v otto.Value, err error) {
re.Do(func(vm *otto.Otto) { v, err = vm.Run(code) })
return v, err
}
// Get returns the value of a variable in the JS environment.
func (re *JSRE) Get(ns string) (v otto.Value, err error) {
re.Do(func(vm *otto.Otto) { v, err = vm.Get(ns) })
return v, err
}
// Set assigns value v to a variable in the JS environment.
func (re *JSRE) Set(ns string, v interface{}) (err error) {
re.Do(func(vm *otto.Otto) { err = vm.Set(ns, v) })
return err
}
// loadScript executes a JS script from inside the currently executing JS code.
func (re *JSRE) loadScript(call otto.FunctionCall) otto.Value {
file, err := call.Argument(0).ToString()
if err != nil {
// TODO: throw exception
return otto.FalseValue()
}
file = common.AbsolutePath(re.assetPath, file)
source, err := ioutil.ReadFile(file)
if err != nil {
// TODO: throw exception
return otto.FalseValue()
}
if _, err := compileAndRun(call.Otto, file, source); err != nil {
// TODO: throw exception
fmt.Println("err:", err)
return otto.FalseValue()
}
// TODO: return evaluation result
return otto.TrueValue()
}
// Evaluate executes code and pretty prints the result to the specified output
// stream.
func (re *JSRE) Evaluate(code string, w io.Writer) error {
var fail error
re.Do(func(vm *otto.Otto) {
val, err := vm.Run(code)
if err != nil {
prettyError(vm, err, w)
} else {
prettyPrint(vm, val, w)
}
fmt.Fprintln(w)
})
return fail
}
// Compile compiles and then runs a piece of JS code.
func (re *JSRE) Compile(filename string, src interface{}) (err error) {
re.Do(func(vm *otto.Otto) { _, err = compileAndRun(vm, filename, src) })
return err
}
func compileAndRun(vm *otto.Otto, filename string, src interface{}) (otto.Value, error) {
script, err := vm.Compile(filename, src)
if err != nil {
return otto.Value{}, err
}
return vm.Run(script)
}
| internal/jsre/jsre.go | 0 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.001111868885345757,
0.00024961575400084257,
0.00016356822743546218,
0.00017100138938985765,
0.0002530641504563391
] |
{
"id": 0,
"code_window": [
"func (n *Node) stopIPC() {\n",
"\tif n.ipcListener != nil {\n",
"\t\tn.ipcListener.Close()\n",
"\t\tn.ipcListener = nil\n",
"\n",
"\t\tn.log.Info(\"IPC endpoint closed\", \"endpoint\", n.ipcEndpoint)\n",
"\t}\n",
"\tif n.ipcHandler != nil {\n",
"\t\tn.ipcHandler.Stop()\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tn.log.Info(\"IPC endpoint closed\", \"url\", n.ipcEndpoint)\n"
],
"file_path": "node/node.go",
"type": "replace",
"edit_start_line_idx": 324
} | // Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Functions to access/create device major and minor numbers matching the
// encoding used in NetBSD's sys/types.h header.
package unix
// Major returns the major component of a NetBSD device number.
func Major(dev uint64) uint32 {
return uint32((dev & 0x000fff00) >> 8)
}
// Minor returns the minor component of a NetBSD device number.
func Minor(dev uint64) uint32 {
minor := uint32((dev & 0x000000ff) >> 0)
minor |= uint32((dev & 0xfff00000) >> 12)
return minor
}
// Mkdev returns a NetBSD device number generated from the given major and minor
// components.
func Mkdev(major, minor uint32) uint64 {
dev := (uint64(major) << 8) & 0x000fff00
dev |= (uint64(minor) << 12) & 0xfff00000
dev |= (uint64(minor) << 0) & 0x000000ff
return dev
}
| vendor/golang.org/x/sys/unix/dev_netbsd.go | 0 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.00017885018314700574,
0.00017560091509949416,
0.00017367077816743404,
0.00017428174032829702,
0.000002311088792339433
] |
{
"id": 1,
"code_window": [
"func (srv *Server) ServeListener(l net.Listener) error {\n",
"\tfor {\n",
"\t\tconn, err := l.Accept()\n",
"\t\tif netutil.IsTemporaryError(err) {\n",
"\t\t\tlog.Warn(\"RPC accept error\", \"err\", err)\n",
"\t\t\tcontinue\n",
"\t\t} else if err != nil {\n",
"\t\t\treturn err\n",
"\t\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\tlog.Warn(\"IPC accept error\", \"err\", err)\n"
],
"file_path": "rpc/ipc.go",
"type": "replace",
"edit_start_line_idx": 31
} | // Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package node
import (
"errors"
"fmt"
"net"
"os"
"path/filepath"
"reflect"
"strings"
"sync"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/internal/debug"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/rpc"
"github.com/prometheus/prometheus/util/flock"
)
// Node is a container on which services can be registered.
type Node struct {
eventmux *event.TypeMux // Event multiplexer used between the services of a stack
config *Config
accman *accounts.Manager
ephemeralKeystore string // if non-empty, the key directory that will be removed by Stop
instanceDirLock flock.Releaser // prevents concurrent use of instance directory
serverConfig p2p.Config
server *p2p.Server // Currently running P2P networking layer
serviceFuncs []ServiceConstructor // Service constructors (in dependency order)
services map[reflect.Type]Service // Currently running services
rpcAPIs []rpc.API // List of APIs currently provided by the node
inprocHandler *rpc.Server // In-process RPC request handler to process the API requests
ipcEndpoint string // IPC endpoint to listen at (empty = IPC disabled)
ipcListener net.Listener // IPC RPC listener socket to serve API requests
ipcHandler *rpc.Server // IPC RPC request handler to process the API requests
httpEndpoint string // HTTP endpoint (interface + port) to listen at (empty = HTTP disabled)
httpWhitelist []string // HTTP RPC modules to allow through this endpoint
httpListener net.Listener // HTTP RPC listener socket to serve API requests
httpHandler *rpc.Server // HTTP RPC request handler to process the API requests
wsEndpoint string // Websocket endpoint (interface + port) to listen at (empty = websocket disabled)
wsListener net.Listener // Websocket RPC listener socket to serve API requests
wsHandler *rpc.Server // Websocket RPC request handler to process the API requests
stop chan struct{} // Channel to wait for termination notifications
lock sync.RWMutex
log log.Logger
}
// New creates a new P2P node, ready for protocol registration.
func New(conf *Config) (*Node, error) {
// Copy config and resolve the datadir so future changes to the current
// working directory don't affect the node.
confCopy := *conf
conf = &confCopy
if conf.DataDir != "" {
absdatadir, err := filepath.Abs(conf.DataDir)
if err != nil {
return nil, err
}
conf.DataDir = absdatadir
}
// Ensure that the instance name doesn't cause weird conflicts with
// other files in the data directory.
if strings.ContainsAny(conf.Name, `/\`) {
return nil, errors.New(`Config.Name must not contain '/' or '\'`)
}
if conf.Name == datadirDefaultKeyStore {
return nil, errors.New(`Config.Name cannot be "` + datadirDefaultKeyStore + `"`)
}
if strings.HasSuffix(conf.Name, ".ipc") {
return nil, errors.New(`Config.Name cannot end in ".ipc"`)
}
// Ensure that the AccountManager method works before the node has started.
// We rely on this in cmd/geth.
am, ephemeralKeystore, err := makeAccountManager(conf)
if err != nil {
return nil, err
}
if conf.Logger == nil {
conf.Logger = log.New()
}
// Note: any interaction with Config that would create/touch files
// in the data directory or instance directory is delayed until Start.
return &Node{
accman: am,
ephemeralKeystore: ephemeralKeystore,
config: conf,
serviceFuncs: []ServiceConstructor{},
ipcEndpoint: conf.IPCEndpoint(),
httpEndpoint: conf.HTTPEndpoint(),
wsEndpoint: conf.WSEndpoint(),
eventmux: new(event.TypeMux),
log: conf.Logger,
}, nil
}
// Register injects a new service into the node's stack. The service created by
// the passed constructor must be unique in its type with regard to sibling ones.
func (n *Node) Register(constructor ServiceConstructor) error {
n.lock.Lock()
defer n.lock.Unlock()
if n.server != nil {
return ErrNodeRunning
}
n.serviceFuncs = append(n.serviceFuncs, constructor)
return nil
}
// Start creates a live P2P node and starts running it.
func (n *Node) Start() error {
n.lock.Lock()
defer n.lock.Unlock()
// Short circuit if the node's already running
if n.server != nil {
return ErrNodeRunning
}
if err := n.openDataDir(); err != nil {
return err
}
// Initialize the p2p server. This creates the node key and
// discovery databases.
n.serverConfig = n.config.P2P
n.serverConfig.PrivateKey = n.config.NodeKey()
n.serverConfig.Name = n.config.NodeName()
n.serverConfig.Logger = n.log
if n.serverConfig.StaticNodes == nil {
n.serverConfig.StaticNodes = n.config.StaticNodes()
}
if n.serverConfig.TrustedNodes == nil {
n.serverConfig.TrustedNodes = n.config.TrustedNodes()
}
if n.serverConfig.NodeDatabase == "" {
n.serverConfig.NodeDatabase = n.config.NodeDB()
}
running := &p2p.Server{Config: n.serverConfig}
n.log.Info("Starting peer-to-peer node", "instance", n.serverConfig.Name)
// Otherwise copy and specialize the P2P configuration
services := make(map[reflect.Type]Service)
for _, constructor := range n.serviceFuncs {
// Create a new context for the particular service
ctx := &ServiceContext{
config: n.config,
services: make(map[reflect.Type]Service),
EventMux: n.eventmux,
AccountManager: n.accman,
}
for kind, s := range services { // copy needed for threaded access
ctx.services[kind] = s
}
// Construct and save the service
service, err := constructor(ctx)
if err != nil {
return err
}
kind := reflect.TypeOf(service)
if _, exists := services[kind]; exists {
return &DuplicateServiceError{Kind: kind}
}
services[kind] = service
}
// Gather the protocols and start the freshly assembled P2P server
for _, service := range services {
running.Protocols = append(running.Protocols, service.Protocols()...)
}
if err := running.Start(); err != nil {
return convertFileLockError(err)
}
// Start each of the services
started := []reflect.Type{}
for kind, service := range services {
// Start the next service, stopping all previous upon failure
if err := service.Start(running); err != nil {
for _, kind := range started {
services[kind].Stop()
}
running.Stop()
return err
}
// Mark the service started for potential cleanup
started = append(started, kind)
}
// Lastly start the configured RPC interfaces
if err := n.startRPC(services); err != nil {
for _, service := range services {
service.Stop()
}
running.Stop()
return err
}
// Finish initializing the startup
n.services = services
n.server = running
n.stop = make(chan struct{})
return nil
}
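// exampleLifecycle is an illustrative sketch added for documentation purposes;
// it is not part of the original file. It shows the New -> Register -> Start ->
// Stop flow described in the comments above, assuming the caller supplies some
// hypothetical ServiceConstructor.
func exampleLifecycle(newService ServiceConstructor) error {
	stack, err := New(&Config{}) // zero-value Config keeps the node ephemeral (no DataDir)
	if err != nil {
		return err
	}
	if err := stack.Register(newService); err != nil {
		return err
	}
	if err := stack.Start(); err != nil {
		return err
	}
	defer stack.Stop()
	return nil
}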
func (n *Node) openDataDir() error {
if n.config.DataDir == "" {
return nil // ephemeral
}
instdir := filepath.Join(n.config.DataDir, n.config.name())
if err := os.MkdirAll(instdir, 0700); err != nil {
return err
}
// Lock the instance directory to prevent concurrent use by another instance as well as
// accidental use of the instance directory as a database.
release, _, err := flock.New(filepath.Join(instdir, "LOCK"))
if err != nil {
return convertFileLockError(err)
}
n.instanceDirLock = release
return nil
}
// startRPC is a helper method to start all the various RPC endpoint during node
// startup. It's not meant to be called at any time afterwards as it makes certain
// assumptions about the state of the node.
func (n *Node) startRPC(services map[reflect.Type]Service) error {
// Gather all the possible APIs to surface
apis := n.apis()
for _, service := range services {
apis = append(apis, service.APIs()...)
}
// Start the various API endpoints, terminating all in case of errors
if err := n.startInProc(apis); err != nil {
return err
}
if err := n.startIPC(apis); err != nil {
n.stopInProc()
return err
}
if err := n.startHTTP(n.httpEndpoint, apis, n.config.HTTPModules, n.config.HTTPCors, n.config.HTTPVirtualHosts, n.config.HTTPTimeouts); err != nil {
n.stopIPC()
n.stopInProc()
return err
}
if err := n.startWS(n.wsEndpoint, apis, n.config.WSModules, n.config.WSOrigins, n.config.WSExposeAll); err != nil {
n.stopHTTP()
n.stopIPC()
n.stopInProc()
return err
}
// All API endpoints started successfully
n.rpcAPIs = apis
return nil
}
// startInProc initializes an in-process RPC endpoint.
func (n *Node) startInProc(apis []rpc.API) error {
// Register all the APIs exposed by the services
handler := rpc.NewServer()
for _, api := range apis {
if err := handler.RegisterName(api.Namespace, api.Service); err != nil {
return err
}
n.log.Debug("InProc registered", "namespace", api.Namespace)
}
n.inprocHandler = handler
return nil
}
// stopInProc terminates the in-process RPC endpoint.
func (n *Node) stopInProc() {
if n.inprocHandler != nil {
n.inprocHandler.Stop()
n.inprocHandler = nil
}
}
// startIPC initializes and starts the IPC RPC endpoint.
func (n *Node) startIPC(apis []rpc.API) error {
if n.ipcEndpoint == "" {
return nil // IPC disabled.
}
listener, handler, err := rpc.StartIPCEndpoint(n.ipcEndpoint, apis)
if err != nil {
return err
}
n.ipcListener = listener
n.ipcHandler = handler
n.log.Info("IPC endpoint opened", "url", n.ipcEndpoint)
return nil
}
// stopIPC terminates the IPC RPC endpoint.
func (n *Node) stopIPC() {
if n.ipcListener != nil {
n.ipcListener.Close()
n.ipcListener = nil
n.log.Info("IPC endpoint closed", "endpoint", n.ipcEndpoint)
}
if n.ipcHandler != nil {
n.ipcHandler.Stop()
n.ipcHandler = nil
}
}
// startHTTP initializes and starts the HTTP RPC endpoint.
func (n *Node) startHTTP(endpoint string, apis []rpc.API, modules []string, cors []string, vhosts []string, timeouts rpc.HTTPTimeouts) error {
// Short circuit if the HTTP endpoint isn't being exposed
if endpoint == "" {
return nil
}
listener, handler, err := rpc.StartHTTPEndpoint(endpoint, apis, modules, cors, vhosts, timeouts)
if err != nil {
return err
}
n.log.Info("HTTP endpoint opened", "url", fmt.Sprintf("http://%s", endpoint), "cors", strings.Join(cors, ","), "vhosts", strings.Join(vhosts, ","))
// All listeners booted successfully
n.httpEndpoint = endpoint
n.httpListener = listener
n.httpHandler = handler
return nil
}
// stopHTTP terminates the HTTP RPC endpoint.
func (n *Node) stopHTTP() {
if n.httpListener != nil {
n.httpListener.Close()
n.httpListener = nil
n.log.Info("HTTP endpoint closed", "url", fmt.Sprintf("http://%s", n.httpEndpoint))
}
if n.httpHandler != nil {
n.httpHandler.Stop()
n.httpHandler = nil
}
}
// startWS initializes and starts the websocket RPC endpoint.
func (n *Node) startWS(endpoint string, apis []rpc.API, modules []string, wsOrigins []string, exposeAll bool) error {
// Short circuit if the WS endpoint isn't being exposed
if endpoint == "" {
return nil
}
listener, handler, err := rpc.StartWSEndpoint(endpoint, apis, modules, wsOrigins, exposeAll)
if err != nil {
return err
}
n.log.Info("WebSocket endpoint opened", "url", fmt.Sprintf("ws://%s", listener.Addr()))
// All listeners booted successfully
n.wsEndpoint = endpoint
n.wsListener = listener
n.wsHandler = handler
return nil
}
// stopWS terminates the websocket RPC endpoint.
func (n *Node) stopWS() {
if n.wsListener != nil {
n.wsListener.Close()
n.wsListener = nil
n.log.Info("WebSocket endpoint closed", "url", fmt.Sprintf("ws://%s", n.wsEndpoint))
}
if n.wsHandler != nil {
n.wsHandler.Stop()
n.wsHandler = nil
}
}
// Stop terminates a running node along with all its services. If the node was
// not started, an error is returned.
func (n *Node) Stop() error {
n.lock.Lock()
defer n.lock.Unlock()
// Short circuit if the node's not running
if n.server == nil {
return ErrNodeStopped
}
// Terminate the API, services and the p2p server.
n.stopWS()
n.stopHTTP()
n.stopIPC()
n.rpcAPIs = nil
failure := &StopError{
Services: make(map[reflect.Type]error),
}
for kind, service := range n.services {
if err := service.Stop(); err != nil {
failure.Services[kind] = err
}
}
n.server.Stop()
n.services = nil
n.server = nil
// Release instance directory lock.
if n.instanceDirLock != nil {
if err := n.instanceDirLock.Release(); err != nil {
n.log.Error("Can't release datadir lock", "err", err)
}
n.instanceDirLock = nil
}
// unblock n.Wait
close(n.stop)
// Remove the keystore if it was created ephemerally.
var keystoreErr error
if n.ephemeralKeystore != "" {
keystoreErr = os.RemoveAll(n.ephemeralKeystore)
}
if len(failure.Services) > 0 {
return failure
}
if keystoreErr != nil {
return keystoreErr
}
return nil
}
// Wait blocks the thread until the node is stopped. If the node is not running
// at the time of invocation, the method immediately returns.
func (n *Node) Wait() {
n.lock.RLock()
if n.server == nil {
n.lock.RUnlock()
return
}
stop := n.stop
n.lock.RUnlock()
<-stop
}
// Restart terminates a running node and boots up a new one in its place. If the
// node isn't running, an error is returned.
func (n *Node) Restart() error {
if err := n.Stop(); err != nil {
return err
}
if err := n.Start(); err != nil {
return err
}
return nil
}
// Attach creates an RPC client attached to an in-process API handler.
func (n *Node) Attach() (*rpc.Client, error) {
n.lock.RLock()
defer n.lock.RUnlock()
if n.server == nil {
return nil, ErrNodeStopped
}
return rpc.DialInProc(n.inprocHandler), nil
}
// RPCHandler returns the in-process RPC request handler.
func (n *Node) RPCHandler() (*rpc.Server, error) {
n.lock.RLock()
defer n.lock.RUnlock()
if n.inprocHandler == nil {
return nil, ErrNodeStopped
}
return n.inprocHandler, nil
}
// Server retrieves the currently running P2P network layer. This method is meant
// only to inspect fields of the currently running server, life cycle management
// should be left to this Node entity.
func (n *Node) Server() *p2p.Server {
n.lock.RLock()
defer n.lock.RUnlock()
return n.server
}
// Service retrieves a currently running service registered of a specific type.
func (n *Node) Service(service interface{}) error {
n.lock.RLock()
defer n.lock.RUnlock()
// Short circuit if the node's not running
if n.server == nil {
return ErrNodeStopped
}
// Otherwise try to find the service to return
element := reflect.ValueOf(service).Elem()
if running, ok := n.services[element.Type()]; ok {
element.Set(reflect.ValueOf(running))
return nil
}
return ErrServiceUnknown
}
// DataDir retrieves the current datadir used by the protocol stack.
// Deprecated: No files should be stored in this directory, use InstanceDir instead.
func (n *Node) DataDir() string {
return n.config.DataDir
}
// InstanceDir retrieves the instance directory used by the protocol stack.
func (n *Node) InstanceDir() string {
return n.config.instanceDir()
}
// AccountManager retrieves the account manager used by the protocol stack.
func (n *Node) AccountManager() *accounts.Manager {
return n.accman
}
// IPCEndpoint retrieves the current IPC endpoint used by the protocol stack.
func (n *Node) IPCEndpoint() string {
return n.ipcEndpoint
}
// HTTPEndpoint retrieves the current HTTP endpoint used by the protocol stack.
func (n *Node) HTTPEndpoint() string {
n.lock.Lock()
defer n.lock.Unlock()
if n.httpListener != nil {
return n.httpListener.Addr().String()
}
return n.httpEndpoint
}
// WSEndpoint retrieves the current WS endpoint used by the protocol stack.
func (n *Node) WSEndpoint() string {
n.lock.Lock()
defer n.lock.Unlock()
if n.wsListener != nil {
return n.wsListener.Addr().String()
}
return n.wsEndpoint
}
// EventMux retrieves the event multiplexer used by all the network services in
// the current protocol stack.
func (n *Node) EventMux() *event.TypeMux {
return n.eventmux
}
// OpenDatabase opens an existing database with the given name (or creates one if no
// previous can be found) from within the node's instance directory. If the node is
// ephemeral, a memory database is returned.
func (n *Node) OpenDatabase(name string, cache, handles int) (ethdb.Database, error) {
if n.config.DataDir == "" {
return ethdb.NewMemDatabase(), nil
}
return ethdb.NewLDBDatabase(n.config.ResolvePath(name), cache, handles)
}
// ResolvePath returns the absolute path of a resource in the instance directory.
func (n *Node) ResolvePath(x string) string {
return n.config.ResolvePath(x)
}
// apis returns the collection of RPC descriptors this node offers.
func (n *Node) apis() []rpc.API {
return []rpc.API{
{
Namespace: "admin",
Version: "1.0",
Service: NewPrivateAdminAPI(n),
}, {
Namespace: "admin",
Version: "1.0",
Service: NewPublicAdminAPI(n),
Public: true,
}, {
Namespace: "debug",
Version: "1.0",
Service: debug.Handler,
}, {
Namespace: "debug",
Version: "1.0",
Service: NewPublicDebugAPI(n),
Public: true,
}, {
Namespace: "web3",
Version: "1.0",
Service: NewPublicWeb3API(n),
Public: true,
},
}
}
| node/node.go | 1 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.004022185690701008,
0.0003710920864250511,
0.0001618225360289216,
0.0001751480158418417,
0.0005881023244000971
] |
{
"id": 1,
"code_window": [
"func (srv *Server) ServeListener(l net.Listener) error {\n",
"\tfor {\n",
"\t\tconn, err := l.Accept()\n",
"\t\tif netutil.IsTemporaryError(err) {\n",
"\t\t\tlog.Warn(\"RPC accept error\", \"err\", err)\n",
"\t\t\tcontinue\n",
"\t\t} else if err != nil {\n",
"\t\t\treturn err\n",
"\t\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\tlog.Warn(\"IPC accept error\", \"err\", err)\n"
],
"file_path": "rpc/ipc.go",
"type": "replace",
"edit_start_line_idx": 31
} | // Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package ed25519 implements the Ed25519 signature algorithm. See
// https://ed25519.cr.yp.to/.
//
// These functions are also compatible with the “Ed25519” function defined in
// RFC 8032.
package ed25519
// This code is a port of the public domain, “ref10” implementation of ed25519
// from SUPERCOP.
import (
"bytes"
"crypto"
cryptorand "crypto/rand"
"crypto/sha512"
"errors"
"io"
"strconv"
"golang.org/x/crypto/ed25519/internal/edwards25519"
)
const (
// PublicKeySize is the size, in bytes, of public keys as used in this package.
PublicKeySize = 32
// PrivateKeySize is the size, in bytes, of private keys as used in this package.
PrivateKeySize = 64
// SignatureSize is the size, in bytes, of signatures generated and verified by this package.
SignatureSize = 64
)
// PublicKey is the type of Ed25519 public keys.
type PublicKey []byte
// PrivateKey is the type of Ed25519 private keys. It implements crypto.Signer.
type PrivateKey []byte
// Public returns the PublicKey corresponding to priv.
func (priv PrivateKey) Public() crypto.PublicKey {
publicKey := make([]byte, PublicKeySize)
copy(publicKey, priv[32:])
return PublicKey(publicKey)
}
// Sign signs the given message with priv.
// Ed25519 performs two passes over messages to be signed and therefore cannot
// handle pre-hashed messages. Thus opts.HashFunc() must return zero to
// indicate the message hasn't been hashed. This can be achieved by passing
// crypto.Hash(0) as the value for opts.
func (priv PrivateKey) Sign(rand io.Reader, message []byte, opts crypto.SignerOpts) (signature []byte, err error) {
if opts.HashFunc() != crypto.Hash(0) {
return nil, errors.New("ed25519: cannot sign hashed message")
}
return Sign(priv, message), nil
}
// GenerateKey generates a public/private key pair using entropy from rand.
// If rand is nil, crypto/rand.Reader will be used.
func GenerateKey(rand io.Reader) (publicKey PublicKey, privateKey PrivateKey, err error) {
if rand == nil {
rand = cryptorand.Reader
}
privateKey = make([]byte, PrivateKeySize)
publicKey = make([]byte, PublicKeySize)
_, err = io.ReadFull(rand, privateKey[:32])
if err != nil {
return nil, nil, err
}
digest := sha512.Sum512(privateKey[:32])
digest[0] &= 248
digest[31] &= 127
digest[31] |= 64
var A edwards25519.ExtendedGroupElement
var hBytes [32]byte
copy(hBytes[:], digest[:])
edwards25519.GeScalarMultBase(&A, &hBytes)
var publicKeyBytes [32]byte
A.ToBytes(&publicKeyBytes)
copy(privateKey[32:], publicKeyBytes[:])
copy(publicKey, publicKeyBytes[:])
return publicKey, privateKey, nil
}
// Sign signs the message with privateKey and returns a signature. It will
// panic if len(privateKey) is not PrivateKeySize.
func Sign(privateKey PrivateKey, message []byte) []byte {
if l := len(privateKey); l != PrivateKeySize {
panic("ed25519: bad private key length: " + strconv.Itoa(l))
}
h := sha512.New()
h.Write(privateKey[:32])
var digest1, messageDigest, hramDigest [64]byte
var expandedSecretKey [32]byte
h.Sum(digest1[:0])
copy(expandedSecretKey[:], digest1[:])
expandedSecretKey[0] &= 248
expandedSecretKey[31] &= 63
expandedSecretKey[31] |= 64
h.Reset()
h.Write(digest1[32:])
h.Write(message)
h.Sum(messageDigest[:0])
var messageDigestReduced [32]byte
edwards25519.ScReduce(&messageDigestReduced, &messageDigest)
var R edwards25519.ExtendedGroupElement
edwards25519.GeScalarMultBase(&R, &messageDigestReduced)
var encodedR [32]byte
R.ToBytes(&encodedR)
h.Reset()
h.Write(encodedR[:])
h.Write(privateKey[32:])
h.Write(message)
h.Sum(hramDigest[:0])
var hramDigestReduced [32]byte
edwards25519.ScReduce(&hramDigestReduced, &hramDigest)
var s [32]byte
edwards25519.ScMulAdd(&s, &hramDigestReduced, &expandedSecretKey, &messageDigestReduced)
signature := make([]byte, SignatureSize)
copy(signature[:], encodedR[:])
copy(signature[32:], s[:])
return signature
}
// Verify reports whether sig is a valid signature of message by publicKey. It
// will panic if len(publicKey) is not PublicKeySize.
func Verify(publicKey PublicKey, message, sig []byte) bool {
if l := len(publicKey); l != PublicKeySize {
panic("ed25519: bad public key length: " + strconv.Itoa(l))
}
if len(sig) != SignatureSize || sig[63]&224 != 0 {
return false
}
var A edwards25519.ExtendedGroupElement
var publicKeyBytes [32]byte
copy(publicKeyBytes[:], publicKey)
if !A.FromBytes(&publicKeyBytes) {
return false
}
edwards25519.FeNeg(&A.X, &A.X)
edwards25519.FeNeg(&A.T, &A.T)
h := sha512.New()
h.Write(sig[:32])
h.Write(publicKey[:])
h.Write(message)
var digest [64]byte
h.Sum(digest[:0])
var hReduced [32]byte
edwards25519.ScReduce(&hReduced, &digest)
var R edwards25519.ProjectiveGroupElement
var b [32]byte
copy(b[:], sig[32:])
edwards25519.GeDoubleScalarMultVartime(&R, &hReduced, &A, &b)
var checkR [32]byte
R.ToBytes(&checkR)
return bytes.Equal(sig[:32], checkR[:])
}
| vendor/golang.org/x/crypto/ed25519/ed25519.go | 0 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.00036092655500397086,
0.00019293505465611815,
0.0001612155610928312,
0.00017787415708880872,
0.00005647863508784212
] |
{
"id": 1,
"code_window": [
"func (srv *Server) ServeListener(l net.Listener) error {\n",
"\tfor {\n",
"\t\tconn, err := l.Accept()\n",
"\t\tif netutil.IsTemporaryError(err) {\n",
"\t\t\tlog.Warn(\"RPC accept error\", \"err\", err)\n",
"\t\t\tcontinue\n",
"\t\t} else if err != nil {\n",
"\t\t\treturn err\n",
"\t\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\tlog.Warn(\"IPC accept error\", \"err\", err)\n"
],
"file_path": "rpc/ipc.go",
"type": "replace",
"edit_start_line_idx": 31
} | package metrics
import "testing"
func BenchmarkCounter(b *testing.B) {
c := NewCounter()
b.ResetTimer()
for i := 0; i < b.N; i++ {
c.Inc(1)
}
}
func TestCounterClear(t *testing.T) {
c := NewCounter()
c.Inc(1)
c.Clear()
if count := c.Count(); 0 != count {
t.Errorf("c.Count(): 0 != %v\n", count)
}
}
func TestCounterDec1(t *testing.T) {
c := NewCounter()
c.Dec(1)
if count := c.Count(); -1 != count {
t.Errorf("c.Count(): -1 != %v\n", count)
}
}
func TestCounterDec2(t *testing.T) {
c := NewCounter()
c.Dec(2)
if count := c.Count(); -2 != count {
t.Errorf("c.Count(): -2 != %v\n", count)
}
}
func TestCounterInc1(t *testing.T) {
c := NewCounter()
c.Inc(1)
if count := c.Count(); 1 != count {
t.Errorf("c.Count(): 1 != %v\n", count)
}
}
func TestCounterInc2(t *testing.T) {
c := NewCounter()
c.Inc(2)
if count := c.Count(); 2 != count {
t.Errorf("c.Count(): 2 != %v\n", count)
}
}
func TestCounterSnapshot(t *testing.T) {
c := NewCounter()
c.Inc(1)
snapshot := c.Snapshot()
c.Inc(1)
if count := snapshot.Count(); 1 != count {
t.Errorf("c.Count(): 1 != %v\n", count)
}
}
func TestCounterZero(t *testing.T) {
c := NewCounter()
if count := c.Count(); 0 != count {
t.Errorf("c.Count(): 0 != %v\n", count)
}
}
func TestGetOrRegisterCounter(t *testing.T) {
r := NewRegistry()
NewRegisteredCounter("foo", r).Inc(47)
if c := GetOrRegisterCounter("foo", r); 47 != c.Count() {
t.Fatal(c)
}
}
| metrics/counter_test.go | 0 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.00017759384354576468,
0.00017295122961513698,
0.00017030768503900617,
0.00017251598183065653,
0.0000021029134131822502
] |
{
"id": 1,
"code_window": [
"func (srv *Server) ServeListener(l net.Listener) error {\n",
"\tfor {\n",
"\t\tconn, err := l.Accept()\n",
"\t\tif netutil.IsTemporaryError(err) {\n",
"\t\t\tlog.Warn(\"RPC accept error\", \"err\", err)\n",
"\t\t\tcontinue\n",
"\t\t} else if err != nil {\n",
"\t\t\treturn err\n",
"\t\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\tlog.Warn(\"IPC accept error\", \"err\", err)\n"
],
"file_path": "rpc/ipc.go",
"type": "replace",
"edit_start_line_idx": 31
} | // Copyright (c) 2012, Suryandaru Triandana <[email protected]>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build solaris
package storage
import (
"os"
"syscall"
)
type unixFileLock struct {
f *os.File
}
func (fl *unixFileLock) release() error {
if err := setFileLock(fl.f, false, false); err != nil {
return err
}
return fl.f.Close()
}
func newFileLock(path string, readOnly bool) (fl fileLock, err error) {
var flag int
if readOnly {
flag = os.O_RDONLY
} else {
flag = os.O_RDWR
}
f, err := os.OpenFile(path, flag, 0)
if os.IsNotExist(err) {
f, err = os.OpenFile(path, flag|os.O_CREATE, 0644)
}
if err != nil {
return
}
err = setFileLock(f, readOnly, true)
if err != nil {
f.Close()
return
}
fl = &unixFileLock{f: f}
return
}
func setFileLock(f *os.File, readOnly, lock bool) error {
flock := syscall.Flock_t{
Type: syscall.F_UNLCK,
Start: 0,
Len: 0,
Whence: 1,
}
if lock {
if readOnly {
flock.Type = syscall.F_RDLCK
} else {
flock.Type = syscall.F_WRLCK
}
}
return syscall.FcntlFlock(f.Fd(), syscall.F_SETLK, &flock)
}
func rename(oldpath, newpath string) error {
return os.Rename(oldpath, newpath)
}
func syncDir(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
if err := f.Sync(); err != nil {
return err
}
return nil
}
| vendor/github.com/syndtr/goleveldb/leveldb/storage/file_storage_solaris.go | 0 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.00018043372256215662,
0.00017324177315458655,
0.00016907928511500359,
0.00017148468759842217,
0.0000036833059766649967
] |
{
"id": 2,
"code_window": [
"\t\t\tcontinue\n",
"\t\t} else if err != nil {\n",
"\t\t\treturn err\n",
"\t\t}\n",
"\t\tlog.Trace(\"Accepted connection\", \"addr\", conn.RemoteAddr())\n",
"\t\tgo srv.ServeCodec(NewJSONCodec(conn), OptionMethodInvocation|OptionSubscriptions)\n",
"\t}\n",
"}\n",
"\n",
"// DialIPC create a new IPC client that connects to the given endpoint. On Unix it assumes\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tlog.Trace(\"IPC accepted connection\")\n"
],
"file_path": "rpc/ipc.go",
"type": "replace",
"edit_start_line_idx": 36
} | // Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package node
import (
"errors"
"fmt"
"net"
"os"
"path/filepath"
"reflect"
"strings"
"sync"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/internal/debug"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/rpc"
"github.com/prometheus/prometheus/util/flock"
)
// Node is a container on which services can be registered.
type Node struct {
eventmux *event.TypeMux // Event multiplexer used between the services of a stack
config *Config
accman *accounts.Manager
ephemeralKeystore string // if non-empty, the key directory that will be removed by Stop
instanceDirLock flock.Releaser // prevents concurrent use of instance directory
serverConfig p2p.Config
server *p2p.Server // Currently running P2P networking layer
serviceFuncs []ServiceConstructor // Service constructors (in dependency order)
services map[reflect.Type]Service // Currently running services
rpcAPIs []rpc.API // List of APIs currently provided by the node
inprocHandler *rpc.Server // In-process RPC request handler to process the API requests
ipcEndpoint string // IPC endpoint to listen at (empty = IPC disabled)
ipcListener net.Listener // IPC RPC listener socket to serve API requests
ipcHandler *rpc.Server // IPC RPC request handler to process the API requests
httpEndpoint string // HTTP endpoint (interface + port) to listen at (empty = HTTP disabled)
httpWhitelist []string // HTTP RPC modules to allow through this endpoint
	httpListener  net.Listener // HTTP RPC listener socket to serve API requests
httpHandler *rpc.Server // HTTP RPC request handler to process the API requests
wsEndpoint string // Websocket endpoint (interface + port) to listen at (empty = websocket disabled)
	wsListener net.Listener // Websocket RPC listener socket to serve API requests
wsHandler *rpc.Server // Websocket RPC request handler to process the API requests
stop chan struct{} // Channel to wait for termination notifications
lock sync.RWMutex
log log.Logger
}
// New creates a new P2P node, ready for protocol registration.
func New(conf *Config) (*Node, error) {
// Copy config and resolve the datadir so future changes to the current
// working directory don't affect the node.
confCopy := *conf
conf = &confCopy
if conf.DataDir != "" {
absdatadir, err := filepath.Abs(conf.DataDir)
if err != nil {
return nil, err
}
conf.DataDir = absdatadir
}
// Ensure that the instance name doesn't cause weird conflicts with
// other files in the data directory.
if strings.ContainsAny(conf.Name, `/\`) {
return nil, errors.New(`Config.Name must not contain '/' or '\'`)
}
if conf.Name == datadirDefaultKeyStore {
return nil, errors.New(`Config.Name cannot be "` + datadirDefaultKeyStore + `"`)
}
if strings.HasSuffix(conf.Name, ".ipc") {
return nil, errors.New(`Config.Name cannot end in ".ipc"`)
}
// Ensure that the AccountManager method works before the node has started.
// We rely on this in cmd/geth.
am, ephemeralKeystore, err := makeAccountManager(conf)
if err != nil {
return nil, err
}
if conf.Logger == nil {
conf.Logger = log.New()
}
// Note: any interaction with Config that would create/touch files
// in the data directory or instance directory is delayed until Start.
return &Node{
accman: am,
ephemeralKeystore: ephemeralKeystore,
config: conf,
serviceFuncs: []ServiceConstructor{},
ipcEndpoint: conf.IPCEndpoint(),
httpEndpoint: conf.HTTPEndpoint(),
wsEndpoint: conf.WSEndpoint(),
eventmux: new(event.TypeMux),
log: conf.Logger,
}, nil
}
// Register injects a new service into the node's stack. The service created by
// the passed constructor must be unique in its type with regard to sibling ones.
func (n *Node) Register(constructor ServiceConstructor) error {
n.lock.Lock()
defer n.lock.Unlock()
if n.server != nil {
return ErrNodeRunning
}
n.serviceFuncs = append(n.serviceFuncs, constructor)
return nil
}
// Start creates a live P2P node and starts running it.
func (n *Node) Start() error {
n.lock.Lock()
defer n.lock.Unlock()
// Short circuit if the node's already running
if n.server != nil {
return ErrNodeRunning
}
if err := n.openDataDir(); err != nil {
return err
}
// Initialize the p2p server. This creates the node key and
// discovery databases.
n.serverConfig = n.config.P2P
n.serverConfig.PrivateKey = n.config.NodeKey()
n.serverConfig.Name = n.config.NodeName()
n.serverConfig.Logger = n.log
if n.serverConfig.StaticNodes == nil {
n.serverConfig.StaticNodes = n.config.StaticNodes()
}
if n.serverConfig.TrustedNodes == nil {
n.serverConfig.TrustedNodes = n.config.TrustedNodes()
}
if n.serverConfig.NodeDatabase == "" {
n.serverConfig.NodeDatabase = n.config.NodeDB()
}
running := &p2p.Server{Config: n.serverConfig}
n.log.Info("Starting peer-to-peer node", "instance", n.serverConfig.Name)
// Otherwise copy and specialize the P2P configuration
services := make(map[reflect.Type]Service)
for _, constructor := range n.serviceFuncs {
// Create a new context for the particular service
ctx := &ServiceContext{
config: n.config,
services: make(map[reflect.Type]Service),
EventMux: n.eventmux,
AccountManager: n.accman,
}
for kind, s := range services { // copy needed for threaded access
ctx.services[kind] = s
}
// Construct and save the service
service, err := constructor(ctx)
if err != nil {
return err
}
kind := reflect.TypeOf(service)
if _, exists := services[kind]; exists {
return &DuplicateServiceError{Kind: kind}
}
services[kind] = service
}
// Gather the protocols and start the freshly assembled P2P server
for _, service := range services {
running.Protocols = append(running.Protocols, service.Protocols()...)
}
if err := running.Start(); err != nil {
return convertFileLockError(err)
}
// Start each of the services
started := []reflect.Type{}
for kind, service := range services {
// Start the next service, stopping all previous upon failure
if err := service.Start(running); err != nil {
for _, kind := range started {
services[kind].Stop()
}
running.Stop()
return err
}
// Mark the service started for potential cleanup
started = append(started, kind)
}
// Lastly start the configured RPC interfaces
if err := n.startRPC(services); err != nil {
for _, service := range services {
service.Stop()
}
running.Stop()
return err
}
// Finish initializing the startup
n.services = services
n.server = running
n.stop = make(chan struct{})
return nil
}
func (n *Node) openDataDir() error {
if n.config.DataDir == "" {
return nil // ephemeral
}
instdir := filepath.Join(n.config.DataDir, n.config.name())
if err := os.MkdirAll(instdir, 0700); err != nil {
return err
}
// Lock the instance directory to prevent concurrent use by another instance as well as
// accidental use of the instance directory as a database.
release, _, err := flock.New(filepath.Join(instdir, "LOCK"))
if err != nil {
return convertFileLockError(err)
}
n.instanceDirLock = release
return nil
}
// startRPC is a helper method to start all the various RPC endpoints during node
// startup. It's not meant to be called at any time afterwards as it makes certain
// assumptions about the state of the node.
func (n *Node) startRPC(services map[reflect.Type]Service) error {
// Gather all the possible APIs to surface
apis := n.apis()
for _, service := range services {
apis = append(apis, service.APIs()...)
}
// Start the various API endpoints, terminating all in case of errors
if err := n.startInProc(apis); err != nil {
return err
}
if err := n.startIPC(apis); err != nil {
n.stopInProc()
return err
}
if err := n.startHTTP(n.httpEndpoint, apis, n.config.HTTPModules, n.config.HTTPCors, n.config.HTTPVirtualHosts, n.config.HTTPTimeouts); err != nil {
n.stopIPC()
n.stopInProc()
return err
}
if err := n.startWS(n.wsEndpoint, apis, n.config.WSModules, n.config.WSOrigins, n.config.WSExposeAll); err != nil {
n.stopHTTP()
n.stopIPC()
n.stopInProc()
return err
}
// All API endpoints started successfully
n.rpcAPIs = apis
return nil
}
// startInProc initializes an in-process RPC endpoint.
func (n *Node) startInProc(apis []rpc.API) error {
// Register all the APIs exposed by the services
handler := rpc.NewServer()
for _, api := range apis {
if err := handler.RegisterName(api.Namespace, api.Service); err != nil {
return err
}
n.log.Debug("InProc registered", "namespace", api.Namespace)
}
n.inprocHandler = handler
return nil
}
// stopInProc terminates the in-process RPC endpoint.
func (n *Node) stopInProc() {
if n.inprocHandler != nil {
n.inprocHandler.Stop()
n.inprocHandler = nil
}
}
// startIPC initializes and starts the IPC RPC endpoint.
func (n *Node) startIPC(apis []rpc.API) error {
if n.ipcEndpoint == "" {
return nil // IPC disabled.
}
listener, handler, err := rpc.StartIPCEndpoint(n.ipcEndpoint, apis)
if err != nil {
return err
}
n.ipcListener = listener
n.ipcHandler = handler
n.log.Info("IPC endpoint opened", "url", n.ipcEndpoint)
return nil
}
// stopIPC terminates the IPC RPC endpoint.
func (n *Node) stopIPC() {
if n.ipcListener != nil {
n.ipcListener.Close()
n.ipcListener = nil
n.log.Info("IPC endpoint closed", "endpoint", n.ipcEndpoint)
}
if n.ipcHandler != nil {
n.ipcHandler.Stop()
n.ipcHandler = nil
}
}
// startHTTP initializes and starts the HTTP RPC endpoint.
func (n *Node) startHTTP(endpoint string, apis []rpc.API, modules []string, cors []string, vhosts []string, timeouts rpc.HTTPTimeouts) error {
// Short circuit if the HTTP endpoint isn't being exposed
if endpoint == "" {
return nil
}
listener, handler, err := rpc.StartHTTPEndpoint(endpoint, apis, modules, cors, vhosts, timeouts)
if err != nil {
return err
}
n.log.Info("HTTP endpoint opened", "url", fmt.Sprintf("http://%s", endpoint), "cors", strings.Join(cors, ","), "vhosts", strings.Join(vhosts, ","))
// All listeners booted successfully
n.httpEndpoint = endpoint
n.httpListener = listener
n.httpHandler = handler
return nil
}
// stopHTTP terminates the HTTP RPC endpoint.
func (n *Node) stopHTTP() {
if n.httpListener != nil {
n.httpListener.Close()
n.httpListener = nil
n.log.Info("HTTP endpoint closed", "url", fmt.Sprintf("http://%s", n.httpEndpoint))
}
if n.httpHandler != nil {
n.httpHandler.Stop()
n.httpHandler = nil
}
}
// startWS initializes and starts the websocket RPC endpoint.
func (n *Node) startWS(endpoint string, apis []rpc.API, modules []string, wsOrigins []string, exposeAll bool) error {
// Short circuit if the WS endpoint isn't being exposed
if endpoint == "" {
return nil
}
listener, handler, err := rpc.StartWSEndpoint(endpoint, apis, modules, wsOrigins, exposeAll)
if err != nil {
return err
}
n.log.Info("WebSocket endpoint opened", "url", fmt.Sprintf("ws://%s", listener.Addr()))
// All listeners booted successfully
n.wsEndpoint = endpoint
n.wsListener = listener
n.wsHandler = handler
return nil
}
// stopWS terminates the websocket RPC endpoint.
func (n *Node) stopWS() {
if n.wsListener != nil {
n.wsListener.Close()
n.wsListener = nil
n.log.Info("WebSocket endpoint closed", "url", fmt.Sprintf("ws://%s", n.wsEndpoint))
}
if n.wsHandler != nil {
n.wsHandler.Stop()
n.wsHandler = nil
}
}
// Stop terminates a running node along with all its services. If the node was
// not started, an error is returned.
func (n *Node) Stop() error {
n.lock.Lock()
defer n.lock.Unlock()
// Short circuit if the node's not running
if n.server == nil {
return ErrNodeStopped
}
// Terminate the API, services and the p2p server.
n.stopWS()
n.stopHTTP()
n.stopIPC()
n.rpcAPIs = nil
failure := &StopError{
Services: make(map[reflect.Type]error),
}
for kind, service := range n.services {
if err := service.Stop(); err != nil {
failure.Services[kind] = err
}
}
n.server.Stop()
n.services = nil
n.server = nil
// Release instance directory lock.
if n.instanceDirLock != nil {
if err := n.instanceDirLock.Release(); err != nil {
n.log.Error("Can't release datadir lock", "err", err)
}
n.instanceDirLock = nil
}
// unblock n.Wait
close(n.stop)
// Remove the keystore if it was created ephemerally.
var keystoreErr error
if n.ephemeralKeystore != "" {
keystoreErr = os.RemoveAll(n.ephemeralKeystore)
}
if len(failure.Services) > 0 {
return failure
}
if keystoreErr != nil {
return keystoreErr
}
return nil
}
// Wait blocks the thread until the node is stopped. If the node is not running
// at the time of invocation, the method immediately returns.
func (n *Node) Wait() {
n.lock.RLock()
if n.server == nil {
n.lock.RUnlock()
return
}
stop := n.stop
n.lock.RUnlock()
<-stop
}
// Restart terminates a running node and boots up a new one in its place. If the
// node isn't running, an error is returned.
func (n *Node) Restart() error {
if err := n.Stop(); err != nil {
return err
}
if err := n.Start(); err != nil {
return err
}
return nil
}
// Attach creates an RPC client attached to an in-process API handler.
func (n *Node) Attach() (*rpc.Client, error) {
n.lock.RLock()
defer n.lock.RUnlock()
if n.server == nil {
return nil, ErrNodeStopped
}
return rpc.DialInProc(n.inprocHandler), nil
}
// RPCHandler returns the in-process RPC request handler.
func (n *Node) RPCHandler() (*rpc.Server, error) {
n.lock.RLock()
defer n.lock.RUnlock()
if n.inprocHandler == nil {
return nil, ErrNodeStopped
}
return n.inprocHandler, nil
}
// Server retrieves the currently running P2P network layer. This method is meant
// only to inspect fields of the currently running server, life cycle management
// should be left to this Node entity.
func (n *Node) Server() *p2p.Server {
n.lock.RLock()
defer n.lock.RUnlock()
return n.server
}
// Service retrieves a currently running service registered of a specific type.
func (n *Node) Service(service interface{}) error {
n.lock.RLock()
defer n.lock.RUnlock()
// Short circuit if the node's not running
if n.server == nil {
return ErrNodeStopped
}
// Otherwise try to find the service to return
element := reflect.ValueOf(service).Elem()
if running, ok := n.services[element.Type()]; ok {
element.Set(reflect.ValueOf(running))
return nil
}
return ErrServiceUnknown
}
// DataDir retrieves the current datadir used by the protocol stack.
// Deprecated: No files should be stored in this directory, use InstanceDir instead.
func (n *Node) DataDir() string {
return n.config.DataDir
}
// InstanceDir retrieves the instance directory used by the protocol stack.
func (n *Node) InstanceDir() string {
return n.config.instanceDir()
}
// AccountManager retrieves the account manager used by the protocol stack.
func (n *Node) AccountManager() *accounts.Manager {
return n.accman
}
// IPCEndpoint retrieves the current IPC endpoint used by the protocol stack.
func (n *Node) IPCEndpoint() string {
return n.ipcEndpoint
}
// HTTPEndpoint retrieves the current HTTP endpoint used by the protocol stack.
func (n *Node) HTTPEndpoint() string {
n.lock.Lock()
defer n.lock.Unlock()
if n.httpListener != nil {
return n.httpListener.Addr().String()
}
return n.httpEndpoint
}
// WSEndpoint retrieves the current WS endpoint used by the protocol stack.
func (n *Node) WSEndpoint() string {
n.lock.Lock()
defer n.lock.Unlock()
if n.wsListener != nil {
return n.wsListener.Addr().String()
}
return n.wsEndpoint
}
// EventMux retrieves the event multiplexer used by all the network services in
// the current protocol stack.
func (n *Node) EventMux() *event.TypeMux {
return n.eventmux
}
// OpenDatabase opens an existing database with the given name (or creates one if no
// previous can be found) from within the node's instance directory. If the node is
// ephemeral, a memory database is returned.
func (n *Node) OpenDatabase(name string, cache, handles int) (ethdb.Database, error) {
if n.config.DataDir == "" {
return ethdb.NewMemDatabase(), nil
}
return ethdb.NewLDBDatabase(n.config.ResolvePath(name), cache, handles)
}
// ResolvePath returns the absolute path of a resource in the instance directory.
func (n *Node) ResolvePath(x string) string {
return n.config.ResolvePath(x)
}
// apis returns the collection of RPC descriptors this node offers.
func (n *Node) apis() []rpc.API {
return []rpc.API{
{
Namespace: "admin",
Version: "1.0",
Service: NewPrivateAdminAPI(n),
}, {
Namespace: "admin",
Version: "1.0",
Service: NewPublicAdminAPI(n),
Public: true,
}, {
Namespace: "debug",
Version: "1.0",
Service: debug.Handler,
}, {
Namespace: "debug",
Version: "1.0",
Service: NewPublicDebugAPI(n),
Public: true,
}, {
Namespace: "web3",
Version: "1.0",
Service: NewPublicWeb3API(n),
Public: true,
},
}
}
| node/node.go | 1 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.007957314141094685,
0.000657453725580126,
0.00016107667761389166,
0.00017286914226133376,
0.0013567673740908504
] |
{
"id": 2,
"code_window": [
"\t\t\tcontinue\n",
"\t\t} else if err != nil {\n",
"\t\t\treturn err\n",
"\t\t}\n",
"\t\tlog.Trace(\"Accepted connection\", \"addr\", conn.RemoteAddr())\n",
"\t\tgo srv.ServeCodec(NewJSONCodec(conn), OptionMethodInvocation|OptionSubscriptions)\n",
"\t}\n",
"}\n",
"\n",
"// DialIPC create a new IPC client that connects to the given endpoint. On Unix it assumes\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tlog.Trace(\"IPC accepted connection\")\n"
],
"file_path": "rpc/ipc.go",
"type": "replace",
"edit_start_line_idx": 36
} | // +build !windows
// This file contains a simple and incomplete implementation of the terminfo
// database. Information was taken from the ncurses manpages term(5) and
// terminfo(5). Currently, only the string capabilities for special keys and for
// functions without parameters are actually used. Colors are still done with
// ANSI escape sequences. Other special features that are not (yet?) supported
// are reading from ~/.terminfo, the TERMINFO_DIRS variable, Berkeley database
// format and extended capabilities.
package termbox
import (
"bytes"
"encoding/binary"
"encoding/hex"
"errors"
"fmt"
"io/ioutil"
"os"
"strings"
)
const (
ti_magic = 0432
ti_header_length = 12
ti_mouse_enter = "\x1b[?1000h\x1b[?1002h\x1b[?1015h\x1b[?1006h"
ti_mouse_leave = "\x1b[?1006l\x1b[?1015l\x1b[?1002l\x1b[?1000l"
)
func load_terminfo() ([]byte, error) {
var data []byte
var err error
term := os.Getenv("TERM")
if term == "" {
return nil, fmt.Errorf("termbox: TERM not set")
}
// The following behaviour follows the one described in terminfo(5) as
// distributed by ncurses.
terminfo := os.Getenv("TERMINFO")
if terminfo != "" {
// if TERMINFO is set, no other directory should be searched
return ti_try_path(terminfo)
}
// next, consider ~/.terminfo
home := os.Getenv("HOME")
if home != "" {
data, err = ti_try_path(home + "/.terminfo")
if err == nil {
return data, nil
}
}
// next, TERMINFO_DIRS
dirs := os.Getenv("TERMINFO_DIRS")
if dirs != "" {
for _, dir := range strings.Split(dirs, ":") {
if dir == "" {
// "" -> "/usr/share/terminfo"
dir = "/usr/share/terminfo"
}
data, err = ti_try_path(dir)
if err == nil {
return data, nil
}
}
}
// fall back to /usr/share/terminfo
return ti_try_path("/usr/share/terminfo")
}
func ti_try_path(path string) (data []byte, err error) {
// load_terminfo already made sure it is set
term := os.Getenv("TERM")
// first try, the typical *nix path
terminfo := path + "/" + term[0:1] + "/" + term
data, err = ioutil.ReadFile(terminfo)
if err == nil {
return
}
// fallback to darwin specific dirs structure
terminfo = path + "/" + hex.EncodeToString([]byte(term[:1])) + "/" + term
data, err = ioutil.ReadFile(terminfo)
return
}
func setup_term_builtin() error {
name := os.Getenv("TERM")
if name == "" {
return errors.New("termbox: TERM environment variable not set")
}
for _, t := range terms {
if t.name == name {
keys = t.keys
funcs = t.funcs
return nil
}
}
compat_table := []struct {
partial string
keys []string
funcs []string
}{
{"xterm", xterm_keys, xterm_funcs},
{"rxvt", rxvt_unicode_keys, rxvt_unicode_funcs},
{"linux", linux_keys, linux_funcs},
{"Eterm", eterm_keys, eterm_funcs},
{"screen", screen_keys, screen_funcs},
// let's assume that 'cygwin' is xterm compatible
{"cygwin", xterm_keys, xterm_funcs},
{"st", xterm_keys, xterm_funcs},
}
// try compatibility variants
for _, it := range compat_table {
if strings.Contains(name, it.partial) {
keys = it.keys
funcs = it.funcs
return nil
}
}
return errors.New("termbox: unsupported terminal")
}
func setup_term() (err error) {
var data []byte
var header [6]int16
var str_offset, table_offset int16
data, err = load_terminfo()
if err != nil {
return setup_term_builtin()
}
rd := bytes.NewReader(data)
// 0: magic number, 1: size of names section, 2: size of boolean section, 3:
// size of numbers section (in integers), 4: size of the strings section (in
// integers), 5: size of the string table
err = binary.Read(rd, binary.LittleEndian, header[:])
if err != nil {
return
}
if (header[1]+header[2])%2 != 0 {
// old quirk to align everything on word boundaries
header[2] += 1
}
str_offset = ti_header_length + header[1] + header[2] + 2*header[3]
table_offset = str_offset + 2*header[4]
keys = make([]string, 0xFFFF-key_min)
for i, _ := range keys {
keys[i], err = ti_read_string(rd, str_offset+2*ti_keys[i], table_offset)
if err != nil {
return
}
}
funcs = make([]string, t_max_funcs)
// the last two entries are reserved for mouse. because the table offset is
// not there, the two entries have to fill in manually
for i, _ := range funcs[:len(funcs)-2] {
funcs[i], err = ti_read_string(rd, str_offset+2*ti_funcs[i], table_offset)
if err != nil {
return
}
}
funcs[t_max_funcs-2] = ti_mouse_enter
funcs[t_max_funcs-1] = ti_mouse_leave
return nil
}
func ti_read_string(rd *bytes.Reader, str_off, table int16) (string, error) {
var off int16
_, err := rd.Seek(int64(str_off), 0)
if err != nil {
return "", err
}
err = binary.Read(rd, binary.LittleEndian, &off)
if err != nil {
return "", err
}
_, err = rd.Seek(int64(table+off), 0)
if err != nil {
return "", err
}
var bs []byte
for {
b, err := rd.ReadByte()
if err != nil {
return "", err
}
if b == byte(0x00) {
break
}
bs = append(bs, b)
}
return string(bs), nil
}
// "Maps" the function constants from termbox.go to the number of the respective
// string capability in the terminfo file. Taken from (ncurses) term.h.
var ti_funcs = []int16{
28, 40, 16, 13, 5, 39, 36, 27, 26, 34, 89, 88,
}
// Same as above for the special keys.
var ti_keys = []int16{
66, 68 /* apparently not a typo; 67 is F10 for whatever reason */, 69, 70,
71, 72, 73, 74, 75, 67, 216, 217, 77, 59, 76, 164, 82, 81, 87, 61, 79, 83,
}
| vendor/github.com/nsf/termbox-go/terminfo.go | 0 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.0004108089779037982,
0.00020516457152552903,
0.00016389779921155423,
0.00017100073455367237,
0.00006879868305986747
] |
{
"id": 2,
"code_window": [
"\t\t\tcontinue\n",
"\t\t} else if err != nil {\n",
"\t\t\treturn err\n",
"\t\t}\n",
"\t\tlog.Trace(\"Accepted connection\", \"addr\", conn.RemoteAddr())\n",
"\t\tgo srv.ServeCodec(NewJSONCodec(conn), OptionMethodInvocation|OptionSubscriptions)\n",
"\t}\n",
"}\n",
"\n",
"// DialIPC create a new IPC client that connects to the given endpoint. On Unix it assumes\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tlog.Trace(\"IPC accepted connection\")\n"
],
"file_path": "rpc/ipc.go",
"type": "replace",
"edit_start_line_idx": 36
} | // Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bloombits
import (
"bytes"
"context"
"errors"
"math"
"sort"
"sync"
"sync/atomic"
"time"
"github.com/ethereum/go-ethereum/common/bitutil"
"github.com/ethereum/go-ethereum/crypto"
)
// bloomIndexes represents the bit indexes inside the bloom filter that belong
// to some key.
type bloomIndexes [3]uint
// calcBloomIndexes returns the bloom filter bit indexes belonging to the given key.
func calcBloomIndexes(b []byte) bloomIndexes {
b = crypto.Keccak256(b)
var idxs bloomIndexes
for i := 0; i < len(idxs); i++ {
idxs[i] = (uint(b[2*i])<<8)&2047 + uint(b[2*i+1])
}
return idxs
}
// partialMatches with a non-nil vector represents a section in which some sub-
// matchers have already found potential matches. Subsequent sub-matchers will
// binary AND their matches with this vector. If vector is nil, it represents a
// section to be processed by the first sub-matcher.
type partialMatches struct {
section uint64
bitset []byte
}
// Retrieval represents a request for retrieval task assignments for a given
// bit with the given number of fetch elements, or a response for such a request.
// It can also have the actual results set to be used as a delivery data struct.
//
// The context and error fields are used by the light client to terminate matching
// early if an error is encountered on some path of the pipeline.
type Retrieval struct {
Bit uint
Sections []uint64
Bitsets [][]byte
Context context.Context
Error error
}
// Matcher is a pipelined system of schedulers and logic matchers which perform
// binary AND/OR operations on the bit-streams, creating a stream of potential
// blocks to inspect for data content.
type Matcher struct {
sectionSize uint64 // Size of the data batches to filter on
filters [][]bloomIndexes // Filter the system is matching for
schedulers map[uint]*scheduler // Retrieval schedulers for loading bloom bits
retrievers chan chan uint // Retriever processes waiting for bit allocations
counters chan chan uint // Retriever processes waiting for task count reports
retrievals chan chan *Retrieval // Retriever processes waiting for task allocations
deliveries chan *Retrieval // Retriever processes waiting for task response deliveries
running uint32 // Atomic flag whether a session is live or not
}
// NewMatcher creates a new pipeline for retrieving bloom bit streams and doing
// address and topic filtering on them. Setting a filter component to `nil` is
// allowed and will result in that filter rule being skipped (OR 0x11...1).
func NewMatcher(sectionSize uint64, filters [][][]byte) *Matcher {
// Create the matcher instance
m := &Matcher{
sectionSize: sectionSize,
schedulers: make(map[uint]*scheduler),
retrievers: make(chan chan uint),
counters: make(chan chan uint),
retrievals: make(chan chan *Retrieval),
deliveries: make(chan *Retrieval),
}
// Calculate the bloom bit indexes for the groups we're interested in
m.filters = nil
for _, filter := range filters {
// Gather the bit indexes of the filter rule, special casing the nil filter
if len(filter) == 0 {
continue
}
bloomBits := make([]bloomIndexes, len(filter))
for i, clause := range filter {
if clause == nil {
bloomBits = nil
break
}
bloomBits[i] = calcBloomIndexes(clause)
}
// Accumulate the filter rules if no nil rule was within
if bloomBits != nil {
m.filters = append(m.filters, bloomBits)
}
}
// For every bit, create a scheduler to load/download the bit vectors
for _, bloomIndexLists := range m.filters {
for _, bloomIndexList := range bloomIndexLists {
for _, bloomIndex := range bloomIndexList {
m.addScheduler(bloomIndex)
}
}
}
return m
}
// addScheduler adds a bit stream retrieval scheduler for the given bit index if
// it has not existed before. If the bit is already selected for filtering, the
// existing scheduler can be used.
func (m *Matcher) addScheduler(idx uint) {
if _, ok := m.schedulers[idx]; ok {
return
}
m.schedulers[idx] = newScheduler(idx)
}
// Start starts the matching process and returns a stream of bloom matches in
// a given range of blocks. If there are no more matches in the range, the result
// channel is closed.
func (m *Matcher) Start(ctx context.Context, begin, end uint64, results chan uint64) (*MatcherSession, error) {
// Make sure we're not creating concurrent sessions
if atomic.SwapUint32(&m.running, 1) == 1 {
return nil, errors.New("matcher already running")
}
defer atomic.StoreUint32(&m.running, 0)
// Initiate a new matching round
session := &MatcherSession{
matcher: m,
quit: make(chan struct{}),
kill: make(chan struct{}),
ctx: ctx,
}
for _, scheduler := range m.schedulers {
scheduler.reset()
}
sink := m.run(begin, end, cap(results), session)
// Read the output from the result sink and deliver to the user
session.pend.Add(1)
go func() {
defer session.pend.Done()
defer close(results)
for {
select {
case <-session.quit:
return
case res, ok := <-sink:
// New match result found
if !ok {
return
}
// Calculate the first and last blocks of the section
sectionStart := res.section * m.sectionSize
first := sectionStart
if begin > first {
first = begin
}
last := sectionStart + m.sectionSize - 1
if end < last {
last = end
}
// Iterate over all the blocks in the section and return the matching ones
for i := first; i <= last; i++ {
// Skip the entire byte if no matches are found inside (and we're processing an entire byte!)
next := res.bitset[(i-sectionStart)/8]
if next == 0 {
if i%8 == 0 {
i += 7
}
continue
}
// Some bit it set, do the actual submatching
if bit := 7 - i%8; next&(1<<bit) != 0 {
select {
case <-session.quit:
return
case results <- i:
}
}
}
}
}
}()
return session, nil
}
// run creates a daisy-chain of sub-matchers, one for the address set and one
// for each topic set, each sub-matcher receiving a section only if the previous
// ones have all found a potential match in one of the blocks of the section,
// then binary AND-ing its own matches and forwarding the result to the next one.
//
// The method starts feeding the section indexes into the first sub-matcher on a
// new goroutine and returns a sink channel receiving the results.
func (m *Matcher) run(begin, end uint64, buffer int, session *MatcherSession) chan *partialMatches {
// Create the source channel and feed section indexes into
source := make(chan *partialMatches, buffer)
session.pend.Add(1)
go func() {
defer session.pend.Done()
defer close(source)
for i := begin / m.sectionSize; i <= end/m.sectionSize; i++ {
select {
case <-session.quit:
return
case source <- &partialMatches{i, bytes.Repeat([]byte{0xff}, int(m.sectionSize/8))}:
}
}
}()
// Assemble the daisy-chained filtering pipeline
next := source
dist := make(chan *request, buffer)
for _, bloom := range m.filters {
next = m.subMatch(next, dist, bloom, session)
}
// Start the request distribution
session.pend.Add(1)
go m.distributor(dist, session)
return next
}
// subMatch creates a sub-matcher that filters for a set of addresses or topics, binary OR-s those matches, then
// binary AND-s the result to the daisy-chain input (source) and forwards it to the daisy-chain output.
// The matches of each address/topic are calculated by fetching the given sections of the three bloom bit indexes belonging to
// that address/topic, and binary AND-ing those vectors together.
func (m *Matcher) subMatch(source chan *partialMatches, dist chan *request, bloom []bloomIndexes, session *MatcherSession) chan *partialMatches {
// Start the concurrent schedulers for each bit required by the bloom filter
sectionSources := make([][3]chan uint64, len(bloom))
sectionSinks := make([][3]chan []byte, len(bloom))
for i, bits := range bloom {
for j, bit := range bits {
sectionSources[i][j] = make(chan uint64, cap(source))
sectionSinks[i][j] = make(chan []byte, cap(source))
m.schedulers[bit].run(sectionSources[i][j], dist, sectionSinks[i][j], session.quit, &session.pend)
}
}
process := make(chan *partialMatches, cap(source)) // entries from source are forwarded here after fetches have been initiated
results := make(chan *partialMatches, cap(source))
session.pend.Add(2)
go func() {
// Tear down the goroutine and terminate all source channels
defer session.pend.Done()
defer close(process)
defer func() {
for _, bloomSources := range sectionSources {
for _, bitSource := range bloomSources {
close(bitSource)
}
}
}()
// Read sections from the source channel and multiplex into all bit-schedulers
for {
select {
case <-session.quit:
return
case subres, ok := <-source:
// New subresult from previous link
if !ok {
return
}
// Multiplex the section index to all bit-schedulers
for _, bloomSources := range sectionSources {
for _, bitSource := range bloomSources {
select {
case <-session.quit:
return
case bitSource <- subres.section:
}
}
}
// Notify the processor that this section will become available
select {
case <-session.quit:
return
case process <- subres:
}
}
}
}()
go func() {
// Tear down the goroutine and terminate the final sink channel
defer session.pend.Done()
defer close(results)
// Read the source notifications and collect the delivered results
for {
select {
case <-session.quit:
return
case subres, ok := <-process:
// Notified of a section being retrieved
if !ok {
return
}
// Gather all the sub-results and merge them together
var orVector []byte
for _, bloomSinks := range sectionSinks {
var andVector []byte
for _, bitSink := range bloomSinks {
var data []byte
select {
case <-session.quit:
return
case data = <-bitSink:
}
if andVector == nil {
andVector = make([]byte, int(m.sectionSize/8))
copy(andVector, data)
} else {
bitutil.ANDBytes(andVector, andVector, data)
}
}
if orVector == nil {
orVector = andVector
} else {
bitutil.ORBytes(orVector, orVector, andVector)
}
}
if orVector == nil {
orVector = make([]byte, int(m.sectionSize/8))
}
if subres.bitset != nil {
bitutil.ANDBytes(orVector, orVector, subres.bitset)
}
if bitutil.TestBytes(orVector) {
select {
case <-session.quit:
return
case results <- &partialMatches{subres.section, orVector}:
}
}
}
}
}()
return results
}
// distributor receives requests from the schedulers and queues them into a set
// of pending requests, which are assigned to retrievers wanting to fulfil them.
func (m *Matcher) distributor(dist chan *request, session *MatcherSession) {
defer session.pend.Done()
var (
requests = make(map[uint][]uint64) // Per-bit list of section requests, ordered by section number
unallocs = make(map[uint]struct{}) // Bits with pending requests but not allocated to any retriever
retrievers chan chan uint // Waiting retrievers (toggled to nil if unallocs is empty)
)
var (
allocs int // Number of active allocations to handle graceful shutdown requests
shutdown = session.quit // Shutdown request channel, will gracefully wait for pending requests
)
	// assign is a helper method to try to assign a pending bit to an actively
// listening servicer, or schedule it up for later when one arrives.
assign := func(bit uint) {
select {
case fetcher := <-m.retrievers:
allocs++
fetcher <- bit
default:
// No retrievers active, start listening for new ones
retrievers = m.retrievers
unallocs[bit] = struct{}{}
}
}
for {
select {
case <-shutdown:
// Graceful shutdown requested, wait until all pending requests are honoured
if allocs == 0 {
return
}
shutdown = nil
case <-session.kill:
// Pending requests not honoured in time, hard terminate
return
case req := <-dist:
// New retrieval request arrived to be distributed to some fetcher process
queue := requests[req.bit]
index := sort.Search(len(queue), func(i int) bool { return queue[i] >= req.section })
requests[req.bit] = append(queue[:index], append([]uint64{req.section}, queue[index:]...)...)
// If it's a new bit and we have waiting fetchers, allocate to them
if len(queue) == 0 {
assign(req.bit)
}
case fetcher := <-retrievers:
// New retriever arrived, find the lowest section-ed bit to assign
bit, best := uint(0), uint64(math.MaxUint64)
for idx := range unallocs {
if requests[idx][0] < best {
bit, best = idx, requests[idx][0]
}
}
// Stop tracking this bit (and alloc notifications if no more work is available)
delete(unallocs, bit)
if len(unallocs) == 0 {
retrievers = nil
}
allocs++
fetcher <- bit
case fetcher := <-m.counters:
// New task count request arrives, return number of items
fetcher <- uint(len(requests[<-fetcher]))
case fetcher := <-m.retrievals:
// New fetcher waiting for tasks to retrieve, assign
task := <-fetcher
if want := len(task.Sections); want >= len(requests[task.Bit]) {
task.Sections = requests[task.Bit]
delete(requests, task.Bit)
} else {
task.Sections = append(task.Sections[:0], requests[task.Bit][:want]...)
requests[task.Bit] = append(requests[task.Bit][:0], requests[task.Bit][want:]...)
}
fetcher <- task
// If anything was left unallocated, try to assign to someone else
if len(requests[task.Bit]) > 0 {
assign(task.Bit)
}
case result := <-m.deliveries:
// New retrieval task response from fetcher, split out missing sections and
// deliver complete ones
var (
sections = make([]uint64, 0, len(result.Sections))
bitsets = make([][]byte, 0, len(result.Bitsets))
missing = make([]uint64, 0, len(result.Sections))
)
for i, bitset := range result.Bitsets {
if len(bitset) == 0 {
missing = append(missing, result.Sections[i])
continue
}
sections = append(sections, result.Sections[i])
bitsets = append(bitsets, bitset)
}
m.schedulers[result.Bit].deliver(sections, bitsets)
allocs--
// Reschedule missing sections and allocate bit if newly available
if len(missing) > 0 {
queue := requests[result.Bit]
for _, section := range missing {
index := sort.Search(len(queue), func(i int) bool { return queue[i] >= section })
queue = append(queue[:index], append([]uint64{section}, queue[index:]...)...)
}
requests[result.Bit] = queue
if len(queue) == len(missing) {
assign(result.Bit)
}
}
// If we're in the process of shutting down, terminate
if allocs == 0 && shutdown == nil {
return
}
}
}
}
// MatcherSession is returned by a started matcher to be used as a terminator
// for the actively running matching operation.
type MatcherSession struct {
matcher *Matcher
closer sync.Once // Sync object to ensure we only ever close once
quit chan struct{} // Quit channel to request pipeline termination
kill chan struct{} // Term channel to signal non-graceful forced shutdown
ctx context.Context // Context used by the light client to abort filtering
err atomic.Value // Global error to track retrieval failures deep in the chain
pend sync.WaitGroup
}
// Close stops the matching process and waits for all subprocesses to terminate
// before returning. The timeout may be used for graceful shutdown, allowing the
// currently running retrievals to complete before this time.
func (s *MatcherSession) Close() {
s.closer.Do(func() {
// Signal termination and wait for all goroutines to tear down
close(s.quit)
time.AfterFunc(time.Second, func() { close(s.kill) })
s.pend.Wait()
})
}
// Error returns any failure encountered during the matching session.
func (s *MatcherSession) Error() error {
if err := s.err.Load(); err != nil {
return err.(error)
}
return nil
}
// AllocateRetrieval assigns a bloom bit index to a client process that can either
// immediately request and fetch the section contents assigned to this bit or wait
// a little while for more sections to be requested.
func (s *MatcherSession) AllocateRetrieval() (uint, bool) {
fetcher := make(chan uint)
select {
case <-s.quit:
return 0, false
case s.matcher.retrievers <- fetcher:
bit, ok := <-fetcher
return bit, ok
}
}
// PendingSections returns the number of pending section retrievals belonging to
// the given bloom bit index.
func (s *MatcherSession) PendingSections(bit uint) int {
fetcher := make(chan uint)
select {
case <-s.quit:
return 0
case s.matcher.counters <- fetcher:
fetcher <- bit
return int(<-fetcher)
}
}
// AllocateSections assigns all or part of an already allocated bit-task queue
// to the requesting process.
func (s *MatcherSession) AllocateSections(bit uint, count int) []uint64 {
fetcher := make(chan *Retrieval)
select {
case <-s.quit:
return nil
case s.matcher.retrievals <- fetcher:
task := &Retrieval{
Bit: bit,
Sections: make([]uint64, count),
}
fetcher <- task
return (<-fetcher).Sections
}
}
// DeliverSections delivers a batch of section bit-vectors for a specific bloom
// bit index to be injected into the processing pipeline.
func (s *MatcherSession) DeliverSections(bit uint, sections []uint64, bitsets [][]byte) {
select {
case <-s.kill:
return
case s.matcher.deliveries <- &Retrieval{Bit: bit, Sections: sections, Bitsets: bitsets}:
}
}
// Multiplex polls the matcher session for retrieval tasks and multiplexes it into
// the requested retrieval queue to be serviced together with other sessions.
//
// This method will block for the lifetime of the session. Even after termination
// of the session, any request in-flight need to be responded to! Empty responses
// are fine though in that case.
func (s *MatcherSession) Multiplex(batch int, wait time.Duration, mux chan chan *Retrieval) {
for {
// Allocate a new bloom bit index to retrieve data for, stopping when done
bit, ok := s.AllocateRetrieval()
if !ok {
return
}
// Bit allocated, throttle a bit if we're below our batch limit
if s.PendingSections(bit) < batch {
select {
case <-s.quit:
// Session terminating, we can't meaningfully service, abort
s.AllocateSections(bit, 0)
s.DeliverSections(bit, []uint64{}, [][]byte{})
return
case <-time.After(wait):
// Throttling up, fetch whatever's available
}
}
// Allocate as much as we can handle and request servicing
sections := s.AllocateSections(bit, batch)
request := make(chan *Retrieval)
select {
case <-s.quit:
// Session terminating, we can't meaningfully service, abort
s.DeliverSections(bit, sections, make([][]byte, len(sections)))
return
case mux <- request:
// Retrieval accepted, something must arrive before we're aborting
request <- &Retrieval{Bit: bit, Sections: sections, Context: s.ctx}
result := <-request
if result.Error != nil {
s.err.Store(result.Error)
s.Close()
}
s.DeliverSections(result.Bit, result.Sections, result.Bitsets)
}
}
}
| core/bloombits/matcher.go | 0 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.00018809453467838466,
0.00017152263899333775,
0.00016183457046281546,
0.00017140517593361437,
0.000004328073828219203
] |
{
"id": 2,
"code_window": [
"\t\t\tcontinue\n",
"\t\t} else if err != nil {\n",
"\t\t\treturn err\n",
"\t\t}\n",
"\t\tlog.Trace(\"Accepted connection\", \"addr\", conn.RemoteAddr())\n",
"\t\tgo srv.ServeCodec(NewJSONCodec(conn), OptionMethodInvocation|OptionSubscriptions)\n",
"\t}\n",
"}\n",
"\n",
"// DialIPC create a new IPC client that connects to the given endpoint. On Unix it assumes\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tlog.Trace(\"IPC accepted connection\")\n"
],
"file_path": "rpc/ipc.go",
"type": "replace",
"edit_start_line_idx": 36
} | - Assert(slice, Contains, item)
- Parallel test support
| vendor/gopkg.in/check.v1/TODO | 0 | https://github.com/ethereum/go-ethereum/commit/af8daf91a659c05a9c6424752d050f2beca0ee29 | [
0.00017000803200062364,
0.00017000803200062364,
0.00017000803200062364,
0.00017000803200062364,
0
] |
{
"id": 0,
"code_window": [
"\n",
"func (m *Master) init(cloud cloudprovider.Interface, podInfoGetter client.PodInfoGetter) {\n",
"\tm.random = rand.New(rand.NewSource(int64(time.Now().Nanosecond())))\n",
"\tpodCache := NewPodCache(podInfoGetter, m.podRegistry, time.Second*30)\n",
"\tgo podCache.Loop()\n",
"\ts := scheduler.MakeFirstFitScheduler(m.podRegistry, m.random)\n",
"\tm.storage = map[string]apiserver.RESTStorage{\n",
"\t\t\"pods\": registry.MakePodRegistryStorage(m.podRegistry, podInfoGetter, s, m.minionRegistry, cloud, podCache),\n",
"\t\t\"replicationControllers\": registry.MakeControllerRegistryStorage(m.controllerRegistry, m.podRegistry),\n",
"\t\t\"services\": registry.MakeServiceRegistryStorage(m.serviceRegistry, cloud, m.minionRegistry),\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\ts := scheduler.NewFirstFitScheduler(m.podRegistry, m.random)\n"
],
"file_path": "pkg/master/master.go",
"type": "replace",
"edit_start_line_idx": 86
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package master
import (
"math/rand"
"net/http"
"time"
"github.com/GoogleCloudPlatform/kubernetes/pkg/apiserver"
"github.com/GoogleCloudPlatform/kubernetes/pkg/client"
"github.com/GoogleCloudPlatform/kubernetes/pkg/cloudprovider"
"github.com/GoogleCloudPlatform/kubernetes/pkg/registry"
"github.com/GoogleCloudPlatform/kubernetes/pkg/scheduler"
"github.com/GoogleCloudPlatform/kubernetes/pkg/util"
"github.com/coreos/go-etcd/etcd"
"github.com/golang/glog"
)
// Master contains state for a Kubernetes cluster master/api server.
type Master struct {
podRegistry registry.PodRegistry
controllerRegistry registry.ControllerRegistry
serviceRegistry registry.ServiceRegistry
minionRegistry registry.MinionRegistry
// TODO: don't reuse non-threadsafe objects.
random *rand.Rand
storage map[string]apiserver.RESTStorage
}
// Returns a memory (not etcd) backed apiserver.
func NewMemoryServer(minions []string, podInfoGetter client.PodInfoGetter, cloud cloudprovider.Interface) *Master {
m := &Master{
podRegistry: registry.MakeMemoryRegistry(),
controllerRegistry: registry.MakeMemoryRegistry(),
serviceRegistry: registry.MakeMemoryRegistry(),
minionRegistry: registry.MakeMinionRegistry(minions),
}
m.init(cloud, podInfoGetter)
return m
}
// Returns a new apiserver.
func New(etcdServers, minions []string, podInfoGetter client.PodInfoGetter, cloud cloudprovider.Interface, minionRegexp string) *Master {
etcdClient := etcd.NewClient(etcdServers)
minionRegistry := minionRegistryMaker(minions, cloud, minionRegexp)
m := &Master{
podRegistry: registry.MakeEtcdRegistry(etcdClient, minionRegistry),
controllerRegistry: registry.MakeEtcdRegistry(etcdClient, minionRegistry),
serviceRegistry: registry.MakeEtcdRegistry(etcdClient, minionRegistry),
minionRegistry: minionRegistry,
}
m.init(cloud, podInfoGetter)
return m
}
func minionRegistryMaker(minions []string, cloud cloudprovider.Interface, minionRegexp string) registry.MinionRegistry {
if cloud != nil && len(minionRegexp) > 0 {
minionRegistry, err := registry.MakeCloudMinionRegistry(cloud, minionRegexp)
if err != nil {
glog.Errorf("Failed to initalize cloud minion registry reverting to static registry (%#v)", err)
}
return minionRegistry
}
return registry.MakeMinionRegistry(minions)
}
func (m *Master) init(cloud cloudprovider.Interface, podInfoGetter client.PodInfoGetter) {
m.random = rand.New(rand.NewSource(int64(time.Now().Nanosecond())))
podCache := NewPodCache(podInfoGetter, m.podRegistry, time.Second*30)
go podCache.Loop()
s := scheduler.MakeFirstFitScheduler(m.podRegistry, m.random)
m.storage = map[string]apiserver.RESTStorage{
"pods": registry.MakePodRegistryStorage(m.podRegistry, podInfoGetter, s, m.minionRegistry, cloud, podCache),
"replicationControllers": registry.MakeControllerRegistryStorage(m.controllerRegistry, m.podRegistry),
"services": registry.MakeServiceRegistryStorage(m.serviceRegistry, cloud, m.minionRegistry),
"minions": registry.MakeMinionRegistryStorage(m.minionRegistry),
}
}
// Runs master. Never returns.
func (m *Master) Run(myAddress, apiPrefix string) error {
endpoints := registry.MakeEndpointController(m.serviceRegistry, m.podRegistry)
go util.Forever(func() { endpoints.SyncServiceEndpoints() }, time.Second*10)
s := &http.Server{
Addr: myAddress,
Handler: apiserver.New(m.storage, apiPrefix),
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
MaxHeaderBytes: 1 << 20,
}
return s.ListenAndServe()
}
// Instead of calling Run, call ConstructHandler to get a handler for your own
// server. Intended for testing. Only call once.
func (m *Master) ConstructHandler(apiPrefix string) http.Handler {
endpoints := registry.MakeEndpointController(m.serviceRegistry, m.podRegistry)
go util.Forever(func() { endpoints.SyncServiceEndpoints() }, time.Second*10)
return apiserver.New(m.storage, apiPrefix)
}
| pkg/master/master.go | 1 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.998292863368988,
0.3311191499233246,
0.00016616897482890636,
0.0035439624916762114,
0.42177996039390564
] |
{
"id": 0,
"code_window": [
"\n",
"func (m *Master) init(cloud cloudprovider.Interface, podInfoGetter client.PodInfoGetter) {\n",
"\tm.random = rand.New(rand.NewSource(int64(time.Now().Nanosecond())))\n",
"\tpodCache := NewPodCache(podInfoGetter, m.podRegistry, time.Second*30)\n",
"\tgo podCache.Loop()\n",
"\ts := scheduler.MakeFirstFitScheduler(m.podRegistry, m.random)\n",
"\tm.storage = map[string]apiserver.RESTStorage{\n",
"\t\t\"pods\": registry.MakePodRegistryStorage(m.podRegistry, podInfoGetter, s, m.minionRegistry, cloud, podCache),\n",
"\t\t\"replicationControllers\": registry.MakeControllerRegistryStorage(m.controllerRegistry, m.podRegistry),\n",
"\t\t\"services\": registry.MakeServiceRegistryStorage(m.serviceRegistry, cloud, m.minionRegistry),\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\ts := scheduler.NewFirstFitScheduler(m.podRegistry, m.random)\n"
],
"file_path": "pkg/master/master.go",
"type": "replace",
"edit_start_line_idx": 86
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"flag"
"fmt"
"io/ioutil"
"net/url"
"os"
"strconv"
"strings"
"time"
kube_client "github.com/GoogleCloudPlatform/kubernetes/pkg/client"
"github.com/GoogleCloudPlatform/kubernetes/pkg/kubecfg"
"github.com/GoogleCloudPlatform/kubernetes/pkg/util"
"github.com/golang/glog"
)
// AppVersion is the current version of kubecfg.
const AppVersion = "0.1"
// The flag package provides a default help printer via -h switch
var (
versionFlag = flag.Bool("V", false, "Print the version number.")
httpServer = flag.String("h", "", "The host to connect to.")
config = flag.String("c", "", "Path to the config file.")
selector = flag.String("l", "", "Selector (label query) to use for listing")
updatePeriod = flag.Duration("u", 60*time.Second, "Update interarrival period")
portSpec = flag.String("p", "", "The port spec, comma-separated list of <external>:<internal>,...")
servicePort = flag.Int("s", -1, "If positive, create and run a corresponding service on this port, only used with 'run'")
authConfig = flag.String("auth", os.Getenv("HOME")+"/.kubernetes_auth", "Path to the auth info file. If missing, prompt the user. Only used if doing https.")
json = flag.Bool("json", false, "If true, print raw JSON for responses")
yaml = flag.Bool("yaml", false, "If true, print raw YAML for responses")
verbose = flag.Bool("verbose", false, "If true, print extra information")
proxy = flag.Bool("proxy", false, "If true, run a proxy to the api server")
www = flag.String("www", "", "If -proxy is true, use this directory to serve static files")
)
func usage() {
fmt.Fprint(os.Stderr, `usage: kubecfg -h [-c config/file.json] [-p :,..., :] <method>
Kubernetes REST API:
kubecfg [OPTIONS] get|list|create|delete|update <url>
Manage replication controllers:
kubecfg [OPTIONS] stop|rm|rollingupdate <controller>
kubecfg [OPTIONS] run <image> <replicas> <controller>
kubecfg [OPTIONS] resize <controller> <replicas>
Options:
`)
flag.PrintDefaults()
}
// Reads & parses config file. On error, calls glog.Fatal().
func readConfig(storage string) []byte {
if len(*config) == 0 {
glog.Fatal("Need config file (-c)")
}
data, err := ioutil.ReadFile(*config)
if err != nil {
glog.Fatalf("Unable to read %v: %v\n", *config, err)
}
data, err = kubecfg.ToWireFormat(data, storage)
if err != nil {
glog.Fatalf("Error parsing %v as an object for %v: %v\n", *config, storage, err)
}
if *verbose {
glog.Infof("Parsed config file successfully; sending:\n%v\n", string(data))
}
return data
}
// kubecfg command-line tool.
func main() {
flag.Usage = func() {
usage()
}
flag.Parse() // Scan the arguments list
util.InitLogs()
defer util.FlushLogs()
if *versionFlag {
fmt.Println("Version:", AppVersion)
os.Exit(0)
}
secure := true
var masterServer string
if len(*httpServer) > 0 {
masterServer = *httpServer
} else if len(os.Getenv("KUBERNETES_MASTER")) > 0 {
masterServer = os.Getenv("KUBERNETES_MASTER")
} else {
masterServer = "http://localhost:8080"
}
parsedURL, err := url.Parse(masterServer)
if err != nil {
glog.Fatalf("Unable to parse %v as a URL\n", err)
}
if parsedURL.Scheme != "" && parsedURL.Scheme != "https" {
secure = false
}
var auth *kube_client.AuthInfo
if secure {
auth, err = kubecfg.LoadAuthInfo(*authConfig)
if err != nil {
glog.Fatalf("Error loading auth: %v", err)
}
}
if *proxy {
glog.Info("Starting to serve on localhost:8001")
server := kubecfg.NewProxyServer(*www, masterServer, auth)
glog.Fatal(server.Serve())
}
if len(flag.Args()) < 1 {
usage()
os.Exit(1)
}
method := flag.Arg(0)
client := kube_client.New(masterServer, auth)
matchFound := executeAPIRequest(method, client) || executeControllerRequest(method, client)
if matchFound == false {
glog.Fatalf("Unknown command %s", method)
}
}
// Attempts to execute an API request
func executeAPIRequest(method string, s *kube_client.Client) bool {
parseStorage := func() string {
if len(flag.Args()) != 2 {
glog.Fatal("usage: kubecfg [OPTIONS] get|list|create|update|delete <url>")
}
return strings.Trim(flag.Arg(1), "/")
}
verb := ""
switch method {
case "get", "list":
verb = "GET"
case "delete":
verb = "DELETE"
case "create":
verb = "POST"
case "update":
verb = "PUT"
default:
return false
}
r := s.Verb(verb).
Path(parseStorage()).
ParseSelector(*selector)
if method == "create" || method == "update" {
r.Body(readConfig(parseStorage()))
}
result := r.Do()
obj, err := result.Get()
if err != nil {
glog.Fatalf("Got request error: %v\n", err)
return false
}
var printer kubecfg.ResourcePrinter
if *json {
printer = &kubecfg.IdentityPrinter{}
} else if *yaml {
printer = &kubecfg.YAMLPrinter{}
} else {
printer = &kubecfg.HumanReadablePrinter{}
}
if err = printer.PrintObj(obj, os.Stdout); err != nil {
body, _ := result.Raw()
glog.Fatalf("Failed to print: %v\nRaw received object:\n%#v\n\nBody received: %v", err, obj, string(body))
}
fmt.Print("\n")
return true
}
// Attempts to execute a replicationController request
func executeControllerRequest(method string, c *kube_client.Client) bool {
parseController := func() string {
if len(flag.Args()) != 2 {
glog.Fatal("usage: kubecfg [OPTIONS] stop|rm|rollingupdate <controller>")
}
return flag.Arg(1)
}
var err error
switch method {
case "stop":
err = kubecfg.StopController(parseController(), c)
case "rm":
err = kubecfg.DeleteController(parseController(), c)
case "rollingupdate":
err = kubecfg.Update(parseController(), c, *updatePeriod)
case "run":
if len(flag.Args()) != 4 {
glog.Fatal("usage: kubecfg [OPTIONS] run <image> <replicas> <controller>")
}
image := flag.Arg(1)
replicas, parseErr := strconv.Atoi(flag.Arg(2))
name := flag.Arg(3)
if parseErr != nil {
glog.Fatalf("Error parsing replicas: %v", parseErr)
}
err = kubecfg.RunController(image, name, replicas, c, *portSpec, *servicePort)
case "resize":
args := flag.Args()
if len(args) < 3 {
glog.Fatal("usage: kubecfg resize <controller> <replicas>")
}
name := args[1]
replicas, parseErr := strconv.Atoi(args[2])
if parseErr != nil {
glog.Fatalf("Error parsing replicas: %v", parseErr)
}
err = kubecfg.ResizeController(name, replicas, c)
default:
return false
}
if err != nil {
glog.Fatalf("Error: %v", err)
}
return true
}
| cmd/kubecfg/kubecfg.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.006450284272432327,
0.00041448662523180246,
0.00016367259377148002,
0.00016951901488937438,
0.0012072151293978095
] |
{
"id": 0,
"code_window": [
"\n",
"func (m *Master) init(cloud cloudprovider.Interface, podInfoGetter client.PodInfoGetter) {\n",
"\tm.random = rand.New(rand.NewSource(int64(time.Now().Nanosecond())))\n",
"\tpodCache := NewPodCache(podInfoGetter, m.podRegistry, time.Second*30)\n",
"\tgo podCache.Loop()\n",
"\ts := scheduler.MakeFirstFitScheduler(m.podRegistry, m.random)\n",
"\tm.storage = map[string]apiserver.RESTStorage{\n",
"\t\t\"pods\": registry.MakePodRegistryStorage(m.podRegistry, podInfoGetter, s, m.minionRegistry, cloud, podCache),\n",
"\t\t\"replicationControllers\": registry.MakeControllerRegistryStorage(m.controllerRegistry, m.podRegistry),\n",
"\t\t\"services\": registry.MakeServiceRegistryStorage(m.serviceRegistry, cloud, m.minionRegistry),\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\ts := scheduler.NewFirstFitScheduler(m.podRegistry, m.random)\n"
],
"file_path": "pkg/master/master.go",
"type": "replace",
"edit_start_line_idx": 86
} | defaultcc: [email protected]
| third_party/src/code.google.com/p/google-api-go-client/lib/codereview/codereview.cfg | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.00016875183791853487,
0.00016875183791853487,
0.00016875183791853487,
0.00016875183791853487,
0
] |
{
"id": 0,
"code_window": [
"\n",
"func (m *Master) init(cloud cloudprovider.Interface, podInfoGetter client.PodInfoGetter) {\n",
"\tm.random = rand.New(rand.NewSource(int64(time.Now().Nanosecond())))\n",
"\tpodCache := NewPodCache(podInfoGetter, m.podRegistry, time.Second*30)\n",
"\tgo podCache.Loop()\n",
"\ts := scheduler.MakeFirstFitScheduler(m.podRegistry, m.random)\n",
"\tm.storage = map[string]apiserver.RESTStorage{\n",
"\t\t\"pods\": registry.MakePodRegistryStorage(m.podRegistry, podInfoGetter, s, m.minionRegistry, cloud, podCache),\n",
"\t\t\"replicationControllers\": registry.MakeControllerRegistryStorage(m.controllerRegistry, m.podRegistry),\n",
"\t\t\"services\": registry.MakeServiceRegistryStorage(m.serviceRegistry, cloud, m.minionRegistry),\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\ts := scheduler.NewFirstFitScheduler(m.podRegistry, m.random)\n"
],
"file_path": "pkg/master/master.go",
"type": "replace",
"edit_start_line_idx": 86
} | // A set of packages that provide many tools for testifying that your code will behave as you intend.
//
// testify contains the following packages:
//
// The assert package provides a comprehensive set of assertion functions that tie in to the Go testing system.
//
// The http package contains tools to make it easier to test http activity using the Go testing system.
//
// The mock package provides a system by which it is possible to mock your objects and verify calls are happening as expected.
//
// The suite package provides a basic structure for using structs as testing suites, and methods on those structs as tests. It includes setup/teardown functionality in the way of interfaces.
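//
// A minimal, illustrative example of the assert package (a sketch, not code
// taken from this repository):
//
//	func TestSomething(t *testing.T) {
//		assert.Equal(t, 123, 123, "they should be equal")
//	}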
package testify
import (
_ "github.com/stretchr/testify/assert"
_ "github.com/stretchr/testify/http"
_ "github.com/stretchr/testify/mock"
)
| third_party/src/github.com/stretchr/testify/doc.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.0001749347138684243,
0.0001724486064631492,
0.00016996248450595886,
0.0001724486064631492,
0.0000024861146812327206
] |
{
"id": 1,
"code_window": [
"\t// TODO: *rand.Rand is *not* threadsafe\n",
"\trandom *rand.Rand\n",
"}\n",
"\n",
"func MakeFirstFitScheduler(podLister PodLister, random *rand.Rand) Scheduler {\n",
"\treturn &FirstFitScheduler{\n",
"\t\tpodLister: podLister,\n",
"\t\trandom: random,\n",
"\t}\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"func NewFirstFitScheduler(podLister PodLister, random *rand.Rand) Scheduler {\n"
],
"file_path": "pkg/scheduler/firstfit.go",
"type": "replace",
"edit_start_line_idx": 32
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package scheduler
import (
"math/rand"
"testing"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
)
func TestFirstFitSchedulerNothingScheduled(t *testing.T) {
fakeRegistry := FakePodLister{}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(&fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(api.Pod{}, "m3")
}
func TestFirstFitSchedulerFirstScheduled(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080), "m3")
}
func TestFirstFitSchedulerFirstScheduledComplicated(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 80, 8080),
makePod("m2", 8081, 8082, 8083),
makePod("m3", 80, 443, 8085),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080, 8081), "m3")
}
func TestFirstFitSchedulerFirstScheduledImpossible(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
makePod("m2", 8081),
makePod("m3", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectFailure(makePod("", 8080, 8081))
}
| pkg/scheduler/firstfit_test.go | 1 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.9988893866539001,
0.6035350561141968,
0.00017486483557149768,
0.7493952512741089,
0.4238455891609192
] |
{
"id": 1,
"code_window": [
"\t// TODO: *rand.Rand is *not* threadsafe\n",
"\trandom *rand.Rand\n",
"}\n",
"\n",
"func MakeFirstFitScheduler(podLister PodLister, random *rand.Rand) Scheduler {\n",
"\treturn &FirstFitScheduler{\n",
"\t\tpodLister: podLister,\n",
"\t\trandom: random,\n",
"\t}\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"func NewFirstFitScheduler(podLister PodLister, random *rand.Rand) Scheduler {\n"
],
"file_path": "pkg/scheduler/firstfit.go",
"type": "replace",
"edit_start_line_idx": 32
} | #!/bin/bash
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script sets up a go workspace locally and builds all go components.
# You can 'source' this file if you want to set up GOPATH in your local shell.
cd $(dirname "${BASH_SOURCE}")/../.. >/dev/null
readonly KUBE_REPO_ROOT="${PWD}"
readonly KUBE_TARGET="${KUBE_REPO_ROOT}/output/build"
readonly KUBE_GO_PACKAGE=github.com/GoogleCloudPlatform/kubernetes
mkdir -p "${KUBE_TARGET}"
if [[ ! -f "/kube-build-image" ]]; then
echo "WARNING: This script should be run in the kube-build conrtainer image!" >&2
fi
function make-binaries() {
readonly BINARIES="
proxy
integration
apiserver
controller-manager
kubelet
kubecfg"
ARCH_TARGET="${KUBE_TARGET}/${GOOS}/${GOARCH}"
mkdir -p "${ARCH_TARGET}"
function make-binary() {
echo "+++ Building $1 for ${GOOS}/${GOARCH}"
go build \
-o "${ARCH_TARGET}/$1" \
github.com/GoogleCloudPlatform/kubernetes/cmd/$1
}
if [[ -n $1 ]]; then
make-binary $1
exit 0
fi
for b in ${BINARIES}; do
make-binary $b
done
}
| build/build-image/common.sh | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.00017680058954283595,
0.00017323919746559113,
0.0001709444768493995,
0.0001723021996440366,
0.0000023800760118319886
] |
{
"id": 1,
"code_window": [
"\t// TODO: *rand.Rand is *not* threadsafe\n",
"\trandom *rand.Rand\n",
"}\n",
"\n",
"func MakeFirstFitScheduler(podLister PodLister, random *rand.Rand) Scheduler {\n",
"\treturn &FirstFitScheduler{\n",
"\t\tpodLister: podLister,\n",
"\t\trandom: random,\n",
"\t}\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"func NewFirstFitScheduler(podLister PodLister, random *rand.Rand) Scheduler {\n"
],
"file_path": "pkg/scheduler/firstfit.go",
"type": "replace",
"edit_start_line_idx": 32
} | // Package customsearch provides access to the CustomSearch API.
//
// See https://developers.google.com/custom-search/v1/using_rest
//
// Usage example:
//
// import "code.google.com/p/google-api-go-client/customsearch/v1"
// ...
// customsearchService, err := customsearch.New(oauthHttpClient)
package customsearch
import (
"bytes"
"code.google.com/p/google-api-go-client/googleapi"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"strings"
)
// Always reference these packages, just in case the auto-generated code
// below doesn't.
var _ = bytes.NewBuffer
var _ = strconv.Itoa
var _ = fmt.Sprintf
var _ = json.NewDecoder
var _ = io.Copy
var _ = url.Parse
var _ = googleapi.Version
var _ = errors.New
var _ = strings.Replace
const apiId = "customsearch:v1"
const apiName = "customsearch"
const apiVersion = "v1"
const basePath = "https://www.googleapis.com/customsearch/"
func New(client *http.Client) (*Service, error) {
if client == nil {
return nil, errors.New("client is nil")
}
s := &Service{client: client, BasePath: basePath}
s.Cse = NewCseService(s)
return s, nil
}
type Service struct {
client *http.Client
BasePath string // API endpoint base URL
Cse *CseService
}
func NewCseService(s *Service) *CseService {
rs := &CseService{s: s}
return rs
}
type CseService struct {
s *Service
}
type Context struct {
Facets [][]*ContextFacetsItem `json:"facets,omitempty"`
Title string `json:"title,omitempty"`
}
type ContextFacetsItem struct {
Anchor string `json:"anchor,omitempty"`
Label string `json:"label,omitempty"`
Label_with_op string `json:"label_with_op,omitempty"`
}
type Promotion struct {
BodyLines []*PromotionBodyLines `json:"bodyLines,omitempty"`
DisplayLink string `json:"displayLink,omitempty"`
HtmlTitle string `json:"htmlTitle,omitempty"`
Image *PromotionImage `json:"image,omitempty"`
Link string `json:"link,omitempty"`
Title string `json:"title,omitempty"`
}
type PromotionBodyLines struct {
HtmlTitle string `json:"htmlTitle,omitempty"`
Link string `json:"link,omitempty"`
Title string `json:"title,omitempty"`
Url string `json:"url,omitempty"`
}
type PromotionImage struct {
Height int64 `json:"height,omitempty"`
Source string `json:"source,omitempty"`
Width int64 `json:"width,omitempty"`
}
type Query struct {
Count int64 `json:"count,omitempty"`
Cr string `json:"cr,omitempty"`
Cref string `json:"cref,omitempty"`
Cx string `json:"cx,omitempty"`
DateRestrict string `json:"dateRestrict,omitempty"`
DisableCnTwTranslation string `json:"disableCnTwTranslation,omitempty"`
ExactTerms string `json:"exactTerms,omitempty"`
ExcludeTerms string `json:"excludeTerms,omitempty"`
FileType string `json:"fileType,omitempty"`
Filter string `json:"filter,omitempty"`
Gl string `json:"gl,omitempty"`
GoogleHost string `json:"googleHost,omitempty"`
HighRange string `json:"highRange,omitempty"`
Hl string `json:"hl,omitempty"`
Hq string `json:"hq,omitempty"`
ImgColorType string `json:"imgColorType,omitempty"`
ImgDominantColor string `json:"imgDominantColor,omitempty"`
ImgSize string `json:"imgSize,omitempty"`
ImgType string `json:"imgType,omitempty"`
InputEncoding string `json:"inputEncoding,omitempty"`
Language string `json:"language,omitempty"`
LinkSite string `json:"linkSite,omitempty"`
LowRange string `json:"lowRange,omitempty"`
OrTerms string `json:"orTerms,omitempty"`
OutputEncoding string `json:"outputEncoding,omitempty"`
RelatedSite string `json:"relatedSite,omitempty"`
Rights string `json:"rights,omitempty"`
Safe string `json:"safe,omitempty"`
SearchTerms string `json:"searchTerms,omitempty"`
SearchType string `json:"searchType,omitempty"`
SiteSearch string `json:"siteSearch,omitempty"`
SiteSearchFilter string `json:"siteSearchFilter,omitempty"`
Sort string `json:"sort,omitempty"`
StartIndex int64 `json:"startIndex,omitempty"`
StartPage int64 `json:"startPage,omitempty"`
Title string `json:"title,omitempty"`
TotalResults int64 `json:"totalResults,omitempty,string"`
}
type Result struct {
CacheId string `json:"cacheId,omitempty"`
DisplayLink string `json:"displayLink,omitempty"`
FileFormat string `json:"fileFormat,omitempty"`
FormattedUrl string `json:"formattedUrl,omitempty"`
HtmlFormattedUrl string `json:"htmlFormattedUrl,omitempty"`
HtmlSnippet string `json:"htmlSnippet,omitempty"`
HtmlTitle string `json:"htmlTitle,omitempty"`
Image *ResultImage `json:"image,omitempty"`
Kind string `json:"kind,omitempty"`
Labels []*ResultLabels `json:"labels,omitempty"`
Link string `json:"link,omitempty"`
Mime string `json:"mime,omitempty"`
Pagemap *ResultPagemap `json:"pagemap,omitempty"`
Snippet string `json:"snippet,omitempty"`
Title string `json:"title,omitempty"`
}
type ResultImage struct {
ByteSize int64 `json:"byteSize,omitempty"`
ContextLink string `json:"contextLink,omitempty"`
Height int64 `json:"height,omitempty"`
ThumbnailHeight int64 `json:"thumbnailHeight,omitempty"`
ThumbnailLink string `json:"thumbnailLink,omitempty"`
ThumbnailWidth int64 `json:"thumbnailWidth,omitempty"`
Width int64 `json:"width,omitempty"`
}
type ResultLabels struct {
DisplayName string `json:"displayName,omitempty"`
Label_with_op string `json:"label_with_op,omitempty"`
Name string `json:"name,omitempty"`
}
type ResultPagemap struct {
}
type Search struct {
Context *Context `json:"context,omitempty"`
Items []*Result `json:"items,omitempty"`
Kind string `json:"kind,omitempty"`
Promotions []*Promotion `json:"promotions,omitempty"`
Queries *SearchQueries `json:"queries,omitempty"`
SearchInformation *SearchSearchInformation `json:"searchInformation,omitempty"`
Spelling *SearchSpelling `json:"spelling,omitempty"`
Url *SearchUrl `json:"url,omitempty"`
}
type SearchQueries struct {
}
type SearchSearchInformation struct {
FormattedSearchTime string `json:"formattedSearchTime,omitempty"`
FormattedTotalResults string `json:"formattedTotalResults,omitempty"`
SearchTime float64 `json:"searchTime,omitempty"`
TotalResults int64 `json:"totalResults,omitempty,string"`
}
type SearchSpelling struct {
CorrectedQuery string `json:"correctedQuery,omitempty"`
HtmlCorrectedQuery string `json:"htmlCorrectedQuery,omitempty"`
}
type SearchUrl struct {
Template string `json:"template,omitempty"`
Type string `json:"type,omitempty"`
}
// method id "search.cse.list":
type CseListCall struct {
s *Service
q string
opt_ map[string]interface{}
}
// List: Returns metadata about the search performed, metadata about the
// custom search engine used for the search, and the search results.
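//
// A minimal, illustrative call chain (the engine ID and the authenticated
// http.Client are assumptions, not values defined in this package):
//
//	svc, err := customsearch.New(authenticatedClient)
//	if err != nil {
//		// handle error
//	}
//	res, err := svc.Cse.List("kubernetes").Cx("my-engine-id").Num(5).Do()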
func (r *CseService) List(q string) *CseListCall {
c := &CseListCall{s: r.s, opt_: make(map[string]interface{})}
c.q = q
return c
}
// C2coff sets the optional parameter "c2coff": Turns off the
// translation between zh-CN and zh-TW.
func (c *CseListCall) C2coff(c2coff string) *CseListCall {
c.opt_["c2coff"] = c2coff
return c
}
// Cr sets the optional parameter "cr": Country restrict(s).
func (c *CseListCall) Cr(cr string) *CseListCall {
c.opt_["cr"] = cr
return c
}
// Cref sets the optional parameter "cref": The URL of a linked custom
// search engine
func (c *CseListCall) Cref(cref string) *CseListCall {
c.opt_["cref"] = cref
return c
}
// Cx sets the optional parameter "cx": The custom search engine ID to
// scope this search query
func (c *CseListCall) Cx(cx string) *CseListCall {
c.opt_["cx"] = cx
return c
}
// DateRestrict sets the optional parameter "dateRestrict": Specifies
// all search results are from a time period
func (c *CseListCall) DateRestrict(dateRestrict string) *CseListCall {
c.opt_["dateRestrict"] = dateRestrict
return c
}
// ExactTerms sets the optional parameter "exactTerms": Identifies a
// phrase that all documents in the search results must contain
func (c *CseListCall) ExactTerms(exactTerms string) *CseListCall {
c.opt_["exactTerms"] = exactTerms
return c
}
// ExcludeTerms sets the optional parameter "excludeTerms": Identifies a
// word or phrase that should not appear in any documents in the search
// results
func (c *CseListCall) ExcludeTerms(excludeTerms string) *CseListCall {
c.opt_["excludeTerms"] = excludeTerms
return c
}
// FileType sets the optional parameter "fileType": Returns images of a
// specified type. Some of the allowed values are: bmp, gif, png, jpg,
// svg, pdf, ...
func (c *CseListCall) FileType(fileType string) *CseListCall {
c.opt_["fileType"] = fileType
return c
}
// Filter sets the optional parameter "filter": Controls turning on or
// off the duplicate content filter.
func (c *CseListCall) Filter(filter string) *CseListCall {
c.opt_["filter"] = filter
return c
}
// Gl sets the optional parameter "gl": Geolocation of end user.
func (c *CseListCall) Gl(gl string) *CseListCall {
c.opt_["gl"] = gl
return c
}
// Googlehost sets the optional parameter "googlehost": The local Google
// domain to use to perform the search.
func (c *CseListCall) Googlehost(googlehost string) *CseListCall {
c.opt_["googlehost"] = googlehost
return c
}
// HighRange sets the optional parameter "highRange": Creates a range in
// form as_nlo value..as_nhi value and attempts to append it to query
func (c *CseListCall) HighRange(highRange string) *CseListCall {
c.opt_["highRange"] = highRange
return c
}
// Hl sets the optional parameter "hl": Sets the user interface
// language.
func (c *CseListCall) Hl(hl string) *CseListCall {
c.opt_["hl"] = hl
return c
}
// Hq sets the optional parameter "hq": Appends the extra query terms to
// the query.
func (c *CseListCall) Hq(hq string) *CseListCall {
c.opt_["hq"] = hq
return c
}
// ImgColorType sets the optional parameter "imgColorType": Returns
// black and white, grayscale, or color images: mono, gray, and color.
func (c *CseListCall) ImgColorType(imgColorType string) *CseListCall {
c.opt_["imgColorType"] = imgColorType
return c
}
// ImgDominantColor sets the optional parameter "imgDominantColor":
// Returns images of a specific dominant color: yellow, green, teal,
// blue, purple, pink, white, gray, black and brown.
func (c *CseListCall) ImgDominantColor(imgDominantColor string) *CseListCall {
c.opt_["imgDominantColor"] = imgDominantColor
return c
}
// ImgSize sets the optional parameter "imgSize": Returns images of a
// specified size, where size can be one of: icon, small, medium, large,
// xlarge, xxlarge, and huge.
func (c *CseListCall) ImgSize(imgSize string) *CseListCall {
c.opt_["imgSize"] = imgSize
return c
}
// ImgType sets the optional parameter "imgType": Returns images of a
// type, which can be one of: clipart, face, lineart, news, and photo.
func (c *CseListCall) ImgType(imgType string) *CseListCall {
c.opt_["imgType"] = imgType
return c
}
// LinkSite sets the optional parameter "linkSite": Specifies that all
// search results should contain a link to a particular URL
func (c *CseListCall) LinkSite(linkSite string) *CseListCall {
c.opt_["linkSite"] = linkSite
return c
}
// LowRange sets the optional parameter "lowRange": Creates a range in
// form as_nlo value..as_nhi value and attempts to append it to query
func (c *CseListCall) LowRange(lowRange string) *CseListCall {
c.opt_["lowRange"] = lowRange
return c
}
// Lr sets the optional parameter "lr": The language restriction for the
// search results
func (c *CseListCall) Lr(lr string) *CseListCall {
c.opt_["lr"] = lr
return c
}
// Num sets the optional parameter "num": Number of search results to
// return
func (c *CseListCall) Num(num int64) *CseListCall {
c.opt_["num"] = num
return c
}
// OrTerms sets the optional parameter "orTerms": Provides additional
// search terms to check for in a document, where each document in the
// search results must contain at least one of the additional search
// terms
func (c *CseListCall) OrTerms(orTerms string) *CseListCall {
c.opt_["orTerms"] = orTerms
return c
}
// RelatedSite sets the optional parameter "relatedSite": Specifies that
// all search results should be pages that are related to the specified
// URL
func (c *CseListCall) RelatedSite(relatedSite string) *CseListCall {
c.opt_["relatedSite"] = relatedSite
return c
}
// Rights sets the optional parameter "rights": Filters based on
// licensing. Supported values include: cc_publicdomain, cc_attribute,
// cc_sharealike, cc_noncommercial, cc_nonderived and combinations of
// these.
func (c *CseListCall) Rights(rights string) *CseListCall {
c.opt_["rights"] = rights
return c
}
// Safe sets the optional parameter "safe": Search safety level
func (c *CseListCall) Safe(safe string) *CseListCall {
c.opt_["safe"] = safe
return c
}
// SearchType sets the optional parameter "searchType": Specifies the
// search type: image.
func (c *CseListCall) SearchType(searchType string) *CseListCall {
c.opt_["searchType"] = searchType
return c
}
// SiteSearch sets the optional parameter "siteSearch": Specifies all
// search results should be pages from a given site
func (c *CseListCall) SiteSearch(siteSearch string) *CseListCall {
c.opt_["siteSearch"] = siteSearch
return c
}
// SiteSearchFilter sets the optional parameter "siteSearchFilter":
// Controls whether to include or exclude results from the site named in
// the as_sitesearch parameter
func (c *CseListCall) SiteSearchFilter(siteSearchFilter string) *CseListCall {
c.opt_["siteSearchFilter"] = siteSearchFilter
return c
}
// Sort sets the optional parameter "sort": The sort expression to apply
// to the results
func (c *CseListCall) Sort(sort string) *CseListCall {
c.opt_["sort"] = sort
return c
}
// Start sets the optional parameter "start": The index of the first
// result to return
func (c *CseListCall) Start(start int64) *CseListCall {
c.opt_["start"] = start
return c
}
func (c *CseListCall) Do() (*Search, error) {
var body io.Reader = nil
params := make(url.Values)
params.Set("alt", "json")
params.Set("q", fmt.Sprintf("%v", c.q))
if v, ok := c.opt_["c2coff"]; ok {
params.Set("c2coff", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["cr"]; ok {
params.Set("cr", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["cref"]; ok {
params.Set("cref", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["cx"]; ok {
params.Set("cx", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["dateRestrict"]; ok {
params.Set("dateRestrict", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["exactTerms"]; ok {
params.Set("exactTerms", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["excludeTerms"]; ok {
params.Set("excludeTerms", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["fileType"]; ok {
params.Set("fileType", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["filter"]; ok {
params.Set("filter", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["gl"]; ok {
params.Set("gl", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["googlehost"]; ok {
params.Set("googlehost", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["highRange"]; ok {
params.Set("highRange", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["hl"]; ok {
params.Set("hl", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["hq"]; ok {
params.Set("hq", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["imgColorType"]; ok {
params.Set("imgColorType", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["imgDominantColor"]; ok {
params.Set("imgDominantColor", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["imgSize"]; ok {
params.Set("imgSize", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["imgType"]; ok {
params.Set("imgType", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["linkSite"]; ok {
params.Set("linkSite", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["lowRange"]; ok {
params.Set("lowRange", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["lr"]; ok {
params.Set("lr", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["num"]; ok {
params.Set("num", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["orTerms"]; ok {
params.Set("orTerms", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["relatedSite"]; ok {
params.Set("relatedSite", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["rights"]; ok {
params.Set("rights", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["safe"]; ok {
params.Set("safe", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["searchType"]; ok {
params.Set("searchType", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["siteSearch"]; ok {
params.Set("siteSearch", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["siteSearchFilter"]; ok {
params.Set("siteSearchFilter", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["sort"]; ok {
params.Set("sort", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["start"]; ok {
params.Set("start", fmt.Sprintf("%v", v))
}
urls := googleapi.ResolveRelative(c.s.BasePath, "v1")
urls += "?" + params.Encode()
req, _ := http.NewRequest("GET", urls, body)
googleapi.SetOpaque(req.URL)
req.Header.Set("User-Agent", "google-api-go-client/0.5")
res, err := c.s.client.Do(req)
if err != nil {
return nil, err
}
defer googleapi.CloseBody(res)
if err := googleapi.CheckResponse(res); err != nil {
return nil, err
}
ret := new(Search)
if err := json.NewDecoder(res.Body).Decode(ret); err != nil {
return nil, err
}
return ret, nil
// {
// "description": "Returns metadata about the search performed, metadata about the custom search engine used for the search, and the search results.",
// "httpMethod": "GET",
// "id": "search.cse.list",
// "parameterOrder": [
// "q"
// ],
// "parameters": {
// "c2coff": {
// "description": "Turns off the translation between zh-CN and zh-TW.",
// "location": "query",
// "type": "string"
// },
// "cr": {
// "description": "Country restrict(s).",
// "location": "query",
// "type": "string"
// },
// "cref": {
// "description": "The URL of a linked custom search engine",
// "location": "query",
// "type": "string"
// },
// "cx": {
// "description": "The custom search engine ID to scope this search query",
// "location": "query",
// "type": "string"
// },
// "dateRestrict": {
// "description": "Specifies all search results are from a time period",
// "location": "query",
// "type": "string"
// },
// "exactTerms": {
// "description": "Identifies a phrase that all documents in the search results must contain",
// "location": "query",
// "type": "string"
// },
// "excludeTerms": {
// "description": "Identifies a word or phrase that should not appear in any documents in the search results",
// "location": "query",
// "type": "string"
// },
// "fileType": {
// "description": "Returns images of a specified type. Some of the allowed values are: bmp, gif, png, jpg, svg, pdf, ...",
// "location": "query",
// "type": "string"
// },
// "filter": {
// "description": "Controls turning on or off the duplicate content filter.",
// "enum": [
// "0",
// "1"
// ],
// "enumDescriptions": [
// "Turns off duplicate content filter.",
// "Turns on duplicate content filter."
// ],
// "location": "query",
// "type": "string"
// },
// "gl": {
// "description": "Geolocation of end user.",
// "location": "query",
// "type": "string"
// },
// "googlehost": {
// "description": "The local Google domain to use to perform the search.",
// "location": "query",
// "type": "string"
// },
// "highRange": {
// "description": "Creates a range in form as_nlo value..as_nhi value and attempts to append it to query",
// "location": "query",
// "type": "string"
// },
// "hl": {
// "description": "Sets the user interface language.",
// "location": "query",
// "type": "string"
// },
// "hq": {
// "description": "Appends the extra query terms to the query.",
// "location": "query",
// "type": "string"
// },
// "imgColorType": {
// "description": "Returns black and white, grayscale, or color images: mono, gray, and color.",
// "enum": [
// "color",
// "gray",
// "mono"
// ],
// "enumDescriptions": [
// "color",
// "gray",
// "mono"
// ],
// "location": "query",
// "type": "string"
// },
// "imgDominantColor": {
// "description": "Returns images of a specific dominant color: yellow, green, teal, blue, purple, pink, white, gray, black and brown.",
// "enum": [
// "black",
// "blue",
// "brown",
// "gray",
// "green",
// "pink",
// "purple",
// "teal",
// "white",
// "yellow"
// ],
// "enumDescriptions": [
// "black",
// "blue",
// "brown",
// "gray",
// "green",
// "pink",
// "purple",
// "teal",
// "white",
// "yellow"
// ],
// "location": "query",
// "type": "string"
// },
// "imgSize": {
// "description": "Returns images of a specified size, where size can be one of: icon, small, medium, large, xlarge, xxlarge, and huge.",
// "enum": [
// "huge",
// "icon",
// "large",
// "medium",
// "small",
// "xlarge",
// "xxlarge"
// ],
// "enumDescriptions": [
// "huge",
// "icon",
// "large",
// "medium",
// "small",
// "xlarge",
// "xxlarge"
// ],
// "location": "query",
// "type": "string"
// },
// "imgType": {
// "description": "Returns images of a type, which can be one of: clipart, face, lineart, news, and photo.",
// "enum": [
// "clipart",
// "face",
// "lineart",
// "news",
// "photo"
// ],
// "enumDescriptions": [
// "clipart",
// "face",
// "lineart",
// "news",
// "photo"
// ],
// "location": "query",
// "type": "string"
// },
// "linkSite": {
// "description": "Specifies that all search results should contain a link to a particular URL",
// "location": "query",
// "type": "string"
// },
// "lowRange": {
// "description": "Creates a range in form as_nlo value..as_nhi value and attempts to append it to query",
// "location": "query",
// "type": "string"
// },
// "lr": {
// "description": "The language restriction for the search results",
// "enum": [
// "lang_ar",
// "lang_bg",
// "lang_ca",
// "lang_cs",
// "lang_da",
// "lang_de",
// "lang_el",
// "lang_en",
// "lang_es",
// "lang_et",
// "lang_fi",
// "lang_fr",
// "lang_hr",
// "lang_hu",
// "lang_id",
// "lang_is",
// "lang_it",
// "lang_iw",
// "lang_ja",
// "lang_ko",
// "lang_lt",
// "lang_lv",
// "lang_nl",
// "lang_no",
// "lang_pl",
// "lang_pt",
// "lang_ro",
// "lang_ru",
// "lang_sk",
// "lang_sl",
// "lang_sr",
// "lang_sv",
// "lang_tr",
// "lang_zh-CN",
// "lang_zh-TW"
// ],
// "enumDescriptions": [
// "Arabic",
// "Bulgarian",
// "Catalan",
// "Czech",
// "Danish",
// "German",
// "Greek",
// "English",
// "Spanish",
// "Estonian",
// "Finnish",
// "French",
// "Croatian",
// "Hungarian",
// "Indonesian",
// "Icelandic",
// "Italian",
// "Hebrew",
// "Japanese",
// "Korean",
// "Lithuanian",
// "Latvian",
// "Dutch",
// "Norwegian",
// "Polish",
// "Portuguese",
// "Romanian",
// "Russian",
// "Slovak",
// "Slovenian",
// "Serbian",
// "Swedish",
// "Turkish",
// "Chinese (Simplified)",
// "Chinese (Traditional)"
// ],
// "location": "query",
// "type": "string"
// },
// "num": {
// "default": "10",
// "description": "Number of search results to return",
// "format": "uint32",
// "location": "query",
// "type": "integer"
// },
// "orTerms": {
// "description": "Provides additional search terms to check for in a document, where each document in the search results must contain at least one of the additional search terms",
// "location": "query",
// "type": "string"
// },
// "q": {
// "description": "Query",
// "location": "query",
// "required": true,
// "type": "string"
// },
// "relatedSite": {
// "description": "Specifies that all search results should be pages that are related to the specified URL",
// "location": "query",
// "type": "string"
// },
// "rights": {
// "description": "Filters based on licensing. Supported values include: cc_publicdomain, cc_attribute, cc_sharealike, cc_noncommercial, cc_nonderived and combinations of these.",
// "location": "query",
// "type": "string"
// },
// "safe": {
// "default": "off",
// "description": "Search safety level",
// "enum": [
// "high",
// "medium",
// "off"
// ],
// "enumDescriptions": [
// "Enables highest level of safe search filtering.",
// "Enables moderate safe search filtering.",
// "Disables safe search filtering."
// ],
// "location": "query",
// "type": "string"
// },
// "searchType": {
// "description": "Specifies the search type: image.",
// "enum": [
// "image"
// ],
// "enumDescriptions": [
// "custom image search"
// ],
// "location": "query",
// "type": "string"
// },
// "siteSearch": {
// "description": "Specifies all search results should be pages from a given site",
// "location": "query",
// "type": "string"
// },
// "siteSearchFilter": {
// "description": "Controls whether to include or exclude results from the site named in the as_sitesearch parameter",
// "enum": [
// "e",
// "i"
// ],
// "enumDescriptions": [
// "exclude",
// "include"
// ],
// "location": "query",
// "type": "string"
// },
// "sort": {
// "description": "The sort expression to apply to the results",
// "location": "query",
// "type": "string"
// },
// "start": {
// "description": "The index of the first result to return",
// "format": "uint32",
// "location": "query",
// "type": "integer"
// }
// },
// "path": "v1",
// "response": {
// "$ref": "Search"
// }
// }
}
| third_party/src/code.google.com/p/google-api-go-client/customsearch/v1/customsearch-gen.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.0001822866324800998,
0.00017137007671408355,
0.0001626751181902364,
0.00017211545491591096,
0.00000332932131641428
] |
{
"id": 1,
"code_window": [
"\t// TODO: *rand.Rand is *not* threadsafe\n",
"\trandom *rand.Rand\n",
"}\n",
"\n",
"func MakeFirstFitScheduler(podLister PodLister, random *rand.Rand) Scheduler {\n",
"\treturn &FirstFitScheduler{\n",
"\t\tpodLister: podLister,\n",
"\t\trandom: random,\n",
"\t}\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"func NewFirstFitScheduler(podLister PodLister, random *rand.Rand) Scheduler {\n"
],
"file_path": "pkg/scheduler/firstfit.go",
"type": "replace",
"edit_start_line_idx": 32
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Watches etcd and gets the full configuration on preset intervals.
// Expects the list of exposed services to live under:
// registry/services
// which in etcd is exposed like so:
// http://<etcd server>/v2/keys/registry/services
//
// The port that the proxy needs to listen on for each service is a value in:
// registry/services/specs/<service>
//
// The endpoints for each of the services found are stored as a JSON string
// representing that service at:
// registry/services/endpoints/<service>
// and the format is:
// '[ { "machine": <host>, "name": <name>, "port": <port> },
// { "machine": <host2>, "name": <name2>, "port": <port2> }
// ]',
//
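// For illustration only — the service name, port, hosts, and the exact JSON
// field names below are assumptions, not values read from a live cluster:
//
//	registry/services/specs/frontend     -> {"id": "frontend", "port": 9000}
//	registry/services/endpoints/frontend -> {"id": "frontend", "endpoints": ["10.240.1.5:9376"]}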
package config
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
"github.com/coreos/go-etcd/etcd"
"github.com/golang/glog"
)
const RegistryRoot = "registry/services"
type ConfigSourceEtcd struct {
client *etcd.Client
serviceChannel chan ServiceUpdate
endpointsChannel chan EndpointsUpdate
}
func NewConfigSourceEtcd(client *etcd.Client, serviceChannel chan ServiceUpdate, endpointsChannel chan EndpointsUpdate) ConfigSourceEtcd {
config := ConfigSourceEtcd{
client: client,
serviceChannel: serviceChannel,
endpointsChannel: endpointsChannel,
}
go config.Run()
return config
}
func (impl ConfigSourceEtcd) Run() {
// Initially, just wait for the etcd to come up before doing anything more complicated.
var services []api.Service
var endpoints []api.Endpoints
var err error
for {
services, endpoints, err = impl.GetServices()
if err == nil {
break
}
glog.Errorf("Failed to get any services: %v", err)
time.Sleep(2 * time.Second)
}
if len(services) > 0 {
serviceUpdate := ServiceUpdate{Op: SET, Services: services}
impl.serviceChannel <- serviceUpdate
}
if len(endpoints) > 0 {
endpointsUpdate := EndpointsUpdate{Op: SET, Endpoints: endpoints}
impl.endpointsChannel <- endpointsUpdate
}
// Ok, so we got something back from etcd. Let's set up a watch for new services, and
// their endpoints
go impl.WatchForChanges()
for {
services, endpoints, err = impl.GetServices()
if err != nil {
glog.Errorf("ConfigSourceEtcd: Failed to get services: %v", err)
} else {
if len(services) > 0 {
serviceUpdate := ServiceUpdate{Op: SET, Services: services}
impl.serviceChannel <- serviceUpdate
}
if len(endpoints) > 0 {
endpointsUpdate := EndpointsUpdate{Op: SET, Endpoints: endpoints}
impl.endpointsChannel <- endpointsUpdate
}
}
time.Sleep(30 * time.Second)
}
}
// Finds the list of services and their endpoints from etcd.
// This operation is akin to setting a known-good state at regular intervals.
func (impl ConfigSourceEtcd) GetServices() ([]api.Service, []api.Endpoints, error) {
response, err := impl.client.Get(RegistryRoot+"/specs", true, false)
if err != nil {
glog.Errorf("Failed to get the key %s: %v", RegistryRoot, err)
return make([]api.Service, 0), make([]api.Endpoints, 0), err
}
if response.Node.Dir == true {
retServices := make([]api.Service, len(response.Node.Nodes))
retEndpoints := make([]api.Endpoints, len(response.Node.Nodes))
// Ok, so we have directories, this list should be the list
// of services. Find the local port to listen on and remote endpoints
// and create a Service entry for it.
for i, node := range response.Node.Nodes {
var svc api.Service
err = json.Unmarshal([]byte(node.Value), &svc)
if err != nil {
glog.Errorf("Failed to load Service: %s (%#v)", node.Value, err)
continue
}
retServices[i] = svc
endpoints, err := impl.GetEndpoints(svc.ID)
if err != nil {
glog.Errorf("Couldn't get endpoints for %s : %v skipping", svc.ID, err)
}
glog.Infof("Got service: %s on localport %d mapping to: %s", svc.ID, svc.Port, endpoints)
retEndpoints[i] = endpoints
}
return retServices, retEndpoints, err
}
return nil, nil, fmt.Errorf("did not get the root of the registry %s", RegistryRoot)
}
func (impl ConfigSourceEtcd) GetEndpoints(service string) (api.Endpoints, error) {
key := fmt.Sprintf(RegistryRoot + "/endpoints/" + service)
response, err := impl.client.Get(key, true, false)
if err != nil {
glog.Errorf("Failed to get the key: %s %v", key, err)
return api.Endpoints{}, err
}
// Parse all the endpoint specifications in this value.
return ParseEndpoints(response.Node.Value)
}
// EtcdResponseToService takes an etcd response and pulls the Service object
// out of it.
func EtcdResponseToService(response *etcd.Response) (*api.Service, error) {
if response.Node == nil {
return nil, fmt.Errorf("invalid response from etcd: %#v", response)
}
var svc api.Service
err := json.Unmarshal([]byte(response.Node.Value), &svc)
if err != nil {
return nil, err
}
return &svc, err
}
func ParseEndpoints(jsonString string) (api.Endpoints, error) {
var e api.Endpoints
err := json.Unmarshal([]byte(jsonString), &e)
return e, err
}
func (impl ConfigSourceEtcd) WatchForChanges() {
glog.Info("Setting up a watch for new services")
watchChannel := make(chan *etcd.Response)
go impl.client.Watch("/registry/services/", 0, true, watchChannel, nil)
for {
watchResponse := <-watchChannel
impl.ProcessChange(watchResponse)
}
}
func (impl ConfigSourceEtcd) ProcessChange(response *etcd.Response) {
glog.Infof("Processing a change in service configuration... %s", *response)
// If it's a new service being added (signified by a localport being added)
// then process it as such
if strings.Contains(response.Node.Key, "/endpoints/") {
impl.ProcessEndpointResponse(response)
} else if response.Action == "set" {
service, err := EtcdResponseToService(response)
if err != nil {
glog.Errorf("Failed to parse %s Port: %s", response, err)
return
}
glog.Infof("New service added/updated: %#v", service)
serviceUpdate := ServiceUpdate{Op: ADD, Services: []api.Service{*service}}
impl.serviceChannel <- serviceUpdate
return
}
if response.Action == "delete" {
parts := strings.Split(response.Node.Key[1:], "/")
if len(parts) == 4 {
glog.Infof("Deleting service: %s", parts[3])
serviceUpdate := ServiceUpdate{Op: REMOVE, Services: []api.Service{{JSONBase: api.JSONBase{ID: parts[3]}}}}
impl.serviceChannel <- serviceUpdate
return
} else {
glog.Infof("Unknown service delete: %#v", parts)
}
}
}
func (impl ConfigSourceEtcd) ProcessEndpointResponse(response *etcd.Response) {
glog.Infof("Processing a change in endpoint configuration... %s", *response)
var endpoints api.Endpoints
err := json.Unmarshal([]byte(response.Node.Value), &endpoints)
if err != nil {
glog.Errorf("Failed to parse service out of etcd key: %v : %+v", response.Node.Value, err)
return
}
endpointsUpdate := EndpointsUpdate{Op: ADD, Endpoints: []api.Endpoints{endpoints}}
impl.endpointsChannel <- endpointsUpdate
}
| pkg/proxy/config/etcd.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.000179672846570611,
0.00017084144928958267,
0.00016584542754571885,
0.0001706440089037642,
0.0000031437516554433387
] |
{
"id": 2,
"code_window": [
"\tfakeRegistry := FakePodLister{}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(&fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(api.Pod{}, \"m3\")\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(&fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 30
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package scheduler
import (
"math/rand"
"testing"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
)
func TestFirstFitSchedulerNothingScheduled(t *testing.T) {
fakeRegistry := FakePodLister{}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(&fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(api.Pod{}, "m3")
}
func TestFirstFitSchedulerFirstScheduled(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080), "m3")
}
func TestFirstFitSchedulerFirstScheduledComplicated(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 80, 8080),
makePod("m2", 8081, 8082, 8083),
makePod("m3", 80, 443, 8085),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080, 8081), "m3")
}
func TestFirstFitSchedulerFirstScheduledImpossible(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
makePod("m2", 8081),
makePod("m3", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectFailure(makePod("", 8080, 8081))
}
| pkg/scheduler/firstfit_test.go | 1 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.9978982210159302,
0.7469217777252197,
0.00017721137555781752,
0.9956352710723877,
0.4311395585536957
] |
{
"id": 2,
"code_window": [
"\tfakeRegistry := FakePodLister{}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(&fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(api.Pod{}, \"m3\")\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(&fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 30
} | # Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This is a quick script that adds AllUsers as READER to a JSON file
# representing an ACL on a GCS object. This is a quick workaround for a bug in
# gsutil.
import json
import sys
acl = json.load(sys.stdin)
acl.append({
"entity": "allUsers",
"role": "READER"
})
json.dump(acl, sys.stdout)
| release/make-public-gcs-acl.py | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.00017831005970947444,
0.00017388432752341032,
0.00017095947987399995,
0.0001723834138829261,
0.0000031830061288928846
] |
{
"id": 2,
"code_window": [
"\tfakeRegistry := FakePodLister{}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(&fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(api.Pod{}, \"m3\")\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(&fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 30
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package proxy
import (
"fmt"
"io"
"net"
"testing"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
)
// a simple echoServer that only accepts one connection. Returns port actually
// being listened on, or an error.
func echoServer(t *testing.T, addr string) (string, error) {
l, err := net.Listen("tcp", addr)
if err != nil {
return "", fmt.Errorf("failed to start echo service: %v", err)
}
go func() {
defer l.Close()
conn, err := l.Accept()
if err != nil {
t.Errorf("failed to accept new conn to echo service: %v", err)
}
io.Copy(conn, conn)
conn.Close()
}()
_, port, err := net.SplitHostPort(l.Addr().String())
return port, err
}
func TestProxy(t *testing.T) {
port, err := echoServer(t, "127.0.0.1:0")
if err != nil {
t.Fatal(err)
}
lb := NewLoadBalancerRR()
lb.OnUpdate([]api.Endpoints{{"echo", []string{net.JoinHostPort("127.0.0.1", port)}}})
p := NewProxier(lb)
proxyPort, err := p.addServiceOnUnusedPort("echo")
if err != nil {
t.Fatalf("error adding new service: %#v", err)
}
conn, err := net.Dial("tcp", net.JoinHostPort("127.0.0.1", proxyPort))
if err != nil {
t.Fatalf("error connecting to proxy: %v", err)
}
magic := "aaaaa"
if _, err := conn.Write([]byte(magic)); err != nil {
t.Fatalf("error writing to proxy: %v", err)
}
buf := make([]byte, 5)
if _, err := conn.Read(buf); err != nil {
t.Fatalf("error reading from proxy: %v", err)
}
if string(buf) != magic {
t.Fatalf("bad echo from proxy: got: %q, expected %q", string(buf), magic)
}
}
| pkg/proxy/proxier_test.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.00017883756663650274,
0.00017200090223923326,
0.00016789701476227492,
0.00017036779900081456,
0.000003928909791284241
] |
{
"id": 2,
"code_window": [
"\tfakeRegistry := FakePodLister{}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(&fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(api.Pod{}, \"m3\")\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(&fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 30
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Watches etcd and gets the full configuration on preset intervals.
// Expects the list of exposed services to live under:
// registry/services
// which in etcd is exposed like so:
// http://<etcd server>/v2/keys/registry/services
//
// The port that the proxy needs to listen on for each service is a value in:
// registry/services/specs/<service>
//
// The endpoints for each of the services found are stored as a JSON string
// representing that service at:
// registry/services/endpoints/<service>
// and the format is:
// '[ { "machine": <host>, "name": <name>, "port": <port> },
// { "machine": <host2>, "name": <name2>, "port": <port2> }
// ]',
//
package config
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
"github.com/coreos/go-etcd/etcd"
"github.com/golang/glog"
)
const RegistryRoot = "registry/services"
type ConfigSourceEtcd struct {
client *etcd.Client
serviceChannel chan ServiceUpdate
endpointsChannel chan EndpointsUpdate
}
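// NewConfigSourceEtcd creates a config source that watches the etcd registry
// and publishes service and endpoint updates on the supplied channels.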
func NewConfigSourceEtcd(client *etcd.Client, serviceChannel chan ServiceUpdate, endpointsChannel chan EndpointsUpdate) ConfigSourceEtcd {
config := ConfigSourceEtcd{
client: client,
serviceChannel: serviceChannel,
endpointsChannel: endpointsChannel,
}
go config.Run()
return config
}
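// Run retries until etcd responds, publishes the initial service and endpoint
// state, then watches for changes while re-syncing the full state every 30
// seconds.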
func (impl ConfigSourceEtcd) Run() {
// Initially, just wait for the etcd to come up before doing anything more complicated.
var services []api.Service
var endpoints []api.Endpoints
var err error
for {
services, endpoints, err = impl.GetServices()
if err == nil {
break
}
glog.Errorf("Failed to get any services: %v", err)
time.Sleep(2 * time.Second)
}
if len(services) > 0 {
serviceUpdate := ServiceUpdate{Op: SET, Services: services}
impl.serviceChannel <- serviceUpdate
}
if len(endpoints) > 0 {
endpointsUpdate := EndpointsUpdate{Op: SET, Endpoints: endpoints}
impl.endpointsChannel <- endpointsUpdate
}
// Ok, so we got something back from etcd. Let's set up a watch for new services, and
// their endpoints
go impl.WatchForChanges()
for {
services, endpoints, err = impl.GetServices()
if err != nil {
glog.Errorf("ConfigSourceEtcd: Failed to get services: %v", err)
} else {
if len(services) > 0 {
serviceUpdate := ServiceUpdate{Op: SET, Services: services}
impl.serviceChannel <- serviceUpdate
}
if len(endpoints) > 0 {
endpointsUpdate := EndpointsUpdate{Op: SET, Endpoints: endpoints}
impl.endpointsChannel <- endpointsUpdate
}
}
time.Sleep(30 * time.Second)
}
}
// Finds the list of services and their endpoints from etcd.
// This operation is akin to a SET of a known-good state at regular intervals.
func (impl ConfigSourceEtcd) GetServices() ([]api.Service, []api.Endpoints, error) {
response, err := impl.client.Get(RegistryRoot+"/specs", true, false)
if err != nil {
glog.Errorf("Failed to get the key %s: %v", RegistryRoot, err)
return make([]api.Service, 0), make([]api.Endpoints, 0), err
}
if response.Node.Dir == true {
retServices := make([]api.Service, len(response.Node.Nodes))
retEndpoints := make([]api.Endpoints, len(response.Node.Nodes))
// Ok, so we have directories, this list should be the list
// of services. Find the local port to listen on and remote endpoints
// and create a Service entry for it.
for i, node := range response.Node.Nodes {
var svc api.Service
err = json.Unmarshal([]byte(node.Value), &svc)
if err != nil {
glog.Errorf("Failed to load Service: %s (%#v)", node.Value, err)
continue
}
retServices[i] = svc
endpoints, err := impl.GetEndpoints(svc.ID)
if err != nil {
glog.Errorf("Couldn't get endpoints for %s : %v skipping", svc.ID, err)
}
glog.Infof("Got service: %s on localport %d mapping to: %s", svc.ID, svc.Port, endpoints)
retEndpoints[i] = endpoints
}
return retServices, retEndpoints, err
}
return nil, nil, fmt.Errorf("did not get the root of the registry %s", RegistryRoot)
}
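// GetEndpoints fetches and parses the endpoints recorded in etcd for the
// named service.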
func (impl ConfigSourceEtcd) GetEndpoints(service string) (api.Endpoints, error) {
key := fmt.Sprintf(RegistryRoot + "/endpoints/" + service)
response, err := impl.client.Get(key, true, false)
if err != nil {
glog.Errorf("Failed to get the key: %s %v", key, err)
return api.Endpoints{}, err
}
// Parse all the endpoint specifications in this value.
return ParseEndpoints(response.Node.Value)
}
// EtcdResponseToService takes an etcd response and pulls it apart to find the
// service it describes.
func EtcdResponseToService(response *etcd.Response) (*api.Service, error) {
if response.Node == nil {
return nil, fmt.Errorf("invalid response from etcd: %#v", response)
}
var svc api.Service
err := json.Unmarshal([]byte(response.Node.Value), &svc)
if err != nil {
return nil, err
}
return &svc, err
}
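// ParseEndpoints decodes a JSON-encoded api.Endpoints value.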
func ParseEndpoints(jsonString string) (api.Endpoints, error) {
var e api.Endpoints
err := json.Unmarshal([]byte(jsonString), &e)
return e, err
}
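// WatchForChanges watches the services registry in etcd and feeds every
// change it sees through ProcessChange.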
func (impl ConfigSourceEtcd) WatchForChanges() {
glog.Info("Setting up a watch for new services")
watchChannel := make(chan *etcd.Response)
go impl.client.Watch("/registry/services/", 0, true, watchChannel, nil)
for {
watchResponse := <-watchChannel
impl.ProcessChange(watchResponse)
}
}
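// ProcessChange routes a watch response to the endpoints or services channel
// depending on which part of the registry changed.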
func (impl ConfigSourceEtcd) ProcessChange(response *etcd.Response) {
glog.Infof("Processing a change in service configuration... %s", *response)
// If it's a new service being added (signified by a localport being added)
// then process it as such
if strings.Contains(response.Node.Key, "/endpoints/") {
impl.ProcessEndpointResponse(response)
} else if response.Action == "set" {
service, err := EtcdResponseToService(response)
if err != nil {
glog.Errorf("Failed to parse %s Port: %s", response, err)
return
}
glog.Infof("New service added/updated: %#v", service)
serviceUpdate := ServiceUpdate{Op: ADD, Services: []api.Service{*service}}
impl.serviceChannel <- serviceUpdate
return
}
if response.Action == "delete" {
parts := strings.Split(response.Node.Key[1:], "/")
if len(parts) == 4 {
glog.Infof("Deleting service: %s", parts[3])
serviceUpdate := ServiceUpdate{Op: REMOVE, Services: []api.Service{{JSONBase: api.JSONBase{ID: parts[3]}}}}
impl.serviceChannel <- serviceUpdate
return
} else {
glog.Infof("Unknown service delete: %#v", parts)
}
}
}
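// ProcessEndpointResponse parses an endpoints change from etcd and publishes
// it as an ADD update.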
func (impl ConfigSourceEtcd) ProcessEndpointResponse(response *etcd.Response) {
glog.Infof("Processing a change in endpoint configuration... %s", *response)
var endpoints api.Endpoints
err := json.Unmarshal([]byte(response.Node.Value), &endpoints)
if err != nil {
glog.Errorf("Failed to parse service out of etcd key: %v : %+v", response.Node.Value, err)
return
}
endpointsUpdate := EndpointsUpdate{Op: ADD, Endpoints: []api.Endpoints{endpoints}}
impl.endpointsChannel <- endpointsUpdate
}
| pkg/proxy/config/etcd.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.00017766296514309943,
0.00017026976274792105,
0.0001607708545634523,
0.0001698730484349653,
0.000004083814928890206
] |
{
"id": 3,
"code_window": [
"\t\tmakePod(\"m1\", 8080),\n",
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(makePod(\"\", 8080), \"m3\")\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 43
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package scheduler
import (
"math/rand"
"testing"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
)
func TestFirstFitSchedulerNothingScheduled(t *testing.T) {
fakeRegistry := FakePodLister{}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(&fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(api.Pod{}, "m3")
}
func TestFirstFitSchedulerFirstScheduled(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080), "m3")
}
func TestFirstFitSchedulerFirstScheduledComplicated(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 80, 8080),
makePod("m2", 8081, 8082, 8083),
makePod("m3", 80, 443, 8085),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080, 8081), "m3")
}
func TestFirstFitSchedulerFirstScheduledImpossible(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
makePod("m2", 8081),
makePod("m3", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectFailure(makePod("", 8080, 8081))
}
| pkg/scheduler/firstfit_test.go | 1 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.998229444026947,
0.613614559173584,
0.00017733748245518655,
0.948797345161438,
0.4711604714393616
] |
{
"id": 3,
"code_window": [
"\t\tmakePod(\"m1\", 8080),\n",
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(makePod(\"\", 8080), \"m3\")\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 43
} | // Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import (
"fmt"
"log"
"net/http"
"os"
"path/filepath"
"strings"
"code.google.com/p/google-api-go-client/googleapi"
prediction "code.google.com/p/google-api-go-client/prediction/v1.6"
)
func init() {
scopes := []string{
prediction.DevstorageFull_controlScope,
prediction.DevstorageRead_onlyScope,
prediction.DevstorageRead_writeScope,
prediction.PredictionScope,
}
registerDemo("prediction", strings.Join(scopes, " "), predictionMain)
}
type predictionType struct {
api *prediction.Service
projectNumber string
bucketName string
trainingFileName string
modelName string
}
// This example demonstrates calling the Prediction API.
// Training data is uploaded to a pre-created Google Cloud Storage Bucket and
// then the Prediction API is called to train a model based on that data.
// After a few minutes, the model should be completely trained and ready
// for prediction. At that point, text is sent to the model and the Prediction
// API attempts to classify the data, and the results are printed out.
//
// To get started, follow the instructions found in the "Hello Prediction!"
// Getting Started Guide located here:
// https://developers.google.com/prediction/docs/hello_world
//
// Example usage:
// go-api-demo -clientid="my-clientid" -secret="my-secret" prediction
// my-project-number my-bucket-name my-training-filename my-model-name
//
// Example output:
// Predict result: language=Spanish
// English Score: 0.000000
// French Score: 0.000000
// Spanish Score: 1.000000
// analyze: output feature text=&{157 English}
// analyze: output feature text=&{149 French}
// analyze: output feature text=&{100 Spanish}
// feature text count=406
func predictionMain(client *http.Client, argv []string) {
if len(argv) != 4 {
fmt.Fprintln(os.Stderr,
"Usage: prediction project_number bucket training_data model_name")
return
}
api, err := prediction.New(client)
if err != nil {
log.Fatalf("unable to create prediction API client: %v", err)
}
t := &predictionType{
api: api,
projectNumber: argv[0],
bucketName: argv[1],
trainingFileName: argv[2],
modelName: argv[3],
}
t.trainModel()
t.predictModel()
}
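// trainModel ensures a trained model exists, creating one from the training
// data in Cloud Storage if necessary, and exits early if training is still in
// progress.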
func (t *predictionType) trainModel() {
// First, check to see if our trained model already exists.
res, err := t.api.Trainedmodels.Get(t.projectNumber, t.modelName).Do()
if err != nil {
if ae, ok := err.(*googleapi.Error); ok && ae.Code != http.StatusNotFound {
log.Fatalf("error getting trained model: %v", err)
}
log.Printf("Training model not found, creating new model.")
res, err = t.api.Trainedmodels.Insert(t.projectNumber, &prediction.Insert{
Id: t.modelName,
StorageDataLocation: filepath.Join(t.bucketName, t.trainingFileName),
}).Do()
if err != nil {
log.Fatalf("unable to create trained model: %v", err)
}
}
if res.TrainingStatus != "DONE" {
// Wait for the trained model to finish training.
fmt.Printf("Training model. Please wait and re-run program after a few minutes.")
os.Exit(0)
}
}
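// predictModel sends a sample input to the trained model, prints the
// predicted label and per-language scores, and then prints an analysis of the
// model's features.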
func (t *predictionType) predictModel() {
// Model has now been trained. Predict with it.
input := &prediction.Input{
&prediction.InputInput{
[]interface{}{
"Hola, con quien hablo",
},
},
}
res, err := t.api.Trainedmodels.Predict(t.projectNumber, t.modelName, input).Do()
if err != nil {
log.Fatalf("unable to get trained prediction: %v", err)
}
fmt.Printf("Predict result: language=%v\n", res.OutputLabel)
for _, m := range res.OutputMulti {
fmt.Printf("%v Score: %v\n", m.Label, m.Score)
}
// Now analyze the model.
an, err := t.api.Trainedmodels.Analyze(t.projectNumber, t.modelName).Do()
if err != nil {
log.Fatalf("unable to analyze trained model: %v", err)
}
for _, f := range an.DataDescription.OutputFeature.Text {
fmt.Printf("analyze: output feature text=%v\n", f)
}
for _, f := range an.DataDescription.Features {
fmt.Printf("feature text count=%v\n", f.Text.Count)
}
}
| third_party/src/code.google.com/p/google-api-go-client/examples/prediction.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.0014038995141163468,
0.00026138839893974364,
0.00016824349586386234,
0.00017410152941010892,
0.0003168894036207348
] |
{
"id": 3,
"code_window": [
"\t\tmakePod(\"m1\", 8080),\n",
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(makePod(\"\", 8080), \"m3\")\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 43
} | // Package youtubeanalytics provides access to the YouTube Analytics API.
//
// See http://developers.google.com/youtube/analytics/
//
// Usage example:
//
// import "code.google.com/p/google-api-go-client/youtubeanalytics/v1"
// ...
// youtubeanalyticsService, err := youtubeanalytics.New(oauthHttpClient)
package youtubeanalytics
import (
"bytes"
"code.google.com/p/google-api-go-client/googleapi"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"strings"
)
// Always reference these packages, just in case the auto-generated code
// below doesn't.
var _ = bytes.NewBuffer
var _ = strconv.Itoa
var _ = fmt.Sprintf
var _ = json.NewDecoder
var _ = io.Copy
var _ = url.Parse
var _ = googleapi.Version
var _ = errors.New
var _ = strings.Replace
const apiId = "youtubeAnalytics:v1"
const apiName = "youtubeAnalytics"
const apiVersion = "v1"
const basePath = "https://www.googleapis.com/youtube/analytics/v1/"
// OAuth2 scopes used by this API.
const (
// View YouTube Analytics monetary reports for your YouTube content
YtAnalyticsMonetaryReadonlyScope = "https://www.googleapis.com/auth/yt-analytics-monetary.readonly"
// View YouTube Analytics reports for your YouTube content
YtAnalyticsReadonlyScope = "https://www.googleapis.com/auth/yt-analytics.readonly"
)
func New(client *http.Client) (*Service, error) {
if client == nil {
return nil, errors.New("client is nil")
}
s := &Service{client: client, BasePath: basePath}
s.BatchReportDefinitions = NewBatchReportDefinitionsService(s)
s.BatchReports = NewBatchReportsService(s)
s.Reports = NewReportsService(s)
return s, nil
}
type Service struct {
client *http.Client
BasePath string // API endpoint base URL
BatchReportDefinitions *BatchReportDefinitionsService
BatchReports *BatchReportsService
Reports *ReportsService
}
func NewBatchReportDefinitionsService(s *Service) *BatchReportDefinitionsService {
rs := &BatchReportDefinitionsService{s: s}
return rs
}
type BatchReportDefinitionsService struct {
s *Service
}
func NewBatchReportsService(s *Service) *BatchReportsService {
rs := &BatchReportsService{s: s}
return rs
}
type BatchReportsService struct {
s *Service
}
func NewReportsService(s *Service) *ReportsService {
rs := &ReportsService{s: s}
return rs
}
type ReportsService struct {
s *Service
}
type BatchReportDefinitionList struct {
// Items: A list of batchReportDefinition resources that match the
// request criteria.
Items []*BatchReportDefinitionTemplate `json:"items,omitempty"`
// Kind: This value specifies the type of data included in the API
// response. For the list method, the kind property value is
// youtubeAnalytics#batchReportDefinitionList.
Kind string `json:"kind,omitempty"`
}
type BatchReportDefinitionTemplate struct {
// DefaultOutput: Default report definition's output.
DefaultOutput []*BatchReportDefinitionTemplateDefaultOutput `json:"defaultOutput,omitempty"`
// Id: The ID that YouTube assigns and uses to uniquely identify the
// report definition.
Id string `json:"id,omitempty"`
// Name: Name of the report definition.
Name string `json:"name,omitempty"`
// Status: Status of the report definition.
Status string `json:"status,omitempty"`
// Type: Type of the report definition.
Type string `json:"type,omitempty"`
}
type BatchReportDefinitionTemplateDefaultOutput struct {
// Format: Format of the output.
Format string `json:"format,omitempty"`
// Type: Type of the output.
Type string `json:"type,omitempty"`
}
type BatchReportList struct {
// Items: A list of batchReport resources that match the request
// criteria.
Items []*BatchReportTemplate `json:"items,omitempty"`
// Kind: This value specifies the type of data included in the API
// response. For the list method, the kind property value is
// youtubeAnalytics#batchReportList.
Kind string `json:"kind,omitempty"`
}
type BatchReportTemplate struct {
// Id: The ID that YouTube assigns and uses to uniquely identify the
// report.
Id string `json:"id,omitempty"`
// Outputs: Report outputs.
Outputs []*BatchReportTemplateOutputs `json:"outputs,omitempty"`
// Report_id: The ID of the report definition.
Report_id string `json:"report_id,omitempty"`
// TimeSpan: Period included in the report. For reports containing all
// entities endTime is not set. Both startTime and endTime are
// inclusive.
TimeSpan *BatchReportTemplateTimeSpan `json:"timeSpan,omitempty"`
// TimeUpdated: The time when the report was updated.
TimeUpdated string `json:"timeUpdated,omitempty"`
}
type BatchReportTemplateOutputs struct {
// DownloadUrl: Cloud storage URL to download this report. This URL is
// valid for 30 minutes.
DownloadUrl string `json:"downloadUrl,omitempty"`
// Format: Format of the output.
Format string `json:"format,omitempty"`
// Type: Type of the output.
Type string `json:"type,omitempty"`
}
type BatchReportTemplateTimeSpan struct {
// EndTime: End of the period included in the report. Inclusive. For
// reports containing all entities endTime is not set.
EndTime string `json:"endTime,omitempty"`
// StartTime: Start of the period included in the report. Inclusive.
StartTime string `json:"startTime,omitempty"`
}
type ResultTable struct {
// ColumnHeaders: This value specifies information about the data
// returned in the rows fields. Each item in the columnHeaders list
// identifies a field returned in the rows value, which contains a list
// of comma-delimited data. The columnHeaders list will begin with the
// dimensions specified in the API request, which will be followed by
// the metrics specified in the API request. The order of both
// dimensions and metrics will match the ordering in the API request.
// For example, if the API request contains the parameters
// dimensions=ageGroup,gender&metrics=viewerPercentage, the API response
// will return columns in this order: ageGroup,gender,viewerPercentage.
ColumnHeaders []*ResultTableColumnHeaders `json:"columnHeaders,omitempty"`
// Kind: This value specifies the type of data included in the API
// response. For the query method, the kind property value will be
// youtubeAnalytics#resultTable.
Kind string `json:"kind,omitempty"`
// Rows: The list contains all rows of the result table. Each item in
// the list is an array that contains comma-delimited data corresponding
// to a single row of data. The order of the comma-delimited data fields
// will match the order of the columns listed in the columnHeaders
// field. If no data is available for the given query, the rows element
// will be omitted from the response. The response for a query with the
// day dimension will not contain rows for the most recent days.
Rows [][]interface{} `json:"rows,omitempty"`
}
type ResultTableColumnHeaders struct {
// ColumnType: The type of the column (DIMENSION or METRIC).
ColumnType string `json:"columnType,omitempty"`
// DataType: The type of the data in the column (STRING, INTEGER, FLOAT,
// etc.).
DataType string `json:"dataType,omitempty"`
// Name: The name of the dimension or metric.
Name string `json:"name,omitempty"`
}
// method id "youtubeAnalytics.batchReportDefinitions.list":
type BatchReportDefinitionsListCall struct {
s *Service
onBehalfOfContentOwner string
opt_ map[string]interface{}
}
// List: Retrieves a list of available batch report definitions.
func (r *BatchReportDefinitionsService) List(onBehalfOfContentOwner string) *BatchReportDefinitionsListCall {
c := &BatchReportDefinitionsListCall{s: r.s, opt_: make(map[string]interface{})}
c.onBehalfOfContentOwner = onBehalfOfContentOwner
return c
}
func (c *BatchReportDefinitionsListCall) Do() (*BatchReportDefinitionList, error) {
var body io.Reader = nil
params := make(url.Values)
params.Set("alt", "json")
params.Set("onBehalfOfContentOwner", fmt.Sprintf("%v", c.onBehalfOfContentOwner))
urls := googleapi.ResolveRelative(c.s.BasePath, "batchReportDefinitions")
urls += "?" + params.Encode()
req, _ := http.NewRequest("GET", urls, body)
googleapi.SetOpaque(req.URL)
req.Header.Set("User-Agent", "google-api-go-client/0.5")
res, err := c.s.client.Do(req)
if err != nil {
return nil, err
}
defer googleapi.CloseBody(res)
if err := googleapi.CheckResponse(res); err != nil {
return nil, err
}
ret := new(BatchReportDefinitionList)
if err := json.NewDecoder(res.Body).Decode(ret); err != nil {
return nil, err
}
return ret, nil
// {
// "description": "Retrieves a list of available batch report definitions.",
// "httpMethod": "GET",
// "id": "youtubeAnalytics.batchReportDefinitions.list",
// "parameterOrder": [
// "onBehalfOfContentOwner"
// ],
// "parameters": {
// "onBehalfOfContentOwner": {
// "description": "The onBehalfOfContentOwner parameter identifies the content owner that the user is acting on behalf of.",
// "location": "query",
// "required": true,
// "type": "string"
// }
// },
// "path": "batchReportDefinitions",
// "response": {
// "$ref": "BatchReportDefinitionList"
// },
// "scopes": [
// "https://www.googleapis.com/auth/yt-analytics-monetary.readonly",
// "https://www.googleapis.com/auth/yt-analytics.readonly"
// ]
// }
}
// method id "youtubeAnalytics.batchReports.list":
type BatchReportsListCall struct {
s *Service
batchReportDefinitionId string
onBehalfOfContentOwner string
opt_ map[string]interface{}
}
// List: Retrieves a list of processed batch reports.
func (r *BatchReportsService) List(batchReportDefinitionId string, onBehalfOfContentOwner string) *BatchReportsListCall {
c := &BatchReportsListCall{s: r.s, opt_: make(map[string]interface{})}
c.batchReportDefinitionId = batchReportDefinitionId
c.onBehalfOfContentOwner = onBehalfOfContentOwner
return c
}
func (c *BatchReportsListCall) Do() (*BatchReportList, error) {
var body io.Reader = nil
params := make(url.Values)
params.Set("alt", "json")
params.Set("batchReportDefinitionId", fmt.Sprintf("%v", c.batchReportDefinitionId))
params.Set("onBehalfOfContentOwner", fmt.Sprintf("%v", c.onBehalfOfContentOwner))
urls := googleapi.ResolveRelative(c.s.BasePath, "batchReports")
urls += "?" + params.Encode()
req, _ := http.NewRequest("GET", urls, body)
googleapi.SetOpaque(req.URL)
req.Header.Set("User-Agent", "google-api-go-client/0.5")
res, err := c.s.client.Do(req)
if err != nil {
return nil, err
}
defer googleapi.CloseBody(res)
if err := googleapi.CheckResponse(res); err != nil {
return nil, err
}
ret := new(BatchReportList)
if err := json.NewDecoder(res.Body).Decode(ret); err != nil {
return nil, err
}
return ret, nil
// {
// "description": "Retrieves a list of processed batch reports.",
// "httpMethod": "GET",
// "id": "youtubeAnalytics.batchReports.list",
// "parameterOrder": [
// "batchReportDefinitionId",
// "onBehalfOfContentOwner"
// ],
// "parameters": {
// "batchReportDefinitionId": {
// "description": "The batchReportDefinitionId parameter specifies the ID of the batch report definition for which you are retrieving reports.",
// "location": "query",
// "required": true,
// "type": "string"
// },
// "onBehalfOfContentOwner": {
// "description": "The onBehalfOfContentOwner parameter identifies the content owner that the user is acting on behalf of.",
// "location": "query",
// "required": true,
// "type": "string"
// }
// },
// "path": "batchReports",
// "response": {
// "$ref": "BatchReportList"
// },
// "scopes": [
// "https://www.googleapis.com/auth/yt-analytics-monetary.readonly",
// "https://www.googleapis.com/auth/yt-analytics.readonly"
// ]
// }
}
// method id "youtubeAnalytics.reports.query":
type ReportsQueryCall struct {
s *Service
ids string
startDate string
endDate string
metrics string
opt_ map[string]interface{}
}
// Query: Retrieve your YouTube Analytics reports.
func (r *ReportsService) Query(ids string, startDate string, endDate string, metrics string) *ReportsQueryCall {
c := &ReportsQueryCall{s: r.s, opt_: make(map[string]interface{})}
c.ids = ids
c.startDate = startDate
c.endDate = endDate
c.metrics = metrics
return c
}
// Dimensions sets the optional parameter "dimensions": A
// comma-separated list of YouTube Analytics dimensions, such as views
// or ageGroup,gender. See the Available Reports document for a list of
// the reports that you can retrieve and the dimensions used for those
// reports. Also see the Dimensions document for definitions of those
// dimensions.
func (c *ReportsQueryCall) Dimensions(dimensions string) *ReportsQueryCall {
c.opt_["dimensions"] = dimensions
return c
}
// Filters sets the optional parameter "filters": A list of filters that
// should be applied when retrieving YouTube Analytics data. The
// Available Reports document identifies the dimensions that can be used
// to filter each report, and the Dimensions document defines those
// dimensions. If a request uses multiple filters, join them together
// with a semicolon (;), and the returned result table will satisfy both
// filters. For example, a filters parameter value of
// video==dMH0bHeiRNg;country==IT restricts the result set to include
// data for the given video in Italy.
func (c *ReportsQueryCall) Filters(filters string) *ReportsQueryCall {
c.opt_["filters"] = filters
return c
}
// MaxResults sets the optional parameter "max-results": The maximum
// number of rows to include in the response.
func (c *ReportsQueryCall) MaxResults(maxResults int64) *ReportsQueryCall {
c.opt_["max-results"] = maxResults
return c
}
// Sort sets the optional parameter "sort": A comma-separated list of
// dimensions or metrics that determine the sort order for YouTube
// Analytics data. By default the sort order is ascending. The '-'
// prefix causes descending sort order.
func (c *ReportsQueryCall) Sort(sort string) *ReportsQueryCall {
c.opt_["sort"] = sort
return c
}
// StartIndex sets the optional parameter "start-index": An index of the
// first entity to retrieve. Use this parameter as a pagination
// mechanism along with the max-results parameter (one-based,
// inclusive).
func (c *ReportsQueryCall) StartIndex(startIndex int64) *ReportsQueryCall {
c.opt_["start-index"] = startIndex
return c
}
func (c *ReportsQueryCall) Do() (*ResultTable, error) {
var body io.Reader = nil
params := make(url.Values)
params.Set("alt", "json")
params.Set("end-date", fmt.Sprintf("%v", c.endDate))
params.Set("ids", fmt.Sprintf("%v", c.ids))
params.Set("metrics", fmt.Sprintf("%v", c.metrics))
params.Set("start-date", fmt.Sprintf("%v", c.startDate))
if v, ok := c.opt_["dimensions"]; ok {
params.Set("dimensions", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["filters"]; ok {
params.Set("filters", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["max-results"]; ok {
params.Set("max-results", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["sort"]; ok {
params.Set("sort", fmt.Sprintf("%v", v))
}
if v, ok := c.opt_["start-index"]; ok {
params.Set("start-index", fmt.Sprintf("%v", v))
}
urls := googleapi.ResolveRelative(c.s.BasePath, "reports")
urls += "?" + params.Encode()
req, _ := http.NewRequest("GET", urls, body)
googleapi.SetOpaque(req.URL)
req.Header.Set("User-Agent", "google-api-go-client/0.5")
res, err := c.s.client.Do(req)
if err != nil {
return nil, err
}
defer googleapi.CloseBody(res)
if err := googleapi.CheckResponse(res); err != nil {
return nil, err
}
ret := new(ResultTable)
if err := json.NewDecoder(res.Body).Decode(ret); err != nil {
return nil, err
}
return ret, nil
// {
// "description": "Retrieve your YouTube Analytics reports.",
// "httpMethod": "GET",
// "id": "youtubeAnalytics.reports.query",
// "parameterOrder": [
// "ids",
// "start-date",
// "end-date",
// "metrics"
// ],
// "parameters": {
// "dimensions": {
// "description": "A comma-separated list of YouTube Analytics dimensions, such as views or ageGroup,gender. See the Available Reports document for a list of the reports that you can retrieve and the dimensions used for those reports. Also see the Dimensions document for definitions of those dimensions.",
// "location": "query",
// "pattern": "[0-9a-zA-Z,]+",
// "type": "string"
// },
// "end-date": {
// "description": "The end date for fetching YouTube Analytics data. The value should be in YYYY-MM-DD format.",
// "location": "query",
// "pattern": "[0-9]{4}-[0-9]{2}-[0-9]{2}",
// "required": true,
// "type": "string"
// },
// "filters": {
// "description": "A list of filters that should be applied when retrieving YouTube Analytics data. The Available Reports document identifies the dimensions that can be used to filter each report, and the Dimensions document defines those dimensions. If a request uses multiple filters, join them together with a semicolon (;), and the returned result table will satisfy both filters. For example, a filters parameter value of video==dMH0bHeiRNg;country==IT restricts the result set to include data for the given video in Italy.",
// "location": "query",
// "type": "string"
// },
// "ids": {
// "description": "Identifies the YouTube channel or content owner for which you are retrieving YouTube Analytics data.\n- To request data for a YouTube user, set the ids parameter value to channel==CHANNEL_ID, where CHANNEL_ID specifies the unique YouTube channel ID.\n- To request data for a YouTube CMS content owner, set the ids parameter value to contentOwner==OWNER_NAME, where OWNER_NAME is the CMS name of the content owner.",
// "location": "query",
// "pattern": "[a-zA-Z]+==[a-zA-Z0-9_+-]+",
// "required": true,
// "type": "string"
// },
// "max-results": {
// "description": "The maximum number of rows to include in the response.",
// "format": "int32",
// "location": "query",
// "minimum": "1",
// "type": "integer"
// },
// "metrics": {
// "description": "A comma-separated list of YouTube Analytics metrics, such as views or likes,dislikes. See the Available Reports document for a list of the reports that you can retrieve and the metrics available in each report, and see the Metrics document for definitions of those metrics.",
// "location": "query",
// "pattern": "[0-9a-zA-Z,]+",
// "required": true,
// "type": "string"
// },
// "sort": {
// "description": "A comma-separated list of dimensions or metrics that determine the sort order for YouTube Analytics data. By default the sort order is ascending. The '-' prefix causes descending sort order.",
// "location": "query",
// "pattern": "[-0-9a-zA-Z,]+",
// "type": "string"
// },
// "start-date": {
// "description": "The start date for fetching YouTube Analytics data. The value should be in YYYY-MM-DD format.",
// "location": "query",
// "pattern": "[0-9]{4}-[0-9]{2}-[0-9]{2}",
// "required": true,
// "type": "string"
// },
// "start-index": {
// "description": "An index of the first entity to retrieve. Use this parameter as a pagination mechanism along with the max-results parameter (one-based, inclusive).",
// "format": "int32",
// "location": "query",
// "minimum": "1",
// "type": "integer"
// }
// },
// "path": "reports",
// "response": {
// "$ref": "ResultTable"
// },
// "scopes": [
// "https://www.googleapis.com/auth/yt-analytics-monetary.readonly",
// "https://www.googleapis.com/auth/yt-analytics.readonly"
// ]
// }
}
| third_party/src/code.google.com/p/google-api-go-client/youtubeanalytics/v1/youtubeanalytics-gen.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.00021510709484573454,
0.00017123165889643133,
0.00015421771968249232,
0.00017214541730936617,
0.00000723587481843424
] |
{
"id": 3,
"code_window": [
"\t\tmakePod(\"m1\", 8080),\n",
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(makePod(\"\", 8080), \"m3\")\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 43
} | // Copyright 2011 The goauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The oauth package provides support for making
// OAuth2-authenticated HTTP requests.
//
// Example usage:
//
// // Specify your configuration. (typically as a global variable)
// var config = &oauth.Config{
// ClientId: YOUR_CLIENT_ID,
// ClientSecret: YOUR_CLIENT_SECRET,
// Scope: "https://www.googleapis.com/auth/buzz",
// AuthURL: "https://accounts.google.com/o/oauth2/auth",
// TokenURL: "https://accounts.google.com/o/oauth2/token",
// RedirectURL: "http://you.example.org/handler",
// }
//
// // A landing page redirects to the OAuth provider to get the auth code.
// func landing(w http.ResponseWriter, r *http.Request) {
// http.Redirect(w, r, config.AuthCodeURL("foo"), http.StatusFound)
// }
//
// // The user will be redirected back to this handler, that takes the
// // "code" query parameter and Exchanges it for an access token.
// func handler(w http.ResponseWriter, r *http.Request) {
// t := &oauth.Transport{Config: config}
// t.Exchange(r.FormValue("code"))
// // The Transport now has a valid Token. Create an *http.Client
// // with which we can make authenticated API requests.
// c := t.Client()
// c.Post(...)
// // ...
// // btw, r.FormValue("state") == "foo"
// }
//
package oauth
import (
"encoding/json"
"io/ioutil"
"mime"
"net/http"
"net/url"
"os"
"strings"
"time"
)
type OAuthError struct {
prefix string
msg string
}
func (oe OAuthError) Error() string {
return "OAuthError: " + oe.prefix + ": " + oe.msg
}
// Cache specifies the methods that implement a Token cache.
type Cache interface {
Token() (*Token, error)
PutToken(*Token) error
}
// CacheFile implements Cache. Its value is the name of the file in which
// the Token is stored in JSON format.
type CacheFile string
func (f CacheFile) Token() (*Token, error) {
file, err := os.Open(string(f))
if err != nil {
return nil, OAuthError{"CacheFile.Token", err.Error()}
}
defer file.Close()
tok := &Token{}
if err := json.NewDecoder(file).Decode(tok); err != nil {
return nil, OAuthError{"CacheFile.Token", err.Error()}
}
return tok, nil
}
func (f CacheFile) PutToken(tok *Token) error {
file, err := os.OpenFile(string(f), os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0600)
if err != nil {
return OAuthError{"CacheFile.PutToken", err.Error()}
}
if err := json.NewEncoder(file).Encode(tok); err != nil {
file.Close()
return OAuthError{"CacheFile.PutToken", err.Error()}
}
if err := file.Close(); err != nil {
return OAuthError{"CacheFile.PutToken", err.Error()}
}
return nil
}
// Config is the configuration of an OAuth consumer.
type Config struct {
// ClientId is the OAuth client identifier used when communicating with
// the configured OAuth provider.
ClientId string
// ClientSecret is the OAuth client secret used when communicating with
// the configured OAuth provider.
ClientSecret string
// Scope identifies the level of access being requested. Multiple scope
// values should be provided as a space-delimited string.
Scope string
// AuthURL is the URL the user will be directed to in order to grant
// access.
AuthURL string
// TokenURL is the URL used to retrieve OAuth tokens.
TokenURL string
// RedirectURL is the URL to which the user will be returned after
// granting (or denying) access.
RedirectURL string
// TokenCache allows tokens to be cached for subsequent requests.
TokenCache Cache
AccessType string // Optional, "online" (default) or "offline", no refresh token if "online"
// ApprovalPrompt indicates whether the user should be
// re-prompted for consent. If set to "auto" (default) the
// user will be prompted only if they haven't previously
// granted consent and the code can only be exchanged for an
// access token.
// If set to "force" the user will always be prompted, and the
// code can be exchanged for a refresh token.
ApprovalPrompt string
}
// Token contains an end-user's tokens.
// This is the data you must store to persist authentication.
type Token struct {
AccessToken string
RefreshToken string
Expiry time.Time // If zero the token has no (known) expiry time.
Extra map[string]string // May be nil.
}
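// Expired reports whether the token's expiry time has passed. A zero Expiry
// means the token has no known expiry and is treated as still valid.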
func (t *Token) Expired() bool {
if t.Expiry.IsZero() {
return false
}
return t.Expiry.Before(time.Now())
}
// Transport implements http.RoundTripper. When configured with a valid
// Config and Token it can be used to make authenticated HTTP requests.
//
// t := &oauth.Transport{config}
// t.Exchange(code)
// // t now contains a valid Token
// r, _, err := t.Client().Get("http://example.org/url/requiring/auth")
//
// It will automatically refresh the Token if it can,
// updating the supplied Token in place.
type Transport struct {
*Config
*Token
// Transport is the HTTP transport to use when making requests.
// It will default to http.DefaultTransport if nil.
// (It should never be an oauth.Transport.)
Transport http.RoundTripper
}
// Client returns an *http.Client that makes OAuth-authenticated requests.
func (t *Transport) Client() *http.Client {
return &http.Client{Transport: t}
}
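// transport returns the configured RoundTripper, falling back to
// http.DefaultTransport when none is set.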
func (t *Transport) transport() http.RoundTripper {
if t.Transport != nil {
return t.Transport
}
return http.DefaultTransport
}
// AuthCodeURL returns a URL that the end-user should be redirected to,
// so that they may obtain an authorization code.
func (c *Config) AuthCodeURL(state string) string {
url_, err := url.Parse(c.AuthURL)
if err != nil {
panic("AuthURL malformed: " + err.Error())
}
q := url.Values{
"response_type": {"code"},
"client_id": {c.ClientId},
"redirect_uri": {c.RedirectURL},
"scope": {c.Scope},
"state": {state},
"access_type": {c.AccessType},
"approval_prompt": {c.ApprovalPrompt},
}.Encode()
if url_.RawQuery == "" {
url_.RawQuery = q
} else {
url_.RawQuery += "&" + q
}
return url_.String()
}
// Exchange takes a code and gets access Token from the remote server.
func (t *Transport) Exchange(code string) (*Token, error) {
if t.Config == nil {
return nil, OAuthError{"Exchange", "no Config supplied"}
}
// If the transport or the cache already has a token, it is
// passed to `updateToken` to preserve existing refresh token.
tok := t.Token
if tok == nil && t.TokenCache != nil {
tok, _ = t.TokenCache.Token()
}
if tok == nil {
tok = new(Token)
}
err := t.updateToken(tok, url.Values{
"grant_type": {"authorization_code"},
"redirect_uri": {t.RedirectURL},
"scope": {t.Scope},
"code": {code},
})
if err != nil {
return nil, err
}
t.Token = tok
if t.TokenCache != nil {
return tok, t.TokenCache.PutToken(tok)
}
return tok, nil
}
// RoundTrip executes a single HTTP transaction using the Transport's
// Token as authorization headers.
//
// This method will attempt to renew the Token if it has expired and may return
// an error related to that Token renewal before attempting the client request.
// If the Token cannot be renewed, a non-nil error value will be returned.
// If the Token is invalid callers should expect HTTP-level errors,
// as indicated by the Response's StatusCode.
func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
if t.Token == nil {
if t.Config == nil {
return nil, OAuthError{"RoundTrip", "no Config supplied"}
}
if t.TokenCache == nil {
return nil, OAuthError{"RoundTrip", "no Token supplied"}
}
var err error
t.Token, err = t.TokenCache.Token()
if err != nil {
return nil, err
}
}
// Refresh the Token if it has expired.
if t.Expired() {
if err := t.Refresh(); err != nil {
return nil, err
}
}
// To set the Authorization header, we must make a copy of the Request
// so that we don't modify the Request we were given.
// This is required by the specification of http.RoundTripper.
req = cloneRequest(req)
req.Header.Set("Authorization", "Bearer "+t.AccessToken)
// Make the HTTP request.
return t.transport().RoundTrip(req)
}
// cloneRequest returns a clone of the provided *http.Request.
// The clone is a shallow copy of the struct and its Header map.
func cloneRequest(r *http.Request) *http.Request {
// shallow copy of the struct
r2 := new(http.Request)
*r2 = *r
// deep copy of the Header
r2.Header = make(http.Header)
for k, s := range r.Header {
r2.Header[k] = s
}
return r2
}
// Refresh renews the Transport's AccessToken using its RefreshToken.
func (t *Transport) Refresh() error {
if t.Token == nil {
return OAuthError{"Refresh", "no existing Token"}
}
if t.RefreshToken == "" {
return OAuthError{"Refresh", "Token expired; no Refresh Token"}
}
if t.Config == nil {
return OAuthError{"Refresh", "no Config supplied"}
}
err := t.updateToken(t.Token, url.Values{
"grant_type": {"refresh_token"},
"refresh_token": {t.RefreshToken},
})
if err != nil {
return err
}
if t.TokenCache != nil {
return t.TokenCache.PutToken(t.Token)
}
return nil
}
func (t *Transport) updateToken(tok *Token, v url.Values) error {
v.Set("client_id", t.ClientId)
v.Set("client_secret", t.ClientSecret)
client := &http.Client{Transport: t.transport()}
req, err := http.NewRequest("POST", t.TokenURL, strings.NewReader(v.Encode()))
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.SetBasicAuth(t.ClientId, t.ClientSecret)
r, err := client.Do(req)
if err != nil {
return err
}
defer r.Body.Close()
if r.StatusCode != 200 {
return OAuthError{"updateToken", r.Status}
}
var b struct {
Access string `json:"access_token"`
Refresh string `json:"refresh_token"`
ExpiresIn time.Duration `json:"expires_in"`
Id string `json:"id_token"`
}
content, _, _ := mime.ParseMediaType(r.Header.Get("Content-Type"))
switch content {
case "application/x-www-form-urlencoded", "text/plain":
body, err := ioutil.ReadAll(r.Body)
if err != nil {
return err
}
vals, err := url.ParseQuery(string(body))
if err != nil {
return err
}
b.Access = vals.Get("access_token")
b.Refresh = vals.Get("refresh_token")
b.ExpiresIn, _ = time.ParseDuration(vals.Get("expires_in") + "s")
b.Id = vals.Get("id_token")
default:
if err = json.NewDecoder(r.Body).Decode(&b); err != nil {
return err
}
// The JSON parser treats the unitless ExpiresIn like 'ns' instead of 's' as above,
// so compensate here.
b.ExpiresIn *= time.Second
}
tok.AccessToken = b.Access
// Don't overwrite `RefreshToken` with an empty value
if len(b.Refresh) > 0 {
tok.RefreshToken = b.Refresh
}
if b.ExpiresIn == 0 {
tok.Expiry = time.Time{}
} else {
tok.Expiry = time.Now().Add(b.ExpiresIn)
}
if b.Id != "" {
if tok.Extra == nil {
tok.Extra = make(map[string]string)
}
tok.Extra["id_token"] = b.Id
}
return nil
}
| third_party/src/code.google.com/p/goauth2/oauth/oauth.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.00020448683062568307,
0.00017129255866166204,
0.00016045811935327947,
0.00017118148389272392,
0.0000075004932114097755
] |
{
"id": 4,
"code_window": [
"\t\tmakePod(\"m3\", 80, 443, 8085),\n",
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(makePod(\"\", 8080, 8081), \"m3\")\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 58
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package scheduler
import (
"math/rand"
"testing"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
)
func TestFirstFitSchedulerNothingScheduled(t *testing.T) {
fakeRegistry := FakePodLister{}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(&fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(api.Pod{}, "m3")
}
func TestFirstFitSchedulerFirstScheduled(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080), "m3")
}
func TestFirstFitSchedulerFirstScheduledComplicated(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 80, 8080),
makePod("m2", 8081, 8082, 8083),
makePod("m3", 80, 443, 8085),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080, 8081), "m3")
}
func TestFirstFitSchedulerFirstScheduledImpossible(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
makePod("m2", 8081),
makePod("m3", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectFailure(makePod("", 8080, 8081))
}
| pkg/scheduler/firstfit_test.go | 1 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.9982023239135742,
0.6089008450508118,
0.00017734932771418244,
0.9334487915039062,
0.47021549940109253
] |
{
"id": 4,
"code_window": [
"\t\tmakePod(\"m3\", 80, 443, 8085),\n",
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(makePod(\"\", 8080, 8081), \"m3\")\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 58
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kubecfg
import (
"encoding/json"
"io/ioutil"
"os"
"testing"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
"github.com/GoogleCloudPlatform/kubernetes/pkg/client"
"github.com/GoogleCloudPlatform/kubernetes/pkg/labels"
)
// TODO: This doesn't reduce typing enough to make it worth the less readable errors. Remove.
func expectNoError(t *testing.T, err error) {
if err != nil {
t.Errorf("Unexpected error: %#v", err)
}
}
type Action struct {
action string
value interface{}
}
type FakeKubeClient struct {
actions []Action
pods api.PodList
ctrl api.ReplicationController
}
func (client *FakeKubeClient) ListPods(selector labels.Selector) (api.PodList, error) {
client.actions = append(client.actions, Action{action: "list-pods"})
return client.pods, nil
}
func (client *FakeKubeClient) GetPod(name string) (api.Pod, error) {
client.actions = append(client.actions, Action{action: "get-pod", value: name})
return api.Pod{}, nil
}
func (client *FakeKubeClient) DeletePod(name string) error {
client.actions = append(client.actions, Action{action: "delete-pod", value: name})
return nil
}
func (client *FakeKubeClient) CreatePod(pod api.Pod) (api.Pod, error) {
client.actions = append(client.actions, Action{action: "create-pod"})
return api.Pod{}, nil
}
func (client *FakeKubeClient) UpdatePod(pod api.Pod) (api.Pod, error) {
client.actions = append(client.actions, Action{action: "update-pod", value: pod.ID})
return api.Pod{}, nil
}
func (client *FakeKubeClient) GetReplicationController(name string) (api.ReplicationController, error) {
client.actions = append(client.actions, Action{action: "get-controller", value: name})
return client.ctrl, nil
}
func (client *FakeKubeClient) CreateReplicationController(controller api.ReplicationController) (api.ReplicationController, error) {
client.actions = append(client.actions, Action{action: "create-controller", value: controller})
return api.ReplicationController{}, nil
}
func (client *FakeKubeClient) UpdateReplicationController(controller api.ReplicationController) (api.ReplicationController, error) {
client.actions = append(client.actions, Action{action: "update-controller", value: controller})
return api.ReplicationController{}, nil
}
func (client *FakeKubeClient) DeleteReplicationController(controller string) error {
client.actions = append(client.actions, Action{action: "delete-controller", value: controller})
return nil
}
func (client *FakeKubeClient) GetService(name string) (api.Service, error) {
client.actions = append(client.actions, Action{action: "get-controller", value: name})
return api.Service{}, nil
}
func (client *FakeKubeClient) CreateService(controller api.Service) (api.Service, error) {
client.actions = append(client.actions, Action{action: "create-service", value: controller})
return api.Service{}, nil
}
func (client *FakeKubeClient) UpdateService(controller api.Service) (api.Service, error) {
client.actions = append(client.actions, Action{action: "update-service", value: controller})
return api.Service{}, nil
}
func (client *FakeKubeClient) DeleteService(controller string) error {
client.actions = append(client.actions, Action{action: "delete-service", value: controller})
return nil
}
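// validateAction fails the test if the recorded action does not match the
// expected one.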
func validateAction(expectedAction, actualAction Action, t *testing.T) {
if expectedAction != actualAction {
t.Errorf("Unexpected action: %#v, expected: %#v", actualAction, expectedAction)
}
}
func TestUpdateWithPods(t *testing.T) {
client := FakeKubeClient{
pods: api.PodList{
Items: []api.Pod{
{JSONBase: api.JSONBase{ID: "pod-1"}},
{JSONBase: api.JSONBase{ID: "pod-2"}},
},
},
}
Update("foo", &client, 0)
if len(client.actions) != 4 {
t.Errorf("Unexpected action list %#v", client.actions)
}
validateAction(Action{action: "get-controller", value: "foo"}, client.actions[0], t)
validateAction(Action{action: "list-pods"}, client.actions[1], t)
// Update deletes the pods; it relies on the replication controller to replace them.
validateAction(Action{action: "delete-pod", value: "pod-1"}, client.actions[2], t)
validateAction(Action{action: "delete-pod", value: "pod-2"}, client.actions[3], t)
}
func TestUpdateNoPods(t *testing.T) {
client := FakeKubeClient{}
Update("foo", &client, 0)
if len(client.actions) != 2 {
t.Errorf("Unexpected action list %#v", client.actions)
}
validateAction(Action{action: "get-controller", value: "foo"}, client.actions[0], t)
validateAction(Action{action: "list-pods"}, client.actions[1], t)
}
func TestRunController(t *testing.T) {
fakeClient := FakeKubeClient{}
name := "name"
image := "foo/bar"
replicas := 3
RunController(image, name, replicas, &fakeClient, "8080:80", -1)
if len(fakeClient.actions) != 1 || fakeClient.actions[0].action != "create-controller" {
t.Errorf("Unexpected actions: %#v", fakeClient.actions)
}
controller := fakeClient.actions[0].value.(api.ReplicationController)
if controller.ID != name ||
controller.DesiredState.Replicas != replicas ||
controller.DesiredState.PodTemplate.DesiredState.Manifest.Containers[0].Image != image {
t.Errorf("Unexpected controller: %#v", controller)
}
}
func TestRunControllerWithService(t *testing.T) {
fakeClient := FakeKubeClient{}
name := "name"
image := "foo/bar"
replicas := 3
RunController(image, name, replicas, &fakeClient, "", 8000)
if len(fakeClient.actions) != 2 ||
fakeClient.actions[0].action != "create-controller" ||
fakeClient.actions[1].action != "create-service" {
t.Errorf("Unexpected actions: %#v", fakeClient.actions)
}
controller := fakeClient.actions[0].value.(api.ReplicationController)
if controller.ID != name ||
controller.DesiredState.Replicas != replicas ||
controller.DesiredState.PodTemplate.DesiredState.Manifest.Containers[0].Image != image {
t.Errorf("Unexpected controller: %#v", controller)
}
}
func TestStopController(t *testing.T) {
fakeClient := FakeKubeClient{}
name := "name"
StopController(name, &fakeClient)
if len(fakeClient.actions) != 2 {
t.Errorf("Unexpected actions: %#v", fakeClient.actions)
}
if fakeClient.actions[0].action != "get-controller" ||
fakeClient.actions[0].value.(string) != name {
t.Errorf("Unexpected action: %#v", fakeClient.actions[0])
}
controller := fakeClient.actions[1].value.(api.ReplicationController)
if fakeClient.actions[1].action != "update-controller" ||
controller.DesiredState.Replicas != 0 {
t.Errorf("Unexpected action: %#v", fakeClient.actions[1])
}
}
func TestResizeController(t *testing.T) {
fakeClient := FakeKubeClient{}
name := "name"
replicas := 17
ResizeController(name, replicas, &fakeClient)
if len(fakeClient.actions) != 2 {
t.Errorf("Unexpected actions: %#v", fakeClient.actions)
}
if fakeClient.actions[0].action != "get-controller" ||
fakeClient.actions[0].value.(string) != name {
t.Errorf("Unexpected action: %#v", fakeClient.actions[0])
}
controller := fakeClient.actions[1].value.(api.ReplicationController)
if fakeClient.actions[1].action != "update-controller" ||
controller.DesiredState.Replicas != 17 {
t.Errorf("Unexpected action: %#v", fakeClient.actions[1])
}
}
func TestCloudCfgDeleteController(t *testing.T) {
fakeClient := FakeKubeClient{}
name := "name"
err := DeleteController(name, &fakeClient)
expectNoError(t, err)
if len(fakeClient.actions) != 2 {
t.Errorf("Unexpected actions: %#v", fakeClient.actions)
}
if fakeClient.actions[0].action != "get-controller" ||
fakeClient.actions[0].value.(string) != name {
t.Errorf("Unexpected action: %#v", fakeClient.actions[0])
}
if fakeClient.actions[1].action != "delete-controller" ||
fakeClient.actions[1].value.(string) != name {
t.Errorf("Unexpected action: %#v", fakeClient.actions[1])
}
}
func TestCloudCfgDeleteControllerWithReplicas(t *testing.T) {
fakeClient := FakeKubeClient{
ctrl: api.ReplicationController{
DesiredState: api.ReplicationControllerState{
Replicas: 2,
},
},
}
name := "name"
err := DeleteController(name, &fakeClient)
if len(fakeClient.actions) != 1 {
t.Errorf("Unexpected actions: %#v", fakeClient.actions)
}
if fakeClient.actions[0].action != "get-controller" ||
fakeClient.actions[0].value.(string) != name {
t.Errorf("Unexpected action: %#v", fakeClient.actions[0])
}
if err == nil {
t.Errorf("Unexpected non-error.")
}
}
func TestLoadAuthInfo(t *testing.T) {
testAuthInfo := &client.AuthInfo{
User: "TestUser",
Password: "TestPassword",
}
aifile, err := ioutil.TempFile("", "testAuthInfo")
if err != nil {
t.Error("Could not open temp file")
}
defer os.Remove(aifile.Name())
defer aifile.Close()
ai, err := LoadAuthInfo(aifile.Name())
if err == nil {
t.Error("LoadAuthInfo didn't fail on empty file")
}
data, err := json.Marshal(testAuthInfo)
if err != nil {
t.Fatal("Unexpected JSON marshal error")
}
_, err = aifile.Write(data)
if err != nil {
t.Fatal("Unexpected error in writing test file")
}
ai, err = LoadAuthInfo(aifile.Name())
if err != nil {
t.Fatal(err)
}
if *testAuthInfo != *ai {
t.Error("Test data and loaded data are not equal")
}
}
func validatePort(t *testing.T, p api.Port, external int, internal int) {
if p.HostPort != external || p.ContainerPort != internal {
t.Errorf("Unexpected port: %#v != (%d, %d)", p, external, internal)
}
}
func TestMakePorts(t *testing.T) {
ports := makePorts("8080:80,8081:8081,443:444")
if len(ports) != 3 {
t.Errorf("Unexpected ports: %#v", ports)
}
validatePort(t, ports[0], 8080, 80)
validatePort(t, ports[1], 8081, 8081)
validatePort(t, ports[2], 443, 444)
}
| pkg/kubecfg/kubecfg_test.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.000737280584871769,
0.00020627233607228845,
0.0001639373367652297,
0.0001714800309855491,
0.00010893051512539387
] |
{
"id": 4,
"code_window": [
"\t\tmakePod(\"m3\", 80, 443, 8085),\n",
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(makePod(\"\", 8080, 8081), \"m3\")\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 58
} | package yaml
import (
"io"
"os"
)
func yaml_insert_token(parser *yaml_parser_t, pos int, token *yaml_token_t) {
//fmt.Println("yaml_insert_token", "pos:", pos, "typ:", token.typ, "head:", parser.tokens_head, "len:", len(parser.tokens))
// Check if we can move the queue at the beginning of the buffer.
if parser.tokens_head > 0 && len(parser.tokens) == cap(parser.tokens) {
if parser.tokens_head != len(parser.tokens) {
copy(parser.tokens, parser.tokens[parser.tokens_head:])
}
parser.tokens = parser.tokens[:len(parser.tokens)-parser.tokens_head]
parser.tokens_head = 0
}
parser.tokens = append(parser.tokens, *token)
if pos < 0 {
return
}
copy(parser.tokens[parser.tokens_head+pos+1:], parser.tokens[parser.tokens_head+pos:])
parser.tokens[parser.tokens_head+pos] = *token
}
// Create a new parser object.
func yaml_parser_initialize(parser *yaml_parser_t) bool {
*parser = yaml_parser_t{
raw_buffer: make([]byte, 0, input_raw_buffer_size),
buffer: make([]byte, 0, input_buffer_size),
}
return true
}
// Destroy a parser object.
func yaml_parser_delete(parser *yaml_parser_t) {
*parser = yaml_parser_t{}
}
// String read handler.
func yaml_string_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
if parser.input_pos == len(parser.input) {
return 0, io.EOF
}
n = copy(buffer, parser.input[parser.input_pos:])
parser.input_pos += n
return n, nil
}
// File read handler.
func yaml_file_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
return parser.input_file.Read(buffer)
}
// Set a string input.
func yaml_parser_set_input_string(parser *yaml_parser_t, input []byte) {
if parser.read_handler != nil {
panic("must set the input source only once")
}
parser.read_handler = yaml_string_read_handler
parser.input = input
parser.input_pos = 0
}
// Set a file input.
func yaml_parser_set_input_file(parser *yaml_parser_t, file *os.File) {
if parser.read_handler != nil {
panic("must set the input source only once")
}
parser.read_handler = yaml_file_read_handler
parser.input_file = file
}
// Set the source encoding.
func yaml_parser_set_encoding(parser *yaml_parser_t, encoding yaml_encoding_t) {
if parser.encoding != yaml_ANY_ENCODING {
panic("must set the encoding only once")
}
parser.encoding = encoding
}
// Create a new emitter object.
func yaml_emitter_initialize(emitter *yaml_emitter_t) bool {
*emitter = yaml_emitter_t{
buffer: make([]byte, output_buffer_size),
raw_buffer: make([]byte, 0, output_raw_buffer_size),
states: make([]yaml_emitter_state_t, 0, initial_stack_size),
events: make([]yaml_event_t, 0, initial_queue_size),
}
return true
}
// Destroy an emitter object.
func yaml_emitter_delete(emitter *yaml_emitter_t) {
*emitter = yaml_emitter_t{}
}
// String write handler.
func yaml_string_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
*emitter.output_buffer = append(*emitter.output_buffer, buffer...)
return nil
}
// File write handler.
func yaml_file_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
_, err := emitter.output_file.Write(buffer)
return err
}
// Set a string output.
func yaml_emitter_set_output_string(emitter *yaml_emitter_t, output_buffer *[]byte) {
if emitter.write_handler != nil {
panic("must set the output target only once")
}
emitter.write_handler = yaml_string_write_handler
emitter.output_buffer = output_buffer
}
// Set a file output.
func yaml_emitter_set_output_file(emitter *yaml_emitter_t, file io.Writer) {
if emitter.write_handler != nil {
panic("must set the output target only once")
}
emitter.write_handler = yaml_file_write_handler
emitter.output_file = file
}
// Set the output encoding.
func yaml_emitter_set_encoding(emitter *yaml_emitter_t, encoding yaml_encoding_t) {
if emitter.encoding != yaml_ANY_ENCODING {
panic("must set the output encoding only once")
}
emitter.encoding = encoding
}
// Set the canonical output style.
func yaml_emitter_set_canonical(emitter *yaml_emitter_t, canonical bool) {
emitter.canonical = canonical
}
// Set the indentation increment.
func yaml_emitter_set_indent(emitter *yaml_emitter_t, indent int) {
if indent < 2 || indent > 9 {
indent = 2
}
emitter.best_indent = indent
}
// Set the preferred line width.
func yaml_emitter_set_width(emitter *yaml_emitter_t, width int) {
if width < 0 {
width = -1
}
emitter.best_width = width
}
// Set if unescaped non-ASCII characters are allowed.
func yaml_emitter_set_unicode(emitter *yaml_emitter_t, unicode bool) {
emitter.unicode = unicode
}
// Set the preferred line break character.
func yaml_emitter_set_break(emitter *yaml_emitter_t, line_break yaml_break_t) {
emitter.line_break = line_break
}
///*
// * Destroy a token object.
// */
//
//YAML_DECLARE(void)
//yaml_token_delete(yaml_token_t *token)
//{
// assert(token); // Non-NULL token object expected.
//
// switch (token.type)
// {
// case YAML_TAG_DIRECTIVE_TOKEN:
// yaml_free(token.data.tag_directive.handle);
// yaml_free(token.data.tag_directive.prefix);
// break;
//
// case YAML_ALIAS_TOKEN:
// yaml_free(token.data.alias.value);
// break;
//
// case YAML_ANCHOR_TOKEN:
// yaml_free(token.data.anchor.value);
// break;
//
// case YAML_TAG_TOKEN:
// yaml_free(token.data.tag.handle);
// yaml_free(token.data.tag.suffix);
// break;
//
// case YAML_SCALAR_TOKEN:
// yaml_free(token.data.scalar.value);
// break;
//
// default:
// break;
// }
//
// memset(token, 0, sizeof(yaml_token_t));
//}
//
///*
// * Check if a string is a valid UTF-8 sequence.
// *
// * Check 'reader.c' for more details on UTF-8 encoding.
// */
//
//static int
//yaml_check_utf8(yaml_char_t *start, size_t length)
//{
// yaml_char_t *end = start+length;
// yaml_char_t *pointer = start;
//
// while (pointer < end) {
// unsigned char octet;
// unsigned int width;
// unsigned int value;
// size_t k;
//
// octet = pointer[0];
// width = (octet & 0x80) == 0x00 ? 1 :
// (octet & 0xE0) == 0xC0 ? 2 :
// (octet & 0xF0) == 0xE0 ? 3 :
// (octet & 0xF8) == 0xF0 ? 4 : 0;
// value = (octet & 0x80) == 0x00 ? octet & 0x7F :
// (octet & 0xE0) == 0xC0 ? octet & 0x1F :
// (octet & 0xF0) == 0xE0 ? octet & 0x0F :
// (octet & 0xF8) == 0xF0 ? octet & 0x07 : 0;
// if (!width) return 0;
// if (pointer+width > end) return 0;
// for (k = 1; k < width; k ++) {
// octet = pointer[k];
// if ((octet & 0xC0) != 0x80) return 0;
// value = (value << 6) + (octet & 0x3F);
// }
// if (!((width == 1) ||
// (width == 2 && value >= 0x80) ||
// (width == 3 && value >= 0x800) ||
// (width == 4 && value >= 0x10000))) return 0;
//
// pointer += width;
// }
//
// return 1;
//}
//
// Create STREAM-START.
func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) bool {
*event = yaml_event_t{
typ: yaml_STREAM_START_EVENT,
encoding: encoding,
}
return true
}
// Create STREAM-END.
func yaml_stream_end_event_initialize(event *yaml_event_t) bool {
*event = yaml_event_t{
typ: yaml_STREAM_END_EVENT,
}
return true
}
// Create DOCUMENT-START.
func yaml_document_start_event_initialize(event *yaml_event_t, version_directive *yaml_version_directive_t,
tag_directives []yaml_tag_directive_t, implicit bool) bool {
*event = yaml_event_t{
typ: yaml_DOCUMENT_START_EVENT,
version_directive: version_directive,
tag_directives: tag_directives,
implicit: implicit,
}
return true
}
// Create DOCUMENT-END.
func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) bool {
*event = yaml_event_t{
typ: yaml_DOCUMENT_END_EVENT,
implicit: implicit,
}
return true
}
///*
// * Create ALIAS.
// */
//
//YAML_DECLARE(int)
//yaml_alias_event_initialize(event *yaml_event_t, anchor *yaml_char_t)
//{
// mark yaml_mark_t = { 0, 0, 0 }
// anchor_copy *yaml_char_t = NULL
//
// assert(event) // Non-NULL event object is expected.
// assert(anchor) // Non-NULL anchor is expected.
//
// if (!yaml_check_utf8(anchor, strlen((char *)anchor))) return 0
//
// anchor_copy = yaml_strdup(anchor)
// if (!anchor_copy)
// return 0
//
// ALIAS_EVENT_INIT(*event, anchor_copy, mark, mark)
//
// return 1
//}
// Create SCALAR.
func yaml_scalar_event_initialize(event *yaml_event_t, anchor, tag, value []byte, plain_implicit, quoted_implicit bool, style yaml_scalar_style_t) bool {
*event = yaml_event_t{
typ: yaml_SCALAR_EVENT,
anchor: anchor,
tag: tag,
value: value,
implicit: plain_implicit,
quoted_implicit: quoted_implicit,
style: yaml_style_t(style),
}
return true
}
// Create SEQUENCE-START.
func yaml_sequence_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_sequence_style_t) bool {
*event = yaml_event_t{
typ: yaml_SEQUENCE_START_EVENT,
anchor: anchor,
tag: tag,
implicit: implicit,
style: yaml_style_t(style),
}
return true
}
// Create SEQUENCE-END.
func yaml_sequence_end_event_initialize(event *yaml_event_t) bool {
*event = yaml_event_t{
typ: yaml_SEQUENCE_END_EVENT,
}
return true
}
// Create MAPPING-START.
func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) bool {
*event = yaml_event_t{
typ: yaml_MAPPING_START_EVENT,
anchor: anchor,
tag: tag,
implicit: implicit,
style: yaml_style_t(style),
}
return true
}
// Create MAPPING-END.
func yaml_mapping_end_event_initialize(event *yaml_event_t) bool {
*event = yaml_event_t{
typ: yaml_MAPPING_END_EVENT,
}
return true
}
// Destroy an event object.
func yaml_event_delete(event *yaml_event_t) {
*event = yaml_event_t{}
}
///*
// * Create a document object.
// */
//
//YAML_DECLARE(int)
//yaml_document_initialize(document *yaml_document_t,
// version_directive *yaml_version_directive_t,
// tag_directives_start *yaml_tag_directive_t,
// tag_directives_end *yaml_tag_directive_t,
// start_implicit int, end_implicit int)
//{
// struct {
// error yaml_error_type_t
// } context
// struct {
// start *yaml_node_t
// end *yaml_node_t
// top *yaml_node_t
// } nodes = { NULL, NULL, NULL }
// version_directive_copy *yaml_version_directive_t = NULL
// struct {
// start *yaml_tag_directive_t
// end *yaml_tag_directive_t
// top *yaml_tag_directive_t
// } tag_directives_copy = { NULL, NULL, NULL }
// value yaml_tag_directive_t = { NULL, NULL }
// mark yaml_mark_t = { 0, 0, 0 }
//
// assert(document) // Non-NULL document object is expected.
// assert((tag_directives_start && tag_directives_end) ||
// (tag_directives_start == tag_directives_end))
// // Valid tag directives are expected.
//
// if (!STACK_INIT(&context, nodes, INITIAL_STACK_SIZE)) goto error
//
// if (version_directive) {
// version_directive_copy = yaml_malloc(sizeof(yaml_version_directive_t))
// if (!version_directive_copy) goto error
// version_directive_copy.major = version_directive.major
// version_directive_copy.minor = version_directive.minor
// }
//
// if (tag_directives_start != tag_directives_end) {
// tag_directive *yaml_tag_directive_t
// if (!STACK_INIT(&context, tag_directives_copy, INITIAL_STACK_SIZE))
// goto error
// for (tag_directive = tag_directives_start
// tag_directive != tag_directives_end; tag_directive ++) {
// assert(tag_directive.handle)
// assert(tag_directive.prefix)
// if (!yaml_check_utf8(tag_directive.handle,
// strlen((char *)tag_directive.handle)))
// goto error
// if (!yaml_check_utf8(tag_directive.prefix,
// strlen((char *)tag_directive.prefix)))
// goto error
// value.handle = yaml_strdup(tag_directive.handle)
// value.prefix = yaml_strdup(tag_directive.prefix)
// if (!value.handle || !value.prefix) goto error
// if (!PUSH(&context, tag_directives_copy, value))
// goto error
// value.handle = NULL
// value.prefix = NULL
// }
// }
//
// DOCUMENT_INIT(*document, nodes.start, nodes.end, version_directive_copy,
// tag_directives_copy.start, tag_directives_copy.top,
// start_implicit, end_implicit, mark, mark)
//
// return 1
//
//error:
// STACK_DEL(&context, nodes)
// yaml_free(version_directive_copy)
// while (!STACK_EMPTY(&context, tag_directives_copy)) {
// value yaml_tag_directive_t = POP(&context, tag_directives_copy)
// yaml_free(value.handle)
// yaml_free(value.prefix)
// }
// STACK_DEL(&context, tag_directives_copy)
// yaml_free(value.handle)
// yaml_free(value.prefix)
//
// return 0
//}
//
///*
// * Destroy a document object.
// */
//
//YAML_DECLARE(void)
//yaml_document_delete(document *yaml_document_t)
//{
// struct {
// error yaml_error_type_t
// } context
// tag_directive *yaml_tag_directive_t
//
// context.error = YAML_NO_ERROR // Eliminate a compiler warning.
//
// assert(document) // Non-NULL document object is expected.
//
// while (!STACK_EMPTY(&context, document.nodes)) {
// node yaml_node_t = POP(&context, document.nodes)
// yaml_free(node.tag)
// switch (node.type) {
// case YAML_SCALAR_NODE:
// yaml_free(node.data.scalar.value)
// break
// case YAML_SEQUENCE_NODE:
// STACK_DEL(&context, node.data.sequence.items)
// break
// case YAML_MAPPING_NODE:
// STACK_DEL(&context, node.data.mapping.pairs)
// break
// default:
// assert(0) // Should not happen.
// }
// }
// STACK_DEL(&context, document.nodes)
//
// yaml_free(document.version_directive)
// for (tag_directive = document.tag_directives.start
// tag_directive != document.tag_directives.end
// tag_directive++) {
// yaml_free(tag_directive.handle)
// yaml_free(tag_directive.prefix)
// }
// yaml_free(document.tag_directives.start)
//
// memset(document, 0, sizeof(yaml_document_t))
//}
//
///**
// * Get a document node.
// */
//
//YAML_DECLARE(yaml_node_t *)
//yaml_document_get_node(document *yaml_document_t, index int)
//{
// assert(document) // Non-NULL document object is expected.
//
// if (index > 0 && document.nodes.start + index <= document.nodes.top) {
// return document.nodes.start + index - 1
// }
// return NULL
//}
//
///**
// * Get the root object.
// */
//
//YAML_DECLARE(yaml_node_t *)
//yaml_document_get_root_node(document *yaml_document_t)
//{
// assert(document) // Non-NULL document object is expected.
//
// if (document.nodes.top != document.nodes.start) {
// return document.nodes.start
// }
// return NULL
//}
//
///*
// * Add a scalar node to a document.
// */
//
//YAML_DECLARE(int)
//yaml_document_add_scalar(document *yaml_document_t,
// tag *yaml_char_t, value *yaml_char_t, length int,
// style yaml_scalar_style_t)
//{
// struct {
// error yaml_error_type_t
// } context
// mark yaml_mark_t = { 0, 0, 0 }
// tag_copy *yaml_char_t = NULL
// value_copy *yaml_char_t = NULL
// node yaml_node_t
//
// assert(document) // Non-NULL document object is expected.
// assert(value) // Non-NULL value is expected.
//
// if (!tag) {
// tag = (yaml_char_t *)YAML_DEFAULT_SCALAR_TAG
// }
//
// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
// tag_copy = yaml_strdup(tag)
// if (!tag_copy) goto error
//
// if (length < 0) {
// length = strlen((char *)value)
// }
//
// if (!yaml_check_utf8(value, length)) goto error
// value_copy = yaml_malloc(length+1)
// if (!value_copy) goto error
// memcpy(value_copy, value, length)
// value_copy[length] = '\0'
//
// SCALAR_NODE_INIT(node, tag_copy, value_copy, length, style, mark, mark)
// if (!PUSH(&context, document.nodes, node)) goto error
//
// return document.nodes.top - document.nodes.start
//
//error:
// yaml_free(tag_copy)
// yaml_free(value_copy)
//
// return 0
//}
//
///*
// * Add a sequence node to a document.
// */
//
//YAML_DECLARE(int)
//yaml_document_add_sequence(document *yaml_document_t,
// tag *yaml_char_t, style yaml_sequence_style_t)
//{
// struct {
// error yaml_error_type_t
// } context
// mark yaml_mark_t = { 0, 0, 0 }
// tag_copy *yaml_char_t = NULL
// struct {
// start *yaml_node_item_t
// end *yaml_node_item_t
// top *yaml_node_item_t
// } items = { NULL, NULL, NULL }
// node yaml_node_t
//
// assert(document) // Non-NULL document object is expected.
//
// if (!tag) {
// tag = (yaml_char_t *)YAML_DEFAULT_SEQUENCE_TAG
// }
//
// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
// tag_copy = yaml_strdup(tag)
// if (!tag_copy) goto error
//
// if (!STACK_INIT(&context, items, INITIAL_STACK_SIZE)) goto error
//
// SEQUENCE_NODE_INIT(node, tag_copy, items.start, items.end,
// style, mark, mark)
// if (!PUSH(&context, document.nodes, node)) goto error
//
// return document.nodes.top - document.nodes.start
//
//error:
// STACK_DEL(&context, items)
// yaml_free(tag_copy)
//
// return 0
//}
//
///*
// * Add a mapping node to a document.
// */
//
//YAML_DECLARE(int)
//yaml_document_add_mapping(document *yaml_document_t,
// tag *yaml_char_t, style yaml_mapping_style_t)
//{
// struct {
// error yaml_error_type_t
// } context
// mark yaml_mark_t = { 0, 0, 0 }
// tag_copy *yaml_char_t = NULL
// struct {
// start *yaml_node_pair_t
// end *yaml_node_pair_t
// top *yaml_node_pair_t
// } pairs = { NULL, NULL, NULL }
// node yaml_node_t
//
// assert(document) // Non-NULL document object is expected.
//
// if (!tag) {
// tag = (yaml_char_t *)YAML_DEFAULT_MAPPING_TAG
// }
//
// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
// tag_copy = yaml_strdup(tag)
// if (!tag_copy) goto error
//
// if (!STACK_INIT(&context, pairs, INITIAL_STACK_SIZE)) goto error
//
// MAPPING_NODE_INIT(node, tag_copy, pairs.start, pairs.end,
// style, mark, mark)
// if (!PUSH(&context, document.nodes, node)) goto error
//
// return document.nodes.top - document.nodes.start
//
//error:
// STACK_DEL(&context, pairs)
// yaml_free(tag_copy)
//
// return 0
//}
//
///*
// * Append an item to a sequence node.
// */
//
//YAML_DECLARE(int)
//yaml_document_append_sequence_item(document *yaml_document_t,
// sequence int, item int)
//{
// struct {
// error yaml_error_type_t
// } context
//
// assert(document) // Non-NULL document is required.
// assert(sequence > 0
// && document.nodes.start + sequence <= document.nodes.top)
// // Valid sequence id is required.
// assert(document.nodes.start[sequence-1].type == YAML_SEQUENCE_NODE)
// // A sequence node is required.
// assert(item > 0 && document.nodes.start + item <= document.nodes.top)
// // Valid item id is required.
//
// if (!PUSH(&context,
// document.nodes.start[sequence-1].data.sequence.items, item))
// return 0
//
// return 1
//}
//
///*
// * Append a pair of a key and a value to a mapping node.
// */
//
//YAML_DECLARE(int)
//yaml_document_append_mapping_pair(document *yaml_document_t,
// mapping int, key int, value int)
//{
// struct {
// error yaml_error_type_t
// } context
//
// pair yaml_node_pair_t
//
// assert(document) // Non-NULL document is required.
// assert(mapping > 0
// && document.nodes.start + mapping <= document.nodes.top)
// // Valid mapping id is required.
// assert(document.nodes.start[mapping-1].type == YAML_MAPPING_NODE)
// // A mapping node is required.
// assert(key > 0 && document.nodes.start + key <= document.nodes.top)
// // Valid key id is required.
// assert(value > 0 && document.nodes.start + value <= document.nodes.top)
// // Valid value id is required.
//
// pair.key = key
// pair.value = value
//
// if (!PUSH(&context,
// document.nodes.start[mapping-1].data.mapping.pairs, pair))
// return 0
//
// return 1
//}
//
//
| third_party/src/gonuts.org/v1/yaml/apic.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.0001828227541409433,
0.00017203364404849708,
0.00016396501450799406,
0.000172747066244483,
0.000003950561676901998
] |
{
"id": 4,
"code_window": [
"\t\tmakePod(\"m3\", 80, 443, 8085),\n",
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectSchedule(makePod(\"\", 8080, 8081), \"m3\")\n",
"}\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 58
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package registry
import (
"fmt"
"sort"
"sync"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
"github.com/GoogleCloudPlatform/kubernetes/pkg/apiserver"
"github.com/GoogleCloudPlatform/kubernetes/pkg/labels"
"github.com/GoogleCloudPlatform/kubernetes/pkg/util"
)
var ErrDoesNotExist = fmt.Errorf("The requested resource does not exist.")
// Keep track of a set of minions. Safe for concurrent reading/writing.
type MinionRegistry interface {
List() (currentMinions []string, err error)
Insert(minion string) error
Delete(minion string) error
Contains(minion string) (bool, error)
}
// Initialize a minion registry with a list of minions.
func MakeMinionRegistry(minions []string) MinionRegistry {
m := &minionList{
minions: util.StringSet{},
}
for _, minion := range minions {
m.minions.Insert(minion)
}
return m
}
type minionList struct {
minions util.StringSet
lock sync.Mutex
}
func (m *minionList) List() (currentMinions []string, err error) {
m.lock.Lock()
defer m.lock.Unlock()
// Convert from map to []string
for minion := range m.minions {
currentMinions = append(currentMinions, minion)
}
sort.StringSlice(currentMinions).Sort()
return
}
func (m *minionList) Insert(newMinion string) error {
m.lock.Lock()
defer m.lock.Unlock()
m.minions.Insert(newMinion)
return nil
}
func (m *minionList) Delete(minion string) error {
m.lock.Lock()
defer m.lock.Unlock()
m.minions.Delete(minion)
return nil
}
func (m *minionList) Contains(minion string) (bool, error) {
m.lock.Lock()
defer m.lock.Unlock()
return m.minions.Has(minion), nil
}
// MinionRegistryStorage implements the RESTStorage interface, backed by a MinionRegistry.
type MinionRegistryStorage struct {
registry MinionRegistry
}
func MakeMinionRegistryStorage(m MinionRegistry) apiserver.RESTStorage {
return &MinionRegistryStorage{
registry: m,
}
}
func (storage *MinionRegistryStorage) toApiMinion(name string) api.Minion {
return api.Minion{JSONBase: api.JSONBase{ID: name}}
}
func (storage *MinionRegistryStorage) List(selector labels.Selector) (interface{}, error) {
nameList, err := storage.registry.List()
if err != nil {
return nil, err
}
var list api.MinionList
for _, name := range nameList {
list.Items = append(list.Items, storage.toApiMinion(name))
}
return list, nil
}
func (storage *MinionRegistryStorage) Get(id string) (interface{}, error) {
exists, err := storage.registry.Contains(id)
if !exists {
return nil, ErrDoesNotExist
}
return storage.toApiMinion(id), err
}
func (storage *MinionRegistryStorage) Extract(body []byte) (interface{}, error) {
var minion api.Minion
err := api.DecodeInto(body, &minion)
return minion, err
}
func (storage *MinionRegistryStorage) Create(obj interface{}) (<-chan interface{}, error) {
minion, ok := obj.(api.Minion)
if !ok {
return nil, fmt.Errorf("not a minion: %#v", obj)
}
if minion.ID == "" {
return nil, fmt.Errorf("ID should not be empty: %#v", minion)
}
return apiserver.MakeAsync(func() (interface{}, error) {
err := storage.registry.Insert(minion.ID)
if err != nil {
return nil, err
}
contains, err := storage.registry.Contains(minion.ID)
if err != nil {
return nil, err
}
if contains {
return storage.toApiMinion(minion.ID), nil
}
return nil, fmt.Errorf("unable to add minion %#v", minion)
}), nil
}
func (storage *MinionRegistryStorage) Update(minion interface{}) (<-chan interface{}, error) {
return nil, fmt.Errorf("Minions can only be created (inserted) and deleted.")
}
func (storage *MinionRegistryStorage) Delete(id string) (<-chan interface{}, error) {
exists, err := storage.registry.Contains(id)
if !exists {
return nil, ErrDoesNotExist
}
if err != nil {
return nil, err
}
return apiserver.MakeAsync(func() (interface{}, error) {
return api.Status{Status: api.StatusSuccess}, storage.registry.Delete(id)
}), nil
}
| pkg/registry/minion_registry.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.004236810840666294,
0.0007144464179873466,
0.00016100250650197268,
0.00018257783085573465,
0.001090671168640256
] |
{
"id": 5,
"code_window": [
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectFailure(makePod(\"\", 8080, 8081))\n",
"}"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 73
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package scheduler
import (
"math/rand"
"testing"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
)
func TestFirstFitSchedulerNothingScheduled(t *testing.T) {
fakeRegistry := FakePodLister{}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(&fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(api.Pod{}, "m3")
}
func TestFirstFitSchedulerFirstScheduled(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080), "m3")
}
func TestFirstFitSchedulerFirstScheduledComplicated(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 80, 8080),
makePod("m2", 8081, 8082, 8083),
makePod("m3", 80, 443, 8085),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectSchedule(makePod("", 8080, 8081), "m3")
}
func TestFirstFitSchedulerFirstScheduledImpossible(t *testing.T) {
fakeRegistry := FakePodLister{
makePod("m1", 8080),
makePod("m2", 8081),
makePod("m3", 8080),
}
r := rand.New(rand.NewSource(0))
st := schedulerTester{
t: t,
scheduler: MakeFirstFitScheduler(fakeRegistry, r),
minionLister: FakeMinionLister{"m1", "m2", "m3"},
}
st.expectFailure(makePod("", 8080, 8081))
}
| pkg/scheduler/firstfit_test.go | 1 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.9983673691749573,
0.6234907507896423,
0.0001738026476232335,
0.9886106252670288,
0.47877106070518494
] |
{
"id": 5,
"code_window": [
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectFailure(makePod(\"\", 8080, 8081))\n",
"}"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 73
} | // Copyright 2014 go-dockerclient authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package docker
import (
"net/http"
"net/url"
"reflect"
"sort"
"testing"
"github.com/fsouza/go-dockerclient/engine"
)
type DockerVersion struct {
Version string
GitCommit string
GoVersion string
}
func TestVersion(t *testing.T) {
body := `{
"Version":"0.2.2",
"GitCommit":"5a2a5cc+CHANGES",
"GoVersion":"go1.0.3"
}`
fakeRT := FakeRoundTripper{message: body, status: http.StatusOK}
client := newTestClient(&fakeRT)
expected := DockerVersion{
Version: "0.2.2",
GitCommit: "5a2a5cc+CHANGES",
GoVersion: "go1.0.3",
}
version, err := client.Version()
if err != nil {
t.Fatal(err)
}
if result := version.Get("Version"); result != expected.Version {
t.Errorf("Version(): Wrong result. Want %#v. Got %#v.", expected.Version, version.Get("Version"))
}
if result := version.Get("GitCommit"); result != expected.GitCommit {
t.Errorf("GitCommit(): Wrong result. Want %#v. Got %#v.", expected.GitCommit, version.Get("GitCommit"))
}
if result := version.Get("GoVersion"); result != expected.GoVersion {
t.Errorf("GoVersion(): Wrong result. Want %#v. Got %#v.", expected.GoVersion, version.Get("GoVersion"))
}
req := fakeRT.requests[0]
if req.Method != "GET" {
t.Errorf("Version(): wrong request method. Want GET. Got %s.", req.Method)
}
u, _ := url.Parse(client.getURL("/version"))
if req.URL.Path != u.Path {
t.Errorf("Version(): wrong request path. Want %q. Got %q.", u.Path, req.URL.Path)
}
}
func TestVersionError(t *testing.T) {
fakeRT := &FakeRoundTripper{message: "internal error", status: http.StatusInternalServerError}
client := newTestClient(fakeRT)
version, err := client.Version()
if version != nil {
t.Errorf("Version(): expected <nil> value, got %#v.", version)
}
if err == nil {
t.Error("Version(): unexpected <nil> error")
}
}
func TestInfo(t *testing.T) {
body := `{
"Containers":11,
"Images":16,
"Debug":0,
"NFd":11,
"NGoroutines":21,
"MemoryLimit":1,
"SwapLimit":0
}`
fakeRT := FakeRoundTripper{message: body, status: http.StatusOK}
client := newTestClient(&fakeRT)
expected := engine.Env{}
expected.SetInt("Containers", 11)
expected.SetInt("Images", 16)
expected.SetBool("Debug", false)
expected.SetInt("NFd", 11)
expected.SetInt("NGoroutines", 21)
expected.SetBool("MemoryLimit", true)
expected.SetBool("SwapLimit", false)
info, err := client.Info()
if err != nil {
t.Fatal(err)
}
infoSlice := []string(*info)
expectedSlice := []string(expected)
sort.Strings(infoSlice)
sort.Strings(expectedSlice)
if !reflect.DeepEqual(expectedSlice, infoSlice) {
t.Errorf("Info(): Wrong result.\nWant %#v.\nGot %#v.", expected, *info)
}
req := fakeRT.requests[0]
if req.Method != "GET" {
t.Errorf("Info(): Wrong HTTP method. Want GET. Got %s.", req.Method)
}
u, _ := url.Parse(client.getURL("/info"))
if req.URL.Path != u.Path {
t.Errorf("Info(): Wrong request path. Want %q. Got %q.", u.Path, req.URL.Path)
}
}
func TestInfoError(t *testing.T) {
fakeRT := &FakeRoundTripper{message: "internal error", status: http.StatusInternalServerError}
client := newTestClient(fakeRT)
version, err := client.Info()
if version != nil {
t.Errorf("Info(): expected <nil> value, got %#v.", version)
}
if err == nil {
t.Error("Info(): unexpected <nil> error")
}
}
| third_party/src/github.com/fsouza/go-dockerclient/misc_test.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.00020527154265437275,
0.0001724577887216583,
0.000164912678883411,
0.00017027683497872204,
0.00001025146138999844
] |
{
"id": 5,
"code_window": [
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectFailure(makePod(\"\", 8080, 8081))\n",
"}"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 73
} | /*
Copyright 2014 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"encoding/json"
"fmt"
"math/rand"
"time"
"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
"github.com/GoogleCloudPlatform/kubernetes/pkg/client"
"github.com/GoogleCloudPlatform/kubernetes/pkg/labels"
"github.com/GoogleCloudPlatform/kubernetes/pkg/tools"
"github.com/GoogleCloudPlatform/kubernetes/pkg/util"
"github.com/coreos/go-etcd/etcd"
"github.com/golang/glog"
)
// ReplicationManager is responsible for synchronizing ReplicationController objects stored in etcd
// with actual running pods.
// TODO: Remove the etcd dependency and re-factor in terms of a generic watch interface
type ReplicationManager struct {
etcdClient tools.EtcdClient
kubeClient client.Interface
podControl PodControlInterface
syncTime <-chan time.Time
// To allow injection of syncReplicationController for testing.
syncHandler func(controllerSpec api.ReplicationController) error
}
// PodControlInterface is an interface that knows how to add or delete pods
// created as an interface to allow testing.
type PodControlInterface interface {
// createReplica creates new replicated pods according to the spec.
createReplica(controllerSpec api.ReplicationController)
// deletePod deletes the pod identified by podID.
deletePod(podID string) error
}
// RealPodControl is the default implementation of PodControlInterface.
type RealPodControl struct {
kubeClient client.Interface
}
func (r RealPodControl) createReplica(controllerSpec api.ReplicationController) {
labels := controllerSpec.DesiredState.PodTemplate.Labels
if labels != nil {
labels["replicationController"] = controllerSpec.ID
}
pod := api.Pod{
JSONBase: api.JSONBase{
ID: fmt.Sprintf("%08x", rand.Uint32()),
},
DesiredState: controllerSpec.DesiredState.PodTemplate.DesiredState,
Labels: controllerSpec.DesiredState.PodTemplate.Labels,
}
_, err := r.kubeClient.CreatePod(pod)
if err != nil {
glog.Errorf("%#v\n", err)
}
}
func (r RealPodControl) deletePod(podID string) error {
return r.kubeClient.DeletePod(podID)
}
// MakeReplicationManager creates a new ReplicationManager.
func MakeReplicationManager(etcdClient tools.EtcdClient, kubeClient client.Interface) *ReplicationManager {
rm := &ReplicationManager{
kubeClient: kubeClient,
etcdClient: etcdClient,
podControl: RealPodControl{
kubeClient: kubeClient,
},
}
rm.syncHandler = func(controllerSpec api.ReplicationController) error {
return rm.syncReplicationController(controllerSpec)
}
return rm
}
// Run begins watching and syncing.
func (rm *ReplicationManager) Run(period time.Duration) {
rm.syncTime = time.Tick(period)
go util.Forever(func() { rm.watchControllers() }, period)
}
func (rm *ReplicationManager) watchControllers() {
watchChannel := make(chan *etcd.Response)
stop := make(chan bool)
defer func() {
// Ensure that the call to watch ends.
close(stop)
}()
go func() {
defer util.HandleCrash()
_, err := rm.etcdClient.Watch("/registry/controllers", 0, true, watchChannel, stop)
if err == etcd.ErrWatchStoppedByUser {
close(watchChannel)
} else {
glog.Errorf("etcd.Watch stopped unexpectedly: %v (%#v)", err, err)
}
}()
for {
select {
case <-rm.syncTime:
rm.synchronize()
case watchResponse, open := <-watchChannel:
if !open || watchResponse == nil {
// watchChannel has been closed, or something else went
// wrong with our etcd watch call. Let the util.Forever()
// that called us call us again.
return
}
glog.Infof("Got watch: %#v", watchResponse)
controller, err := rm.handleWatchResponse(watchResponse)
if err != nil {
glog.Errorf("Error handling data: %#v, %#v", err, watchResponse)
continue
}
rm.syncHandler(*controller)
}
}
}
func (rm *ReplicationManager) handleWatchResponse(response *etcd.Response) (*api.ReplicationController, error) {
if response.Action == "set" {
if response.Node != nil {
var controllerSpec api.ReplicationController
err := json.Unmarshal([]byte(response.Node.Value), &controllerSpec)
if err != nil {
return nil, err
}
return &controllerSpec, nil
}
return nil, fmt.Errorf("response node is null %#v", response)
} else if response.Action == "delete" {
// Ensure that the final state of a replication controller is applied before it is deleted.
// Otherwise, a replication controller could be modified and then deleted (for example, from 3 to 0
// replicas), and it would be non-deterministic which of its pods continued to exist.
if response.PrevNode != nil {
var controllerSpec api.ReplicationController
if err := json.Unmarshal([]byte(response.PrevNode.Value), &controllerSpec); err != nil {
return nil, err
}
return &controllerSpec, nil
}
return nil, fmt.Errorf("previous node is null %#v", response)
}
return nil, nil
}
func (rm *ReplicationManager) filterActivePods(pods []api.Pod) []api.Pod {
var result []api.Pod
for _, value := range pods {
if api.PodStopped != value.CurrentState.Status {
result = append(result, value)
}
}
return result
}
func (rm *ReplicationManager) syncReplicationController(controllerSpec api.ReplicationController) error {
s := labels.Set(controllerSpec.DesiredState.ReplicaSelector).AsSelector()
podList, err := rm.kubeClient.ListPods(s)
if err != nil {
return err
}
filteredList := rm.filterActivePods(podList.Items)
diff := len(filteredList) - controllerSpec.DesiredState.Replicas
glog.Infof("%#v", filteredList)
if diff < 0 {
diff *= -1
glog.Infof("Too few replicas, creating %d\n", diff)
for i := 0; i < diff; i++ {
rm.podControl.createReplica(controllerSpec)
}
} else if diff > 0 {
glog.Info("Too many replicas, deleting")
for i := 0; i < diff; i++ {
rm.podControl.deletePod(filteredList[i].ID)
}
}
return nil
}
func (rm *ReplicationManager) synchronize() {
var controllerSpecs []api.ReplicationController
helper := tools.EtcdHelper{rm.etcdClient}
err := helper.ExtractList("/registry/controllers", &controllerSpecs)
if err != nil {
glog.Errorf("Synchronization error: %v (%#v)", err, err)
return
}
for _, controllerSpec := range controllerSpecs {
err = rm.syncHandler(controllerSpec)
if err != nil {
glog.Errorf("Error synchronizing: %#v", err)
}
}
}
| pkg/controller/replication_controller.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.0004579796514008194,
0.0002036481600953266,
0.0001638388930587098,
0.00017411314183846116,
0.00007177770748967305
] |
{
"id": 5,
"code_window": [
"\t}\n",
"\tr := rand.New(rand.NewSource(0))\n",
"\tst := schedulerTester{\n",
"\t\tt: t,\n",
"\t\tscheduler: MakeFirstFitScheduler(fakeRegistry, r),\n",
"\t\tminionLister: FakeMinionLister{\"m1\", \"m2\", \"m3\"},\n",
"\t}\n",
"\tst.expectFailure(makePod(\"\", 8080, 8081))\n",
"}"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\tscheduler: NewFirstFitScheduler(fakeRegistry, r),\n"
],
"file_path": "pkg/scheduler/firstfit_test.go",
"type": "replace",
"edit_start_line_idx": 73
} | package yaml
import (
"io"
)
// Set the reader error and return 0.
func yaml_parser_set_reader_error(parser *yaml_parser_t, problem string, offset int, value int) bool {
parser.error = yaml_READER_ERROR
parser.problem = problem
parser.problem_offset = offset
parser.problem_value = value
return false
}
// Byte order marks.
const (
bom_UTF8 = "\xef\xbb\xbf"
bom_UTF16LE = "\xff\xfe"
bom_UTF16BE = "\xfe\xff"
)
// Determine the input stream encoding by checking the BOM symbol. If no BOM is
// found, the UTF-8 encoding is assumed. Return true on success, false on failure.
func yaml_parser_determine_encoding(parser *yaml_parser_t) bool {
// Ensure that we had enough bytes in the raw buffer.
for !parser.eof && len(parser.raw_buffer)-parser.raw_buffer_pos < 3 {
if !yaml_parser_update_raw_buffer(parser) {
return false
}
}
// Determine the encoding.
buf := parser.raw_buffer
pos := parser.raw_buffer_pos
avail := len(buf) - pos
if avail >= 2 && buf[pos] == bom_UTF16LE[0] && buf[pos+1] == bom_UTF16LE[1] {
parser.encoding = yaml_UTF16LE_ENCODING
parser.raw_buffer_pos += 2
parser.offset += 2
} else if avail >= 2 && buf[pos] == bom_UTF16BE[0] && buf[pos+1] == bom_UTF16BE[1] {
parser.encoding = yaml_UTF16BE_ENCODING
parser.raw_buffer_pos += 2
parser.offset += 2
} else if avail >= 3 && buf[pos] == bom_UTF8[0] && buf[pos+1] == bom_UTF8[1] && buf[pos+2] == bom_UTF8[2] {
parser.encoding = yaml_UTF8_ENCODING
parser.raw_buffer_pos += 3
parser.offset += 3
} else {
parser.encoding = yaml_UTF8_ENCODING
}
return true
}
// Update the raw buffer.
func yaml_parser_update_raw_buffer(parser *yaml_parser_t) bool {
size_read := 0
// Return if the raw buffer is full.
if parser.raw_buffer_pos == 0 && len(parser.raw_buffer) == cap(parser.raw_buffer) {
return true
}
// Return on EOF.
if parser.eof {
return true
}
// Move the remaining bytes in the raw buffer to the beginning.
if parser.raw_buffer_pos > 0 && parser.raw_buffer_pos < len(parser.raw_buffer) {
copy(parser.raw_buffer, parser.raw_buffer[parser.raw_buffer_pos:])
}
parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)-parser.raw_buffer_pos]
parser.raw_buffer_pos = 0
// Call the read handler to fill the buffer.
size_read, err := parser.read_handler(parser, parser.raw_buffer[len(parser.raw_buffer):cap(parser.raw_buffer)])
parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)+size_read]
if err == io.EOF {
parser.eof = true
} else if err != nil {
return yaml_parser_set_reader_error(parser, "input error: "+err.Error(), parser.offset, -1)
}
return true
}
// Ensure that the buffer contains at least `length` characters.
// Return true on success, false on failure.
//
// The length is supposed to be significantly less that the buffer size.
func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool {
if parser.read_handler == nil {
panic("read handler must be set")
}
// If the EOF flag is set and the raw buffer is empty, do nothing.
if parser.eof && parser.raw_buffer_pos == len(parser.raw_buffer) {
return true
}
// Return if the buffer contains enough characters.
if parser.unread >= length {
return true
}
// Determine the input encoding if it is not known yet.
if parser.encoding == yaml_ANY_ENCODING {
if !yaml_parser_determine_encoding(parser) {
return false
}
}
// Move the unread characters to the beginning of the buffer.
buffer_len := len(parser.buffer)
if parser.buffer_pos > 0 && parser.buffer_pos < buffer_len {
copy(parser.buffer, parser.buffer[parser.buffer_pos:])
buffer_len -= parser.buffer_pos
parser.buffer_pos = 0
} else if parser.buffer_pos == buffer_len {
buffer_len = 0
parser.buffer_pos = 0
}
// Open the whole buffer for writing, and cut it before returning.
parser.buffer = parser.buffer[:cap(parser.buffer)]
// Fill the buffer until it has enough characters.
first := true
for parser.unread < length {
// Fill the raw buffer if necessary.
if !first || parser.raw_buffer_pos == len(parser.raw_buffer) {
if !yaml_parser_update_raw_buffer(parser) {
parser.buffer = parser.buffer[:buffer_len]
return false
}
}
first = false
// Decode the raw buffer.
inner:
for parser.raw_buffer_pos != len(parser.raw_buffer) {
var value rune
var width int
raw_unread := len(parser.raw_buffer) - parser.raw_buffer_pos
// Decode the next character.
switch parser.encoding {
case yaml_UTF8_ENCODING:
// Decode a UTF-8 character. Check RFC 3629
// (http://www.ietf.org/rfc/rfc3629.txt) for more details.
//
// The following table (taken from the RFC) is used for
// decoding.
//
// Char. number range | UTF-8 octet sequence
// (hexadecimal) | (binary)
// --------------------+------------------------------------
// 0000 0000-0000 007F | 0xxxxxxx
// 0000 0080-0000 07FF | 110xxxxx 10xxxxxx
// 0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx
// 0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
//
// Additionally, the characters in the range 0xD800-0xDFFF
// are prohibited as they are reserved for use with UTF-16
// surrogate pairs.
// Determine the length of the UTF-8 sequence.
octet := parser.raw_buffer[parser.raw_buffer_pos]
switch {
case octet&0x80 == 0x00:
width = 1
case octet&0xE0 == 0xC0:
width = 2
case octet&0xF0 == 0xE0:
width = 3
case octet&0xF8 == 0xF0:
width = 4
default:
// The leading octet is invalid.
return yaml_parser_set_reader_error(parser,
"invalid leading UTF-8 octet",
parser.offset, int(octet))
}
// Check if the raw buffer contains an incomplete character.
if width > raw_unread {
if parser.eof {
return yaml_parser_set_reader_error(parser,
"incomplete UTF-8 octet sequence",
parser.offset, -1)
}
break inner
}
// Decode the leading octet.
switch {
case octet&0x80 == 0x00:
value = rune(octet & 0x7F)
case octet&0xE0 == 0xC0:
value = rune(octet & 0x1F)
case octet&0xF0 == 0xE0:
value = rune(octet & 0x0F)
case octet&0xF8 == 0xF0:
value = rune(octet & 0x07)
default:
value = 0
}
// Check and decode the trailing octets.
for k := 1; k < width; k++ {
octet = parser.raw_buffer[parser.raw_buffer_pos+k]
// Check if the octet is valid.
if (octet & 0xC0) != 0x80 {
return yaml_parser_set_reader_error(parser,
"invalid trailing UTF-8 octet",
parser.offset+k, int(octet))
}
// Decode the octet.
value = (value << 6) + rune(octet&0x3F)
}
// Check the length of the sequence against the value.
switch {
case width == 1:
case width == 2 && value >= 0x80:
case width == 3 && value >= 0x800:
case width == 4 && value >= 0x10000:
default:
return yaml_parser_set_reader_error(parser,
"invalid length of a UTF-8 sequence",
parser.offset, -1)
}
// Check the range of the value.
if value >= 0xD800 && value <= 0xDFFF || value > 0x10FFFF {
return yaml_parser_set_reader_error(parser,
"invalid Unicode character",
parser.offset, int(value))
}
case yaml_UTF16LE_ENCODING, yaml_UTF16BE_ENCODING:
var low, high int
if parser.encoding == yaml_UTF16LE_ENCODING {
low, high = 0, 1
} else {
high, low = 1, 0
}
// The UTF-16 encoding is not as simple as one might
// naively think. Check RFC 2781
// (http://www.ietf.org/rfc/rfc2781.txt).
//
// Normally, two subsequent bytes describe a Unicode
// character. However a special technique (called a
// surrogate pair) is used for specifying character
// values larger than 0xFFFF.
//
// A surrogate pair consists of two pseudo-characters:
// high surrogate area (0xD800-0xDBFF)
// low surrogate area (0xDC00-0xDFFF)
//
// The following formulas are used for decoding
// and encoding characters using surrogate pairs:
//
// U = U' + 0x10000 (0x01 00 00 <= U <= 0x10 FF FF)
// U' = yyyyyyyyyyxxxxxxxxxx (0 <= U' <= 0x0F FF FF)
// W1 = 110110yyyyyyyyyy
// W2 = 110111xxxxxxxxxx
//
// where U is the character value, W1 is the high surrogate
// area, W2 is the low surrogate area.
// Check for incomplete UTF-16 character.
if raw_unread < 2 {
if parser.eof {
return yaml_parser_set_reader_error(parser,
"incomplete UTF-16 character",
parser.offset, -1)
}
break inner
}
// Get the character.
value = rune(parser.raw_buffer[parser.raw_buffer_pos+low]) +
(rune(parser.raw_buffer[parser.raw_buffer_pos+high]) << 8)
// Check for unexpected low surrogate area.
if value&0xFC00 == 0xDC00 {
return yaml_parser_set_reader_error(parser,
"unexpected low surrogate area",
parser.offset, int(value))
}
// Check for a high surrogate area.
if value&0xFC00 == 0xD800 {
width = 4
// Check for incomplete surrogate pair.
if raw_unread < 4 {
if parser.eof {
return yaml_parser_set_reader_error(parser,
"incomplete UTF-16 surrogate pair",
parser.offset, -1)
}
break inner
}
// Get the next character.
value2 := rune(parser.raw_buffer[parser.raw_buffer_pos+low+2]) +
(rune(parser.raw_buffer[parser.raw_buffer_pos+high+2]) << 8)
// Check for a low surrogate area.
if value2&0xFC00 != 0xDC00 {
return yaml_parser_set_reader_error(parser,
"expected low surrogate area",
parser.offset+2, int(value2))
}
// Generate the value of the surrogate pair.
value = 0x10000 + ((value & 0x3FF) << 10) + (value2 & 0x3FF)
} else {
width = 2
}
default:
panic("impossible")
}
// Check if the character is in the allowed range:
// #x9 | #xA | #xD | [#x20-#x7E] (8 bit)
// | #x85 | [#xA0-#xD7FF] | [#xE000-#xFFFD] (16 bit)
// | [#x10000-#x10FFFF] (32 bit)
switch {
case value == 0x09:
case value == 0x0A:
case value == 0x0D:
case value >= 0x20 && value <= 0x7E:
case value == 0x85:
case value >= 0xA0 && value <= 0xD7FF:
case value >= 0xE000 && value <= 0xFFFD:
case value >= 0x10000 && value <= 0x10FFFF:
default:
return yaml_parser_set_reader_error(parser,
"control characters are not allowed",
parser.offset, int(value))
}
// Move the raw pointers.
parser.raw_buffer_pos += width
parser.offset += width
// Finally put the character into the buffer.
if value <= 0x7F {
// 0000 0000-0000 007F . 0xxxxxxx
parser.buffer[buffer_len+0] = byte(value)
} else if value <= 0x7FF {
// 0000 0080-0000 07FF . 110xxxxx 10xxxxxx
parser.buffer[buffer_len+0] = byte(0xC0 + (value >> 6))
parser.buffer[buffer_len+1] = byte(0x80 + (value & 0x3F))
} else if value <= 0xFFFF {
// 0000 0800-0000 FFFF . 1110xxxx 10xxxxxx 10xxxxxx
parser.buffer[buffer_len+0] = byte(0xE0 + (value >> 12))
parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 6) & 0x3F))
parser.buffer[buffer_len+2] = byte(0x80 + (value & 0x3F))
} else {
// 0001 0000-0010 FFFF . 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
parser.buffer[buffer_len+0] = byte(0xF0 + (value >> 18))
parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 12) & 0x3F))
parser.buffer[buffer_len+2] = byte(0x80 + ((value >> 6) & 0x3F))
parser.buffer[buffer_len+3] = byte(0x80 + (value & 0x3F))
}
buffer_len += width
parser.unread++
}
// On EOF, put NUL into the buffer and return.
if parser.eof {
parser.buffer[buffer_len] = 0
buffer_len++
parser.unread++
break
}
}
parser.buffer = parser.buffer[:buffer_len]
return true
}
| third_party/src/gonuts.org/v1/yaml/readerc.go | 0 | https://github.com/kubernetes/kubernetes/commit/6a2703627be0a5e9f8f1c7005cd384561b72644c | [
0.0001784704509191215,
0.0001729200448608026,
0.0001648617471801117,
0.0001732970995362848,
0.0000034844267702283105
] |
{
"id": 0,
"code_window": [
"\t// settableFlags are the flags used to istioctl\n",
"\tsettableFlags = map[string]interface{}{\n",
"\t\t\"istioNamespace\": env.RegisterStringVar(\"ISTIOCTL_ISTIONAMESPACE\", controller.IstioNamespace, \"istioctl --istioNamespace override\"),\n",
"\t\t\"xds-address\": env.RegisterStringVar(\"ISTIOCTL_XDS_ADDRESS\", \"\", \"istioctl --xds-address override\"),\n",
"\t\t\"xds-port\": env.RegisterIntVar(\"ISTIOCTL_XDS_PORT\", 15012, \"istioctl --xds-port override\"),\n",
"\t\t\"xds-san\": env.RegisterStringVar(\"ISTIOCTL_XDS_SAN\", \"\", \"istioctl --xds-san override\"),\n",
"\t\t\"cert-dir\": env.RegisterStringVar(\"ISTIOCTL_CERT_DIR\", \"\", \"istioctl --cert-dir override\"),\n",
"\t\t\"insecure\": env.RegisterBoolVar(\"ISTIOCTL_INSECURE\", false, \"istioctl --insecure override\"),\n",
"\t\t\"prefer-experimental\": env.RegisterBoolVar(\"ISTIOCTL_PREFER_EXPERIMENTAL\", false, \"istioctl should use experimental subcommand variants\"),\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\"authority\": env.RegisterStringVar(\"ISTIOCTL_AUTHORITY\", \"\", \"istioctl --authority override\"),\n"
],
"file_path": "istioctl/cmd/config.go",
"type": "replace",
"edit_start_line_idx": 35
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cmd
import (
"fmt"
"regexp"
"strings"
"testing"
)
func TestConfigList(t *testing.T) {
cases := []testCase{
{ // case 0
args: strings.Split("experimental config get istioNamespace", " "),
expectedRegexp: regexp.MustCompile("Usage:\n istioctl experimental config"),
wantException: false,
},
{ // case 1
args: strings.Split("experimental config list", " "),
expectedOutput: `FLAG VALUE FROM
cert-dir default
insecure default
istioNamespace istio-system default
prefer-experimental default
xds-address default
xds-port 15012 default
xds-san default
`,
wantException: false,
},
}
for i, c := range cases {
t.Run(fmt.Sprintf("case %d %s", i, strings.Join(c.args, " ")), func(t *testing.T) {
verifyOutput(t, c)
})
}
}
| istioctl/cmd/config_test.go | 1 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.028896527364850044,
0.0052081202156841755,
0.00016552425222471356,
0.00030623446218669415,
0.01060277596116066
] |
{
"id": 0,
"code_window": [
"\t// settableFlags are the flags used to istioctl\n",
"\tsettableFlags = map[string]interface{}{\n",
"\t\t\"istioNamespace\": env.RegisterStringVar(\"ISTIOCTL_ISTIONAMESPACE\", controller.IstioNamespace, \"istioctl --istioNamespace override\"),\n",
"\t\t\"xds-address\": env.RegisterStringVar(\"ISTIOCTL_XDS_ADDRESS\", \"\", \"istioctl --xds-address override\"),\n",
"\t\t\"xds-port\": env.RegisterIntVar(\"ISTIOCTL_XDS_PORT\", 15012, \"istioctl --xds-port override\"),\n",
"\t\t\"xds-san\": env.RegisterStringVar(\"ISTIOCTL_XDS_SAN\", \"\", \"istioctl --xds-san override\"),\n",
"\t\t\"cert-dir\": env.RegisterStringVar(\"ISTIOCTL_CERT_DIR\", \"\", \"istioctl --cert-dir override\"),\n",
"\t\t\"insecure\": env.RegisterBoolVar(\"ISTIOCTL_INSECURE\", false, \"istioctl --insecure override\"),\n",
"\t\t\"prefer-experimental\": env.RegisterBoolVar(\"ISTIOCTL_PREFER_EXPERIMENTAL\", false, \"istioctl should use experimental subcommand variants\"),\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\"authority\": env.RegisterStringVar(\"ISTIOCTL_AUTHORITY\", \"\", \"istioctl --authority override\"),\n"
],
"file_path": "istioctl/cmd/config.go",
"type": "replace",
"edit_start_line_idx": 35
} | Copyright (c) 2014-2016 Ulrich Kunitz
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* My name, Ulrich Kunitz, may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| licenses/github.com/ulikunitz/xz/LICENSE | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.00017008293070830405,
0.00016655714716762304,
0.00016133861208800226,
0.00016824991325847805,
0.000003765178917092271
] |
{
"id": 0,
"code_window": [
"\t// settableFlags are the flags used to istioctl\n",
"\tsettableFlags = map[string]interface{}{\n",
"\t\t\"istioNamespace\": env.RegisterStringVar(\"ISTIOCTL_ISTIONAMESPACE\", controller.IstioNamespace, \"istioctl --istioNamespace override\"),\n",
"\t\t\"xds-address\": env.RegisterStringVar(\"ISTIOCTL_XDS_ADDRESS\", \"\", \"istioctl --xds-address override\"),\n",
"\t\t\"xds-port\": env.RegisterIntVar(\"ISTIOCTL_XDS_PORT\", 15012, \"istioctl --xds-port override\"),\n",
"\t\t\"xds-san\": env.RegisterStringVar(\"ISTIOCTL_XDS_SAN\", \"\", \"istioctl --xds-san override\"),\n",
"\t\t\"cert-dir\": env.RegisterStringVar(\"ISTIOCTL_CERT_DIR\", \"\", \"istioctl --cert-dir override\"),\n",
"\t\t\"insecure\": env.RegisterBoolVar(\"ISTIOCTL_INSECURE\", false, \"istioctl --insecure override\"),\n",
"\t\t\"prefer-experimental\": env.RegisterBoolVar(\"ISTIOCTL_PREFER_EXPERIMENTAL\", false, \"istioctl should use experimental subcommand variants\"),\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\"authority\": env.RegisterStringVar(\"ISTIOCTL_AUTHORITY\", \"\", \"istioctl --authority override\"),\n"
],
"file_path": "istioctl/cmd/config.go",
"type": "replace",
"edit_start_line_idx": 35
} | The MIT License (MIT)
Copyright (c) 2016 Evan Huus
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| licenses/github.com/eapache/go-xerial-snappy/LICENSE | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.0003737302322406322,
0.0002397232601651922,
0.00017073971685022116,
0.00017469984595663846,
0.00009477102139499038
] |
{
"id": 0,
"code_window": [
"\t// settableFlags are the flags used to istioctl\n",
"\tsettableFlags = map[string]interface{}{\n",
"\t\t\"istioNamespace\": env.RegisterStringVar(\"ISTIOCTL_ISTIONAMESPACE\", controller.IstioNamespace, \"istioctl --istioNamespace override\"),\n",
"\t\t\"xds-address\": env.RegisterStringVar(\"ISTIOCTL_XDS_ADDRESS\", \"\", \"istioctl --xds-address override\"),\n",
"\t\t\"xds-port\": env.RegisterIntVar(\"ISTIOCTL_XDS_PORT\", 15012, \"istioctl --xds-port override\"),\n",
"\t\t\"xds-san\": env.RegisterStringVar(\"ISTIOCTL_XDS_SAN\", \"\", \"istioctl --xds-san override\"),\n",
"\t\t\"cert-dir\": env.RegisterStringVar(\"ISTIOCTL_CERT_DIR\", \"\", \"istioctl --cert-dir override\"),\n",
"\t\t\"insecure\": env.RegisterBoolVar(\"ISTIOCTL_INSECURE\", false, \"istioctl --insecure override\"),\n",
"\t\t\"prefer-experimental\": env.RegisterBoolVar(\"ISTIOCTL_PREFER_EXPERIMENTAL\", false, \"istioctl should use experimental subcommand variants\"),\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t\t\"authority\": env.RegisterStringVar(\"ISTIOCTL_AUTHORITY\", \"\", \"istioctl --authority override\"),\n"
],
"file_path": "istioctl/cmd/config.go",
"type": "replace",
"edit_start_line_idx": 35
} | apiVersion: apps/v1
kind: DaemonSet
metadata:
creationTimestamp: null
name: hello
spec:
selector:
matchLabels:
app: hello
tier: backend
track: stable
strategy: {}
template:
metadata:
annotations:
prometheus.io/path: /stats/prometheus
prometheus.io/port: "15020"
prometheus.io/scrape: "true"
sidecar.istio.io/status: '{"version":"unit-test-fake-version","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy"],"imagePullSecrets":null}'
creationTimestamp: null
labels:
app: hello
istio.io/rev: ""
security.istio.io/tlsMode: istio
service.istio.io/canonical-name: hello
service.istio.io/canonical-revision: latest
tier: backend
track: stable
spec:
containers:
- image: fake.docker.io/google-samples/hello-go-gke:1.0
name: hello
ports:
- containerPort: 80
name: http
resources: {}
- args:
- proxy
- sidecar
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --serviceCluster
- hello.$(POD_NAMESPACE)
- --proxyLogLevel=warning
- --proxyComponentLogLevel=misc:error
- --trust-domain=cluster.local
- --concurrency
- "2"
env:
- name: JWT_POLICY
value: third-party-jwt
- name: PILOT_CERT_PROVIDER
value: istiod
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: CANONICAL_SERVICE
valueFrom:
fieldRef:
fieldPath: metadata.labels['service.istio.io/canonical-name']
- name: CANONICAL_REVISION
valueFrom:
fieldRef:
fieldPath: metadata.labels['service.istio.io/canonical-revision']
- name: PROXY_CONFIG
value: |
{}
- name: ISTIO_META_POD_PORTS
value: |-
[
{"name":"http","containerPort":80}
]
- name: ISTIO_META_APP_CONTAINERS
value: |-
[
hello
]
- name: ISTIO_META_CLUSTER_ID
value: Kubernetes
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
- name: ISTIO_META_MESH_ID
value: cluster.local
- name: ISTIO_KUBE_APP_PROBERS
value: '{}'
image: gcr.io/istio-release/proxyv2:master-latest-daily
imagePullPolicy: Always
name: istio-proxy
ports:
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
readinessProbe:
failureThreshold: 30
httpGet:
path: /healthz/ready
port: 15021
initialDelaySeconds: 1
periodSeconds: 2
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsGroup: 1337
runAsNonRoot: true
runAsUser: 1337
volumeMounts:
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/lib/istio/data
name: istio-data
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /var/run/secrets/tokens
name: istio-token
- mountPath: /etc/istio/pod
name: istio-podinfo
initContainers:
- args:
- istio-iptables
- -p
- "15001"
- -z
- "15006"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- '*'
- -d
- 15090,15021,15020
image: gcr.io/istio-release/proxy_init:master-latest-daily
imagePullPolicy: Always
name: istio-init
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 10m
memory: 10Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
securityContext:
fsGroup: 1337
volumes:
- emptyDir:
medium: Memory
name: istio-envoy
- emptyDir: {}
name: istio-data
- downwardAPI:
items:
- fieldRef:
fieldPath: metadata.labels
path: labels
- fieldRef:
fieldPath: metadata.annotations
path: annotations
name: istio-podinfo
- name: istio-token
projected:
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- configMap:
name: istio-ca-root-cert
name: istiod-ca-cert
status: {}
---
| pkg/kube/inject/testdata/webhook/daemonset.yaml.injected | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.0010057559702545404,
0.00032624625600874424,
0.00016522701480425894,
0.00023154275550041348,
0.00023407320259138942
] |
{
"id": 1,
"code_window": [
"\t\t},\n",
"\t\t{ // case 1\n",
"\t\t\targs: strings.Split(\"experimental config list\", \" \"),\n",
"\t\t\texpectedOutput: `FLAG VALUE FROM\n",
"cert-dir default\n",
"insecure default\n",
"istioNamespace istio-system default\n",
"prefer-experimental default\n",
"xds-address default\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"authority default\n"
],
"file_path": "istioctl/cmd/config_test.go",
"type": "add",
"edit_start_line_idx": 34
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package clioptions
import (
"fmt"
"time"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// CentralControlPlaneOptions holds options common to all subcommands
// that invoke Istiod via xDS REST endpoint
type CentralControlPlaneOptions struct {
// Xds is the XDS endpoint, e.g. localhost:15010.
Xds string
// XdsPodLabel is a Kubernetes label on the Istiod pods
XdsPodLabel string
// XdsPodPort is a port exposing XDS (typically 15010 or 15012)
XdsPodPort int
// CertDir is the local directory containing certificates
CertDir string
// Timeout is how long to wait before giving up on XDS
Timeout time.Duration
// InsecureSkipVerify skips client verification of the server's certificate chain and host name.
InsecureSkipVerify bool
// XDSSAN is the expected Subject Alternative Name of the XDS server
XDSSAN string
}
// AttachControlPlaneFlags attaches control-plane flags to a Cobra command.
// (currently --xds-address, --cert-dir, --xds-label, --xds-port, --timeout, --xds-san, and --insecure)
func (o *CentralControlPlaneOptions) AttachControlPlaneFlags(cmd *cobra.Command) {
cmd.PersistentFlags().StringVar(&o.Xds, "xds-address", viper.GetString("XDS-ADDRESS"),
"XDS Endpoint")
cmd.PersistentFlags().StringVar(&o.CertDir, "cert-dir", viper.GetString("CERT-DIR"),
"XDS Endpoint certificate directory")
cmd.PersistentFlags().StringVar(&o.XdsPodLabel, "xds-label", "",
"Istiod pod label selector")
cmd.PersistentFlags().IntVar(&o.XdsPodPort, "xds-port", viper.GetInt("XDS-PORT"),
"Istiod pod port")
cmd.PersistentFlags().DurationVar(&o.Timeout, "timeout", time.Second*30,
"the duration to wait before failing")
cmd.PersistentFlags().StringVar(&o.XDSSAN, "xds-san", viper.GetString("XDS-SAN"),
"XDS Subject Alternative Name (for example istiod.istio-system.svc)")
cmd.PersistentFlags().BoolVar(&o.InsecureSkipVerify, "insecure", viper.GetBool("INSECURE"),
"Skip server certificate and domain verification. (NOT SECURE!)")
}
// ValidateControlPlaneFlags checks arguments for valid values and combinations
func (o *CentralControlPlaneOptions) ValidateControlPlaneFlags() error {
if o.Xds != "" && o.XdsPodLabel != "" {
return fmt.Errorf("either --xds-address or --xds-label, not both")
}
return nil
}
| istioctl/pkg/clioptions/central.go | 1 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.009026052430272102,
0.003044820623472333,
0.00017287343507632613,
0.0011024523992091417,
0.003336162306368351
] |
{
"id": 1,
"code_window": [
"\t\t},\n",
"\t\t{ // case 1\n",
"\t\t\targs: strings.Split(\"experimental config list\", \" \"),\n",
"\t\t\texpectedOutput: `FLAG VALUE FROM\n",
"cert-dir default\n",
"insecure default\n",
"istioNamespace istio-system default\n",
"prefer-experimental default\n",
"xds-address default\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"authority default\n"
],
"file_path": "istioctl/cmd/config_test.go",
"type": "add",
"edit_start_line_idx": 34
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package contextgraph
import "istio.io/istio/mixer/pkg/adapter"
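// cacheStatus records the epoch in which an item was last seen and the epoch in which it was last sent.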
type cacheStatus struct {
lastSeen int
lastSent int
}
// entityCache tracks the entities we've already seen.
// It is not thread-safe.
type entityCache struct {
// cache maps an entity to the epoch it was last seen in.
cache map[entity]cacheStatus
lastFlush int
logger adapter.Logger
}
func newEntityCache(logger adapter.Logger) *entityCache {
return &entityCache{
cache: make(map[entity]cacheStatus),
logger: logger,
lastFlush: -1,
}
}
// AssertAndCheck reports the existence of e at epoch, and returns
// true if the entity needs to be sent immediately.
func (ec *entityCache) AssertAndCheck(e entity, epoch int) bool {
cEpoch, ok := ec.cache[e]
defer func() { ec.cache[e] = cEpoch }()
if cEpoch.lastSeen < epoch {
cEpoch.lastSeen = epoch
}
if !ok || cEpoch.lastSent < ec.lastFlush {
ec.logger.Debugf("%q needs to be sent anew, old epoch: %d, now seen: %d",
e.fullName, cEpoch.lastSent, epoch)
cEpoch.lastSent = epoch
return true
}
return false
}
// Flush returns the list of entities that have been asserted in the
// most recent epoch, to be reasserted. It also cleans up stale
// entries from the cache.
func (ec *entityCache) Flush(epoch int) []entity {
var result []entity
for k, e := range ec.cache {
if e.lastSeen <= ec.lastFlush {
delete(ec.cache, k)
continue
}
if e.lastSent == epoch {
// Don't republish entities that are already in this batch.
continue
}
e.lastSent = epoch
ec.cache[k] = e
result = append(result, k)
}
ec.lastFlush = epoch
return result
}
// edgeCache tracks the edges we've already seen.
// It is not thread-safe.
type edgeCache struct {
// cache maps an edge to the epoch it was last seen in.
cache map[edge]cacheStatus
lastFlush int
logger adapter.Logger
}
func newEdgeCache(logger adapter.Logger) *edgeCache {
return &edgeCache{
cache: make(map[edge]cacheStatus),
logger: logger,
lastFlush: -1,
}
}
// AssertAndCheck reports the existence of e at epoch, and returns
// true if the edge needs to be sent immediately.
func (ec *edgeCache) AssertAndCheck(e edge, epoch int) bool {
cEpoch, ok := ec.cache[e]
defer func() { ec.cache[e] = cEpoch }()
if cEpoch.lastSeen < epoch {
cEpoch.lastSeen = epoch
}
if !ok || cEpoch.lastSent < ec.lastFlush {
ec.logger.Debugf("%v needs to be sent anew, old epoch: %d, now seen: %d",
e, cEpoch.lastSent, epoch)
cEpoch.lastSent = epoch
return true
}
return false
}
// Flush returns the list of edges that have been asserted in the
// most recent epoch, to be reasserted. It also cleans up stale
// entries from the cache.
func (ec *edgeCache) Flush(epoch int) []edge {
var result []edge
for k, e := range ec.cache {
if e.lastSeen <= ec.lastFlush {
delete(ec.cache, k)
continue
}
if e.lastSent == epoch {
// Don't republish edges that are already in this batch.
continue
}
e.lastSent = epoch
ec.cache[k] = e
result = append(result, k)
}
ec.lastFlush = epoch
return result
}
// Invalidate removes all edges with a source of fullName from the
// cache, so the next assertion will trigger a report.
func (ec *edgeCache) Invalidate(fullName string) {
for e := range ec.cache {
if e.sourceFullName == fullName {
delete(ec.cache, e)
}
}
}
| mixer/adapter/stackdriver/contextgraph/cache.go | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.00017795847088564187,
0.00016931658319663256,
0.0001631266641197726,
0.00016961406799964607,
0.0000039636688597965986
] |
{
"id": 1,
"code_window": [
"\t\t},\n",
"\t\t{ // case 1\n",
"\t\t\targs: strings.Split(\"experimental config list\", \" \"),\n",
"\t\t\texpectedOutput: `FLAG VALUE FROM\n",
"cert-dir default\n",
"insecure default\n",
"istioNamespace istio-system default\n",
"prefer-experimental default\n",
"xds-address default\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"authority default\n"
],
"file_path": "istioctl/cmd/config_test.go",
"type": "add",
"edit_start_line_idx": 34
} | apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: istio-mixer
istio: mixer
release: istio
name: istio-telemetry
namespace: istio-system
spec:
replicas: 1
selector:
matchLabels:
istio: mixer
istio-mixer-type: telemetry
strategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 25%
template:
metadata:
annotations:
prometheus.io/port: "15014"
prometheus.io/scrape: "true"
sidecar.istio.io/inject: "false"
labels:
app: telemetry
istio: mixer
istio-mixer-type: telemetry
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
weight: 2
- preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- ppc64le
weight: 2
- preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- s390x
weight: 2
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
containers:
- args:
- --monitoringPort=15014
- --address
- unix:///sock/mixer.socket
- --log_output_level=default:info
- --configStoreURL=k8s://
- --configDefaultNamespace=istio-system
- --useAdapterCRDs=false
- --useTemplateCRDs=false
- --trace_zipkin_url=http://zipkin.istio-system:9411/api/v1/spans
env:
- name: GODEBUG
value: gctrace=111
- name: NEW_VAR
value: new_value
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: GOMAXPROCS
value: "6"
image: gcr.io/istio-testing/mixer:latest
livenessProbe:
httpGet:
path: /version
port: 15014
initialDelaySeconds: 5
periodSeconds: 5
name: mixer
ports:
- containerPort: 9091
- containerPort: 15014
- containerPort: 42422
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 555
periodSeconds: 666
timeoutSeconds: 777
resources:
limits:
cpu: 4800m
memory: 4G
requests:
cpu: 888m
memory: 999Mi
securityContext:
capabilities:
drop:
- ALL
runAsGroup: 1337
runAsNonRoot: true
runAsUser: 1337
volumeMounts:
- mountPath: /sock
name: uds-socket
- mountPath: /var/run/secrets/istio.io/telemetry/adapter
name: telemetry-adapter-secret
readOnly: true
- args:
- proxy
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --serviceCluster
- istio-telemetry
- --templateFile
- /var/lib/envoy/envoy.yaml.tmpl
- --controlPlaneAuthPolicy
- MUTUAL_TLS
- --trust-domain=cluster.local
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: JWT_POLICY
value: third-party-jwt
- name: PILOT_CERT_PROVIDER
value: istiod
- name: CA_ADDR
value: istiod.istio-system.svc:15012
image: gcr.io/istio-testing/proxyv2:latest
name: istio-proxy
ports:
- containerPort: 15004
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
resources:
limits:
cpu: 2000m
memory: 1024Mi
requests:
cpu: 100m
memory: 128Mi
volumeMounts:
- mountPath: /etc/istio/config
name: config-volume
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/run/secrets/tokens
name: istio-token
readOnly: true
- mountPath: /var/lib/envoy
name: telemetry-envoy-config
- mountPath: /sock
name: uds-socket
securityContext:
fsGroup: 1337
serviceAccountName: istio-mixer-service-account
volumes:
- configMap:
name: istio
optional: true
name: config-volume
- configMap:
name: istio-ca-root-cert
name: istiod-ca-cert
- name: istio-token
projected:
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- name: istio-certs
secret:
optional: true
secretName: istio.istio-mixer-service-account
- emptyDir: {}
name: uds-socket
- name: telemetry-adapter-secret
secret:
optional: true
secretName: telemetry-adapter-secret
- configMap:
name: telemetry-envoy-config
name: telemetry-envoy-config
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
app: mixer
release: istio
name: istio-telemetry
namespace: istio-system
spec:
maxReplicas: 333
metrics:
- resource:
name: cpu
targetAverageUtilization: 444
type: Resource
minReplicas: 222
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istio-telemetry
---
| operator/cmd/mesh/testdata/manifest-generate/output/telemetry_k8s_settings.golden.yaml | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.0006995484582148492,
0.00023079202219378203,
0.00016576715279370546,
0.00017685318016447127,
0.00011366284161340445
] |
{
"id": 1,
"code_window": [
"\t\t},\n",
"\t\t{ // case 1\n",
"\t\t\targs: strings.Split(\"experimental config list\", \" \"),\n",
"\t\t\texpectedOutput: `FLAG VALUE FROM\n",
"cert-dir default\n",
"insecure default\n",
"istioNamespace istio-system default\n",
"prefer-experimental default\n",
"xds-address default\n"
],
"labels": [
"keep",
"keep",
"keep",
"add",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"authority default\n"
],
"file_path": "istioctl/cmd/config_test.go",
"type": "add",
"edit_start_line_idx": 34
} | {
"node": {
"id": "sidecar~1.2.3.4~foo~bar",
"cluster": "istio-proxy",
"locality": {
},
"metadata": {"EXCHANGE_KEYS":"NAME,NAMESPACE,INSTANCE_IPS,LABELS,OWNER,PLATFORM_METADATA,WORKLOAD_NAME,MESH_ID,SERVICE_ACCOUNT,CLUSTER_ID","INSTANCE_IPS":"10.3.3.3,10.4.4.4,10.5.5.5,10.6.6.6","PROXY_CONFIG":{"binaryPath":"/usr/local/bin/envoy","configPath":"/tmp/bootstrap/stats_inclusion","customConfigFile":"envoy_bootstrap.json","discoveryAddress":"istio-pilot:15010","drainDuration":"2s","extraStatTags":["dlp_success"],"parentShutdownDuration":"3s","proxyAdminPort":15000,"serviceCluster":"istio-proxy"},"SDS":"true","sidecar.istio.io/extraStatTags":"dlp_status,dlp_error","sidecar.istio.io/statsInclusionRegexps":"http.[0-9]*\\.[0-9]*\\.[0-9]*\\.[0-9]*_8080.downstream_rq_time"}
},
"layered_runtime": {
"layers": [
{
"name": "deprecation",
"static_layer": {
"envoy.deprecated_features:envoy.config.listener.v3.Listener.hidden_envoy_deprecated_use_original_dst": true
}
},
{
"name": "admin",
"admin_layer": {}
}
]
},
"stats_config": {
"use_all_default_tags": false,
"stats_tags": [
{
"tag_name": "cluster_name",
"regex": "^cluster\\.((.+?(\\..+?\\.svc\\.cluster\\.local)?)\\.)"
},
{
"tag_name": "tcp_prefix",
"regex": "^tcp\\.((.*?)\\.)\\w+?$"
},
{
"regex": "(response_code=\\.=(.+?);\\.;)|_rq(_(\\.d{3}))$",
"tag_name": "response_code"
},
{
"tag_name": "response_code_class",
"regex": "_rq(_(\\dxx))$"
},
{
"tag_name": "http_conn_manager_listener_prefix",
"regex": "^listener(?=\\.).*?\\.http\\.(((?:[_.[:digit:]]*|[_\\[\\]aAbBcCdDeEfF[:digit:]]*))\\.)"
},
{
"tag_name": "http_conn_manager_prefix",
"regex": "^http\\.(((?:[_.[:digit:]]*|[_\\[\\]aAbBcCdDeEfF[:digit:]]*))\\.)"
},
{
"tag_name": "listener_address",
"regex": "^listener\\.(((?:[_.[:digit:]]*|[_\\[\\]aAbBcCdDeEfF[:digit:]]*))\\.)"
},
{
"tag_name": "mongo_prefix",
"regex": "^mongo\\.(.+?)\\.(collection|cmd|cx_|op_|delays_|decoding_)(.*?)$"
},
{
"regex": "(reporter=\\.=(.*?);\\.;)",
"tag_name": "reporter"
},
{
"regex": "(source_namespace=\\.=(.*?);\\.;)",
"tag_name": "source_namespace"
},
{
"regex": "(source_workload=\\.=(.*?);\\.;)",
"tag_name": "source_workload"
},
{
"regex": "(source_workload_namespace=\\.=(.*?);\\.;)",
"tag_name": "source_workload_namespace"
},
{
"regex": "(source_principal=\\.=(.*?);\\.;)",
"tag_name": "source_principal"
},
{
"regex": "(source_app=\\.=(.*?);\\.;)",
"tag_name": "source_app"
},
{
"regex": "(source_version=\\.=(.*?);\\.;)",
"tag_name": "source_version"
},
{
"regex": "(source_cluster=\\.=(.*?);\\.;)",
"tag_name": "source_cluster"
},
{
"regex": "(destination_namespace=\\.=(.*?);\\.;)",
"tag_name": "destination_namespace"
},
{
"regex": "(destination_workload=\\.=(.*?);\\.;)",
"tag_name": "destination_workload"
},
{
"regex": "(destination_workload_namespace=\\.=(.*?);\\.;)",
"tag_name": "destination_workload_namespace"
},
{
"regex": "(destination_principal=\\.=(.*?);\\.;)",
"tag_name": "destination_principal"
},
{
"regex": "(destination_app=\\.=(.*?);\\.;)",
"tag_name": "destination_app"
},
{
"regex": "(destination_version=\\.=(.*?);\\.;)",
"tag_name": "destination_version"
},
{
"regex": "(destination_service=\\.=(.*?);\\.;)",
"tag_name": "destination_service"
},
{
"regex": "(destination_service_name=\\.=(.*?);\\.;)",
"tag_name": "destination_service_name"
},
{
"regex": "(destination_service_namespace=\\.=(.*?);\\.;)",
"tag_name": "destination_service_namespace"
},
{
"regex": "(destination_port=\\.=(.*?);\\.;)",
"tag_name": "destination_port"
},
{
"regex": "(destination_cluster=\\.=(.*?);\\.;)",
"tag_name": "destination_cluster"
},
{
"regex": "(request_protocol=\\.=(.*?);\\.;)",
"tag_name": "request_protocol"
},
{
"regex": "(request_operation=\\.=(.*?);\\.;)",
"tag_name": "request_operation"
},
{
"regex": "(request_host=\\.=(.*?);\\.;)",
"tag_name": "request_host"
},
{
"regex": "(response_flags=\\.=(.*?);\\.;)",
"tag_name": "response_flags"
},
{
"regex": "(grpc_response_status=\\.=(.*?);\\.;)",
"tag_name": "grpc_response_status"
},
{
"regex": "(connection_security_policy=\\.=(.*?);\\.;)",
"tag_name": "connection_security_policy"
},
{
"regex": "(permissive_response_code=\\.=(.*?);\\.;)",
"tag_name": "permissive_response_code"
},
{
"regex": "(permissive_response_policyid=\\.=(.*?);\\.;)",
"tag_name": "permissive_response_policyid"
},
{
"regex": "(source_canonical_service=\\.=(.*?);\\.;)",
"tag_name": "source_canonical_service"
},
{
"regex": "(destination_canonical_service=\\.=(.*?);\\.;)",
"tag_name": "destination_canonical_service"
},
{
"regex": "(source_canonical_revision=\\.=(.*?);\\.;)",
"tag_name": "source_canonical_revision"
},
{
"regex": "(destination_canonical_revision=\\.=(.*?);\\.;)",
"tag_name": "destination_canonical_revision"
},
{
"regex": "(dlp_success=\\.=(.*?);\\.;)",
"tag_name": "dlp_success"
},
{
"regex": "(dlp_status=\\.=(.*?);\\.;)",
"tag_name": "dlp_status"
},
{
"regex": "(dlp_error=\\.=(.*?);\\.;)",
"tag_name": "dlp_error"
},
{
"regex": "(cache\\.(.+?)\\.)",
"tag_name": "cache"
},
{
"regex": "(component\\.(.+?)\\.)",
"tag_name": "component"
},
{
"regex": "(tag\\.(.+?);\\.)",
"tag_name": "tag"
},
{
"regex": "(wasm_filter\\.(.+?)\\.)",
"tag_name": "wasm_filter"
}
],
"stats_matcher": {
"inclusion_list": {
"patterns": [
{
"prefix": "reporter="
},
{
"prefix": "cluster_manager"
},
{
"prefix": "listener_manager"
},
{
"prefix": "http_mixer_filter"
},
{
"prefix": "tcp_mixer_filter"
},
{
"prefix": "server"
},
{
"prefix": "cluster.xds-grpc"
},
{
"prefix": "wasm"
},
{
"regex": "http.[0-9]*\\.[0-9]*\\.[0-9]*\\.[0-9]*_8080.downstream_rq_time"
},
{
"prefix": "component"
}
]
}
}
},
"admin": {
"access_log_path": "/dev/null",
"profile_path": "/var/lib/istio/data/envoy.prof",
"address": {
"socket_address": {
"address": "127.0.0.1",
"port_value": 15000
}
}
},
"dynamic_resources": {
"lds_config": {
"ads": {},
"resource_api_version": "V3"
},
"cds_config": {
"ads": {},
"resource_api_version": "V3"
},
"ads_config": {
"api_type": "GRPC",
"transport_api_version": "V3",
"grpc_services": [
{
"envoy_grpc": {
"cluster_name": "xds-grpc"
}
}
]
}
},
"static_resources": {
"clusters": [
{
"name": "prometheus_stats",
"type": "STATIC",
"connect_timeout": "0.250s",
"lb_policy": "ROUND_ROBIN",
"load_assignment": {
"cluster_name": "prometheus_stats",
"endpoints": [{
"lb_endpoints": [{
"endpoint": {
"address":{
"socket_address": {
"protocol": "TCP",
"address": "127.0.0.1",
"port_value": 15000
}
}
}
}]
}]
}
},
{
"name": "agent",
"type": "STATIC",
"connect_timeout": "0.250s",
"lb_policy": "ROUND_ROBIN",
"load_assignment": {
"cluster_name": "prometheus_stats",
"endpoints": [{
"lb_endpoints": [{
"endpoint": {
"address":{
"socket_address": {
"protocol": "TCP",
"address": "127.0.0.1",
"port_value": 15020
}
}
}
}]
}]
}
},
{
"name": "sds-grpc",
"type": "STATIC",
"http2_protocol_options": {},
"connect_timeout": "1s",
"lb_policy": "ROUND_ROBIN",
"load_assignment": {
"cluster_name": "sds-grpc",
"endpoints": [{
"lb_endpoints": [{
"endpoint": {
"address":{
"pipe": {
"path": "./etc/istio/proxy/SDS"
}
}
}
}]
}]
}
},
{
"name": "xds-grpc",
"type": "STRICT_DNS",
"respect_dns_ttl": true,
"dns_lookup_family": "V4_ONLY",
"connect_timeout": "1s",
"lb_policy": "ROUND_ROBIN",
"load_assignment": {
"cluster_name": "xds-grpc",
"endpoints": [{
"lb_endpoints": [{
"endpoint": {
"address":{
"socket_address": {"address": "istio-pilot", "port_value": 15010}
}
}
}]
}]
},
"circuit_breakers": {
"thresholds": [
{
"priority": "DEFAULT",
"max_connections": 100000,
"max_pending_requests": 100000,
"max_requests": 100000
},
{
"priority": "HIGH",
"max_connections": 100000,
"max_pending_requests": 100000,
"max_requests": 100000
}
]
},
"upstream_connection_options": {
"tcp_keepalive": {
"keepalive_time": 300
}
},
"max_requests_per_connection": 1,
"http2_protocol_options": { }
}
],
"listeners":[
{
"address": {
"socket_address": {
"protocol": "TCP",
"address": "0.0.0.0",
"port_value": 15090
}
},
"filter_chains": [
{
"filters": [
{
"name": "envoy.http_connection_manager",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
"codec_type": "AUTO",
"stat_prefix": "stats",
"route_config": {
"virtual_hosts": [
{
"name": "backend",
"domains": [
"*"
],
"routes": [
{
"match": {
"prefix": "/stats/prometheus"
},
"route": {
"cluster": "prometheus_stats"
}
}
]
}
]
},
"http_filters": [{
"name": "envoy.router",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
}
}]
}
}
]
}
]
},
{
"address": {
"socket_address": {
"protocol": "TCP",
"address": "0.0.0.0",
"port_value": 15021
}
},
"filter_chains": [
{
"filters": [
{
"name": "envoy.http_connection_manager",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
"codec_type": "AUTO",
"stat_prefix": "agent",
"route_config": {
"virtual_hosts": [
{
"name": "backend",
"domains": [
"*"
],
"routes": [
{
"match": {
"prefix": "/healthz/ready"
},
"route": {
"cluster": "agent"
}
}
]
}
]
},
"http_filters": [{
"name": "envoy.router",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
}
}]
}
}
]
}
]
}
]
}
,
"cluster_manager": {
"outlier_detection": {
"event_log_path": "/dev/stdout"
}
}
}
| pkg/bootstrap/testdata/stats_inclusion_golden.json | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.00026962923584505916,
0.00017037594807334244,
0.00016292939835693687,
0.00016682544082868844,
0.0000171190968103474
] |
{
"id": 2,
"code_window": [
"prefer-experimental default\n",
"xds-address default\n",
"xds-port 15012 default\n",
"xds-san default\n",
"`,\n",
"\t\t\twantException: false,\n",
"\t\t},\n",
"\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [],
"file_path": "istioctl/cmd/config_test.go",
"type": "replace",
"edit_start_line_idx": 40
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cmd
import (
"fmt"
"regexp"
"strings"
"testing"
)
func TestConfigList(t *testing.T) {
cases := []testCase{
{ // case 0
args: strings.Split("experimental config get istioNamespace", " "),
expectedRegexp: regexp.MustCompile("Usage:\n istioctl experimental config"),
wantException: false,
},
{ // case 1
args: strings.Split("experimental config list", " "),
expectedOutput: `FLAG VALUE FROM
cert-dir default
insecure default
istioNamespace istio-system default
prefer-experimental default
xds-address default
xds-port 15012 default
xds-san default
`,
wantException: false,
},
}
for i, c := range cases {
t.Run(fmt.Sprintf("case %d %s", i, strings.Join(c.args, " ")), func(t *testing.T) {
verifyOutput(t, c)
})
}
}
| istioctl/cmd/config_test.go | 1 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.16160987317562103,
0.030178336426615715,
0.00016791047528386116,
0.0002433878689771518,
0.05916189029812813
] |
{
"id": 2,
"code_window": [
"prefer-experimental default\n",
"xds-address default\n",
"xds-port 15012 default\n",
"xds-san default\n",
"`,\n",
"\t\t\twantException: false,\n",
"\t\t},\n",
"\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [],
"file_path": "istioctl/cmd/config_test.go",
"type": "replace",
"edit_start_line_idx": 40
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package util
import (
"fmt"
"time"
"istio.io/pkg/log"
"istio.io/istio/security/pkg/pki/util"
)
// CertUtil is an interface for utility functions on certificates.
type CertUtil interface {
// GetWaitTime returns the waiting time before renewing the certificate.
GetWaitTime([]byte, time.Time, time.Duration) (time.Duration, error)
}
// CertUtilImpl is the implementation of CertUtil, for production use.
type CertUtilImpl struct {
gracePeriodPercentage int
}
// NewCertUtil returns a new CertUtilImpl
func NewCertUtil(gracePeriodPercentage int) CertUtilImpl {
return CertUtilImpl{
gracePeriodPercentage: gracePeriodPercentage,
}
}
// GetWaitTime returns the waiting time before renewing the cert, based on the current time, the timestamps in the cert, and
// the grace period.
func (cu CertUtilImpl) GetWaitTime(certBytes []byte, now time.Time, minGracePeriod time.Duration) (time.Duration, error) {
cert, certErr := util.ParsePemEncodedCertificate(certBytes)
if certErr != nil {
return time.Duration(0), certErr
}
timeToExpire := cert.NotAfter.Sub(now)
if timeToExpire < 0 {
return time.Duration(0), fmt.Errorf("certificate already expired at %s, but now is %s",
cert.NotAfter, now)
}
// Note: multiplying time.Duration(int64) by an int (gracePeriodPercentage) can cause overflow (e.g.,
// when duration is time.Hour * 90000). So float64 is used instead.
gracePeriod := time.Duration(float64(cert.NotAfter.Sub(cert.NotBefore)) * (float64(cu.gracePeriodPercentage) / 100))
if gracePeriod < minGracePeriod {
log.Warnf("gracePeriod (%v * %f) = %v is less than minGracePeriod %v. Apply minGracePeriod.",
cert.NotAfter.Sub(cert.NotBefore), float64(cu.gracePeriodPercentage/100), gracePeriod, minGracePeriod)
gracePeriod = minGracePeriod
}
// waitTime is the duration between now and the start of the grace period.
// It is the time until cert expiration minus the length of the grace period.
waitTime := timeToExpire - gracePeriod
if waitTime < 0 {
// We are within the grace period.
return time.Duration(0), fmt.Errorf("got a certificate that should be renewed now")
}
return waitTime, nil
}
| security/pkg/util/certutil.go | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.0009373837383463979,
0.00036214845022186637,
0.00016502136713825166,
0.00025755801470950246,
0.0002556832623668015
] |
{
"id": 2,
"code_window": [
"prefer-experimental default\n",
"xds-address default\n",
"xds-port 15012 default\n",
"xds-san default\n",
"`,\n",
"\t\t\twantException: false,\n",
"\t\t},\n",
"\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [],
"file_path": "istioctl/cmd/config_test.go",
"type": "replace",
"edit_start_line_idx": 40
} |
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| licenses/github.com/go-openapi/analysis/LICENSE | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.0001763014734024182,
0.0001737052807584405,
0.0001704779569990933,
0.0001744591281749308,
0.0000016920926100283395
] |
{
"id": 2,
"code_window": [
"prefer-experimental default\n",
"xds-address default\n",
"xds-port 15012 default\n",
"xds-san default\n",
"`,\n",
"\t\t\twantException: false,\n",
"\t\t},\n",
"\t}\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [],
"file_path": "istioctl/cmd/config_test.go",
"type": "replace",
"edit_start_line_idx": 40
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package caclient
import (
"context"
"crypto/tls"
"crypto/x509"
"fmt"
"net/http"
"strconv"
"github.com/hashicorp/vault/api"
"istio.io/istio/pkg/security"
"istio.io/pkg/log"
)
var (
vaultClientLog = log.RegisterScope("vault", "Vault client debugging", 0)
)
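// vaultClient is a CA client that authenticates to a HashiCorp Vault server and asks it to sign workload CSRs.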
type vaultClient struct {
enableTLS bool
tlsRootCert []byte
vaultAddr string
vaultLoginRole string
vaultLoginPath string
vaultSignCsrPath string
client *api.Client
}
// NewVaultClient creates a CA client for the Vault provider.
func NewVaultClient(tls bool, tlsRootCert []byte,
vaultAddr, vaultLoginRole, vaultLoginPath, vaultSignCsrPath string) (security.Client, error) {
c := &vaultClient{
enableTLS: tls,
tlsRootCert: tlsRootCert,
vaultAddr: vaultAddr,
vaultLoginRole: vaultLoginRole,
vaultLoginPath: vaultLoginPath,
vaultSignCsrPath: vaultSignCsrPath,
}
var client *api.Client
var err error
if tls {
client, err = createVaultTLSClient(vaultAddr, tlsRootCert)
} else {
client, err = createVaultClient(vaultAddr)
}
if err != nil {
return nil, err
}
c.client = client
vaultClientLog.Infof("created Vault client for Vault address: %s, TLS: %v", vaultAddr, tls)
return c, nil
}
// CSRSign calls Vault to sign a CSR.
func (c *vaultClient) CSRSign(ctx context.Context, reqID string, csrPEM []byte, saToken string,
certValidTTLInSec int64) ([]string /*PEM-encoded certificate chain*/, error) {
token, err := loginVaultK8sAuthMethod(c.client, c.vaultLoginPath, c.vaultLoginRole, saToken)
if err != nil {
return nil, fmt.Errorf("failed to login Vault at %s: %v", c.vaultAddr, err)
}
c.client.SetToken(token)
certChain, err := signCsrByVault(c.client, c.vaultSignCsrPath, certValidTTLInSec, csrPEM)
if err != nil {
return nil, fmt.Errorf("failed to sign CSR: %v", err)
}
if len(certChain) <= 1 {
vaultClientLog.Errorf("certificate chain length is %d, expected more than 1", len(certChain))
return nil, fmt.Errorf("invalid certificate chain in the response")
}
return certChain, nil
}
// createVaultClient creates a client to a Vault server
// vaultAddr: the address of the Vault server (e.g., "http://127.0.0.1:8200").
func createVaultClient(vaultAddr string) (*api.Client, error) {
config := api.DefaultConfig()
config.Address = vaultAddr
client, err := api.NewClient(config)
if err != nil {
vaultClientLog.Errorf("failed to create a Vault client: %v", err)
return nil, err
}
return client, nil
}
// createVaultTLSClient creates a client to a Vault server
// vaultAddr: the address of the Vault server (e.g., "https://127.0.0.1:8200").
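// tlsRootCert: optional PEM-encoded root certificate(s) appended to the system certificate pool for verifying the Vault server.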
func createVaultTLSClient(vaultAddr string, tlsRootCert []byte) (*api.Client, error) {
// Load the system default root certificates.
pool, err := x509.SystemCertPool()
if err != nil {
vaultClientLog.Errorf("could not get SystemCertPool: %v", err)
return nil, fmt.Errorf("could not get SystemCertPool: %v", err)
}
if pool == nil {
log.Info("system cert pool is nil, create a new cert pool")
pool = x509.NewCertPool()
}
if len(tlsRootCert) > 0 {
ok := pool.AppendCertsFromPEM(tlsRootCert)
if !ok {
return nil, fmt.Errorf("failed to append a certificate (%v) to the certificate pool", string(tlsRootCert))
}
}
tlsConfig := &tls.Config{
RootCAs: pool,
}
transport := &http.Transport{TLSClientConfig: tlsConfig}
httpClient := &http.Client{Transport: transport}
config := api.DefaultConfig()
config.Address = vaultAddr
config.HttpClient = httpClient
client, err := api.NewClient(config)
if err != nil {
vaultClientLog.Errorf("failed to create a Vault client: %v", err)
return nil, err
}
return client, nil
}
// loginVaultK8sAuthMethod logs into the Vault k8s auth method with the service account and
// returns the auth client token.
// loginPath: the path of the login
// role: the login role
// sa: the service account JWT used for login
func loginVaultK8sAuthMethod(client *api.Client, loginPath, role, sa string) (string, error) {
resp, err := client.Logical().Write(
loginPath,
map[string]interface{}{
"jwt": sa,
"role": role,
})
if err != nil {
vaultClientLog.Errorf("failed to login Vault: %v", err)
return "", err
}
if resp == nil {
vaultClientLog.Errorf("login response is nil")
return "", fmt.Errorf("login response is nil")
}
if resp.Auth == nil {
vaultClientLog.Errorf("login response auth field is nil")
return "", fmt.Errorf("login response auth field is nil")
}
return resp.Auth.ClientToken, nil
}
// signCsrByVault signs the CSR and returns the signed certificate and the CA certificate chain.
// It returns the signed certificate chain on success.
// client: the Vault client
// csrSigningPath: the path for signing a CSR
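// certTTLInSec: the requested TTL of the signed certificate, in seconds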
// csr: the CSR to be signed, in pem format
func signCsrByVault(client *api.Client, csrSigningPath string, certTTLInSec int64, csr []byte) ([]string, error) {
m := map[string]interface{}{
"format": "pem",
"csr": string(csr),
"ttl": strconv.FormatInt(certTTLInSec, 10) + "s",
"exclude_cn_from_sans": true,
}
res, err := client.Logical().Write(csrSigningPath, m)
if err != nil {
vaultClientLog.Errorf("failed to post to %v: %v", csrSigningPath, err)
return nil, fmt.Errorf("failed to post to %v: %v", csrSigningPath, err)
}
if res == nil {
vaultClientLog.Error("sign response is nil")
return nil, fmt.Errorf("sign response is nil")
}
if res.Data == nil {
vaultClientLog.Error("sign response has a nil Data field")
return nil, fmt.Errorf("sign response has a nil Data field")
}
// Extract the certificate and the certificate chain
certificate, ok := res.Data["certificate"]
if !ok {
vaultClientLog.Error("no certificate in the CSR response")
return nil, fmt.Errorf("no certificate in the CSR response")
}
cert, ok := certificate.(string)
if !ok {
vaultClientLog.Error("the certificate in the CSR response is not a string")
return nil, fmt.Errorf("the certificate in the CSR response is not a string")
}
caChain, ok := res.Data["ca_chain"]
if !ok {
vaultClientLog.Error("no certificate chain in the CSR response")
return nil, fmt.Errorf("no certificate chain in the CSR response")
}
chain, ok := caChain.([]interface{})
if !ok {
vaultClientLog.Error("the certificate chain in the CSR response is of unexpected format")
return nil, fmt.Errorf("the certificate chain in the CSR response is of unexpected format")
}
var certChain []string
certChain = append(certChain, cert+"\n")
for idx, c := range chain {
_, ok := c.(string)
if !ok {
vaultClientLog.Errorf("the certificate in the certificate chain %v is not a string", idx)
return nil, fmt.Errorf("the certificate in the certificate chain %v is not a string", idx)
}
certChain = append(certChain, c.(string)+"\n")
}
return certChain, nil
}
| security/pkg/nodeagent/caclient/providers/vault/client.go | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.001854335074312985,
0.00027769667212851346,
0.00016334567044395953,
0.0001691353681962937,
0.00034191523445770144
] |
{
"id": 3,
"code_window": [
"\t\t\"Istiod pod port\")\n",
"\tcmd.PersistentFlags().DurationVar(&o.Timeout, \"timeout\", time.Second*30,\n",
"\t\t\"the duration to wait before failing\")\n",
"\tcmd.PersistentFlags().StringVar(&o.XDSSAN, \"xds-san\", viper.GetString(\"XDS-SAN\"),\n",
"\t\t\"XDS Subject Alternative Name (for example istiod.istio-system.svc)\")\n",
"\tcmd.PersistentFlags().BoolVar(&o.InsecureSkipVerify, \"insecure\", viper.GetBool(\"INSECURE\"),\n",
"\t\t\"Skip server certificate and domain verification. (NOT SECURE!)\")\n",
"}\n",
"\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcmd.PersistentFlags().StringVar(&o.XDSSAN, \"authority\", viper.GetString(\"AUTHORITY\"),\n"
],
"file_path": "istioctl/pkg/clioptions/central.go",
"type": "replace",
"edit_start_line_idx": 63
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cmd
import (
"fmt"
"regexp"
"strings"
"testing"
)
func TestConfigList(t *testing.T) {
cases := []testCase{
{ // case 0
args: strings.Split("experimental config get istioNamespace", " "),
expectedRegexp: regexp.MustCompile("Usage:\n istioctl experimental config"),
wantException: false,
},
{ // case 1
args: strings.Split("experimental config list", " "),
expectedOutput: `FLAG VALUE FROM
cert-dir default
insecure default
istioNamespace istio-system default
prefer-experimental default
xds-address default
xds-port 15012 default
xds-san default
`,
wantException: false,
},
}
for i, c := range cases {
t.Run(fmt.Sprintf("case %d %s", i, strings.Join(c.args, " ")), func(t *testing.T) {
verifyOutput(t, c)
})
}
}
| istioctl/cmd/config_test.go | 1 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.0004993389011360705,
0.00027596179279498756,
0.00016891202540136874,
0.0001751381641952321,
0.00014608459605369717
] |
{
"id": 3,
"code_window": [
"\t\t\"Istiod pod port\")\n",
"\tcmd.PersistentFlags().DurationVar(&o.Timeout, \"timeout\", time.Second*30,\n",
"\t\t\"the duration to wait before failing\")\n",
"\tcmd.PersistentFlags().StringVar(&o.XDSSAN, \"xds-san\", viper.GetString(\"XDS-SAN\"),\n",
"\t\t\"XDS Subject Alternative Name (for example istiod.istio-system.svc)\")\n",
"\tcmd.PersistentFlags().BoolVar(&o.InsecureSkipVerify, \"insecure\", viper.GetBool(\"INSECURE\"),\n",
"\t\t\"Skip server certificate and domain verification. (NOT SECURE!)\")\n",
"}\n",
"\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcmd.PersistentFlags().StringVar(&o.XDSSAN, \"authority\", viper.GetString(\"AUTHORITY\"),\n"
],
"file_path": "istioctl/pkg/clioptions/central.go",
"type": "replace",
"edit_start_line_idx": 63
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package validation
//go:generate go-bindata --nocompress --nometadata --pkg validation -o dataset.gen.go dataset/...
//go:generate goimports -w dataset.gen.go
| galley/testdatasets/validation/dataset.go | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.00017654907424002886,
0.00017645990010350943,
0.00017637072596699,
0.00017645990010350943,
8.917413651943207e-8
] |
{
"id": 3,
"code_window": [
"\t\t\"Istiod pod port\")\n",
"\tcmd.PersistentFlags().DurationVar(&o.Timeout, \"timeout\", time.Second*30,\n",
"\t\t\"the duration to wait before failing\")\n",
"\tcmd.PersistentFlags().StringVar(&o.XDSSAN, \"xds-san\", viper.GetString(\"XDS-SAN\"),\n",
"\t\t\"XDS Subject Alternative Name (for example istiod.istio-system.svc)\")\n",
"\tcmd.PersistentFlags().BoolVar(&o.InsecureSkipVerify, \"insecure\", viper.GetBool(\"INSECURE\"),\n",
"\t\t\"Skip server certificate and domain verification. (NOT SECURE!)\")\n",
"}\n",
"\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcmd.PersistentFlags().StringVar(&o.XDSSAN, \"authority\", viper.GetString(\"AUTHORITY\"),\n"
],
"file_path": "istioctl/pkg/clioptions/central.go",
"type": "replace",
"edit_start_line_idx": 63
} | // GENERATED FILE -- DO NOT EDIT
//
package basicmeta
import (
// Pull in all the known proto types to ensure we get their types registered.
// Register protos in "github.com/gogo/protobuf/types"
_ "github.com/gogo/protobuf/types"
)
| galley/pkg/config/testing/basicmeta/staticinit.gen.go | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.00017187249613925815,
0.00017165430472232401,
0.0001714361278573051,
0.00017165430472232401,
2.181841409765184e-7
] |
{
"id": 3,
"code_window": [
"\t\t\"Istiod pod port\")\n",
"\tcmd.PersistentFlags().DurationVar(&o.Timeout, \"timeout\", time.Second*30,\n",
"\t\t\"the duration to wait before failing\")\n",
"\tcmd.PersistentFlags().StringVar(&o.XDSSAN, \"xds-san\", viper.GetString(\"XDS-SAN\"),\n",
"\t\t\"XDS Subject Alternative Name (for example istiod.istio-system.svc)\")\n",
"\tcmd.PersistentFlags().BoolVar(&o.InsecureSkipVerify, \"insecure\", viper.GetBool(\"INSECURE\"),\n",
"\t\t\"Skip server certificate and domain verification. (NOT SECURE!)\")\n",
"}\n",
"\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcmd.PersistentFlags().StringVar(&o.XDSSAN, \"authority\", viper.GetString(\"AUTHORITY\"),\n"
],
"file_path": "istioctl/pkg/clioptions/central.go",
"type": "replace",
"edit_start_line_idx": 63
} | # A-la-carte istio ingress gateway.
# Must be installed in a separate namespace, to minimize access to secrets.
gateways:
istio-ingressgateway:
name: istio-ingressgateway
labels:
app: istio-ingressgateway
istio: ingressgateway
ports:
## You can add custom gateway ports in user values overrides, but it must include those ports since helm replaces.
# Note that AWS ELB will by default perform health checks on the first port
# on this list. Setting this to the health check port will ensure that health
# checks always work. https://github.com/istio/istio/issues/12503
- port: 15021
targetPort: 15021
name: status-port
- port: 80
targetPort: 8080
name: http2
- port: 443
targetPort: 8443
name: https
# This is the port where sni routing happens
- port: 15443
targetPort: 15443
name: tls
    # Scalability tuning
# replicaCount: 1
rollingMaxSurge: 100%
rollingMaxUnavailable: 25%
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
cpu:
targetAverageUtilization: 80
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 1024Mi
# Debug level for envoy. Can be set to 'debug'
debug: info
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalIPs: []
serviceAnnotations: {}
domain: ""
# Enable cross-cluster access using SNI matching
zvpn:
enabled: false
suffix: global
# To generate an internal load balancer:
# --set serviceAnnotations.cloud.google.com/load-balancer-type=internal
#serviceAnnotations:
# cloud.google.com/load-balancer-type: "internal"
podAnnotations: {}
type: LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be
#### MESH EXPANSION PORTS ########
# Pilot and Citadel MTLS ports are enabled in gateway - but will only redirect
# to pilot/citadel if global.meshExpansion settings are enabled.
# Delete these ports if mesh expansion is not enabled, to avoid
# exposing unnecessary ports on the web.
# You can remove these ports if you are not using mesh expansion
meshExpansionPorts:
- port: 15012
targetPort: 15012
name: tcp-istiod
- port: 853
targetPort: 8853
name: tcp-dns-tls
####### end MESH EXPANSION PORTS ######
##############
secretVolumes:
- name: ingressgateway-certs
secretName: istio-ingressgateway-certs
mountPath: /etc/istio/ingressgateway-certs
- name: ingressgateway-ca-certs
secretName: istio-ingressgateway-ca-certs
mountPath: /etc/istio/ingressgateway-ca-certs
customService: false
externalTrafficPolicy: ""
ingressPorts: []
hosts: []
additionalContainers: []
configVolumes: []
certificates: false
tls: false
### Advanced options ############
env:
# A gateway with this mode ensures that pilot generates an additional
# set of clusters for internal services but without Istio mTLS, to
# enable cross cluster routing.
ISTIO_META_ROUTER_MODE: "sni-dnat"
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
# already running on the node rather than based on labels on nodes.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote "hard" vs. "soft" requirements, you can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# correspondingly.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# "security" and value "S1".
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
# whether to run the gateway in a privileged container
runAsRoot: false
# Revision is set as 'version' label and part of the resource names when installing multiple control planes.
revision: ""
| operator/cmd/mesh/testdata/manifest-generate/data-snapshot/charts/gateways/istio-ingress/values.yaml | 0 | https://github.com/istio/istio/commit/4b62624572fc823a431e700f27429ccd724cafac | [
0.00022775909746997058,
0.00017360087076667696,
0.0001621020637685433,
0.00016913360741455108,
0.000015721405361546203
] |
{
"id": 0,
"code_window": [
"\t\t}\n",
"\t}()\n",
"\n",
"\tgo func() {\n",
"\t\t<-s.ctx.Done()\n",
"\t\tif err := s.cniListenServer.Close(); err != nil {\n",
"\t\t\tlog.Errorf(\"CNI listen server terminated with error: %v\", err)\n",
"\t\t} else {\n",
"\t\t\tlog.Debug(\"CNI listen server terminated\")\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(s.ctx, func() {\n"
],
"file_path": "cni/pkg/nodeagent/cni-watcher.go",
"type": "replace",
"edit_start_line_idx": 101
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package nodeagent
import (
"context"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"net/netip"
"time"
corev1 "k8s.io/api/core/v1"
pconstants "istio.io/istio/cni/pkg/constants"
"istio.io/istio/cni/pkg/pluginlistener"
"istio.io/istio/pkg/network"
)
// Just a composite of the CNI plugin add event struct + some extracted "args"
type CNIPluginAddEvent struct {
Netns string
PodName string
PodNamespace string
IPs []IPConfig
}
// IPConfig contains an interface/gateway/address combo defined for a newly-started pod by CNI.
// This is "from the horse's mouth" so to speak and will be populated before Kube is informed of the
// pod IP.
type IPConfig struct {
Interface *int
Address net.IPNet
Gateway net.IP
}
type CniPluginServer struct {
cniListenServer *http.Server
cniListenServerCancel context.CancelFunc
handlers K8sHandlers
dataplane MeshDataplane
sockAddress string
ctx context.Context
}
func startCniPluginServer(ctx context.Context, pluginSocket string,
handlers K8sHandlers,
dataplane MeshDataplane,
) *CniPluginServer {
ctx, cancel := context.WithCancel(ctx)
mux := http.NewServeMux()
s := &CniPluginServer{
handlers: handlers,
dataplane: dataplane,
cniListenServer: &http.Server{
Handler: mux,
},
cniListenServerCancel: cancel,
sockAddress: pluginSocket,
ctx: ctx,
}
mux.HandleFunc(pconstants.CNIAddEventPath, s.handleAddEvent)
return s
}
func (s *CniPluginServer) Stop() {
s.cniListenServerCancel()
}
// Start starts up a UDS server which receives events from the CNI chain plugin.
func (s *CniPluginServer) Start() error {
if s.sockAddress == "" {
return fmt.Errorf("no socket address provided")
}
log.Info("Start a listen server for CNI plugin events")
unixListener, err := pluginlistener.NewListener(s.sockAddress)
if err != nil {
return fmt.Errorf("failed to create CNI listener: %v", err)
}
go func() {
if err := s.cniListenServer.Serve(unixListener); network.IsUnexpectedListenerError(err) {
log.Errorf("Error running CNI listener server: %v", err)
}
}()
go func() {
<-s.ctx.Done()
if err := s.cniListenServer.Close(); err != nil {
log.Errorf("CNI listen server terminated with error: %v", err)
} else {
log.Debug("CNI listen server terminated")
}
}()
return nil
}
func (s *CniPluginServer) handleAddEvent(w http.ResponseWriter, req *http.Request) {
if req.Body == nil {
log.Error("empty request body")
http.Error(w, "empty request body", http.StatusBadRequest)
return
}
defer req.Body.Close()
data, err := io.ReadAll(req.Body)
if err != nil {
log.Errorf("Failed to read event report from cni plugin: %v", err)
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
msg, err := processAddEvent(data)
if err != nil {
log.Errorf("Failed to process CNI event payload: %v", err)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
if err := s.ReconcileCNIAddEvent(req.Context(), msg); err != nil {
log.Errorf("Failed to handle add event: %v", err)
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
func processAddEvent(body []byte) (CNIPluginAddEvent, error) {
var msg CNIPluginAddEvent
err := json.Unmarshal(body, &msg)
if err != nil {
log.Errorf("Failed to unmarshal CNI plugin event: %v", err)
return msg, err
}
log.Debugf("Deserialized CNI plugin event: %+v", msg)
return msg, nil
}
func (s *CniPluginServer) ReconcileCNIAddEvent(ctx context.Context, addCmd CNIPluginAddEvent) error {
log := log.WithLabels("cni-event", addCmd)
log.Debugf("netns: %s", addCmd.Netns)
// The CNI node plugin should have already checked the pod against the k8s API before forwarding us the event,
// but we have to invoke the K8S client anyway, so to be safe we check it again here to make sure we get the same result.
maxStaleRetries := 10
msInterval := 10
retries := 0
var ambientPod *corev1.Pod
var err error
log.Debugf("Checking pod: %s in ns: %s is enabled for ambient", addCmd.PodName, addCmd.PodNamespace)
// The plugin already consulted the k8s API - but on this end handler caches may be stale, so retry a few times if we get no pod.
for ambientPod, err = s.handlers.GetPodIfAmbient(addCmd.PodName, addCmd.PodNamespace); (ambientPod == nil) && (retries < maxStaleRetries); retries++ {
if err != nil {
return err
}
log.Warnf("got an event for pod %s in namespace %s not found in current pod cache, retry %d of %d",
addCmd.PodName, addCmd.PodNamespace, retries, maxStaleRetries)
time.Sleep(time.Duration(msInterval) * time.Millisecond)
}
if ambientPod == nil {
return fmt.Errorf("got event for pod %s in namespace %s but could not find in pod cache after retries", addCmd.PodName, addCmd.PodNamespace)
}
log.Debugf("Pod: %s in ns: %s is enabled for ambient, adding to mesh.", addCmd.PodName, addCmd.PodNamespace)
var podIps []netip.Addr
for _, configuredPodIPs := range addCmd.IPs {
// net.ip is implicitly convertible to netip as slice
ip, _ := netip.AddrFromSlice(configuredPodIPs.Address.IP)
// We ignore the mask of the IPNet - it's fine if the IPNet defines
// a block grant of addresses, we just need one for checking routes.
podIps = append(podIps, ip)
}
// Note that we use the IP info from the CNI plugin here - the Pod struct as reported by K8S doesn't have this info
// yet (because the K8S control plane doesn't), so it will be empty there.
err = s.dataplane.AddPodToMesh(ctx, ambientPod, podIps, addCmd.Netns)
if err != nil {
return err
}
return nil
}
| cni/pkg/nodeagent/cni-watcher.go | 1 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.9984771609306335,
0.05599907040596008,
0.00016988177958410233,
0.0014325163792818785,
0.21658043563365936
] |
{
"id": 0,
"code_window": [
"\t\t}\n",
"\t}()\n",
"\n",
"\tgo func() {\n",
"\t\t<-s.ctx.Done()\n",
"\t\tif err := s.cniListenServer.Close(); err != nil {\n",
"\t\t\tlog.Errorf(\"CNI listen server terminated with error: %v\", err)\n",
"\t\t} else {\n",
"\t\t\tlog.Debug(\"CNI listen server terminated\")\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(s.ctx, func() {\n"
],
"file_path": "cni/pkg/nodeagent/cni-watcher.go",
"type": "replace",
"edit_start_line_idx": 101
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model
import (
"testing"
"time"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/durationpb"
wrappers "google.golang.org/protobuf/types/known/wrapperspb"
"istio.io/api/annotation"
meshconfig "istio.io/api/mesh/v1alpha1"
"istio.io/api/networking/v1beta1"
istioTypes "istio.io/api/type/v1beta1"
"istio.io/istio/pkg/config"
"istio.io/istio/pkg/config/mesh"
"istio.io/istio/pkg/config/schema/gvk"
"istio.io/istio/pkg/test/util/assert"
"istio.io/istio/pkg/util/protomarshal"
)
var now = time.Now()
const istioRootNamespace = "istio-system"
func TestConvertToMeshConfigProxyConfig(t *testing.T) {
cases := []struct {
name string
pc *v1beta1.ProxyConfig
expected *meshconfig.ProxyConfig
}{
{
name: "concurrency",
pc: &v1beta1.ProxyConfig{
Concurrency: &wrappers.Int32Value{Value: 3},
},
expected: &meshconfig.ProxyConfig{
Concurrency: &wrappers.Int32Value{Value: 3},
},
},
{
name: "environment variables",
pc: &v1beta1.ProxyConfig{
EnvironmentVariables: map[string]string{
"a": "b",
"c": "d",
},
},
expected: &meshconfig.ProxyConfig{
ProxyMetadata: map[string]string{
"a": "b",
"c": "d",
},
},
},
}
for _, tc := range cases {
converted := toMeshConfigProxyConfig(tc.pc)
assert.Equal(t, converted, tc.expected)
}
}
func TestMergeWithPrecedence(t *testing.T) {
cases := []struct {
name string
first *meshconfig.ProxyConfig
second *meshconfig.ProxyConfig
expected *meshconfig.ProxyConfig
}{
{
name: "concurrency",
first: &meshconfig.ProxyConfig{
Concurrency: v(1),
},
second: &meshconfig.ProxyConfig{
Concurrency: v(2),
},
expected: &meshconfig.ProxyConfig{
Concurrency: v(1),
},
},
{
name: "concurrency value 0",
first: &meshconfig.ProxyConfig{
Concurrency: v(0),
},
second: &meshconfig.ProxyConfig{
Concurrency: v(2),
},
expected: &meshconfig.ProxyConfig{
Concurrency: v(0),
},
},
{
name: "source concurrency nil",
first: &meshconfig.ProxyConfig{
Concurrency: nil,
},
second: &meshconfig.ProxyConfig{
Concurrency: v(2),
},
expected: &meshconfig.ProxyConfig{
Concurrency: v(2),
},
},
{
name: "dest concurrency nil",
first: &meshconfig.ProxyConfig{
Concurrency: v(2),
},
second: &meshconfig.ProxyConfig{
Concurrency: nil,
},
expected: &meshconfig.ProxyConfig{
Concurrency: v(2),
},
},
{
name: "both concurrency nil",
first: &meshconfig.ProxyConfig{
Concurrency: nil,
},
second: &meshconfig.ProxyConfig{
Concurrency: nil,
},
expected: &meshconfig.ProxyConfig{
Concurrency: nil,
},
},
{
name: "envvars",
first: &meshconfig.ProxyConfig{
ProxyMetadata: map[string]string{
"a": "x",
"b": "y",
},
},
second: &meshconfig.ProxyConfig{
ProxyMetadata: map[string]string{
"a": "z",
"b": "y",
"c": "d",
},
},
expected: &meshconfig.ProxyConfig{
ProxyMetadata: map[string]string{
"a": "x",
"b": "y",
"c": "d",
},
},
},
{
name: "empty envars merge with populated",
first: &meshconfig.ProxyConfig{
ProxyMetadata: map[string]string{},
},
second: &meshconfig.ProxyConfig{
ProxyMetadata: map[string]string{
"a": "z",
"b": "y",
"c": "d",
},
},
expected: &meshconfig.ProxyConfig{
ProxyMetadata: map[string]string{
"a": "z",
"b": "y",
"c": "d",
},
},
},
{
name: "nil proxyconfig",
first: nil,
second: &meshconfig.ProxyConfig{
ProxyMetadata: map[string]string{
"a": "z",
"b": "y",
"c": "d",
},
},
expected: &meshconfig.ProxyConfig{
ProxyMetadata: map[string]string{
"a": "z",
"b": "y",
"c": "d",
},
},
},
{
name: "terminationDrainDuration",
first: &meshconfig.ProxyConfig{
TerminationDrainDuration: durationpb.New(500 * time.Millisecond),
},
second: &meshconfig.ProxyConfig{
TerminationDrainDuration: durationpb.New(5 * time.Second),
},
expected: &meshconfig.ProxyConfig{
TerminationDrainDuration: durationpb.New(500 * time.Millisecond),
},
},
{
name: "tracing is empty",
first: &meshconfig.ProxyConfig{
Tracing: &meshconfig.Tracing{},
},
second: &meshconfig.ProxyConfig{
Tracing: mesh.DefaultProxyConfig().GetTracing(),
},
expected: &meshconfig.ProxyConfig{
Tracing: &meshconfig.Tracing{},
},
},
{
name: "tracing is not default",
first: &meshconfig.ProxyConfig{
Tracing: &meshconfig.Tracing{
Tracer: &meshconfig.Tracing_Datadog_{},
},
},
second: &meshconfig.ProxyConfig{
Tracing: mesh.DefaultProxyConfig().GetTracing(),
},
expected: &meshconfig.ProxyConfig{
Tracing: &meshconfig.Tracing{
Tracer: &meshconfig.Tracing_Datadog_{},
},
},
},
}
for _, tc := range cases {
merged := mergeWithPrecedence(tc.first, tc.second)
assert.Equal(t, merged, tc.expected)
}
}
func TestEffectiveProxyConfig(t *testing.T) {
cases := []struct {
name string
configs []config.Config
defaultConfig *meshconfig.ProxyConfig
proxy *NodeMetadata
expected *meshconfig.ProxyConfig
}{
{
name: "CR applies to matching namespace",
configs: []config.Config{
newProxyConfig("ns", "test-ns",
&v1beta1.ProxyConfig{
Concurrency: v(3),
Image: &v1beta1.ProxyImage{
ImageType: "debug",
},
}),
},
proxy: newMeta("test-ns", nil, nil),
expected: &meshconfig.ProxyConfig{
Concurrency: v(3),
Image: &v1beta1.ProxyImage{
ImageType: "debug",
},
},
},
{
name: "CR takes precedence over meshConfig.defaultConfig",
configs: []config.Config{
newProxyConfig("ns", istioRootNamespace,
&v1beta1.ProxyConfig{
Concurrency: v(3),
}),
},
defaultConfig: &meshconfig.ProxyConfig{Concurrency: v(2)},
proxy: newMeta("bar", nil, nil),
expected: &meshconfig.ProxyConfig{Concurrency: v(3)},
},
{
name: "workload matching CR takes precedence over namespace matching CR",
configs: []config.Config{
newProxyConfig("workload", "test-ns",
&v1beta1.ProxyConfig{
Selector: selector(map[string]string{
"test": "selector",
}),
Concurrency: v(3),
}),
newProxyConfig("ns", "test-ns",
&v1beta1.ProxyConfig{
Concurrency: v(2),
}),
},
proxy: newMeta("test-ns", map[string]string{"test": "selector"}, nil),
expected: &meshconfig.ProxyConfig{Concurrency: v(3)},
},
{
name: "matching workload CR takes precedence over annotation",
configs: []config.Config{
newProxyConfig("workload", "test-ns",
&v1beta1.ProxyConfig{
Selector: selector(map[string]string{
"test": "selector",
}),
Concurrency: v(3),
Image: &v1beta1.ProxyImage{
ImageType: "debug",
},
}),
},
proxy: newMeta(
"test-ns",
map[string]string{
"test": "selector",
}, map[string]string{
annotation.ProxyConfig.Name: "{ \"concurrency\": 5 }",
}),
expected: &meshconfig.ProxyConfig{
Concurrency: v(3),
Image: &v1beta1.ProxyImage{
ImageType: "debug",
},
},
},
{
name: "CR in other namespaces get ignored",
configs: []config.Config{
newProxyConfig("ns", "wrong-ns",
&v1beta1.ProxyConfig{
Concurrency: v(1),
}),
newProxyConfig("workload", "wrong-ns",
&v1beta1.ProxyConfig{
Selector: selector(map[string]string{
"test": "selector",
}),
Concurrency: v(2),
}),
newProxyConfig("global", istioRootNamespace,
&v1beta1.ProxyConfig{
Concurrency: v(3),
}),
},
proxy: newMeta("test-ns", map[string]string{"test": "selector"}, nil),
expected: &meshconfig.ProxyConfig{Concurrency: v(3)},
},
{
name: "multiple matching workload CRs, oldest applies",
configs: []config.Config{
setCreationTimestamp(newProxyConfig("workload-a", "test-ns",
&v1beta1.ProxyConfig{
Selector: selector(map[string]string{
"test": "selector",
}),
EnvironmentVariables: map[string]string{
"A": "1",
},
}), now),
setCreationTimestamp(newProxyConfig("workload-b", "test-ns",
&v1beta1.ProxyConfig{
Selector: selector(map[string]string{
"test": "selector",
}),
EnvironmentVariables: map[string]string{
"B": "2",
},
}), now.Add(time.Hour)),
setCreationTimestamp(newProxyConfig("workload-c", "test-ns",
&v1beta1.ProxyConfig{
Selector: selector(map[string]string{
"test": "selector",
}),
EnvironmentVariables: map[string]string{
"C": "3",
},
}), now.Add(time.Hour)),
},
proxy: newMeta(
"test-ns",
map[string]string{
"test": "selector",
}, map[string]string{}),
expected: &meshconfig.ProxyConfig{ProxyMetadata: map[string]string{
"A": "1",
}},
},
{
name: "multiple matching namespace CRs, oldest applies",
configs: []config.Config{
setCreationTimestamp(newProxyConfig("workload-a", "test-ns",
&v1beta1.ProxyConfig{
EnvironmentVariables: map[string]string{
"A": "1",
},
}), now),
setCreationTimestamp(newProxyConfig("workload-b", "test-ns",
&v1beta1.ProxyConfig{
EnvironmentVariables: map[string]string{
"B": "2",
},
}), now.Add(time.Hour)),
setCreationTimestamp(newProxyConfig("workload-c", "test-ns",
&v1beta1.ProxyConfig{
EnvironmentVariables: map[string]string{
"C": "3",
},
}), now.Add(time.Hour)),
},
proxy: newMeta(
"test-ns",
map[string]string{}, map[string]string{}),
expected: &meshconfig.ProxyConfig{ProxyMetadata: map[string]string{
"A": "1",
}},
},
{
name: "no configured CR or default config",
proxy: newMeta("ns", nil, nil),
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
store := newProxyConfigStore(t, tc.configs)
m := &meshconfig.MeshConfig{
RootNamespace: istioRootNamespace,
DefaultConfig: tc.defaultConfig,
}
original, _ := protomarshal.ToJSON(m)
pcs := GetProxyConfigs(store, m)
merged := pcs.EffectiveProxyConfig(tc.proxy, m)
pc := mesh.DefaultProxyConfig()
proto.Merge(pc, tc.expected)
assert.Equal(t, merged, pc)
after, _ := protomarshal.ToJSON(m)
assert.Equal(t, original, after, "mesh config should not be mutated")
})
}
}
func newProxyConfig(name, ns string, spec config.Spec) config.Config {
return config.Config{
Meta: config.Meta{
GroupVersionKind: gvk.ProxyConfig,
Name: name,
Namespace: ns,
},
Spec: spec,
}
}
func newProxyConfigStore(t *testing.T, configs []config.Config) ConfigStore {
t.Helper()
store := NewFakeStore()
for _, cfg := range configs {
store.Create(cfg)
}
return store
}
func setCreationTimestamp(c config.Config, t time.Time) config.Config {
c.Meta.CreationTimestamp = t
return c
}
func newMeta(ns string, labels, annotations map[string]string) *NodeMetadata {
return &NodeMetadata{
Namespace: ns,
Labels: labels,
Annotations: annotations,
}
}
func v(x int32) *wrappers.Int32Value {
return &wrappers.Int32Value{Value: x}
}
func selector(l map[string]string) *istioTypes.WorkloadSelector {
return &istioTypes.WorkloadSelector{MatchLabels: l}
}
| pilot/pkg/model/proxy_config_test.go | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.0010267621837556362,
0.00018905043543782085,
0.00016584130935370922,
0.00017223512986674905,
0.00011971264757448807
] |
{
"id": 0,
"code_window": [
"\t\t}\n",
"\t}()\n",
"\n",
"\tgo func() {\n",
"\t\t<-s.ctx.Done()\n",
"\t\tif err := s.cniListenServer.Close(); err != nil {\n",
"\t\t\tlog.Errorf(\"CNI listen server terminated with error: %v\", err)\n",
"\t\t} else {\n",
"\t\t\tlog.Debug(\"CNI listen server terminated\")\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(s.ctx, func() {\n"
],
"file_path": "cni/pkg/nodeagent/cni-watcher.go",
"type": "replace",
"edit_start_line_idx": 101
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package forwarder
import (
"context"
"crypto/tls"
"encoding/pem"
"fmt"
"strings"
"time"
"istio.io/istio/pkg/hbone"
"istio.io/istio/pkg/test/echo"
"istio.io/istio/pkg/test/echo/proto"
)
var _ protocol = &tlsProtocol{}
type tlsProtocol struct {
e *executor
}
func newTLSProtocol(e *executor) protocol {
return &tlsProtocol{e: e}
}
func (c *tlsProtocol) ForwardEcho(ctx context.Context, cfg *Config) (*proto.ForwardEchoResponse, error) {
return doForward(ctx, cfg, c.e, c.makeRequest)
}
func (c *tlsProtocol) makeRequest(ctx context.Context, cfg *Config, requestID int) (string, error) {
conn, err := newTLSConnection(cfg)
if err != nil {
return "", err
}
defer func() { _ = conn.Close() }()
msgBuilder := strings.Builder{}
echo.ForwarderURLField.WriteForRequest(&msgBuilder, requestID, cfg.Request.Url)
// Apply per-request timeout to calculate deadline for reads/writes.
ctx, cancel := context.WithTimeout(ctx, cfg.timeout)
defer cancel()
// Apply the deadline to the connection.
deadline, _ := ctx.Deadline()
if err := conn.SetWriteDeadline(deadline); err != nil {
return msgBuilder.String(), err
}
if err := conn.SetReadDeadline(deadline); err != nil {
return msgBuilder.String(), err
}
if err := conn.HandshakeContext(ctx); err != nil {
return "", err
}
// Make sure the client writes something to the buffer
message := "HelloWorld"
if cfg.Request.Message != "" {
message = cfg.Request.Message
}
start := time.Now()
if _, err := conn.Write([]byte(message + "\n")); err != nil {
fwLog.Warnf("TCP write failed: %v", err)
return msgBuilder.String(), err
}
cs := conn.ConnectionState()
echo.LatencyField.WriteForRequest(&msgBuilder, requestID, fmt.Sprintf("%v", time.Since(start)))
echo.CipherField.WriteForRequest(&msgBuilder, requestID, tls.CipherSuiteName(cs.CipherSuite))
echo.TLSVersionField.WriteForRequest(&msgBuilder, requestID, versionName(cs.Version))
echo.TLSServerName.WriteForRequest(&msgBuilder, requestID, cs.ServerName)
echo.AlpnField.WriteForRequest(&msgBuilder, requestID, cs.NegotiatedProtocol)
for n, i := range cs.PeerCertificates {
pemBlock := pem.Block{
Type: "CERTIFICATE",
Bytes: i.Raw,
}
echo.WriteBodyLine(&msgBuilder, requestID, fmt.Sprintf("Response%d=%q", n, string(pem.EncodeToMemory(&pemBlock))))
}
msg := msgBuilder.String()
return msg, nil
}
func versionName(v uint16) string {
switch v {
case tls.VersionTLS10:
return "1.0"
case tls.VersionTLS11:
return "1.1"
case tls.VersionTLS12:
return "1.2"
case tls.VersionTLS13:
return "1.3"
default:
return fmt.Sprintf("unknown-%v", v)
}
}
func (c *tlsProtocol) Close() error {
return nil
}
func newTLSConnection(cfg *Config) (*tls.Conn, error) {
address := cfg.Request.Url[len(cfg.scheme+"://"):]
con, err := hbone.TLSDialWithDialer(newDialer(cfg), "tcp", address, cfg.tlsConfig)
if err != nil {
return nil, err
}
return con, nil
}
| pkg/test/echo/server/forwarder/tls.go | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.005720824468880892,
0.0007229021866805851,
0.0001656846870901063,
0.0001781915925676003,
0.0014760486083105206
] |
{
"id": 0,
"code_window": [
"\t\t}\n",
"\t}()\n",
"\n",
"\tgo func() {\n",
"\t\t<-s.ctx.Done()\n",
"\t\tif err := s.cniListenServer.Close(); err != nil {\n",
"\t\t\tlog.Errorf(\"CNI listen server terminated with error: %v\", err)\n",
"\t\t} else {\n",
"\t\t\tlog.Debug(\"CNI listen server terminated\")\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(s.ctx, func() {\n"
],
"file_path": "cni/pkg/nodeagent/cni-watcher.go",
"type": "replace",
"edit_start_line_idx": 101
} | apiVersion: release-notes/v2
kind: feature
area: installation
docs:
- '[usage] https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior'
releaseNotes:
- |
**Added** configurable scaling behavior for Gateway HPA in helm chart
upgradeNotes: []
| releasenotes/notes/47318.yaml | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00016795337432995439,
0.00016728314221836627,
0.00016661289555486292,
0.00016728314221836627,
6.702393875457346e-7
] |
{
"id": 1,
"code_window": [
"\t\t\tlog.Errorf(\"CNI listen server terminated with error: %v\", err)\n",
"\t\t} else {\n",
"\t\t\tlog.Debug(\"CNI listen server terminated\")\n",
"\t\t}\n",
"\t}()\n",
"\n",
"\treturn nil\n",
"}\n",
"\n",
"func (s *CniPluginServer) handleAddEvent(w http.ResponseWriter, req *http.Request) {\n",
"\tif req.Body == nil {\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "cni/pkg/nodeagent/cni-watcher.go",
"type": "replace",
"edit_start_line_idx": 108
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package nodeagent
import (
"context"
"errors"
"fmt"
"io"
"net"
"os"
"sync"
"time"
"golang.org/x/sys/unix"
"google.golang.org/protobuf/proto"
"istio.io/istio/pkg/monitoring"
"istio.io/istio/pkg/zdsapi"
)
var (
ztunnelKeepAliveCheckInterval = 5 * time.Second
readWriteDeadline = 5 * time.Second
)
var ztunnelConnected = monitoring.NewGauge("ztunnel_connected",
"number of connections to ztunnel")
type ZtunnelServer interface {
Run(ctx context.Context)
PodDeleted(ctx context.Context, uid string) error
PodAdded(ctx context.Context, uid string, netns Netns) error
Close() error
}
/*
To clean up stale ztunnels
we may need ztunnel to send its (uid, bootid / boot time) to us
so that we can remove stale entries when the ztunnel pod is deleted
or when the ztunnel pod is restarted in the same pod (remove old entries when the same uid connects again, but with different boot id?)
save a queue of what needs to be sent to the ztunnel pod and send it one by one when it connects.
when a new ztunnel connects with different uid, only propagate deletes to older ztunnels.
*/
type connMgr struct {
connectionSet map[*ZtunnelConnection]struct{}
latestConn *ZtunnelConnection
mu sync.Mutex
}
func (c *connMgr) addConn(conn *ZtunnelConnection) {
log.Debug("ztunnel connected")
c.mu.Lock()
defer c.mu.Unlock()
c.connectionSet[conn] = struct{}{}
c.latestConn = conn
ztunnelConnected.RecordInt(int64(len(c.connectionSet)))
}
func (c *connMgr) LatestConn() *ZtunnelConnection {
c.mu.Lock()
defer c.mu.Unlock()
return c.latestConn
}
func (c *connMgr) deleteConn(conn *ZtunnelConnection) {
log.Debug("ztunnel disconnected")
c.mu.Lock()
defer c.mu.Unlock()
delete(c.connectionSet, conn)
if c.latestConn == conn {
c.latestConn = nil
}
ztunnelConnected.RecordInt(int64(len(c.connectionSet)))
}
// this is used in tests
// nolint: unused
func (c *connMgr) len() int {
c.mu.Lock()
defer c.mu.Unlock()
return len(c.connectionSet)
}
type ztunnelServer struct {
listener *net.UnixListener
// connections to pod delivered map
// add pod goes to newest connection
// delete pod goes to all connections
conns *connMgr
pods PodNetnsCache
}
var _ ZtunnelServer = &ztunnelServer{}
func newZtunnelServer(addr string, pods PodNetnsCache) (*ztunnelServer, error) {
if addr == "" {
return nil, fmt.Errorf("addr cannot be empty")
}
resolvedAddr, err := net.ResolveUnixAddr("unixpacket", addr)
if err != nil {
return nil, fmt.Errorf("failed to resolve unix addr: %w", err)
}
// remove potentially existing address
// Remove unix socket before use, if one is leftover from previous CNI restart
if err := os.Remove(addr); err != nil && !os.IsNotExist(err) {
// Anything other than "file not found" is an error.
return nil, fmt.Errorf("failed to remove unix://%s: %w", addr, err)
}
l, err := net.ListenUnix("unixpacket", resolvedAddr)
if err != nil {
return nil, fmt.Errorf("failed to listen unix: %w", err)
}
return &ztunnelServer{
listener: l,
conns: &connMgr{
connectionSet: map[*ZtunnelConnection]struct{}{},
},
pods: pods,
}, nil
}
func (z *ztunnelServer) Close() error {
return z.listener.Close()
}
func (z *ztunnelServer) Run(ctx context.Context) {
go func() {
<-ctx.Done()
z.Close()
}()
for {
log.Debug("accepting conn")
conn, err := z.accept()
if err != nil {
if errors.Is(err, net.ErrClosed) {
log.Debug("listener closed - returning")
return
}
log.Errorf("failed to accept conn: %v", err)
continue
}
log.Debug("connection accepted")
go func() {
log.Debug("handling conn")
if err := z.handleConn(ctx, conn); err != nil {
log.Errorf("failed to handle conn: %v", err)
}
}()
}
}
// ZDS protocol is very simple, for every message sent, and ack is sent.
// the ack only has temporal correlation (i.e. it is the first and only ack msg after the message was sent)
// All this to say, that we want to make sure that message to ztunnel are sent from a single goroutine
// so we don't mix messages and acks.
// nolint: unparam
func (z *ztunnelServer) handleConn(ctx context.Context, conn *ZtunnelConnection) error {
defer conn.Close()
go func() {
<-ctx.Done()
log.Debug("context cancelled - closing conn")
conn.Close()
}()
// before doing anything, add the connection to the list of active connections
z.conns.addConn(conn)
defer z.conns.deleteConn(conn)
// get hello message from ztunnel
m, _, err := readProto[zdsapi.ZdsHello](conn.u, readWriteDeadline, nil)
if err != nil {
return err
}
log.Infof("received hello from ztunnel. %v", m.Version)
log.Debug("sending snapshot to ztunnel")
if err := z.sendSnapshot(ctx, conn); err != nil {
return err
}
for {
// listen for updates:
select {
case update, ok := <-conn.Updates:
if !ok {
log.Debug("update channel closed - returning")
return nil
}
log.Debugf("got update to send to ztunnel")
resp, err := conn.sendDataAndWaitForAck(update.Update, update.Fd)
if err != nil {
log.Errorf("ztunnel acked error: err %v ackErr %s", err, resp.GetAck().GetError())
}
log.Debugf("ztunnel acked")
// Safety: Resp is buffered, so this will not block
update.Resp <- updateResponse{
err: err,
resp: resp,
}
case <-time.After(ztunnelKeepAliveCheckInterval):
// do a short read, just to see if the connection to ztunnel is
// still alive. As ztunnel shouldn't send anything unless we send
// something first, we expect to get an os.ErrDeadlineExceeded error
// here if the connection is still alive.
// note that unlike tcp connections, reading is a good enough test here.
_, err := conn.readMessage(time.Second / 100)
switch {
case !errors.Is(err, os.ErrDeadlineExceeded):
log.Debugf("ztunnel keepalive failed: %v", err)
if errors.Is(err, io.EOF) {
log.Debug("ztunnel EOF")
return nil
}
return err
case err == nil:
log.Warn("ztunnel protocol error, unexpected message")
return fmt.Errorf("ztunnel protocol error, unexpected message")
default:
// we get here if error is deadline exceeded, which means ztunnel is alive.
}
case <-ctx.Done():
return nil
}
}
}
func (z *ztunnelServer) PodDeleted(ctx context.Context, uid string) error {
r := &zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Del{
Del: &zdsapi.DelWorkload{
Uid: uid,
},
},
}
data, err := proto.Marshal(r)
if err != nil {
return err
}
log.Debugf("sending delete pod to ztunnel: %s %v", uid, r)
var delErr []error
z.conns.mu.Lock()
defer z.conns.mu.Unlock()
for conn := range z.conns.connectionSet {
_, err := conn.send(ctx, data, nil)
if err != nil {
delErr = append(delErr, err)
}
}
return errors.Join(delErr...)
}
func (z *ztunnelServer) PodAdded(ctx context.Context, uid string, netns Netns) error {
latestConn := z.conns.LatestConn()
if latestConn == nil {
return fmt.Errorf("no ztunnel connection")
}
r := &zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Add{
Add: &zdsapi.AddWorkload{
Uid: uid,
},
},
}
log.Debugf("About to send added pod: %s to ztunnel: %v", uid, r)
data, err := proto.Marshal(r)
if err != nil {
return err
}
fd := int(netns.Fd())
resp, err := latestConn.send(ctx, data, &fd)
if err != nil {
return err
}
if resp.GetAck().GetError() != "" {
log.Errorf("add-workload: got ack error: %s", resp.GetAck().GetError())
return fmt.Errorf("got ack error: %s", resp.GetAck().GetError())
}
return nil
}
// TODO ctx is unused here
// nolint: unparam
func (z *ztunnelServer) sendSnapshot(ctx context.Context, conn *ZtunnelConnection) error {
snap := z.pods.ReadCurrentPodSnapshot()
for uid, netns := range snap {
var resp *zdsapi.WorkloadResponse
var err error
if netns != nil {
fd := int(netns.Fd())
log.Debugf("Sending local pod %s ztunnel", uid)
resp, err = conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Add{
Add: &zdsapi.AddWorkload{
Uid: uid,
},
},
}, &fd)
} else {
log.Infof("netns not available for local pod %s. sending keep to ztunnel", uid)
resp, err = conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Keep{
Keep: &zdsapi.KeepWorkload{
Uid: uid,
},
},
}, nil)
}
if err != nil {
return err
}
if resp.GetAck().GetError() != "" {
log.Errorf("add-workload: got ack error: %s", resp.GetAck().GetError())
}
}
resp, err := conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_SnapshotSent{
SnapshotSent: &zdsapi.SnapshotSent{},
},
}, nil)
if err != nil {
return err
}
log.Debugf("snaptshot sent to ztunnel")
if resp.GetAck().GetError() != "" {
log.Errorf("snap-sent: got ack error: %s", resp.GetAck().GetError())
}
return nil
}
func (z *ztunnelServer) accept() (*ZtunnelConnection, error) {
log.Debug("accepting unix conn")
conn, err := z.listener.AcceptUnix()
if err != nil {
return nil, fmt.Errorf("failed to accept unix: %w", err)
}
log.Debug("accepted conn")
return newZtunnelConnection(conn), nil
}
type updateResponse struct {
err error
resp *zdsapi.WorkloadResponse
}
type updateRequest struct {
Update []byte
Fd *int
Resp chan updateResponse
}
type ZtunnelConnection struct {
u *net.UnixConn
Updates chan updateRequest
}
func newZtunnelConnection(u *net.UnixConn) *ZtunnelConnection {
return &ZtunnelConnection{u: u, Updates: make(chan updateRequest, 100)}
}
func (z *ZtunnelConnection) Close() {
z.u.Close()
}
func (z *ZtunnelConnection) send(ctx context.Context, data []byte, fd *int) (*zdsapi.WorkloadResponse, error) {
ret := make(chan updateResponse, 1)
req := updateRequest{
Update: data,
Fd: fd,
Resp: ret,
}
select {
case z.Updates <- req:
case <-ctx.Done():
return nil, ctx.Err()
}
select {
case r := <-ret:
return r.resp, r.err
case <-ctx.Done():
return nil, ctx.Err()
}
}
func (z *ZtunnelConnection) sendMsgAndWaitForAck(msg *zdsapi.WorkloadRequest, fd *int) (*zdsapi.WorkloadResponse, error) {
data, err := proto.Marshal(msg)
if err != nil {
return nil, err
}
return z.sendDataAndWaitForAck(data, fd)
}
func (z *ZtunnelConnection) sendDataAndWaitForAck(data []byte, fd *int) (*zdsapi.WorkloadResponse, error) {
var rights []byte
if fd != nil {
rights = unix.UnixRights(*fd)
}
err := z.u.SetWriteDeadline(time.Now().Add(readWriteDeadline))
if err != nil {
return nil, err
}
_, _, err = z.u.WriteMsgUnix(data, rights, nil)
if err != nil {
return nil, err
}
// wait for ack
return z.readMessage(readWriteDeadline)
}
func (z *ZtunnelConnection) readMessage(timeout time.Duration) (*zdsapi.WorkloadResponse, error) {
m, _, err := readProto[zdsapi.WorkloadResponse](z.u, timeout, nil)
return m, err
}
func readProto[T any, PT interface {
proto.Message
*T
}](c *net.UnixConn, timeout time.Duration, oob []byte) (PT, int, error) {
var buf [1024]byte
err := c.SetReadDeadline(time.Now().Add(timeout))
if err != nil {
return nil, 0, err
}
n, oobn, flags, _, err := c.ReadMsgUnix(buf[:], oob)
if err != nil {
return nil, 0, err
}
if flags&unix.MSG_TRUNC != 0 {
return nil, 0, fmt.Errorf("truncated message")
}
if flags&unix.MSG_CTRUNC != 0 {
return nil, 0, fmt.Errorf("truncated control message")
}
var resp T
var respPtr PT = &resp
err = proto.Unmarshal(buf[:n], respPtr)
if err != nil {
return nil, 0, err
}
return respPtr, oobn, nil
}
| cni/pkg/nodeagent/ztunnelserver.go | 1 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.001230483059771359,
0.0002427201106911525,
0.00016246616723947227,
0.00017455240595154464,
0.00017518149979878217
] |
{
"id": 1,
"code_window": [
"\t\t\tlog.Errorf(\"CNI listen server terminated with error: %v\", err)\n",
"\t\t} else {\n",
"\t\t\tlog.Debug(\"CNI listen server terminated\")\n",
"\t\t}\n",
"\t}()\n",
"\n",
"\treturn nil\n",
"}\n",
"\n",
"func (s *CniPluginServer) handleAddEvent(w http.ResponseWriter, req *http.Request) {\n",
"\tif req.Body == nil {\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "cni/pkg/nodeagent/cni-watcher.go",
"type": "replace",
"edit_start_line_idx": 108
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cluster
// DebugInfo contains minimal information about remote clusters.
// This struct is defined here, in a package that avoids many imports, since xds/debug usually
// affects agent binary size. We avoid embedding other parts of a "remote cluster" struct like kube clients.
type DebugInfo struct {
ID ID `json:"id"`
SecretName string `json:"secretName"`
SyncStatus string `json:"syncStatus"`
}
| pkg/cluster/debug.go | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00017931840557139367,
0.00017207690689247102,
0.00016394373960793018,
0.0001729685754980892,
0.000006308269803412259
] |
{
"id": 1,
"code_window": [
"\t\t\tlog.Errorf(\"CNI listen server terminated with error: %v\", err)\n",
"\t\t} else {\n",
"\t\t\tlog.Debug(\"CNI listen server terminated\")\n",
"\t\t}\n",
"\t}()\n",
"\n",
"\treturn nil\n",
"}\n",
"\n",
"func (s *CniPluginServer) handleAddEvent(w http.ResponseWriter, req *http.Request) {\n",
"\tif req.Body == nil {\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "cni/pkg/nodeagent/cni-watcher.go",
"type": "replace",
"edit_start_line_idx": 108
} | # The preview profile contains features that are experimental.
# This is intended to explore new features coming to Istio.
# Stability, security, and performance are not guaranteed - use at your own risk.
meshConfig:
defaultConfig:
proxyMetadata:
# Enable Istio agent to handle DNS requests for known hosts
# Unknown hosts will automatically be resolved using upstream dns servers in resolv.conf
ISTIO_META_DNS_CAPTURE: "true"
| manifests/charts/istiod-remote/files/profile-preview.yaml | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00017290112737100571,
0.00017290112737100571,
0.00017290112737100571,
0.00017290112737100571,
0
] |
{
"id": 1,
"code_window": [
"\t\t\tlog.Errorf(\"CNI listen server terminated with error: %v\", err)\n",
"\t\t} else {\n",
"\t\t\tlog.Debug(\"CNI listen server terminated\")\n",
"\t\t}\n",
"\t}()\n",
"\n",
"\treturn nil\n",
"}\n",
"\n",
"func (s *CniPluginServer) handleAddEvent(w http.ResponseWriter, req *http.Request) {\n",
"\tif req.Body == nil {\n"
],
"labels": [
"keep",
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "cni/pkg/nodeagent/cni-watcher.go",
"type": "replace",
"edit_start_line_idx": 108
} | iptables -t nat -N ISTIO_INBOUND
iptables -t nat -N ISTIO_REDIRECT
iptables -t nat -N ISTIO_IN_REDIRECT
iptables -t nat -N ISTIO_OUTPUT
iptables -t nat -I PREROUTING 1 -i eth1 -j RETURN
iptables -t nat -I PREROUTING 1 -i eth2 -j RETURN
iptables -t nat -A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
iptables -t nat -A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
iptables -t nat -A OUTPUT -p tcp -j ISTIO_OUTPUT
iptables -t nat -A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
iptables -t nat -A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -p tcp ! --dport 15008 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
iptables -t nat -A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
iptables -t nat -A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
iptables -t nat -A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -p tcp ! --dport 15008 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
iptables -t nat -A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
iptables -t nat -A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
iptables -t nat -A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
iptables -t nat -I PREROUTING 1 -i eth1 -d 10.0.0.0/8 -j ISTIO_REDIRECT
iptables -t nat -I PREROUTING 1 -i eth2 -d 10.0.0.0/8 -j ISTIO_REDIRECT
iptables -t nat -A ISTIO_OUTPUT -d 10.0.0.0/8 -j ISTIO_REDIRECT
iptables -t nat -A ISTIO_OUTPUT -j RETURN | tools/istio-iptables/pkg/capture/testdata/ipnets-with-kube-virt-interfaces.golden | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00017277603910770267,
0.00016754995158407837,
0.0001648736943025142,
0.00016500013589393348,
0.000003695759005495347
] |
{
"id": 2,
"code_window": [
"}\n",
"\n",
"func (z *ztunnelServer) Run(ctx context.Context) {\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\tz.Close()\n",
"\t}()\n",
"\n",
"\tfor {\n",
"\t\tlog.Debug(\"accepting conn\")\n",
"\t\tconn, err := z.accept()\n",
"\t\tif err != nil {\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(ctx, func() { _ = z.Close() })\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 147
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package nodeagent
import (
"context"
"errors"
"fmt"
"io"
"net"
"os"
"sync"
"time"
"golang.org/x/sys/unix"
"google.golang.org/protobuf/proto"
"istio.io/istio/pkg/monitoring"
"istio.io/istio/pkg/zdsapi"
)
var (
ztunnelKeepAliveCheckInterval = 5 * time.Second
readWriteDeadline = 5 * time.Second
)
var ztunnelConnected = monitoring.NewGauge("ztunnel_connected",
"number of connections to ztunnel")
type ZtunnelServer interface {
Run(ctx context.Context)
PodDeleted(ctx context.Context, uid string) error
PodAdded(ctx context.Context, uid string, netns Netns) error
Close() error
}
/*
To clean up stale ztunnels
we may need ztunnel to send its (uid, bootid / boot time) to us
so that we can remove stale entries when the ztunnel pod is deleted
or when the ztunnel pod is restarted in the same pod (remove old entries when the same uid connects again, but with different boot id?)
save a queue of what needs to be sent to the ztunnel pod and send it one by one when it connects.
when a new ztunnel connects with different uid, only propagate deletes to older ztunnels.
*/
type connMgr struct {
connectionSet map[*ZtunnelConnection]struct{}
latestConn *ZtunnelConnection
mu sync.Mutex
}
func (c *connMgr) addConn(conn *ZtunnelConnection) {
log.Debug("ztunnel connected")
c.mu.Lock()
defer c.mu.Unlock()
c.connectionSet[conn] = struct{}{}
c.latestConn = conn
ztunnelConnected.RecordInt(int64(len(c.connectionSet)))
}
func (c *connMgr) LatestConn() *ZtunnelConnection {
c.mu.Lock()
defer c.mu.Unlock()
return c.latestConn
}
func (c *connMgr) deleteConn(conn *ZtunnelConnection) {
log.Debug("ztunnel disconnected")
c.mu.Lock()
defer c.mu.Unlock()
delete(c.connectionSet, conn)
if c.latestConn == conn {
c.latestConn = nil
}
ztunnelConnected.RecordInt(int64(len(c.connectionSet)))
}
// this is used in tests
// nolint: unused
func (c *connMgr) len() int {
c.mu.Lock()
defer c.mu.Unlock()
return len(c.connectionSet)
}
type ztunnelServer struct {
listener *net.UnixListener
// connections to pod delivered map
// add pod goes to newest connection
// delete pod goes to all connections
conns *connMgr
pods PodNetnsCache
}
var _ ZtunnelServer = &ztunnelServer{}
func newZtunnelServer(addr string, pods PodNetnsCache) (*ztunnelServer, error) {
if addr == "" {
return nil, fmt.Errorf("addr cannot be empty")
}
resolvedAddr, err := net.ResolveUnixAddr("unixpacket", addr)
if err != nil {
return nil, fmt.Errorf("failed to resolve unix addr: %w", err)
}
// remove potentially existing address
// Remove unix socket before use, if one is leftover from previous CNI restart
if err := os.Remove(addr); err != nil && !os.IsNotExist(err) {
// Anything other than "file not found" is an error.
return nil, fmt.Errorf("failed to remove unix://%s: %w", addr, err)
}
l, err := net.ListenUnix("unixpacket", resolvedAddr)
if err != nil {
return nil, fmt.Errorf("failed to listen unix: %w", err)
}
return &ztunnelServer{
listener: l,
conns: &connMgr{
connectionSet: map[*ZtunnelConnection]struct{}{},
},
pods: pods,
}, nil
}
func (z *ztunnelServer) Close() error {
return z.listener.Close()
}
func (z *ztunnelServer) Run(ctx context.Context) {
go func() {
<-ctx.Done()
z.Close()
}()
for {
log.Debug("accepting conn")
conn, err := z.accept()
if err != nil {
if errors.Is(err, net.ErrClosed) {
log.Debug("listener closed - returning")
return
}
log.Errorf("failed to accept conn: %v", err)
continue
}
log.Debug("connection accepted")
go func() {
log.Debug("handling conn")
if err := z.handleConn(ctx, conn); err != nil {
log.Errorf("failed to handle conn: %v", err)
}
}()
}
}
// ZDS protocol is very simple, for every message sent, and ack is sent.
// the ack only has temporal correlation (i.e. it is the first and only ack msg after the message was sent)
// All this to say, that we want to make sure that message to ztunnel are sent from a single goroutine
// so we don't mix messages and acks.
// nolint: unparam
func (z *ztunnelServer) handleConn(ctx context.Context, conn *ZtunnelConnection) error {
defer conn.Close()
go func() {
<-ctx.Done()
log.Debug("context cancelled - closing conn")
conn.Close()
}()
// before doing anything, add the connection to the list of active connections
z.conns.addConn(conn)
defer z.conns.deleteConn(conn)
// get hello message from ztunnel
m, _, err := readProto[zdsapi.ZdsHello](conn.u, readWriteDeadline, nil)
if err != nil {
return err
}
log.Infof("received hello from ztunnel. %v", m.Version)
log.Debug("sending snapshot to ztunnel")
if err := z.sendSnapshot(ctx, conn); err != nil {
return err
}
for {
// listen for updates:
select {
case update, ok := <-conn.Updates:
if !ok {
log.Debug("update channel closed - returning")
return nil
}
log.Debugf("got update to send to ztunnel")
resp, err := conn.sendDataAndWaitForAck(update.Update, update.Fd)
if err != nil {
log.Errorf("ztunnel acked error: err %v ackErr %s", err, resp.GetAck().GetError())
}
log.Debugf("ztunnel acked")
// Safety: Resp is buffered, so this will not block
update.Resp <- updateResponse{
err: err,
resp: resp,
}
case <-time.After(ztunnelKeepAliveCheckInterval):
// do a short read, just to see if the connection to ztunnel is
// still alive. As ztunnel shouldn't send anything unless we send
// something first, we expect to get an os.ErrDeadlineExceeded error
// here if the connection is still alive.
// note that unlike tcp connections, reading is a good enough test here.
_, err := conn.readMessage(time.Second / 100)
switch {
case !errors.Is(err, os.ErrDeadlineExceeded):
log.Debugf("ztunnel keepalive failed: %v", err)
if errors.Is(err, io.EOF) {
log.Debug("ztunnel EOF")
return nil
}
return err
case err == nil:
log.Warn("ztunnel protocol error, unexpected message")
return fmt.Errorf("ztunnel protocol error, unexpected message")
default:
// we get here if error is deadline exceeded, which means ztunnel is alive.
}
case <-ctx.Done():
return nil
}
}
}
func (z *ztunnelServer) PodDeleted(ctx context.Context, uid string) error {
r := &zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Del{
Del: &zdsapi.DelWorkload{
Uid: uid,
},
},
}
data, err := proto.Marshal(r)
if err != nil {
return err
}
log.Debugf("sending delete pod to ztunnel: %s %v", uid, r)
var delErr []error
z.conns.mu.Lock()
defer z.conns.mu.Unlock()
for conn := range z.conns.connectionSet {
_, err := conn.send(ctx, data, nil)
if err != nil {
delErr = append(delErr, err)
}
}
return errors.Join(delErr...)
}
func (z *ztunnelServer) PodAdded(ctx context.Context, uid string, netns Netns) error {
latestConn := z.conns.LatestConn()
if latestConn == nil {
return fmt.Errorf("no ztunnel connection")
}
r := &zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Add{
Add: &zdsapi.AddWorkload{
Uid: uid,
},
},
}
log.Debugf("About to send added pod: %s to ztunnel: %v", uid, r)
data, err := proto.Marshal(r)
if err != nil {
return err
}
fd := int(netns.Fd())
resp, err := latestConn.send(ctx, data, &fd)
if err != nil {
return err
}
if resp.GetAck().GetError() != "" {
log.Errorf("add-workload: got ack error: %s", resp.GetAck().GetError())
return fmt.Errorf("got ack error: %s", resp.GetAck().GetError())
}
return nil
}
// TODO ctx is unused here
// nolint: unparam
func (z *ztunnelServer) sendSnapshot(ctx context.Context, conn *ZtunnelConnection) error {
snap := z.pods.ReadCurrentPodSnapshot()
for uid, netns := range snap {
var resp *zdsapi.WorkloadResponse
var err error
if netns != nil {
fd := int(netns.Fd())
log.Debugf("Sending local pod %s ztunnel", uid)
resp, err = conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Add{
Add: &zdsapi.AddWorkload{
Uid: uid,
},
},
}, &fd)
} else {
log.Infof("netns not available for local pod %s. sending keep to ztunnel", uid)
resp, err = conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Keep{
Keep: &zdsapi.KeepWorkload{
Uid: uid,
},
},
}, nil)
}
if err != nil {
return err
}
if resp.GetAck().GetError() != "" {
log.Errorf("add-workload: got ack error: %s", resp.GetAck().GetError())
}
}
resp, err := conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_SnapshotSent{
SnapshotSent: &zdsapi.SnapshotSent{},
},
}, nil)
if err != nil {
return err
}
log.Debugf("snaptshot sent to ztunnel")
if resp.GetAck().GetError() != "" {
log.Errorf("snap-sent: got ack error: %s", resp.GetAck().GetError())
}
return nil
}
func (z *ztunnelServer) accept() (*ZtunnelConnection, error) {
log.Debug("accepting unix conn")
conn, err := z.listener.AcceptUnix()
if err != nil {
return nil, fmt.Errorf("failed to accept unix: %w", err)
}
log.Debug("accepted conn")
return newZtunnelConnection(conn), nil
}
type updateResponse struct {
err error
resp *zdsapi.WorkloadResponse
}
type updateRequest struct {
Update []byte
Fd *int
Resp chan updateResponse
}
type ZtunnelConnection struct {
u *net.UnixConn
Updates chan updateRequest
}
func newZtunnelConnection(u *net.UnixConn) *ZtunnelConnection {
return &ZtunnelConnection{u: u, Updates: make(chan updateRequest, 100)}
}
func (z *ZtunnelConnection) Close() {
z.u.Close()
}
func (z *ZtunnelConnection) send(ctx context.Context, data []byte, fd *int) (*zdsapi.WorkloadResponse, error) {
ret := make(chan updateResponse, 1)
req := updateRequest{
Update: data,
Fd: fd,
Resp: ret,
}
select {
case z.Updates <- req:
case <-ctx.Done():
return nil, ctx.Err()
}
select {
case r := <-ret:
return r.resp, r.err
case <-ctx.Done():
return nil, ctx.Err()
}
}
func (z *ZtunnelConnection) sendMsgAndWaitForAck(msg *zdsapi.WorkloadRequest, fd *int) (*zdsapi.WorkloadResponse, error) {
data, err := proto.Marshal(msg)
if err != nil {
return nil, err
}
return z.sendDataAndWaitForAck(data, fd)
}
func (z *ZtunnelConnection) sendDataAndWaitForAck(data []byte, fd *int) (*zdsapi.WorkloadResponse, error) {
var rights []byte
if fd != nil {
rights = unix.UnixRights(*fd)
}
err := z.u.SetWriteDeadline(time.Now().Add(readWriteDeadline))
if err != nil {
return nil, err
}
_, _, err = z.u.WriteMsgUnix(data, rights, nil)
if err != nil {
return nil, err
}
// wait for ack
return z.readMessage(readWriteDeadline)
}
func (z *ZtunnelConnection) readMessage(timeout time.Duration) (*zdsapi.WorkloadResponse, error) {
m, _, err := readProto[zdsapi.WorkloadResponse](z.u, timeout, nil)
return m, err
}
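// readProto reads a single datagram (and optional ancillary data into oob) from the unix
// connection within the given timeout, unmarshals it into a message of type PT, and returns
// the message along with the number of out-of-band bytes read.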
func readProto[T any, PT interface {
proto.Message
*T
}](c *net.UnixConn, timeout time.Duration, oob []byte) (PT, int, error) {
var buf [1024]byte
err := c.SetReadDeadline(time.Now().Add(timeout))
if err != nil {
return nil, 0, err
}
n, oobn, flags, _, err := c.ReadMsgUnix(buf[:], oob)
if err != nil {
return nil, 0, err
}
if flags&unix.MSG_TRUNC != 0 {
return nil, 0, fmt.Errorf("truncated message")
}
if flags&unix.MSG_CTRUNC != 0 {
return nil, 0, fmt.Errorf("truncated control message")
}
var resp T
var respPtr PT = &resp
err = proto.Unmarshal(buf[:n], respPtr)
if err != nil {
return nil, 0, err
}
return respPtr, oobn, nil
}
| cni/pkg/nodeagent/ztunnelserver.go | 1 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.997421383857727,
0.4143194258213043,
0.00016378611326217651,
0.04982477053999901,
0.45507141947746277
] |
{
"id": 2,
"code_window": [
"}\n",
"\n",
"func (z *ztunnelServer) Run(ctx context.Context) {\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\tz.Close()\n",
"\t}()\n",
"\n",
"\tfor {\n",
"\t\tlog.Debug(\"accepting conn\")\n",
"\t\tconn, err := z.accept()\n",
"\t\tif err != nil {\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(ctx, func() { _ = z.Close() })\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 147
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package envoyfilter
import (
"testing"
"istio.io/istio/tests/util/leak"
)
func TestMain(m *testing.M) {
// CheckMain asserts that no goroutines are leaked after a test package exits.
leak.CheckMain(m)
}
| pilot/pkg/networking/core/v1alpha3/envoyfilter/leak_test.go | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00017837568884715438,
0.00017510510224383324,
0.0001714900863589719,
0.0001754495460772887,
0.000002821566795319086
] |
{
"id": 2,
"code_window": [
"}\n",
"\n",
"func (z *ztunnelServer) Run(ctx context.Context) {\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\tz.Close()\n",
"\t}()\n",
"\n",
"\tfor {\n",
"\t\tlog.Debug(\"accepting conn\")\n",
"\t\tconn, err := z.accept()\n",
"\t\tif err != nil {\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(ctx, func() { _ = z.Close() })\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 147
} | # Istio plugin CA sample certificates
This directory contains sample pre-generated certificate and keys to demonstrate how an operator could configure Citadel with an existing root certificate, signing certificates and keys. In such
a deployment, Citadel acts as an intermediate certificate authority (CA), under the given root CA.
Instructions are available [here](https://istio.io/docs/tasks/security/cert-management/plugin-ca-cert/).
The included sample files are:
- `root-cert.pem`: root CA certificate.
- `root-cert-alt.pem`: alternative CA certificate.
- `root-cert-combined.pem`: combine `root-cert.pem` and `root-cert-alt.pem` into a single file.
- `root-cert-combined-2.pem`: combine `root-cert.pem` and two `root-cert-alt.pem` into a single file.
- `ca-[cert|key].pem`: Citadel intermediate certificate and corresponding private key.
- `ca-[cert-alt|key-alt].pem`: alternative intermediate certificate and corresponding private key.
- `ca-[cert-alt-2|key-alt-2].pem`: alternative intermediate certificate and corresponding private key signed by `root-cert-alt.pem`.
- `cert-chain.pem`: certificate trust chain.
- `cert-chain-alt.pem`: alternative certificate chain.
- `cert-chain-alt-2.pem`: alternative certificate chain signed by `root-cert-alt.pem`.
- `workload-foo-[cert|key].pem`: workload certificate and key for URI SAN `spiffe://trust-domain-foo/ns/foo/sa/foo` signed by `ca-cert.key`.
- `workload-bar-[cert|key].pem`: workload certificate and key for URI SAN `spiffe://trust-domain-bar/ns/bar/sa/bar` signed by `ca-cert.key`.
- `workload-foo-root-certs.pem`: root and intermediate CA certificates for foo workload certificate.
- `workload-bar-root-certs.pem`: root and intermediate CA certificates for bar workload certificate.
- `leaf-workload-foo-cert.pem`: leaf workload certificate for URI SAN `spiffe://trust-domain-foo/ns/foo/sa/foo`.
- `leaf-workload-bar-cert.pem`: leaf workload certificate for URI SAN `spiffe://trust-domain-bar/ns/bar/sa/bar`.
The workload cert and key are generated by:
```shell script
./generate-workload.sh foo
./generate-workload.sh bar
```
To generate certs signed by the alternative root `root-cert-alt.pem`
```shell script
./generate-workload.sh name namespace serviceAccount tmpDir use-alternative-root
./generate-workload.sh name namespace serviceAccount tmpDir use-alternative-root
```
| samples/certs/README.md | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.0001710093638394028,
0.0001674602390266955,
0.00016261293785646558,
0.00016810931265354156,
0.000003491713641778915
] |
{
"id": 2,
"code_window": [
"}\n",
"\n",
"func (z *ztunnelServer) Run(ctx context.Context) {\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\tz.Close()\n",
"\t}()\n",
"\n",
"\tfor {\n",
"\t\tlog.Debug(\"accepting conn\")\n",
"\t\tconn, err := z.accept()\n",
"\t\tif err != nil {\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"replace",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(ctx, func() { _ = z.Close() })\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 147
} | apiVersion: release-notes/v2
kind: feature
area: documentation
releaseNotes:
- |
**Added** Multicluster install docs have been re-written based on current
best practices, incorporating recent updates to onboarding tooling. In
particular, the multi-primary configuration (formerly known as
"replicated control planes") no longer relies on manually configuring the
`.global` stub domain, preferring instead to use `*.svc.cluster.local` for
accessing services throughout the mesh.
| releasenotes/notes/multicluster-install-docs.yaml | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.0001725725451251492,
0.00017160005518235266,
0.00017062757979147136,
0.00017160005518235266,
9.724826668389142e-7
] |
{
"id": 3,
"code_window": [
"// nolint: unparam\n",
"func (z *ztunnelServer) handleConn(ctx context.Context, conn *ZtunnelConnection) error {\n",
"\tdefer conn.Close()\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\tlog.Debug(\"context cancelled - closing conn\")\n",
"\t\tconn.Close()\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep"
],
"after_edit": [
"\n",
"\tcontext.AfterFunc(ctx, func() {\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 181
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package nodeagent
import (
"context"
"errors"
"fmt"
"io"
"net"
"os"
"sync"
"time"
"golang.org/x/sys/unix"
"google.golang.org/protobuf/proto"
"istio.io/istio/pkg/monitoring"
"istio.io/istio/pkg/zdsapi"
)
var (
ztunnelKeepAliveCheckInterval = 5 * time.Second
readWriteDeadline = 5 * time.Second
)
var ztunnelConnected = monitoring.NewGauge("ztunnel_connected",
"number of connections to ztunnel")
type ZtunnelServer interface {
Run(ctx context.Context)
PodDeleted(ctx context.Context, uid string) error
PodAdded(ctx context.Context, uid string, netns Netns) error
Close() error
}
/*
To clean up stale ztunnels
we may need ztunnel to send its (uid, bootid / boot time) to us
so that we can remove stale entries when the ztunnel pod is deleted
or when the ztunnel pod is restarted in the same pod (remove old entries when the same uid connects again, but with different boot id?)
save a queue of what needs to be sent to the ztunnel pod and send it one by one when it connects.
when a new ztunnel connects with different uid, only propagate deletes to older ztunnels.
*/
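// connMgr tracks the set of active ztunnel connections and remembers the most recently
// added one, which is the connection used when new pods are added.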
type connMgr struct {
connectionSet map[*ZtunnelConnection]struct{}
latestConn *ZtunnelConnection
mu sync.Mutex
}
func (c *connMgr) addConn(conn *ZtunnelConnection) {
log.Debug("ztunnel connected")
c.mu.Lock()
defer c.mu.Unlock()
c.connectionSet[conn] = struct{}{}
c.latestConn = conn
ztunnelConnected.RecordInt(int64(len(c.connectionSet)))
}
func (c *connMgr) LatestConn() *ZtunnelConnection {
c.mu.Lock()
defer c.mu.Unlock()
return c.latestConn
}
func (c *connMgr) deleteConn(conn *ZtunnelConnection) {
log.Debug("ztunnel disconnected")
c.mu.Lock()
defer c.mu.Unlock()
delete(c.connectionSet, conn)
if c.latestConn == conn {
c.latestConn = nil
}
ztunnelConnected.RecordInt(int64(len(c.connectionSet)))
}
// this is used in tests
// nolint: unused
func (c *connMgr) len() int {
c.mu.Lock()
defer c.mu.Unlock()
return len(c.connectionSet)
}
type ztunnelServer struct {
listener *net.UnixListener
// connections to pod delivered map
// add pod goes to newest connection
// delete pod goes to all connections
conns *connMgr
pods PodNetnsCache
}
var _ ZtunnelServer = &ztunnelServer{}
func newZtunnelServer(addr string, pods PodNetnsCache) (*ztunnelServer, error) {
if addr == "" {
return nil, fmt.Errorf("addr cannot be empty")
}
resolvedAddr, err := net.ResolveUnixAddr("unixpacket", addr)
if err != nil {
return nil, fmt.Errorf("failed to resolve unix addr: %w", err)
}
// remove potentially existing address
// Remove unix socket before use, if one is leftover from previous CNI restart
if err := os.Remove(addr); err != nil && !os.IsNotExist(err) {
// Anything other than "file not found" is an error.
return nil, fmt.Errorf("failed to remove unix://%s: %w", addr, err)
}
l, err := net.ListenUnix("unixpacket", resolvedAddr)
if err != nil {
return nil, fmt.Errorf("failed to listen unix: %w", err)
}
return &ztunnelServer{
listener: l,
conns: &connMgr{
connectionSet: map[*ZtunnelConnection]struct{}{},
},
pods: pods,
}, nil
}
func (z *ztunnelServer) Close() error {
return z.listener.Close()
}
func (z *ztunnelServer) Run(ctx context.Context) {
go func() {
<-ctx.Done()
z.Close()
}()
for {
log.Debug("accepting conn")
conn, err := z.accept()
if err != nil {
if errors.Is(err, net.ErrClosed) {
log.Debug("listener closed - returning")
return
}
log.Errorf("failed to accept conn: %v", err)
continue
}
log.Debug("connection accepted")
go func() {
log.Debug("handling conn")
if err := z.handleConn(ctx, conn); err != nil {
log.Errorf("failed to handle conn: %v", err)
}
}()
}
}
// The ZDS protocol is very simple: for every message sent, an ack is sent back.
// The ack only has temporal correlation (i.e. it is the first and only ack msg after the message was sent).
// All this to say that we want to make sure that messages to ztunnel are sent from a single goroutine
// so we don't mix messages and acks.
// nolint: unparam
func (z *ztunnelServer) handleConn(ctx context.Context, conn *ZtunnelConnection) error {
defer conn.Close()
go func() {
<-ctx.Done()
log.Debug("context cancelled - closing conn")
conn.Close()
}()
// before doing anything, add the connection to the list of active connections
z.conns.addConn(conn)
defer z.conns.deleteConn(conn)
// get hello message from ztunnel
m, _, err := readProto[zdsapi.ZdsHello](conn.u, readWriteDeadline, nil)
if err != nil {
return err
}
log.Infof("received hello from ztunnel. %v", m.Version)
log.Debug("sending snapshot to ztunnel")
if err := z.sendSnapshot(ctx, conn); err != nil {
return err
}
for {
// listen for updates:
select {
case update, ok := <-conn.Updates:
if !ok {
log.Debug("update channel closed - returning")
return nil
}
log.Debugf("got update to send to ztunnel")
resp, err := conn.sendDataAndWaitForAck(update.Update, update.Fd)
if err != nil {
log.Errorf("ztunnel acked error: err %v ackErr %s", err, resp.GetAck().GetError())
}
log.Debugf("ztunnel acked")
// Safety: Resp is buffered, so this will not block
update.Resp <- updateResponse{
err: err,
resp: resp,
}
case <-time.After(ztunnelKeepAliveCheckInterval):
// do a short read, just to see if the connection to ztunnel is
// still alive. As ztunnel shouldn't send anything unless we send
// something first, we expect to get an os.ErrDeadlineExceeded error
// here if the connection is still alive.
// note that unlike tcp connections, reading is a good enough test here.
_, err := conn.readMessage(time.Second / 100)
switch {
case !errors.Is(err, os.ErrDeadlineExceeded):
log.Debugf("ztunnel keepalive failed: %v", err)
if errors.Is(err, io.EOF) {
log.Debug("ztunnel EOF")
return nil
}
return err
case err == nil:
log.Warn("ztunnel protocol error, unexpected message")
return fmt.Errorf("ztunnel protocol error, unexpected message")
default:
// we get here if error is deadline exceeded, which means ztunnel is alive.
}
case <-ctx.Done():
return nil
}
}
}
func (z *ztunnelServer) PodDeleted(ctx context.Context, uid string) error {
r := &zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Del{
Del: &zdsapi.DelWorkload{
Uid: uid,
},
},
}
data, err := proto.Marshal(r)
if err != nil {
return err
}
log.Debugf("sending delete pod to ztunnel: %s %v", uid, r)
var delErr []error
z.conns.mu.Lock()
defer z.conns.mu.Unlock()
for conn := range z.conns.connectionSet {
_, err := conn.send(ctx, data, nil)
if err != nil {
delErr = append(delErr, err)
}
}
return errors.Join(delErr...)
}
func (z *ztunnelServer) PodAdded(ctx context.Context, uid string, netns Netns) error {
latestConn := z.conns.LatestConn()
if latestConn == nil {
return fmt.Errorf("no ztunnel connection")
}
r := &zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Add{
Add: &zdsapi.AddWorkload{
Uid: uid,
},
},
}
log.Debugf("About to send added pod: %s to ztunnel: %v", uid, r)
data, err := proto.Marshal(r)
if err != nil {
return err
}
fd := int(netns.Fd())
resp, err := latestConn.send(ctx, data, &fd)
if err != nil {
return err
}
if resp.GetAck().GetError() != "" {
log.Errorf("add-workload: got ack error: %s", resp.GetAck().GetError())
return fmt.Errorf("got ack error: %s", resp.GetAck().GetError())
}
return nil
}
// TODO ctx is unused here
// nolint: unparam
func (z *ztunnelServer) sendSnapshot(ctx context.Context, conn *ZtunnelConnection) error {
snap := z.pods.ReadCurrentPodSnapshot()
for uid, netns := range snap {
var resp *zdsapi.WorkloadResponse
var err error
if netns != nil {
fd := int(netns.Fd())
log.Debugf("Sending local pod %s ztunnel", uid)
resp, err = conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Add{
Add: &zdsapi.AddWorkload{
Uid: uid,
},
},
}, &fd)
} else {
log.Infof("netns not available for local pod %s. sending keep to ztunnel", uid)
resp, err = conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Keep{
Keep: &zdsapi.KeepWorkload{
Uid: uid,
},
},
}, nil)
}
if err != nil {
return err
}
if resp.GetAck().GetError() != "" {
log.Errorf("add-workload: got ack error: %s", resp.GetAck().GetError())
}
}
resp, err := conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_SnapshotSent{
SnapshotSent: &zdsapi.SnapshotSent{},
},
}, nil)
if err != nil {
return err
}
log.Debugf("snaptshot sent to ztunnel")
if resp.GetAck().GetError() != "" {
log.Errorf("snap-sent: got ack error: %s", resp.GetAck().GetError())
}
return nil
}
func (z *ztunnelServer) accept() (*ZtunnelConnection, error) {
log.Debug("accepting unix conn")
conn, err := z.listener.AcceptUnix()
if err != nil {
return nil, fmt.Errorf("failed to accept unix: %w", err)
}
log.Debug("accepted conn")
return newZtunnelConnection(conn), nil
}
type updateResponse struct {
err error
resp *zdsapi.WorkloadResponse
}
type updateRequest struct {
Update []byte
Fd *int
Resp chan updateResponse
}
type ZtunnelConnection struct {
u *net.UnixConn
Updates chan updateRequest
}
func newZtunnelConnection(u *net.UnixConn) *ZtunnelConnection {
return &ZtunnelConnection{u: u, Updates: make(chan updateRequest, 100)}
}
func (z *ZtunnelConnection) Close() {
z.u.Close()
}
func (z *ZtunnelConnection) send(ctx context.Context, data []byte, fd *int) (*zdsapi.WorkloadResponse, error) {
ret := make(chan updateResponse, 1)
req := updateRequest{
Update: data,
Fd: fd,
Resp: ret,
}
select {
case z.Updates <- req:
case <-ctx.Done():
return nil, ctx.Err()
}
select {
case r := <-ret:
return r.resp, r.err
case <-ctx.Done():
return nil, ctx.Err()
}
}
func (z *ZtunnelConnection) sendMsgAndWaitForAck(msg *zdsapi.WorkloadRequest, fd *int) (*zdsapi.WorkloadResponse, error) {
data, err := proto.Marshal(msg)
if err != nil {
return nil, err
}
return z.sendDataAndWaitForAck(data, fd)
}
func (z *ZtunnelConnection) sendDataAndWaitForAck(data []byte, fd *int) (*zdsapi.WorkloadResponse, error) {
var rights []byte
if fd != nil {
rights = unix.UnixRights(*fd)
}
err := z.u.SetWriteDeadline(time.Now().Add(readWriteDeadline))
if err != nil {
return nil, err
}
_, _, err = z.u.WriteMsgUnix(data, rights, nil)
if err != nil {
return nil, err
}
// wait for ack
return z.readMessage(readWriteDeadline)
}
func (z *ZtunnelConnection) readMessage(timeout time.Duration) (*zdsapi.WorkloadResponse, error) {
m, _, err := readProto[zdsapi.WorkloadResponse](z.u, timeout, nil)
return m, err
}
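// readProto reads a single datagram (and optional ancillary data into oob) from the unix
// connection within the given timeout, unmarshals it into a message of type PT, and returns
// the message along with the number of out-of-band bytes read.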
func readProto[T any, PT interface {
proto.Message
*T
}](c *net.UnixConn, timeout time.Duration, oob []byte) (PT, int, error) {
var buf [1024]byte
err := c.SetReadDeadline(time.Now().Add(timeout))
if err != nil {
return nil, 0, err
}
n, oobn, flags, _, err := c.ReadMsgUnix(buf[:], oob)
if err != nil {
return nil, 0, err
}
if flags&unix.MSG_TRUNC != 0 {
return nil, 0, fmt.Errorf("truncated message")
}
if flags&unix.MSG_CTRUNC != 0 {
return nil, 0, fmt.Errorf("truncated control message")
}
var resp T
var respPtr PT = &resp
err = proto.Unmarshal(buf[:n], respPtr)
if err != nil {
return nil, 0, err
}
return respPtr, oobn, nil
}
| cni/pkg/nodeagent/ztunnelserver.go | 1 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.9990455508232117,
0.20226675271987915,
0.00016501714708283544,
0.009987070225179195,
0.3552374243736267
] |
{
"id": 3,
"code_window": [
"// nolint: unparam\n",
"func (z *ztunnelServer) handleConn(ctx context.Context, conn *ZtunnelConnection) error {\n",
"\tdefer conn.Close()\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\tlog.Debug(\"context cancelled - closing conn\")\n",
"\t\tconn.Close()\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep"
],
"after_edit": [
"\n",
"\tcontext.AfterFunc(ctx, func() {\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 181
} | Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| licenses/k8s.io/apimachinery/third_party/forked/golang/LICENSE | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00017240825400222093,
0.00016626372234895825,
0.00015878723934292793,
0.0001675956737017259,
0.000005639951268676668
] |
{
"id": 3,
"code_window": [
"// nolint: unparam\n",
"func (z *ztunnelServer) handleConn(ctx context.Context, conn *ZtunnelConnection) error {\n",
"\tdefer conn.Close()\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\tlog.Debug(\"context cancelled - closing conn\")\n",
"\t\tconn.Close()\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep"
],
"after_edit": [
"\n",
"\tcontext.AfterFunc(ctx, func() {\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 181
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package aggregate
import (
"net/netip"
"sync"
"istio.io/istio/pilot/pkg/features"
"istio.io/istio/pilot/pkg/model"
"istio.io/istio/pilot/pkg/serviceregistry"
"istio.io/istio/pilot/pkg/serviceregistry/provider"
"istio.io/istio/pkg/cluster"
"istio.io/istio/pkg/config/host"
"istio.io/istio/pkg/config/labels"
"istio.io/istio/pkg/config/mesh"
"istio.io/istio/pkg/log"
"istio.io/istio/pkg/maps"
"istio.io/istio/pkg/util/sets"
"istio.io/istio/pkg/workloadapi/security"
)
// The aggregate controller does not implement serviceregistry.Instance since it may be comprised of various
// providers and clusters.
var (
_ model.ServiceDiscovery = &Controller{}
_ model.AggregateController = &Controller{}
)
// Controller aggregates data across different registries and monitors for changes
type Controller struct {
meshHolder mesh.Holder
// The lock is used to protect the registries and controller's running status.
storeLock sync.RWMutex
registries []*registryEntry
// indicates whether the controller has run.
// if true, all the registries added later should be run manually.
running bool
handlers model.ControllerHandlers
handlersByCluster map[cluster.ID]*model.ControllerHandlers
model.NetworkGatewaysHandler
}
func (c *Controller) Waypoint(scope model.WaypointScope) []netip.Addr {
if !features.EnableAmbientControllers {
return nil
}
var res []netip.Addr
for _, p := range c.GetRegistries() {
res = append(res, p.Waypoint(scope)...)
}
return res
}
func (c *Controller) WorkloadsForWaypoint(scope model.WaypointScope) []*model.WorkloadInfo {
if !features.EnableAmbientControllers {
return nil
}
var res []*model.WorkloadInfo
for _, p := range c.GetRegistries() {
res = append(res, p.WorkloadsForWaypoint(scope)...)
}
return res
}
func (c *Controller) AdditionalPodSubscriptions(proxy *model.Proxy, addr, cur sets.String) sets.String {
if !features.EnableAmbientControllers {
return nil
}
res := sets.New[string]()
for _, p := range c.GetRegistries() {
res = res.Merge(p.AdditionalPodSubscriptions(proxy, addr, cur))
}
return res
}
func (c *Controller) Policies(requested sets.Set[model.ConfigKey]) []*security.Authorization {
var res []*security.Authorization
if !features.EnableAmbientControllers {
return res
}
for _, p := range c.GetRegistries() {
res = append(res, p.Policies(requested)...)
}
return res
}
func (c *Controller) AddressInformation(addresses sets.String) ([]*model.AddressInfo, sets.String) {
i := []*model.AddressInfo{}
if !features.EnableAmbientControllers {
return i, nil
}
removed := sets.String{}
for _, p := range c.GetRegistries() {
wis, r := p.AddressInformation(addresses)
i = append(i, wis...)
removed.Merge(r)
}
// We may have 'removed' it in one registry but found it in another
for _, wl := range i {
// TODO(@hzxuzhonghu) This is not right for workload, we may search workload by ip, but the resource name is uid.
if removed.Contains(wl.ResourceName()) {
removed.Delete(wl.ResourceName())
}
}
return i, removed
}
type registryEntry struct {
serviceregistry.Instance
// stop if not nil is the per-registry stop chan. If null, the server stop chan should be used to Run the registry.
stop <-chan struct{}
}
type Options struct {
MeshHolder mesh.Holder
}
// NewController creates a new Aggregate controller
func NewController(opt Options) *Controller {
return &Controller{
registries: make([]*registryEntry, 0),
meshHolder: opt.MeshHolder,
running: false,
handlersByCluster: map[cluster.ID]*model.ControllerHandlers{},
}
}
func (c *Controller) addRegistry(registry serviceregistry.Instance, stop <-chan struct{}) {
c.registries = append(c.registries, ®istryEntry{Instance: registry, stop: stop})
// Observe the registry for events.
registry.AppendNetworkGatewayHandler(c.NotifyGatewayHandlers)
registry.AppendServiceHandler(c.handlers.NotifyServiceHandlers)
registry.AppendServiceHandler(func(prev, curr *model.Service, event model.Event) {
for _, handlers := range c.getClusterHandlers() {
handlers.NotifyServiceHandlers(prev, curr, event)
}
})
}
func (c *Controller) getClusterHandlers() []*model.ControllerHandlers {
c.storeLock.Lock()
defer c.storeLock.Unlock()
return maps.Values(c.handlersByCluster)
}
// AddRegistry adds registries into the aggregated controller.
// If the aggregated controller is already Running, the given registry will never be started.
func (c *Controller) AddRegistry(registry serviceregistry.Instance) {
c.storeLock.Lock()
defer c.storeLock.Unlock()
c.addRegistry(registry, nil)
}
// AddRegistryAndRun adds registries into the aggregated controller and makes sure it is Run.
// If the aggregated controller is running, the given registry is Run immediately.
// Otherwise, the given registry is Run when the aggregate controller is Run, using the given stop.
func (c *Controller) AddRegistryAndRun(registry serviceregistry.Instance, stop <-chan struct{}) {
if stop == nil {
log.Warnf("nil stop channel passed to AddRegistryAndRun for registry %s/%s", registry.Provider(), registry.Cluster())
}
c.storeLock.Lock()
defer c.storeLock.Unlock()
c.addRegistry(registry, stop)
if c.running {
go registry.Run(stop)
}
}
// DeleteRegistry deletes specified registry from the aggregated controller
func (c *Controller) DeleteRegistry(clusterID cluster.ID, providerID provider.ID) {
c.storeLock.Lock()
defer c.storeLock.Unlock()
if len(c.registries) == 0 {
log.Warnf("Registry list is empty, nothing to delete")
return
}
index, ok := c.getRegistryIndex(clusterID, providerID)
if !ok {
log.Warnf("Registry %s/%s is not found in the registries list, nothing to delete", providerID, clusterID)
return
}
c.registries[index] = nil
c.registries = append(c.registries[:index], c.registries[index+1:]...)
log.Infof("%s registry for the cluster %s has been deleted.", providerID, clusterID)
}
// GetRegistries returns a copy of all registries
func (c *Controller) GetRegistries() []serviceregistry.Instance {
c.storeLock.RLock()
defer c.storeLock.RUnlock()
// copy registries to prevent race, no need to deep copy here.
out := make([]serviceregistry.Instance, len(c.registries))
for i := range c.registries {
out[i] = c.registries[i]
}
return out
}
func (c *Controller) getRegistryIndex(clusterID cluster.ID, provider provider.ID) (int, bool) {
for i, r := range c.registries {
if r.Cluster().Equals(clusterID) && r.Provider() == provider {
return i, true
}
}
return 0, false
}
// Services lists services from all platforms
func (c *Controller) Services() []*model.Service {
// smap is a map of hostname (string) to service index, used to identify services that
// are installed in multiple clusters.
smap := make(map[host.Name]int)
index := 0
services := make([]*model.Service, 0)
// Locking Registries list while walking it to prevent inconsistent results
for _, r := range c.GetRegistries() {
svcs := r.Services()
if r.Provider() != provider.Kubernetes {
index += len(svcs)
services = append(services, svcs...)
} else {
for _, s := range svcs {
previous, ok := smap[s.Hostname]
if !ok {
// First time we see a service. The result will have a single service per hostname
// The first cluster will be listed first, so the services in the primary cluster
// will be used for default settings. If a service appears in multiple clusters,
// the order is less clear.
smap[s.Hostname] = index
index++
services = append(services, s)
} else {
// We must deep copy before merging; after merging, the ClusterVIPs length will be >= 2.
// Checking the length first is an optimization to avoid deep copying multiple times.
if services[previous].ClusterVIPs.Len() < 2 {
// Deep copy before merging; otherwise a service deleted in a remote cluster could leave its ClusterIP behind.
services[previous] = services[previous].DeepCopy()
}
// If it is seen second time, that means it is from a different cluster, update cluster VIPs.
mergeService(services[previous], s, r)
}
}
}
}
return services
}
// GetService retrieves a service by hostname if exists
func (c *Controller) GetService(hostname host.Name) *model.Service {
var out *model.Service
for _, r := range c.GetRegistries() {
service := r.GetService(hostname)
if service == nil {
continue
}
if r.Provider() != provider.Kubernetes {
return service
}
if out == nil {
out = service.DeepCopy()
} else {
// If we are seeing the service for the second time, it means it is available in multiple clusters.
mergeService(out, service, r)
}
}
return out
}
// mergeService only merges two clusters' k8s services
func mergeService(dst, src *model.Service, srcRegistry serviceregistry.Instance) {
if !src.Ports.Equals(dst.Ports) {
log.Debugf("service %s defined from cluster %s is different from others", src.Hostname, srcRegistry.Cluster())
}
// Prefer the k8s HostVIPs where possible
clusterID := srcRegistry.Cluster()
if len(dst.ClusterVIPs.GetAddressesFor(clusterID)) == 0 {
newAddresses := src.ClusterVIPs.GetAddressesFor(clusterID)
dst.ClusterVIPs.SetAddressesFor(clusterID, newAddresses)
}
}
// NetworkGateways merges the service-based cross-network gateways from each registry.
func (c *Controller) NetworkGateways() []model.NetworkGateway {
var gws []model.NetworkGateway
for _, r := range c.GetRegistries() {
gws = append(gws, r.NetworkGateways()...)
}
return gws
}
func (c *Controller) MCSServices() []model.MCSServiceInfo {
var out []model.MCSServiceInfo
for _, r := range c.GetRegistries() {
out = append(out, r.MCSServices()...)
}
return out
}
func nodeClusterID(node *model.Proxy) cluster.ID {
if node.Metadata == nil || node.Metadata.ClusterID == "" {
return ""
}
return node.Metadata.ClusterID
}
// Skip the service registry when there won't be a match
// because the proxy is in a different cluster.
func skipSearchingRegistryForProxy(nodeClusterID cluster.ID, r serviceregistry.Instance) bool {
// Always search non-kube (usually serviceentry) registry.
// Check every registry if cluster ID isn't specified.
if r.Provider() != provider.Kubernetes || nodeClusterID == "" {
return false
}
return !r.Cluster().Equals(nodeClusterID)
}
// GetProxyServiceTargets lists service instances co-located with a given proxy
func (c *Controller) GetProxyServiceTargets(node *model.Proxy) []model.ServiceTarget {
out := make([]model.ServiceTarget, 0)
nodeClusterID := nodeClusterID(node)
for _, r := range c.GetRegistries() {
if skipSearchingRegistryForProxy(nodeClusterID, r) {
log.Debugf("GetProxyServiceTargets(): not searching registry %v: proxy %v CLUSTER_ID is %v",
r.Cluster(), node.ID, nodeClusterID)
continue
}
instances := r.GetProxyServiceTargets(node)
if len(instances) > 0 {
out = append(out, instances...)
}
}
return out
}
func (c *Controller) GetProxyWorkloadLabels(proxy *model.Proxy) labels.Instance {
clusterID := nodeClusterID(proxy)
for _, r := range c.GetRegistries() {
// If proxy clusterID unset, we may find incorrect workload label.
// This can not happen in k8s env.
if clusterID == "" || clusterID == r.Cluster() {
lbls := r.GetProxyWorkloadLabels(proxy)
if lbls != nil {
return lbls
}
}
}
return nil
}
// Run starts all the controllers
func (c *Controller) Run(stop <-chan struct{}) {
c.storeLock.Lock()
for _, r := range c.registries {
// prefer the per-registry stop channel
registryStop := stop
if s := r.stop; s != nil {
registryStop = s
}
go r.Run(registryStop)
}
c.running = true
c.storeLock.Unlock()
<-stop
log.Info("Registry Aggregator terminated")
}
// HasSynced returns true when all registries have synced
func (c *Controller) HasSynced() bool {
for _, r := range c.GetRegistries() {
if !r.HasSynced() {
log.Debugf("registry %s is syncing", r.Cluster())
return false
}
}
return true
}
func (c *Controller) AppendServiceHandler(f model.ServiceHandler) {
c.handlers.AppendServiceHandler(f)
}
func (c *Controller) AppendWorkloadHandler(f func(*model.WorkloadInstance, model.Event)) {
// Currently, it is not used.
// Note: take care when you want to enable it, it will register the handlers to all registries
// c.handlers.AppendWorkloadHandler(f)
}
func (c *Controller) AppendServiceHandlerForCluster(id cluster.ID, f model.ServiceHandler) {
c.storeLock.Lock()
defer c.storeLock.Unlock()
handler, ok := c.handlersByCluster[id]
if !ok {
c.handlersByCluster[id] = &model.ControllerHandlers{}
handler = c.handlersByCluster[id]
}
handler.AppendServiceHandler(f)
}
func (c *Controller) UnRegisterHandlersForCluster(id cluster.ID) {
c.storeLock.Lock()
defer c.storeLock.Unlock()
delete(c.handlersByCluster, id)
}
| pilot/pkg/serviceregistry/aggregate/controller.go | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00037596706533804536,
0.00017536789528094232,
0.00016309057537000626,
0.00016877066809684038,
0.00003243562241550535
] |
{
"id": 3,
"code_window": [
"// nolint: unparam\n",
"func (z *ztunnelServer) handleConn(ctx context.Context, conn *ZtunnelConnection) error {\n",
"\tdefer conn.Close()\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\tlog.Debug(\"context cancelled - closing conn\")\n",
"\t\tconn.Close()\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep"
],
"after_edit": [
"\n",
"\tcontext.AfterFunc(ctx, func() {\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 181
} | apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
spec:
replicas: 7
selector:
matchLabels:
app: hello
tier: backend
track: stable
template:
metadata:
annotations:
sidecar.istio.io/proxyImage: "docker.io/istio/proxy2_debug:unittest"
labels:
app: hello
tier: backend
track: stable
spec:
containers:
- name: hello
image: "fake.docker.io/google-samples/hello-go-gke:1.0"
ports:
- name: http
containerPort: 80
| pkg/kube/inject/testdata/inject/hello-proxy-override.yaml | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.0001781183818820864,
0.00017549788753967732,
0.00017370935529470444,
0.00017466588178649545,
0.0000018936799506263924
] |
{
"id": 4,
"code_window": [
"\t\tlog.Debug(\"context cancelled - closing conn\")\n",
"\t\tconn.Close()\n",
"\t}()\n",
"\n",
"\t// before doing anything, add the connection to the list of active connections\n",
"\tz.conns.addConn(conn)\n",
"\tdefer z.conns.deleteConn(conn)\n",
"\n"
],
"labels": [
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 185
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package xds
import (
"context"
"sync"
"time"
core "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
discovery "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v3"
"google.golang.org/genproto/googleapis/rpc/status"
"google.golang.org/grpc"
"istio.io/istio/pilot/pkg/features"
"istio.io/istio/pilot/pkg/model"
v3 "istio.io/istio/pilot/pkg/xds/v3"
"istio.io/istio/pkg/test"
)
func NewDeltaAdsTest(t test.Failer, conn *grpc.ClientConn) *DeltaAdsTest {
test.SetForTest(t, &features.DeltaXds, true)
return NewDeltaXdsTest(t, conn, func(conn *grpc.ClientConn) (DeltaDiscoveryClient, error) {
xds := discovery.NewAggregatedDiscoveryServiceClient(conn)
return xds.DeltaAggregatedResources(context.Background())
})
}
func NewDeltaXdsTest(t test.Failer, conn *grpc.ClientConn,
getClient func(conn *grpc.ClientConn) (DeltaDiscoveryClient, error),
) *DeltaAdsTest {
ctx, cancel := context.WithCancel(context.Background())
cl, err := getClient(conn)
if err != nil {
t.Fatal(err)
}
resp := &DeltaAdsTest{
client: cl,
conn: conn,
context: ctx,
cancelContext: cancel,
t: t,
ID: "sidecar~1.1.1.1~test.default~default.svc.cluster.local",
timeout: time.Second,
Type: v3.ClusterType,
responses: make(chan *discovery.DeltaDiscoveryResponse),
error: make(chan error),
}
t.Cleanup(resp.Cleanup)
go resp.adsReceiveChannel()
return resp
}
type DeltaAdsTest struct {
client DeltaDiscoveryClient
responses chan *discovery.DeltaDiscoveryResponse
error chan error
t test.Failer
conn *grpc.ClientConn
metadata model.NodeMetadata
ID string
Type string
cancelOnce sync.Once
context context.Context
cancelContext context.CancelFunc
timeout time.Duration
}
func (a *DeltaAdsTest) Cleanup() {
// Place in once to avoid race when two callers attempt to cleanup
a.cancelOnce.Do(func() {
a.cancelContext()
_ = a.client.CloseSend()
if a.conn != nil {
_ = a.conn.Close()
}
})
}
func (a *DeltaAdsTest) adsReceiveChannel() {
go func() {
<-a.context.Done()
a.Cleanup()
}()
for {
resp, err := a.client.Recv()
if err != nil {
if isUnexpectedError(err) {
log.Warnf("ads received error: %v", err)
}
select {
case a.error <- err:
case <-a.context.Done():
}
return
}
select {
case a.responses <- resp:
case <-a.context.Done():
return
}
}
}
// DrainResponses reads all responses, but does nothing to them
func (a *DeltaAdsTest) DrainResponses() {
a.t.Helper()
for {
select {
case <-a.context.Done():
return
case r := <-a.responses:
log.Infof("drained response %v", r.TypeUrl)
}
}
}
// ExpectResponse waits until a response is received and returns it
func (a *DeltaAdsTest) ExpectResponse() *discovery.DeltaDiscoveryResponse {
a.t.Helper()
select {
case <-time.After(a.timeout):
a.t.Fatalf("did not get response in time")
case resp := <-a.responses:
if resp == nil || (len(resp.Resources) == 0 && len(resp.RemovedResources) == 0) {
a.t.Fatalf("got empty response")
}
return resp
case err := <-a.error:
a.t.Fatalf("got error: %v", err)
}
return nil
}
// ExpectEmptyResponse waits until a response with no resource changes is received and returns it
func (a *DeltaAdsTest) ExpectEmptyResponse() *discovery.DeltaDiscoveryResponse {
a.t.Helper()
select {
case <-time.After(a.timeout):
a.t.Fatalf("did not get response in time")
case resp := <-a.responses:
if resp == nil {
a.t.Fatalf("expected response")
}
if resp != nil && (len(resp.RemovedResources) > 0 || len(resp.Resources) > 0) {
a.t.Fatalf("expected empty response. received %v", resp)
}
return resp
case err := <-a.error:
a.t.Fatalf("got error: %v", err)
}
return nil
}
// ExpectError waits until an error is received and returns it
func (a *DeltaAdsTest) ExpectError() error {
a.t.Helper()
select {
case <-time.After(a.timeout):
a.t.Fatalf("did not get error in time")
case err := <-a.error:
return err
}
return nil
}
// ExpectNoResponse waits a short period of time and ensures no response is received
func (a *DeltaAdsTest) ExpectNoResponse() {
a.t.Helper()
select {
case <-time.After(time.Millisecond * 50):
return
case resp := <-a.responses:
a.t.Fatalf("got unexpected response: %v", resp)
}
}
func (a *DeltaAdsTest) fillInRequestDefaults(req *discovery.DeltaDiscoveryRequest) *discovery.DeltaDiscoveryRequest {
if req == nil {
req = &discovery.DeltaDiscoveryRequest{}
}
if req.TypeUrl == "" {
req.TypeUrl = a.Type
}
if req.Node == nil {
req.Node = &core.Node{
Id: a.ID,
Metadata: a.metadata.ToStruct(),
}
}
return req
}
func (a *DeltaAdsTest) Request(req *discovery.DeltaDiscoveryRequest) {
req = a.fillInRequestDefaults(req)
if err := a.client.Send(req); err != nil {
a.t.Fatal(err)
}
}
// RequestResponseAck does a full XDS exchange: Send a request, get a response, and ACK the response
func (a *DeltaAdsTest) RequestResponseAck(req *discovery.DeltaDiscoveryRequest) *discovery.DeltaDiscoveryResponse {
a.t.Helper()
req = a.fillInRequestDefaults(req)
a.Request(req)
resp := a.ExpectResponse()
req.ResponseNonce = resp.Nonce
a.Request(&discovery.DeltaDiscoveryRequest{
Node: req.Node,
TypeUrl: req.TypeUrl,
ResponseNonce: req.ResponseNonce,
})
return resp
}
// RequestResponseNack does a full XDS exchange with an error: Send a request, get a response, and NACK the response
func (a *DeltaAdsTest) RequestResponseNack(req *discovery.DeltaDiscoveryRequest) *discovery.DeltaDiscoveryResponse {
a.t.Helper()
if req == nil {
req = &discovery.DeltaDiscoveryRequest{}
}
a.Request(req)
resp := a.ExpectResponse()
a.Request(&discovery.DeltaDiscoveryRequest{
Node: req.Node,
TypeUrl: req.TypeUrl,
ResponseNonce: req.ResponseNonce,
ErrorDetail: &status.Status{Message: "Test request NACK"},
})
return resp
}
func (a *DeltaAdsTest) WithID(id string) *DeltaAdsTest {
a.ID = id
return a
}
func (a *DeltaAdsTest) WithType(typeURL string) *DeltaAdsTest {
a.Type = typeURL
return a
}
func (a *DeltaAdsTest) WithMetadata(m model.NodeMetadata) *DeltaAdsTest {
a.metadata = m
return a
}
func (a *DeltaAdsTest) WithTimeout(t time.Duration) *DeltaAdsTest {
a.timeout = t
return a
}
func (a *DeltaAdsTest) WithNodeType(t model.NodeType) *DeltaAdsTest {
a.ID = string(t) + "~1.1.1.1~test.default~default.svc.cluster.local"
return a
}
| pilot/pkg/xds/deltaadstest.go | 1 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.013614500872790813,
0.0008027554722502828,
0.00016362735186703503,
0.0001733833341859281,
0.0025037305895239115
] |
{
"id": 4,
"code_window": [
"\t\tlog.Debug(\"context cancelled - closing conn\")\n",
"\t\tconn.Close()\n",
"\t}()\n",
"\n",
"\t// before doing anything, add the connection to the list of active connections\n",
"\tz.conns.addConn(conn)\n",
"\tdefer z.conns.deleteConn(conn)\n",
"\n"
],
"labels": [
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 185
} | apiVersion: release-notes/v2
kind: feature
area: installation
docs:
- 'https://istio.io/latest/docs/setup/platform-setup/openshift/'
releaseNotes:
- |
**Improved** Usage on OpenShift clusters is simplified by removing the need of granting the `anyuid` SCC privilege to Istio and applications.
| releasenotes/notes/remove-anyuid-openshift.yaml | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00017044560809154063,
0.00017044560809154063,
0.00017044560809154063,
0.00017044560809154063,
0
] |
{
"id": 4,
"code_window": [
"\t\tlog.Debug(\"context cancelled - closing conn\")\n",
"\t\tconn.Close()\n",
"\t}()\n",
"\n",
"\t// before doing anything, add the connection to the list of active connections\n",
"\tz.conns.addConn(conn)\n",
"\tdefer z.conns.deleteConn(conn)\n",
"\n"
],
"labels": [
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 185
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package util
import (
"regexp"
"istio.io/istio/pkg/config/constants"
)
const (
DefaultClusterLocalDomain = "svc." + constants.DefaultClusterLocalDomain
ExportToNamespaceLocal = "."
ExportToAllNamespaces = "*"
IstioProxyName = "istio-proxy"
IstioOperator = "istio-operator"
MeshGateway = "mesh"
Wildcard = "*"
MeshConfigName = "istio"
InjectionLabelName = "istio-injection"
InjectionLabelEnableValue = "enabled"
InjectionConfigMap = "istio-sidecar-injector"
InjectionConfigMapValue = "values"
InjectorWebhookConfigKey = "sidecarInjectorWebhook"
InjectorWebhookConfigValue = "enableNamespacesByDefault"
)
var fqdnPattern = regexp.MustCompile(`^(.+)\.(.+)\.svc\.cluster\.local$`)
| pkg/config/analysis/analyzers/util/constants.go | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.0001773285330273211,
0.00017291704716626555,
0.00016717416292522103,
0.00017509575991425663,
0.00000423297387897037
] |
{
"id": 4,
"code_window": [
"\t\tlog.Debug(\"context cancelled - closing conn\")\n",
"\t\tconn.Close()\n",
"\t}()\n",
"\n",
"\t// before doing anything, add the connection to the list of active connections\n",
"\tz.conns.addConn(conn)\n",
"\tdefer z.conns.deleteConn(conn)\n",
"\n"
],
"labels": [
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "cni/pkg/nodeagent/ztunnelserver.go",
"type": "replace",
"edit_start_line_idx": 185
} | apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: test-patch-filter-chain
namespace: egress
spec:
configPatches:
- applyTo: FILTER_CHAIN
match:
listener:
filterChain:
sni: www.example.com
patch:
operation: MERGE
value:
transportSocket:
name: envoy.transport_sockets.tls
typedConfig:
"@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
commonTlsContext:
alpnProtocols:
- http/1.1
tlsCertificateSdsSecretConfigs:
- name: kubernetes://wildcard-cert
sdsConfig:
ads: {}
resourceApiVersion: V3
| pkg/config/analysis/analyzers/testdata/envoy-filter-filterchain.yaml | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00034361169673502445,
0.0002250060351798311,
0.00016468562535010278,
0.00016672079800628126,
0.00008387098205275834
] |
{
"id": 5,
"code_window": [
"}\n",
"\n",
"func (wp *WorkerPool) Run(ctx context.Context) {\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\twp.lock.Lock()\n",
"\t\twp.closing = true\n",
"\t\twp.lock.Unlock()\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(ctx, func() {\n"
],
"file_path": "pilot/pkg/status/resourcelock.go",
"type": "replace",
"edit_start_line_idx": 164
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package status
import (
"context"
"strconv"
"sync"
"k8s.io/apimachinery/pkg/runtime/schema"
"istio.io/api/meta/v1alpha1"
"istio.io/istio/pkg/config"
"istio.io/istio/pkg/util/sets"
)
// Task to be performed.
type Task func(entry cacheEntry)
// WorkerQueue implements an expandable goroutine pool which executes at most one concurrent routine per target
// resource. Multiple calls to Push() will not schedule multiple executions per target resource, but will ensure that
// the single execution uses the latest value.
type WorkerQueue interface {
// Push a task.
Push(target Resource, controller *Controller, context any)
// Run the loop until a signal on the context
Run(ctx context.Context)
// Delete a task
Delete(target Resource)
}
type cacheEntry struct {
// the cacheResource represents the latest version of the resource, including ResourceVersion
cacheResource Resource
// the perControllerStatus represents the latest version of the ResourceStatus
perControllerStatus map[*Controller]any
}
type lockResource struct {
schema.GroupVersionResource
Namespace string
Name string
}
func convert(i Resource) lockResource {
return lockResource{
GroupVersionResource: i.GroupVersionResource,
Namespace: i.Namespace,
Name: i.Name,
}
}
type WorkQueue struct {
// tasks which are not currently executing but need to run
tasks []lockResource
// a lock to govern access to data in the cache
lock sync.Mutex
// for each task, a cacheEntry which can be updated before the task is run so that execution will have latest values
cache map[lockResource]cacheEntry
OnPush func()
}
func (wq *WorkQueue) Push(target Resource, ctl *Controller, progress any) {
wq.lock.Lock()
key := convert(target)
if item, inqueue := wq.cache[key]; inqueue {
item.perControllerStatus[ctl] = progress
wq.cache[key] = item
} else {
wq.cache[key] = cacheEntry{
cacheResource: target,
perControllerStatus: map[*Controller]any{ctl: progress},
}
wq.tasks = append(wq.tasks, key)
}
wq.lock.Unlock()
if wq.OnPush != nil {
wq.OnPush()
}
}
// Pop returns the first item in the queue not in exclusion, along with its latest progress
func (wq *WorkQueue) Pop(exclusion sets.Set[lockResource]) (target Resource, progress map[*Controller]any) {
wq.lock.Lock()
defer wq.lock.Unlock()
for i := 0; i < len(wq.tasks); i++ {
if !exclusion.Contains(wq.tasks[i]) {
// remove from tasks
t, ok := wq.cache[wq.tasks[i]]
wq.tasks = append(wq.tasks[:i], wq.tasks[i+1:]...)
if !ok {
return Resource{}, nil
}
return t.cacheResource, t.perControllerStatus
}
}
return Resource{}, nil
}
func (wq *WorkQueue) Length() int {
wq.lock.Lock()
defer wq.lock.Unlock()
return len(wq.tasks)
}
func (wq *WorkQueue) Delete(target Resource) {
wq.lock.Lock()
defer wq.lock.Unlock()
delete(wq.cache, convert(target))
}
type WorkerPool struct {
q WorkQueue
// indicates the queue is closing
closing bool
// the function which will be run for each task in queue
write func(*config.Config, any)
// the function to retrieve the initial status
get func(Resource) *config.Config
// current worker routine count
workerCount uint
// maximum worker routine count
maxWorkers uint
currentlyWorking sets.Set[lockResource]
lock sync.Mutex
}
func NewWorkerPool(write func(*config.Config, any), get func(Resource) *config.Config, maxWorkers uint) WorkerQueue {
return &WorkerPool{
write: write,
get: get,
maxWorkers: maxWorkers,
currentlyWorking: sets.New[lockResource](),
q: WorkQueue{
tasks: make([]lockResource, 0),
cache: make(map[lockResource]cacheEntry),
OnPush: nil,
},
}
}
func (wp *WorkerPool) Delete(target Resource) {
wp.q.Delete(target)
}
func (wp *WorkerPool) Push(target Resource, controller *Controller, context any) {
wp.q.Push(target, controller, context)
wp.maybeAddWorker()
}
func (wp *WorkerPool) Run(ctx context.Context) {
go func() {
<-ctx.Done()
wp.lock.Lock()
wp.closing = true
wp.lock.Unlock()
}()
}
// maybeAddWorker adds a worker unless we are at maxWorkers. Workers exit when there are no more tasks, except for the
// last worker, which stays alive indefinitely.
func (wp *WorkerPool) maybeAddWorker() {
wp.lock.Lock()
if wp.workerCount >= wp.maxWorkers || wp.q.Length() == 0 {
wp.lock.Unlock()
return
}
wp.workerCount++
wp.lock.Unlock()
go func() {
for {
wp.lock.Lock()
if wp.closing || wp.q.Length() == 0 {
wp.workerCount--
wp.lock.Unlock()
return
}
target, perControllerWork := wp.q.Pop(wp.currentlyWorking)
if target == (Resource{}) {
// continue or return?
// could have been deleted, or could be no items in queue not currently worked on. need a way to differentiate.
wp.lock.Unlock()
continue
}
wp.q.Delete(target)
wp.currentlyWorking.Insert(convert(target))
wp.lock.Unlock()
// work should be done without holding the lock
cfg := wp.get(target)
if cfg != nil {
// Check that generation matches
if strconv.FormatInt(cfg.Generation, 10) == target.Generation {
x, err := GetOGProvider(cfg.Status)
if err == nil {
// Not all controllers use generation, so we can ignore errors
x.SetObservedGeneration(cfg.Generation)
}
for c, i := range perControllerWork {
// TODO: this does not guarantee controller order. perhaps it should?
x = c.fn(x, i)
}
wp.write(cfg, x)
}
}
wp.lock.Lock()
wp.currentlyWorking.Delete(convert(target))
wp.lock.Unlock()
}
}()
}
type GenerationProvider interface {
SetObservedGeneration(int64)
Unwrap() any
}
type IstioGenerationProvider struct {
*v1alpha1.IstioStatus
}
func (i *IstioGenerationProvider) SetObservedGeneration(in int64) {
i.ObservedGeneration = in
}
func (i *IstioGenerationProvider) Unwrap() any {
return i.IstioStatus
}
| pilot/pkg/status/resourcelock.go | 1 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.9970582723617554,
0.19590236246585846,
0.0001669570483500138,
0.0012157774763181806,
0.3673289120197296
] |
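The "after_edit" in the record above swaps a hand-rolled goroutine that blocks on ctx.Done() for context.AfterFunc, added in Go 1.21, which runs a callback once the context is done. A minimal sketch of the two equivalent shapes, using hypothetical names outside the Istio codebase:

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type pool struct {
	mu      sync.Mutex
	closing bool
}

// markClosingOld is the original shape: a dedicated goroutine blocks until the
// context is done, then flips the flag under the lock.
func (p *pool) markClosingOld(ctx context.Context) {
	go func() {
		<-ctx.Done()
		p.mu.Lock()
		p.closing = true
		p.mu.Unlock()
	}()
}

// markClosingNew mirrors the record's after_edit: context.AfterFunc (Go 1.21+)
// runs the callback in its own goroutine once ctx is done.
func (p *pool) markClosingNew(ctx context.Context) {
	context.AfterFunc(ctx, func() {
		p.mu.Lock()
		p.closing = true
		p.mu.Unlock()
	})
}

func main() {
	p := &pool{}
	ctx, cancel := context.WithCancel(context.Background())
	p.markClosingNew(ctx)
	cancel()
	time.Sleep(10 * time.Millisecond) // sketch only: give the callback a moment to run
	p.mu.Lock()
	fmt.Println("closing:", p.closing)
	p.mu.Unlock()
}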
{
"id": 5,
"code_window": [
"}\n",
"\n",
"func (wp *WorkerPool) Run(ctx context.Context) {\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\twp.lock.Lock()\n",
"\t\twp.closing = true\n",
"\t\twp.lock.Unlock()\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(ctx, func() {\n"
],
"file_path": "pilot/pkg/status/resourcelock.go",
"type": "replace",
"edit_start_line_idx": 164
} | -----BEGIN CERTIFICATE-----
MIID2jCCAsKgAwIBAgIUIf2Vv9QwxTJgEJObGbJvHMthI2UwDQYJKoZIhvcNAQEL
BQAwGDEWMBQGA1UEAwwNY2x1c3Rlci5sb2NhbDAgFw0yMjEwMjAxOTMxMjBaGA8y
Mjk2MDgwNDE5MzEyMFowMDEuMCwGA1UEAwwlaXN0aW9kLmlzdGlvLXN5c3RlbS5z
dmMuY2x1c3Rlci5sb2NhbDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AMI+gzMXBDhSaywxYa3l1mrDYP2vzgo9AHvXguiqigrSxQ1aXf6RaufZUgXHIRHh
fb9yfJqK5YOPqRy12oyg1Cm6KwrTZDLy6KccQrDEWl+foFZuSN7aZDAp0A0L2/Kc
eRbn+1Y2jq2qizlmPUJ7RHV2BH6fVga0kBb+tDw7YdMTUSWVqzYL4H8G3Sbb5xN1
Fu7QF6ri1sxy+CCN5rs7I95l1DjknsnMQOzewqUCRpapzv0GvtrjvEoMNyZf7B3B
U7xXCIjvo7FDe5y32xFHX2lADfugpjM22jnAfTv5aKBENkKmSuyXn/bpGxMfITye
PBpkMKOVi/OEgYhZW86WGicCAwEAAaOCAQAwgf0wCQYDVR0TBAIwADALBgNVHQ8E
BAMCBeAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMIHDBgNVHREEgbsw
gbiGRXNwaWZmZTovL2NsdXN0ZXIubG9jYWwvbnMvaXN0aW8tc3lzdGVtL3NhL2lz
dGlvLXBpbG90LXNlcnZpY2UtYWNjb3VudIITaXN0aW9kLmlzdGlvLXN5c3RlbYIX
aXN0aW9kLmlzdGlvLXN5c3RlbS5zdmOCGGlzdGlvLXBpbG90LmlzdGlvLXN5c3Rl
bYIcaXN0aW8tcGlsb3QuaXN0aW8tc3lzdGVtLnN2Y4IJbG9jYWxob3N0MA0GCSqG
SIb3DQEBCwUAA4IBAQAZho6FIVJXiJaXWJwXKIczhz0WBiUDBBm2NktZayyQGf2E
sUrKekalrmwm+X9cDHDH07rajYolKUDTPBsQ/r9HcjglGA4q77LvVkrOE1r/ggKm
Us/IFATb00jWQnAl2za46W2/SAEvTbbsYDO8mlGtLL73HRKSqj3K5i9t78Yo4gH8
IIRoZi7t9DmxcxmdxrUG7bsQZ51AJfbVGY8BF2w1CzmXvxhfX7jxP9sLPUwtXWYq
/xEDyJtyLOMDM8IoHRiP6SWAudsVy7PXlRRfBGg5K12BQLZTUv6Xf/e/5SESYzqN
DyxW6KclAr3F9n63lF34sV82dlL/sEEedVPZYAHb
-----END CERTIFICATE-----
| tests/testdata/certs/pilot/cert-chain.pem | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.002875947393476963,
0.00118287093937397,
0.00016588858852628618,
0.0005067767342552543,
0.0012052474776282907
] |
{
"id": 5,
"code_window": [
"}\n",
"\n",
"func (wp *WorkerPool) Run(ctx context.Context) {\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\twp.lock.Lock()\n",
"\t\twp.closing = true\n",
"\t\twp.lock.Unlock()\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(ctx, func() {\n"
],
"file_path": "pilot/pkg/status/resourcelock.go",
"type": "replace",
"edit_start_line_idx": 164
} | apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
name: hello
spec:
replicas: 7
selector:
matchLabels:
app: hello
tier: backend
track: stable
strategy: {}
template:
metadata:
annotations:
istio.io/rev: default
kubectl.kubernetes.io/default-container: hello
kubectl.kubernetes.io/default-logs-container: hello
prometheus.io/path: /stats/prometheus
prometheus.io/port: "15020"
prometheus.io/scrape: "true"
sidecar.istio.io/enableCoreDump: "true"
sidecar.istio.io/status: '{"initContainers":["istio-init","enable-core-dump"],"containers":["istio-proxy"],"volumes":["workload-socket","credential-socket","workload-certs","istio-envoy","istio-data","istio-podinfo","istio-token","istiod-ca-cert"],"imagePullSecrets":null,"revision":"default"}'
creationTimestamp: null
labels:
app: hello
security.istio.io/tlsMode: istio
service.istio.io/canonical-name: hello
service.istio.io/canonical-revision: latest
tier: backend
track: stable
spec:
containers:
- image: fake.docker.io/google-samples/hello-go-gke:1.0
name: hello
ports:
- containerPort: 80
name: http
resources: {}
- args:
- proxy
- sidecar
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --proxyLogLevel=warning
- --proxyComponentLogLevel=misc:error
- --log_output_level=default:info
env:
- name: JWT_POLICY
value: third-party-jwt
- name: PILOT_CERT_PROVIDER
value: istiod
- name: CA_ADDR
value: istiod.istio-system.svc:15012
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: ISTIO_CPU_LIMIT
valueFrom:
resourceFieldRef:
divisor: "0"
resource: limits.cpu
- name: PROXY_CONFIG
value: |
{}
- name: ISTIO_META_POD_PORTS
value: |-
[
{"name":"http","containerPort":80}
]
- name: ISTIO_META_APP_CONTAINERS
value: hello
- name: GOMEMLIMIT
valueFrom:
resourceFieldRef:
divisor: "0"
resource: limits.memory
- name: GOMAXPROCS
valueFrom:
resourceFieldRef:
divisor: "0"
resource: limits.cpu
- name: ISTIO_META_CLUSTER_ID
value: Kubernetes
- name: ISTIO_META_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
- name: ISTIO_META_WORKLOAD_NAME
value: hello
- name: ISTIO_META_OWNER
value: kubernetes://apis/apps/v1/namespaces/default/deployments/hello
- name: ISTIO_META_MESH_ID
value: cluster.local
- name: TRUST_DOMAIN
value: cluster.local
image: gcr.io/istio-testing/proxyv2:latest
name: istio-proxy
ports:
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
readinessProbe:
failureThreshold: 4
httpGet:
path: /healthz/ready
port: 15021
periodSeconds: 15
timeoutSeconds: 3
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 1337
runAsNonRoot: true
runAsUser: 1337
startupProbe:
failureThreshold: 600
httpGet:
path: /healthz/ready
port: 15021
periodSeconds: 1
timeoutSeconds: 3
volumeMounts:
- mountPath: /var/run/secrets/workload-spiffe-uds
name: workload-socket
- mountPath: /var/run/secrets/credential-uds
name: credential-socket
- mountPath: /var/run/secrets/workload-spiffe-credentials
name: workload-certs
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/lib/istio/data
name: istio-data
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /var/run/secrets/tokens
name: istio-token
- mountPath: /etc/istio/pod
name: istio-podinfo
initContainers:
- args:
- istio-iptables
- -p
- "15001"
- -z
- "15006"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- '*'
- -d
- 15090,15021,15020
- --log_output_level=default:info
image: gcr.io/istio-testing/proxyv2:latest
name: istio-init
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
- args:
- -c
- sysctl -w kernel.core_pattern=/var/lib/istio/data/core.proxy && ulimit -c
unlimited
command:
- /bin/sh
image: gcr.io/istio-testing/proxyv2:latest
name: enable-core-dump
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- SYS_ADMIN
drop:
- ALL
privileged: true
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
volumes:
- name: workload-socket
- name: credential-socket
- name: workload-certs
- emptyDir:
medium: Memory
name: istio-envoy
- emptyDir: {}
name: istio-data
- downwardAPI:
items:
- fieldRef:
fieldPath: metadata.labels
path: labels
- fieldRef:
fieldPath: metadata.annotations
path: annotations
name: istio-podinfo
- name: istio-token
projected:
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- configMap:
name: istio-ca-root-cert
name: istiod-ca-cert
status: {}
---
| pkg/kube/inject/testdata/inject/enable-core-dump-annotation.yaml.injected | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00018169697432313114,
0.00017076886433642358,
0.0001649291953071952,
0.00017044578271452338,
0.0000028900440156576224
] |
{
"id": 5,
"code_window": [
"}\n",
"\n",
"func (wp *WorkerPool) Run(ctx context.Context) {\n",
"\tgo func() {\n",
"\t\t<-ctx.Done()\n",
"\t\twp.lock.Lock()\n",
"\t\twp.closing = true\n",
"\t\twp.lock.Unlock()\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"replace",
"keep",
"keep",
"keep"
],
"after_edit": [
"\tcontext.AfterFunc(ctx, func() {\n"
],
"file_path": "pilot/pkg/status/resourcelock.go",
"type": "replace",
"edit_start_line_idx": 164
} | apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: originate-mtls-for-egress-gateway
spec:
host: {{ .EgressGatewayServiceName | default "istio-egressgateway" }}.{{ .EgressGatewayServiceNamespace | default "istio-system" }}.svc.cluster.local
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
sni: external.{{ .externalNamespace }}.svc.cluster.local
| tests/integration/pilot/testdata/tunneling/gateway/tls/istio-mutual/mtls.tmpl.yaml | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.0001725154579617083,
0.0001712289231363684,
0.00016994238831102848,
0.0001712289231363684,
0.0000012865348253399134
] |
{
"id": 6,
"code_window": [
"\t\twp.lock.Lock()\n",
"\t\twp.closing = true\n",
"\t\twp.lock.Unlock()\n",
"\t}()\n",
"}\n",
"\n",
"// maybeAddWorker adds a worker unless we are at maxWorkers. Workers exit when there are no more tasks, except for the\n",
"// last worker, which stays alive indefinitely.\n",
"func (wp *WorkerPool) maybeAddWorker() {\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "pilot/pkg/status/resourcelock.go",
"type": "replace",
"edit_start_line_idx": 169
} | // Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package nodeagent
import (
"context"
"errors"
"fmt"
"io"
"net"
"os"
"sync"
"time"
"golang.org/x/sys/unix"
"google.golang.org/protobuf/proto"
"istio.io/istio/pkg/monitoring"
"istio.io/istio/pkg/zdsapi"
)
var (
ztunnelKeepAliveCheckInterval = 5 * time.Second
readWriteDeadline = 5 * time.Second
)
var ztunnelConnected = monitoring.NewGauge("ztunnel_connected",
"number of connections to ztunnel")
type ZtunnelServer interface {
Run(ctx context.Context)
PodDeleted(ctx context.Context, uid string) error
PodAdded(ctx context.Context, uid string, netns Netns) error
Close() error
}
/*
To clean up stale ztunnels
we may need to ztunnel to send its (uid, bootid / boot time) to us
so that we can remove stale entries when the ztunnel pod is deleted
or when the ztunnel pod is restarted in the same pod (remove old entries when the same uid connects again, but with different boot id?)
save a queue of what needs to be sent to the ztunnel pod and send it one by one when it connects.
when a new ztunnel connects with different uid, only propagate deletes to older ztunnels.
*/
type connMgr struct {
connectionSet map[*ZtunnelConnection]struct{}
latestConn *ZtunnelConnection
mu sync.Mutex
}
func (c *connMgr) addConn(conn *ZtunnelConnection) {
log.Debug("ztunnel connected")
c.mu.Lock()
defer c.mu.Unlock()
c.connectionSet[conn] = struct{}{}
c.latestConn = conn
ztunnelConnected.RecordInt(int64(len(c.connectionSet)))
}
func (c *connMgr) LatestConn() *ZtunnelConnection {
c.mu.Lock()
defer c.mu.Unlock()
return c.latestConn
}
func (c *connMgr) deleteConn(conn *ZtunnelConnection) {
log.Debug("ztunnel disconnected")
c.mu.Lock()
defer c.mu.Unlock()
delete(c.connectionSet, conn)
if c.latestConn == conn {
c.latestConn = nil
}
ztunnelConnected.RecordInt(int64(len(c.connectionSet)))
}
// this is used in tests
// nolint: unused
func (c *connMgr) len() int {
c.mu.Lock()
defer c.mu.Unlock()
return len(c.connectionSet)
}
type ztunnelServer struct {
listener *net.UnixListener
// connections to pod delivered map
// add pod goes to newest connection
// delete pod goes to all connections
conns *connMgr
pods PodNetnsCache
}
var _ ZtunnelServer = &ztunnelServer{}
func newZtunnelServer(addr string, pods PodNetnsCache) (*ztunnelServer, error) {
if addr == "" {
return nil, fmt.Errorf("addr cannot be empty")
}
resolvedAddr, err := net.ResolveUnixAddr("unixpacket", addr)
if err != nil {
return nil, fmt.Errorf("failed to resolve unix addr: %w", err)
}
// remove potentially existing address
// Remove unix socket before use, if one is leftover from previous CNI restart
if err := os.Remove(addr); err != nil && !os.IsNotExist(err) {
// Anything other than "file not found" is an error.
return nil, fmt.Errorf("failed to remove unix://%s: %w", addr, err)
}
l, err := net.ListenUnix("unixpacket", resolvedAddr)
if err != nil {
return nil, fmt.Errorf("failed to listen unix: %w", err)
}
return &ztunnelServer{
listener: l,
conns: &connMgr{
connectionSet: map[*ZtunnelConnection]struct{}{},
},
pods: pods,
}, nil
}
func (z *ztunnelServer) Close() error {
return z.listener.Close()
}
func (z *ztunnelServer) Run(ctx context.Context) {
go func() {
<-ctx.Done()
z.Close()
}()
for {
log.Debug("accepting conn")
conn, err := z.accept()
if err != nil {
if errors.Is(err, net.ErrClosed) {
log.Debug("listener closed - returning")
return
}
log.Errorf("failed to accept conn: %v", err)
continue
}
log.Debug("connection accepted")
go func() {
log.Debug("handling conn")
if err := z.handleConn(ctx, conn); err != nil {
log.Errorf("failed to handle conn: %v", err)
}
}()
}
}
// ZDS protocol is very simple: for every message sent, an ack is sent.
// the ack only has temporal correlation (i.e. it is the first and only ack msg after the message was sent)
// All this to say, we want to make sure that messages to ztunnel are sent from a single goroutine
// so we don't mix messages and acks.
// nolint: unparam
func (z *ztunnelServer) handleConn(ctx context.Context, conn *ZtunnelConnection) error {
defer conn.Close()
go func() {
<-ctx.Done()
log.Debug("context cancelled - closing conn")
conn.Close()
}()
// before doing anything, add the connection to the list of active connections
z.conns.addConn(conn)
defer z.conns.deleteConn(conn)
// get hello message from ztunnel
m, _, err := readProto[zdsapi.ZdsHello](conn.u, readWriteDeadline, nil)
if err != nil {
return err
}
log.Infof("received hello from ztunnel. %v", m.Version)
log.Debug("sending snapshot to ztunnel")
if err := z.sendSnapshot(ctx, conn); err != nil {
return err
}
for {
// listen for updates:
select {
case update, ok := <-conn.Updates:
if !ok {
log.Debug("update channel closed - returning")
return nil
}
log.Debugf("got update to send to ztunnel")
resp, err := conn.sendDataAndWaitForAck(update.Update, update.Fd)
if err != nil {
log.Errorf("ztunnel acked error: err %v ackErr %s", err, resp.GetAck().GetError())
}
log.Debugf("ztunnel acked")
// Safety: Resp is buffered, so this will not block
update.Resp <- updateResponse{
err: err,
resp: resp,
}
case <-time.After(ztunnelKeepAliveCheckInterval):
// do a short read, just to see if the connection to ztunnel is
// still alive. As ztunnel shouldn't send anything unless we send
// something first, we expect to get an os.ErrDeadlineExceeded error
// here if the connection is still alive.
// note that unlike tcp connections, reading is a good enough test here.
_, err := conn.readMessage(time.Second / 100)
switch {
case !errors.Is(err, os.ErrDeadlineExceeded):
log.Debugf("ztunnel keepalive failed: %v", err)
if errors.Is(err, io.EOF) {
log.Debug("ztunnel EOF")
return nil
}
return err
case err == nil:
log.Warn("ztunnel protocol error, unexpected message")
return fmt.Errorf("ztunnel protocol error, unexpected message")
default:
// we get here if error is deadline exceeded, which means ztunnel is alive.
}
case <-ctx.Done():
return nil
}
}
}
func (z *ztunnelServer) PodDeleted(ctx context.Context, uid string) error {
r := &zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Del{
Del: &zdsapi.DelWorkload{
Uid: uid,
},
},
}
data, err := proto.Marshal(r)
if err != nil {
return err
}
log.Debugf("sending delete pod to ztunnel: %s %v", uid, r)
var delErr []error
z.conns.mu.Lock()
defer z.conns.mu.Unlock()
for conn := range z.conns.connectionSet {
_, err := conn.send(ctx, data, nil)
if err != nil {
delErr = append(delErr, err)
}
}
return errors.Join(delErr...)
}
func (z *ztunnelServer) PodAdded(ctx context.Context, uid string, netns Netns) error {
latestConn := z.conns.LatestConn()
if latestConn == nil {
return fmt.Errorf("no ztunnel connection")
}
r := &zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Add{
Add: &zdsapi.AddWorkload{
Uid: uid,
},
},
}
log.Debugf("About to send added pod: %s to ztunnel: %v", uid, r)
data, err := proto.Marshal(r)
if err != nil {
return err
}
fd := int(netns.Fd())
resp, err := latestConn.send(ctx, data, &fd)
if err != nil {
return err
}
if resp.GetAck().GetError() != "" {
log.Errorf("add-workload: got ack error: %s", resp.GetAck().GetError())
return fmt.Errorf("got ack error: %s", resp.GetAck().GetError())
}
return nil
}
// TODO ctx is unused here
// nolint: unparam
func (z *ztunnelServer) sendSnapshot(ctx context.Context, conn *ZtunnelConnection) error {
snap := z.pods.ReadCurrentPodSnapshot()
for uid, netns := range snap {
var resp *zdsapi.WorkloadResponse
var err error
if netns != nil {
fd := int(netns.Fd())
log.Debugf("Sending local pod %s ztunnel", uid)
resp, err = conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Add{
Add: &zdsapi.AddWorkload{
Uid: uid,
},
},
}, &fd)
} else {
log.Infof("netns not available for local pod %s. sending keep to ztunnel", uid)
resp, err = conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_Keep{
Keep: &zdsapi.KeepWorkload{
Uid: uid,
},
},
}, nil)
}
if err != nil {
return err
}
if resp.GetAck().GetError() != "" {
log.Errorf("add-workload: got ack error: %s", resp.GetAck().GetError())
}
}
resp, err := conn.sendMsgAndWaitForAck(&zdsapi.WorkloadRequest{
Payload: &zdsapi.WorkloadRequest_SnapshotSent{
SnapshotSent: &zdsapi.SnapshotSent{},
},
}, nil)
if err != nil {
return err
}
log.Debugf("snaptshot sent to ztunnel")
if resp.GetAck().GetError() != "" {
log.Errorf("snap-sent: got ack error: %s", resp.GetAck().GetError())
}
return nil
}
func (z *ztunnelServer) accept() (*ZtunnelConnection, error) {
log.Debug("accepting unix conn")
conn, err := z.listener.AcceptUnix()
if err != nil {
return nil, fmt.Errorf("failed to accept unix: %w", err)
}
log.Debug("accepted conn")
return newZtunnelConnection(conn), nil
}
type updateResponse struct {
err error
resp *zdsapi.WorkloadResponse
}
type updateRequest struct {
Update []byte
Fd *int
Resp chan updateResponse
}
type ZtunnelConnection struct {
u *net.UnixConn
Updates chan updateRequest
}
func newZtunnelConnection(u *net.UnixConn) *ZtunnelConnection {
return &ZtunnelConnection{u: u, Updates: make(chan updateRequest, 100)}
}
func (z *ZtunnelConnection) Close() {
z.u.Close()
}
func (z *ZtunnelConnection) send(ctx context.Context, data []byte, fd *int) (*zdsapi.WorkloadResponse, error) {
ret := make(chan updateResponse, 1)
req := updateRequest{
Update: data,
Fd: fd,
Resp: ret,
}
select {
case z.Updates <- req:
case <-ctx.Done():
return nil, ctx.Err()
}
select {
case r := <-ret:
return r.resp, r.err
case <-ctx.Done():
return nil, ctx.Err()
}
}
func (z *ZtunnelConnection) sendMsgAndWaitForAck(msg *zdsapi.WorkloadRequest, fd *int) (*zdsapi.WorkloadResponse, error) {
data, err := proto.Marshal(msg)
if err != nil {
return nil, err
}
return z.sendDataAndWaitForAck(data, fd)
}
func (z *ZtunnelConnection) sendDataAndWaitForAck(data []byte, fd *int) (*zdsapi.WorkloadResponse, error) {
var rights []byte
if fd != nil {
rights = unix.UnixRights(*fd)
}
err := z.u.SetWriteDeadline(time.Now().Add(readWriteDeadline))
if err != nil {
return nil, err
}
_, _, err = z.u.WriteMsgUnix(data, rights, nil)
if err != nil {
return nil, err
}
// wait for ack
return z.readMessage(readWriteDeadline)
}
func (z *ZtunnelConnection) readMessage(timeout time.Duration) (*zdsapi.WorkloadResponse, error) {
m, _, err := readProto[zdsapi.WorkloadResponse](z.u, timeout, nil)
return m, err
}
func readProto[T any, PT interface {
proto.Message
*T
}](c *net.UnixConn, timeout time.Duration, oob []byte) (PT, int, error) {
var buf [1024]byte
err := c.SetReadDeadline(time.Now().Add(timeout))
if err != nil {
return nil, 0, err
}
n, oobn, flags, _, err := c.ReadMsgUnix(buf[:], oob)
if err != nil {
return nil, 0, err
}
if flags&unix.MSG_TRUNC != 0 {
return nil, 0, fmt.Errorf("truncated message")
}
if flags&unix.MSG_CTRUNC != 0 {
return nil, 0, fmt.Errorf("truncated control message")
}
var resp T
var respPtr PT = &resp
err = proto.Unmarshal(buf[:n], respPtr)
if err != nil {
return nil, 0, err
}
return respPtr, oobn, nil
}
| cni/pkg/nodeagent/ztunnelserver.go | 1 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.0005668457015417516,
0.00018480965809430927,
0.0001627699239179492,
0.00016969899297691882,
0.00005981724461889826
] |
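The handleConn loop in ztunnelserver.go above probes liveness with a short read under a deadline, treating os.ErrDeadlineExceeded as "peer is alive, just quiet". A standalone sketch of that probe on a plain net.Conn, with hypothetical names and timeouts, assuming the peer never sends unsolicited data:

package main

import (
	"errors"
	"fmt"
	"io"
	"net"
	"os"
	"time"
)

// probeAlive does a short read against conn. Nothing arriving before the
// deadline yields os.ErrDeadlineExceeded, which here means the connection is
// still open; EOF or any other error means it is gone or misbehaving.
func probeAlive(conn net.Conn, window time.Duration) error {
	if err := conn.SetReadDeadline(time.Now().Add(window)); err != nil {
		return err
	}
	buf := make([]byte, 1)
	_, err := conn.Read(buf)
	switch {
	case errors.Is(err, os.ErrDeadlineExceeded):
		return nil // alive: the peer simply had nothing to say
	case errors.Is(err, io.EOF):
		return fmt.Errorf("peer closed the connection")
	case err == nil:
		return fmt.Errorf("protocol error: unexpected unsolicited data")
	default:
		return err
	}
}

func main() {
	client, server := net.Pipe()
	defer client.Close()
	defer server.Close()
	// server stays silent, so the probe should report the peer as alive (nil error).
	fmt.Println("probe error:", probeAlive(client, 50*time.Millisecond))
}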
{
"id": 6,
"code_window": [
"\t\twp.lock.Lock()\n",
"\t\twp.closing = true\n",
"\t\twp.lock.Unlock()\n",
"\t}()\n",
"}\n",
"\n",
"// maybeAddWorker adds a worker unless we are at maxWorkers. Workers exit when there are no more tasks, except for the\n",
"// last worker, which stays alive indefinitely.\n",
"func (wp *WorkerPool) maybeAddWorker() {\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "pilot/pkg/status/resourcelock.go",
"type": "replace",
"edit_start_line_idx": 169
} | apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
annotations:
istio.io/for-service-account: bookinfo-reviews
labels:
istio.io/rev: rapid
name: bookinfo-reviews
spec:
gatewayClassName: istio-waypoint
listeners:
- name: mesh
port: 15008
protocol: HBONE | samples/ambient-argo/application/reviews-waypoint.yaml | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00017835052858572453,
0.00017407105769962072,
0.00016979158681351691,
0.00017407105769962072,
0.000004279470886103809
] |
{
"id": 6,
"code_window": [
"\t\twp.lock.Lock()\n",
"\t\twp.closing = true\n",
"\t\twp.lock.Unlock()\n",
"\t}()\n",
"}\n",
"\n",
"// maybeAddWorker adds a worker unless we are at maxWorkers. Workers exit when there are no more tasks, except for the\n",
"// last worker, which stays alive indefinitely.\n",
"func (wp *WorkerPool) maybeAddWorker() {\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "pilot/pkg/status/resourcelock.go",
"type": "replace",
"edit_start_line_idx": 169
} | apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- "reviews"
http:
- route:
- destination:
host: reviews
subset: v1
---
apiVersion: v1
kind: Service
metadata:
name: reviews
spec:
ports:
- port: 9080
name: http
protocol: TCP
selector:
app: reviews
| tests/integration/pilot/testdata/virtualservice.yaml | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00017365718667861074,
0.00017193927487824112,
0.00016894021246116608,
0.00017322044004686177,
0.000002128143250956782
] |
{
"id": 6,
"code_window": [
"\t\twp.lock.Lock()\n",
"\t\twp.closing = true\n",
"\t\twp.lock.Unlock()\n",
"\t}()\n",
"}\n",
"\n",
"// maybeAddWorker adds a worker unless we are at maxWorkers. Workers exit when there are no more tasks, except for the\n",
"// last worker, which stays alive indefinitely.\n",
"func (wp *WorkerPool) maybeAddWorker() {\n"
],
"labels": [
"keep",
"keep",
"keep",
"replace",
"keep",
"keep",
"keep",
"keep",
"keep"
],
"after_edit": [
"\t})\n"
],
"file_path": "pilot/pkg/status/resourcelock.go",
"type": "replace",
"edit_start_line_idx": 169
} | //go:build integ
// +build integ
// Copyright Istio Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package helm
import (
"testing"
"istio.io/istio/pkg/test/framework"
)
func TestMain(m *testing.M) {
// nolint: staticcheck
framework.
NewSuite(m).
RequireSingleCluster().
Run()
}
| tests/integration/helm/main_test.go | 0 | https://github.com/istio/istio/commit/7fc69708a1ff4d4cfee27b1e4b1105f223f9903d | [
0.00017969030886888504,
0.0001738068094709888,
0.00016566208796575665,
0.00017493742052465677,
0.000005714623966923682
] |